Heuristic Evaluation

From Clinfowiki
 

Latest revision as of 07:02, 2 May 2015

Introduction

Heuristic Evaluation is an informal user-interface usability testing method based on a set of predetermined usability heuristics, or 'rules of thumb,' which guide the evaluation. The technique is widely used in the discount usability engineering method developed by user experience consultant Jakob Nielsen. Discount usability engineering is based on Nielsen’s finding that the majority of usability problems can be discovered by a small number of evaluators (typically three to five). [1]

In addition to the ten usability heuristics, Nielsen also developed an efficient problem severity rating system, which can be used to analyze and prioritize findings from the usability evaluations based on problem frequency, impact, and persistence. [2]

Nielsen's Ten Usability Heuristics

The following are Nielsen's descriptions of the ten usability heuristics.[3]

  • Visibility of system status—The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
  • Match between system and the real world—The system should speak the user’s language, with words, phrases, and concepts familiar to the user rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
  • User control and freedom—Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
  • Consistency and standards—Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
  • Error prevention—Even better than good error messages is a careful design that prevents problems from occurring in the first place.
  • Recognition rather than recall—Make objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
  • Flexibility and efficiency of use—Accelerators, unseen by the novice user, may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
  • Aesthetic and minimalist design—Dialogues should not contain information that is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
  • Help users recognize, diagnose, and recover from errors—Error messages should be expressed in plain language (no codes), precisely indicate the problems, and constructively suggest a solution.
  • Help and documentation—Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.
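As an illustrative sketch only (the heuristic names are Nielsen's, but the data structure and the `record_finding` helper are hypothetical, not part of his method), the ten heuristics can be kept as a simple checklist that evaluators log findings against:

```python
# Hypothetical sketch: logging heuristic-evaluation findings against
# Nielsen's ten heuristics. Not part of Nielsen's method itself.

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def record_finding(findings, heuristic, description):
    """Log a usability problem against one of the ten heuristics."""
    if heuristic not in NIELSEN_HEURISTICS:
        raise ValueError(f"Unknown heuristic: {heuristic}")
    findings.append({"heuristic": heuristic, "description": description})
    return findings

findings = []
record_finding(findings, "Error prevention",
               "Order form accepts a delivery date in the past")
```

Tying each observation to a named heuristic makes it easier to synthesize findings from multiple evaluators later.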

Severity Ratings

After testing, the evaluators' observations are synthesized, and each identified problem is rated by severity. Ranking the severity of the problems against specific criteria helps prioritize which usability problems should be allocated the most resources.[4]

Severity ratings can be used to systematically evaluate and prioritize problems, providing a better idea of not only which problems are of highest priority but also system readiness.

Severity ratings are based on the following criteria:

  • Frequency: How often does the problem occur? Is it common or rare?
  • Impact: If the problem occurs, will it be easy or difficult for users to overcome?
  • Persistence: Is it a one-time problem that users can overcome once they know about it, or will it bother them repeatedly?

Severity can also be ranked more generally using a single severity rating for each usability problem.
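One hypothetical way to derive such a single rating (Nielsen describes a holistic judgment, not a formula, so the averaging scheme below is an assumption for illustration) is to score each criterion on the same 0 to 4 scale and combine them:

```python
# Hypothetical sketch: averaging the three criterion scores into one
# severity rating. The averaging formula is an assumption, not
# Nielsen's prescription; he describes a single holistic 0-4 judgment.

def combined_severity(frequency, impact, persistence):
    """Average three 0-4 criterion scores into one 0-4 severity rating."""
    for score in (frequency, impact, persistence):
        if not 0 <= score <= 4:
            raise ValueError("Scores must be on the 0-4 scale")
    return round((frequency + impact + persistence) / 3)

# A problem that is common (3), hard to overcome (4), and persistent (3):
print(combined_severity(3, 4, 3))  # → 3, a major usability problem
```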

The following 0 to 4 rating scale can be used to rate the severity of usability problems:
0 = I don't agree that this is a usability problem at all
1 = Cosmetic problem only: need not be fixed unless extra time is available on project
2 = Minor usability problem: fixing this should be given low priority
3 = Major usability problem: important to fix, so should be given high priority
4 = Usability catastrophe: imperative to fix this before product can be released
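A minimal sketch of using this scale to prioritize findings (the example problems, field names, and severity assignments are invented for illustration):

```python
# Hypothetical sketch: sorting evaluation findings by Nielsen's 0-4
# severity scale so the most severe problems are addressed first.

SEVERITY_LABELS = {
    0: "Not a usability problem",
    1: "Cosmetic problem only",
    2: "Minor usability problem",
    3: "Major usability problem",
    4: "Usability catastrophe",
}

findings = [
    {"problem": "Jargon in error message", "severity": 2},
    {"problem": "Data loss when session times out", "severity": 4},
    {"problem": "Inconsistent button placement", "severity": 1},
]

# Highest-severity problems come first in the fix queue.
prioritized = sorted(findings, key=lambda f: f["severity"], reverse=True)
for f in prioritized:
    print(f"{f['severity']} ({SEVERITY_LABELS[f['severity']]}): {f['problem']}")
```

Ratings of 3 and 4 would typically block release, while 0-1 items can wait for spare project time, mirroring the scale definitions above.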

References

  1. Nielsen, Jakob, and Rolf Molich. "Heuristic evaluation of user interfaces." Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 1990.
  2. Nielsen, J. (1994). Heuristic evaluation. In Nielsen, J., and Mack, R.L. (Eds.), Usability Inspection Methods. John Wiley & Sons, New York, NY.
  3. Molich, R., and Nielsen, J. (1990). Improving a human-computer dialogue. Communications of the ACM 33, 3 (March), 338-348.
  4. Nielsen, Jakob. "Severity ratings for usability problems." Papers and Essays (1995): 54.

Additional Resources

http://www.nngroup.com/articles/ten-usability-heuristics/



--Watsolin (talk) 22:23, 28 April 2015 (PDT)