Heuristic Evaluation

Introduction

Heuristic evaluation is an informal user-interface usability inspection method in which evaluators judge an interface against a set of pre-determined heuristics, or 'rules of thumb.' It is a widely used method in Discount Usability Engineering, developed by user-experience consultant Jakob Nielsen, and rests on Nielsen's finding that a small number of evaluators (typically three to five) can uncover the majority of usability problems. Nielsen developed a set of ten usability heuristics, now widely used in usability testing, along with an efficient rating system for analyzing and prioritizing findings from the evaluations. [1][2]
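
The "three to five evaluators" figure reflects Nielsen and Landauer's widely cited model of problem discovery, in which the proportion of problems found by n independent evaluators grows as 1 - (1 - L)^n, where L is the probability that a single evaluator finds a given problem. A minimal sketch of the model follows (Python; the commonly quoted value L ≈ 0.31 is an assumption here, and real projects should estimate it from their own data):

  # Sketch of the Nielsen-Landauer problem-discovery curve.
  # L is the chance that one evaluator finds a given problem; 0.31 is the
  # figure commonly quoted by Nielsen, used here purely for illustration.
  def proportion_found(n_evaluators, L=0.31):
      """Expected fraction of usability problems found by n evaluators."""
      return 1.0 - (1.0 - L) ** n_evaluators

  for n in range(1, 11):
      print(f"{n} evaluator(s): ~{proportion_found(n):.0%} of problems found")

With L ≈ 0.31, three evaluators are expected to find roughly two-thirds of the problems and five evaluators about 85 percent, which is why adding evaluators beyond this range yields rapidly diminishing returns.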

Nielsen's Ten Usability Heuristics

  • Visibility of system status—The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
  • Match between system and the real world— The system should speak the user’s language, with words, phrases, and concepts familiar to the user rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
  • User control and freedom—Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
  • Consistency and standards—Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
  • Error prevention—Even better than good error messages is a careful design that prevents problems from occurring in the first place.
  • Recognition rather than recall—Make objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
  • Flexibility and efficiency of use—Accelerators, unseen by the novice user, may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
  • Aesthetic and minimalist design—Dialogues should not contain information that is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
  • Help users recognize, diagnose, and recover from errors—Error messages should be expressed in plain language (no codes), precisely indicate the problems, and constructively suggest a solution.
  • Help and documentation—Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.[3]
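
In practice the ten heuristics above are often kept as a simple checklist, and each evaluator independently walks the interface, recording every observation together with the heuristic it violates. A minimal sketch of such a record follows (Python; the Finding structure, field names, and example data are illustrative assumptions, not part of Nielsen's method):

  from dataclasses import dataclass

  # The ten heuristics, used as tags for recorded observations.
  HEURISTICS = [
      "Visibility of system status",
      "Match between system and the real world",
      "User control and freedom",
      "Consistency and standards",
      "Error prevention",
      "Recognition rather than recall",
      "Flexibility and efficiency of use",
      "Aesthetic and minimalist design",
      "Help users recognize, diagnose, and recover from errors",
      "Help and documentation",
  ]

  @dataclass
  class Finding:
      """One usability problem noted by one evaluator."""
      evaluator: str
      screen: str        # where in the interface the problem was observed
      heuristic: str     # which of the ten heuristics it violates
      description: str

  finding = Finding(
      evaluator="Evaluator A",
      screen="Medication order entry form",
      heuristic=HEURISTICS[4],  # "Error prevention"
      description="Dose field accepts free text with no range checking.",
  )

Collecting findings in this structured form makes the later synthesis and severity-rating step straightforward, since observations from all evaluators can be merged and grouped by heuristic or by screen.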

Prioritization of Identified Problems

The evaluators' observations are then synthesized, and each identified problem is rated by severity. [4]

Severity ratings provide a systematic way to evaluate and prioritize problems, indicating not only which problems should be fixed first but also how ready the system is for release.
Severity ratings are based on the following criteria:

  • The frequency with which the problem occurs: Is it common or rare?
  • The impact of the problem if it occurs: Will it be easy or difficult for the users to overcome?
  • The persistence of the problem: Is it a one-time problem that users can overcome once they know about it or will users repeatedly be bothered by the problem?

Alternatively, a single severity rating can be assigned as a general assessment of each usability problem.
The following 0 to 4 rating scale can be used to rate the severity of usability problems:
  • 0 = I don't agree that this is a usability problem at all
  • 1 = Cosmetic problem only: need not be fixed unless extra time is available on project
  • 2 = Minor usability problem: fixing this should be given low priority
  • 3 = Major usability problem: important to fix, so should be given high priority
  • 4 = Usability catastrophe: imperative to fix this before product can be released
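
Because individual severity judgments vary, Nielsen recommends having several evaluators rate each problem independently and combining their ratings, for example by taking the mean, before prioritizing. A minimal sketch follows (Python; the problems, ratings, and the use of a simple mean are illustrative assumptions):

  from statistics import mean

  # Each problem maps to the 0-4 severity ratings given independently by
  # several evaluators (all data here is made up for illustration).
  ratings = {
      "No undo after deleting an allergy entry": [4, 3, 4],
      "Inconsistent button labels across modules": [2, 2, 3],
      "Logo slightly off-center on splash screen": [1, 0, 1],
  }

  # Average the ratings and sort so the most severe problems come first.
  prioritized = sorted(
      ((mean(scores), problem) for problem, scores in ratings.items()),
      reverse=True,
  )

  for severity, problem in prioritized:
      print(f"{severity:.1f}  {problem}")

Sorting by the combined rating gives the team a defensible fix order and a quick sense of whether any release-blocking (severity 4) problems remain.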


References

  1. Nielsen, J. (1994). Heuristic evaluation. In Nielsen, J., and Mack, R. L. (Eds.), Usability Inspection Methods. John Wiley & Sons, New York, NY.
  2. Nielsen, J., and Molich, R. (1990). Heuristic evaluation of user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '90). ACM, New York, NY.
  3. Molich, R., and Nielsen, J. (1990). Improving a human-computer dialogue. Communications of the ACM 33, 3 (March), 338-348.
  4. Nielsen, J. (1995). Severity ratings for usability problems. Nielsen Norman Group.

Additional Resources

http://www.nngroup.com/articles/ten-usability-heuristics/



--Watsolin (talk) 22:23, 28 April 2015 (PDT)