Cognitive and usability engineering methods for the evaluation of clinical information systems

From Clinfowiki
Revision as of 05:08, 29 November 2015 by Lcorrales


This is a review of the article entitled “Cognitive and usability engineering methods for the evaluation of clinical information systems” by Andre W. Kushniruk and Vimla L. Patel. [1]


Introduction

Healthcare policy and decision makers are requesting solid evidence to justify investment in health information systems. This demand necessitates adequate evaluation of such systems. In recent years, a wide range of approaches and methodologies for evaluating information systems has been developed, ranging from controlled clinical trials to questionnaires and interviews with users.

Usability is generally defined as “the capacity of a system to allow users to carry out their tasks safely, effectively, efficiently, and enjoyably.” Usability engineering has emerged to address this need for system evaluation and improvement.

In this paper, the authors focused on methods of evaluation that have arisen from cognitive science and usability engineering and that can be applied during a system’s development to provide feedback and direction for its ultimate design.

Need for New Evaluation Methodologies for Health Systems

The authors indicated that traditional, outcome-based evaluations include quantitative assessments of the economic impact, accuracy, safety, and reliability of completed information systems. Such studies have pre-defined outcome measures, which are assessed after the system has been deployed in some setting. The problem is that if the outcome of such a study is negative, there is often no way of knowing the reason for it.

In addition, the authors suggested that one of the most widely used methods for evaluating health information systems continues to be the questionnaire, either as the primary method of data collection or as one of several types of data collected in multi-method evaluations. Questionnaires have several limitations. They may not reveal how the technology under study fits into the context of actual system use, and they provide little detailed information about the process of using a system to perform complex tasks. Because questionnaire items are pre-determined by the investigators, they are of limited value in identifying new or emergent issues that the investigators have not previously thought of. Furthermore, because questionnaires are typically presented some time after the system has been used, the results are subject to problems with the subjects’ recall of their experience.

Despite these limitations, such approaches remain widely used, both for gathering the system requirements upon which systems are developed and for evaluating the effects of newly introduced health information systems.

Cognitive Task Analysis in Biomedical Informatics

Cognitive task analysis (CTA) is an emerging approach to the evaluation of medical systems that represents an assimilation of work from the field of systems engineering and cognitive research in medicine. It is concerned with characterizing the decision-making and reasoning skills and the information-processing needs of subjects as they perform tasks involving the processing of complex information. CTA has also been applied in the design of systems to create a better understanding of human information needs during system development. In health care, these tasks might include a physician entering data into an information system or a nurse accessing on-line guidelines to help manage a patient.

Usability Engineering in Biomedical Informatics

In health care settings, a number of researchers have begun to apply methods adapted from usability engineering to the design and evaluation of clinical information systems. This has included work on developing portable and low-cost methods for analyzing the use of health care information systems, along with a focus on developing principled qualitative and quantitative methods for analyzing the usability data resulting from such studies.

There are a number of specific methods associated with usability engineering and leading among these is usability testing. Usability testing refers to “the evaluation of information systems that involves testing of participants (i.e., subjects) who are representative of the target user population, as they perform representative tasks using an information technology (e.g., physicians using a CPR system to record patient data) in a particular clinical context”. During the evaluation, all user–computer interactions are typically recorded (e.g., video recordings made of all computer screens or user activities and actions). Types of evaluations using this approach can vary from formal, controlled laboratory studies of users, to less formal approaches.

Information from usability testing regarding user problems, preferences, suggestions, and work practices is applied not only toward the end of system development (to ensure that systems are effective, efficient, and sufficiently enjoyable to achieve acceptance), but throughout the development cycle to ensure that the process leads to effective end products. The typical system development life cycle is characterized by the following phases, which define the major activities involved in developing software: (1) project planning, (2) analysis (involving gathering of system requirements), (3) design of the system, (4) implementation, and (5) system support/maintenance.

There are a number of types of usability tests, based on when in the development life cycle they are applied: (1) Exploratory Tests—conducted early in the system development cycle to test preliminary design concepts using prototypes or storyboards. (2) Testing of prototypes used during requirements gathering. (3) Assessment Tests—conducted early or midway through the development cycle to provide iterative feedback into evolving design of prototypes or systems. (4) Validation Tests—conducted to ensure that completed software products are acceptable regarding predefined acceptance measures. (5) Comparison Tests—conducted at any stage to compare design alternatives or possible solutions.

Usability testing approaches to the evaluation of clinical information systems

  • Phase 1: Identification of evaluation objectives
  • Phase 2: Sample selection and study design
  • Phase 3: Selection of representative experimental tasks and contexts
  • Phase 4: Selection of background questionnaires
  • Phase 5: Selection of the evaluation environment
  • Phase 6: Data collection—video recording and recording of thought processes
  • Phase 7: Analysis of the process data
  • Phase 8: Interpretation of findings
  • Phase 9: Iterative input into design
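The analysis of process data in Phase 7 typically involves coding video-recorded think-aloud sessions and tallying the usability problems observed. As a minimal sketch of what that tallying might look like, the following Python fragment uses an invented coding scheme (the event categories, timestamps, and annotations are hypothetical, not from the paper):

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical illustration of Phase 7 (analysis of process data):
# coded events from a video-recorded think-aloud session are tallied
# by problem category. Category names are invented for this sketch.

@dataclass
class CodedEvent:
    timestamp_s: float   # offset into the session recording, in seconds
    category: str        # e.g. "navigation", "data-entry", "comprehension"
    note: str            # analyst's annotation of the usability problem

def summarize_problems(events):
    """Count coded usability problems by category."""
    return Counter(e.category for e in events)

session = [
    CodedEvent(12.4, "navigation", "could not find the lab-results tab"),
    CodedEvent(47.0, "data-entry", "re-typed medication dose twice"),
    CodedEvent(63.2, "navigation", "backtracked to the patient summary"),
]

print(summarize_problems(session))
# Counter({'navigation': 2, 'data-entry': 1})
```

Counts like these feed Phase 8 (interpretation) and Phase 9 (iterative input into design) by showing which parts of the interface generate the most problems.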

Heuristic Evaluation and Usability Heuristics

Heuristic evaluation is a usability inspection method in which the system is evaluated on the basis of well-tested design principles such as visibility of system status, user control and freedom, consistency and standards, and flexibility and efficiency of use. This methodology was developed by Jakob Nielsen, whose ten usability heuristics are:

  • Heuristic 1: Visibility of system status
  • Heuristic 2: Match between the system and the real world
  • Heuristic 3: User control and freedom
  • Heuristic 4: Consistency and standards
  • Heuristic 5: Error prevention
  • Heuristic 6: Minimize memory load—support recognition rather than recall
  • Heuristic 7: Flexibility and efficiency of use
  • Heuristic 8: Aesthetic and minimalist design
  • Heuristic 9: Help users recognize, diagnose, and recover from errors
  • Heuristic 10: Help and documentation
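In practice, a heuristic evaluation produces a list of findings, each tied to one of the heuristics above and given a severity rating, and findings from multiple evaluators are then aggregated. The sketch below illustrates one way such a scoring sheet might be tabulated; the findings, evaluator names, and the 0–4 severity scale shown are assumptions for illustration, not data from the paper:

```python
from statistics import mean

# Hypothetical heuristic-evaluation scoring sheet: each evaluator logs
# violations against Nielsen's heuristics with a 0-4 severity rating
# (0 = not a problem, 4 = usability catastrophe). Entries are invented.
findings = [
    # (evaluator, heuristic number, severity, description)
    ("eval-1", 1, 3, "no indication that the order was saved"),
    ("eval-2", 1, 2, "save status only visible after scrolling"),
    ("eval-1", 6, 4, "drug codes must be recalled from memory"),
    ("eval-2", 9, 3, "error message does not suggest a fix"),
]

def mean_severity_by_heuristic(findings):
    """Average severity of the violations logged against each heuristic."""
    by_heuristic = {}
    for _evaluator, heuristic, severity, _note in findings:
        by_heuristic.setdefault(heuristic, []).append(severity)
    return {h: mean(s) for h, s in sorted(by_heuristic.items())}

print(mean_severity_by_heuristic(findings))
```

Aggregating across evaluators in this way helps prioritize which heuristic violations to address first in the next design iteration.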

Usability inspection approaches to the evaluation of clinical information systems

Several types of inspection methods have appeared in the literature:

  • Heuristic evaluation: involves having usability specialists judge whether the user interface and system functionality conform to established principles of usability and good design; a user interface expert systematically examines the system or interface against a set of heuristics.
  • Guideline reviews: can be considered a hybrid between heuristic evaluation and standard software inspection, in which the interface or system being evaluated is checked for conformance with a comprehensive set of usability guidelines.
  • Pluralistic walkthroughs: involve conducting review meetings in which users, developers, and analysts step through specific scenarios together and discuss usability issues that they feel might arise.
  • Consistency inspections: refer to an evaluation of a system in terms of how consistent it is with other related designs (or other systems belonging to a similar family of products).
  • Standards inspections: involve an expert on system standards inspecting the interface for compliance with specified usability or system standards.
  • The cognitive walkthrough: applies principles from the study of cognitive psychology to simulate the cognitive processes and user actions needed to carry out specific tasks using a computer system.

The authors found the heuristic evaluation and the cognitive walkthrough to be most useful for adaptation in order to evaluate health information systems.

Advances in Usability Evaluations in Biomedical Informatics

In recent years a number of trends have occurred in the refinement and application of the methodological approaches described in this paper. These include advances in the following areas: (a) application and extension of the approaches to the distance analysis of the use of systems over the World Wide Web, (b) automation of some of the key components in the analysis of data, (c) extension to evaluation of mobile applications, and (d) advances in conducting evaluations in naturalistic or simulated environments.

Conclusion

In summary, the authors argued that conventional methods for evaluating health information systems have limitations, and that these methods could benefit from being supplemented with newer types of evaluation emerging from cognitive science and usability engineering. It was noted that a challenge for future work on the evaluation of health information systems lies in integrating data collected from multiple evaluation methods.

References

  1. Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. Journal of Biomedical Informatics. 2004;37(1):56-76. http://dx.doi.org/10.1016/j.jbi.2004.01.003

Comments

This was a particularly extensive article that detailed crucial information about methods of evaluating health information systems. Researchers, hospitals and policy makers would find this article useful, especially in assessing the necessity and value of health information systems.

Related Articles

Complementary methods of system usability evaluation: surveys and observations during software design and development cycles