Evaluating usability of a commercial electronic health record: A case study

This is a review of a paper by Paula J. Edwards, Kevin P. Moloney, Julie A. Jacko, and François Sainfort. [1]

Research question:

Are there ways to measure the usability of a commercial EHR to ensure safety and to enable clinicians to focus on their patients rather than on the technology?

Abstract

This case study presents findings from a usability evaluation of a commercial Electronic Health Record (EHR) for a large pediatric hospital system. A predictive evaluation method, Heuristic Walkthrough, was used to evaluate the usability of this EHR system. Outcomes from this evaluation resulted in immediate changes in the system configuration and training materials, which helped to avoid usability problems at rollout, as well as change requests to the vendor to improve overall system usability in the long term. [1]

Introduction

The federal Certification Commission for Healthcare Information Technology (CCHIT) was established to certify commercial EHRs to ensure they address issues related to functionality, interoperability, and security (CCHIT, 2006). However, criteria addressing usability are notably absent. This omission represents potential trouble for EHR systems, as ensuring the usability of these systems is crucial to their success. Improved usability can reduce errors, leading to improved patient safety, as well as increased efficiency, enabling clinicians to spend more time with patients. Therefore, it is important for care providers to take steps to ensure that their EHR will be usable by physicians, nurses, and other clinical staff. [1]


This case study highlights a pediatric hospital system that applied predictive usability evaluations, in addition to other user-centered design methods, to ensure the usability of its commercial EHR implementation. A usability inspection method (UIM), the Heuristic Walkthrough (HW), a predictive evaluation technique, was used to assess the usability of the EHR's functions in order to identify configuration and design changes that could improve the system's usability. [1]

Methods

The evaluation team consisted of four usability experts and several subject matter experts. The usability experts were outside evaluators with a background and education in usability theory and practice, as well as extensive prior experience with usability evaluation. The subject matter experts were three to four nurses and respiratory therapists with substantial clinical experience. [1]


The HW inspection method uses a two-pass approach. The first pass is task-focused: evaluators use the system to complete a prioritized set of work tasks representative of those that users will perform. For each step in each task, evaluators consider the following four questions (a brief sketch of recording such findings follows the list):

  • Will users know what they need to do next?
  • Will users notice that there is a control available that will allow them to accomplish the next part of their task?
  • Once users find the control, will they know how to use it?
  • If users perform the correct action, will they see that progress is being made toward completing the task? Does the system provide appropriate feedback? [1]
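To make the first pass concrete, the following is a minimal Python sketch of how one evaluator's per-step findings might be recorded against the four guiding questions. The class, task, and step names and the example values are illustrative assumptions, not details from the study.

    from dataclasses import dataclass, field

    # The four guiding questions considered at each step of the first (task-focused) pass.
    GUIDING_QUESTIONS = (
        "Will users know what they need to do next?",
        "Will users notice that there is a control available for the next part of the task?",
        "Once users find the control, will they know how to use it?",
        "If users perform the correct action, will they see progress / appropriate feedback?",
    )

    @dataclass
    class StepFinding:
        """One evaluator's notes for a single step of a work task."""
        task: str                                    # e.g. "Enter respiratory therapy order"
        step: str                                    # e.g. "Locate the order-entry control"
        answers: dict = field(default_factory=dict)  # guiding question -> True/False
        notes: str = ""                              # free-text description of any issue observed

        def has_issue(self) -> bool:
            # A "no" answer to any guiding question flags a potential usability issue.
            return any(answer is False for answer in self.answers.values())

    # Example record for a single step (illustrative values only).
    finding = StepFinding(
        task="Enter respiratory therapy order",
        step="Locate the order-entry control",
        answers={GUIDING_QUESTIONS[1]: False},  # the control was hard to notice
        notes="Order-entry button is hidden behind a secondary toolbar.",
    )
    print(finding.has_issue())  # True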


After completing the task-focused first pass, evaluators complete a "free-form" second pass through the system. The task-based focus of the first pass helps guide the evaluation in the second pass, as the analysts use their previous knowledge and experience with the system to look for further usability issues, providing a deeper investigation of the system. Once each evaluator has independently conducted their evaluation and documented identified issues, the issues are consolidated and reviewed by the evaluation team as a group. During this group review, issues are prioritized based on their anticipated frequency and severity, and ideas for resolving the identified issues are discussed. [1]
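Below is a minimal Python sketch of what the consolidation and prioritization step could look like: duplicate issues reported by more than one evaluator are merged, then issues are ranked by a simple severity-times-frequency score for the group review. The issue descriptions, rating scales, and scoring rule are illustrative assumptions; the paper does not prescribe a particular scoring formula.

    # Each tuple: (issue description, severity 1-4, anticipated frequency 1-4).
    # Descriptions and ratings are illustrative, not taken from the study.
    raw_issues = [
        ("Order status label inconsistent across screens", 3, 4),
        ("Order status label inconsistent across screens", 3, 3),  # reported by a second evaluator
        ("No confirmation message after order submission", 4, 2),
        ("Flowsheet requires excessive scrolling", 2, 4),
    ]

    # Consolidate duplicates, keeping the highest severity and frequency reported.
    consolidated = {}
    for description, severity, frequency in raw_issues:
        prev_severity, prev_frequency = consolidated.get(description, (0, 0))
        consolidated[description] = (max(prev_severity, severity), max(prev_frequency, frequency))

    # Rank issues by a simple severity x frequency score for the group review.
    ranked = sorted(consolidated.items(), key=lambda item: item[1][0] * item[1][1], reverse=True)
    for description, (severity, frequency) in ranked:
        print(f"score={severity * frequency:2d}  {description}")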

Results/Conclusion

Using the Heuristic Walkthrough evaluation criteria, the evaluation team identified 193 issues, of which 21 were duplicate issues identified by more than one evaluator, 23 were classified as false positives, and 15 were technical bugs, leaving a total of 134 potential usability issues. Of these:

  • 20 issues were related to inconsistency, which increased the learning curve and the cognitive attention required of users
  • 17 issues were related to users being unable to notice or appropriately use a control to complete a task or perform an action
  • 13 issues were related to limited flexibility and/or efficiency in completing steps or tasks
  • 10 issues were related to confusion over what next step the user should take
  • 15 issues were related to general system navigation and layout
  • 74 issues were related to other sources, such as training, vendor fixes, and configuration changes [1]


There are always tradeoffs to consider when using a commercial EHR, because the system may not fully support users' work tasks and workflows. Options include developing workarounds or having the vendor make changes that fit the given environment. Systems that are unable to demonstrate error recovery can leave organizations legally liable for corrupted information they may generate. This case study demonstrated that HW and other UIMs can be useful tools for evaluating and improving EHR usability, but they lacked the ability to identify usability issues that arise from the interaction, sharing, and communication requirements of clinical work. [1]

Comments

Evaluating commercial EHR usability depends on end users' ability to perform their work tasks within their workflows. Usability evaluators are hindered if they lack knowledge of these specific work tasks and workflows. This case study involved only two types of users: a nurse entering an order and a respiratory therapist carrying out an order. While the evaluators used observation criteria to measure usability, they could easily be fooled by the end users when judging whether the EHR benefited the end users' performance or was not a critical tool for the nurses and respiratory therapists to perform their job duties. The case study lacked physicians, pharmacists, technicians, and other staff who could evaluate the commercial EHR's usability for the applications and functions used to track, monitor, and enter medical information, such as patient medical history, allergies, test/lab results, diagnoses, and medications. These usability inspection methods are useful for commercial firms marketing their EHR systems, but for a hospital or clinician's office, usability should be measured with time and performance criteria: whether end users can use the EHR's capabilities to perform their work tasks within their workflow environment, and whether it benefits patient service and safety outcomes. Pediatric hospitals do not equate to all hospitals and clinics; they have different workflow and operational requirements.
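As a hedged illustration of the time-and-performance measurement suggested above, the short Python sketch below computes mean and median time-on-task and an error rate for one work task before and after a hypothetical configuration change. All values, labels, and the choice of metrics are assumptions for illustration only, not data from the study.

    import statistics

    # Hypothetical task timings (in seconds) and error counts for one work task,
    # measured before and after an EHR configuration change; all values are illustrative.
    baseline_times = [212, 187, 240, 205, 198]
    followup_times = [151, 160, 143, 170, 155]

    def summarize(label, times, errors):
        """Print simple time-and-performance usability metrics for a task."""
        print(f"{label}: mean {statistics.mean(times):.0f}s, "
              f"median {statistics.median(times):.0f}s, "
              f"error rate {errors / len(times):.0%}")

    summarize("Baseline ", baseline_times, errors=2)
    summarize("Follow-up", followup_times, errors=0)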

Reference

  1. Edwards, P. J., Moloney, K. P., Jacko, J. A., & Sainfort, F. (2008). Evaluating usability of a commercial electronic health record: A case study. International Journal of Human-Computer Studies, 66, 718–728. http://www.sciencedirect.com.ezproxyhost.library.tmc.edu/science/article/pii/S1071581908000736 (Accessed 23 Oct 2015)