Decision support not an exact science

Schuerenberg BK. Decision Support Not an Exact Science. Health Data Manag. 2007 Nov;15(11):34, 36, 38.

This article surveys the implementation challenges that several provider organizations have faced while adding clinical decision support functionality to their existing computerized provider order entry systems.

At first glance, clinical decision support seems a fairly straightforward endeavor: a mechanism that guides providers through order entry in a manner consistent with quality improvement activities throughout the rest of the health system. In practice, many organizations are discovering just how complex a task they have undertaken. Many implemented decision support without a thorough review of each workflow that a series of new clinical alerts might affect, and many learned only after the fact that there is a fine line between providing clinical guidance and hindering workflow to the point that clinicians become annoyed and ignore all alerts entirely.

Many healthcare provider organizations underestimate the complexity and intense resource requirements inherent in implementing clinical decision support, and they spend a great deal of time modifying the tools after go-live in an ongoing effort to make safety alerts more tolerable to clinicians.

Several facilities have recruited staff physicians, in a paid role and on top of their daily patient care responsibilities, to help develop and refine alerting functionality for medication orders. The intent is to guide providers toward improved clinical practice and to encourage compliance with proven, generally accepted best practices.

Difficult as that work is, building consensus among providers on the development of order sets is more challenging still. Provider organizations typically have hundreds of order sets distributed throughout the hospital. These care guidelines are often specific to a particular unit or patient population, and little is known about practice outside that environment. Order sets are frequently kept in large filing cabinets, where the only person on the unit who knows anything about them is the unit secretary. Sometimes they are not documented anywhere at all, and are simply handed down to younger clinical staff by the “old timers” within the walls of the facility.

More often than not, provider organizations have no idea how many order sets, or what types, are in use at their own facilities. Most also have no process for ongoing review of order set content, and no way to determine when the orders in a given set were last validated for accuracy and compliance with current practice guidelines.
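
The article does not describe how an organization might actually inventory and review its order sets. As a purely illustrative sketch, a simple registry recording when each set was last validated could flag sets overdue for review; the set names, units, and annual review interval below are invented for the example:

    from datetime import date, timedelta

    # Hypothetical order set registry: (name, owning unit, date last validated).
    ORDER_SETS = [
        ("Community-Acquired Pneumonia Admission", "Med/Surg", date(2006, 3, 1)),
        ("Heparin Weight-Based Protocol", "ICU", date(2007, 9, 15)),
        ("Post-Op Hip Replacement", "Ortho", date(2005, 11, 20)),
    ]

    REVIEW_INTERVAL = timedelta(days=365)  # assumed annual review policy

    def overdue_order_sets(today):
        """Return order sets whose last validation exceeds the review interval."""
        return [(name, unit) for name, unit, validated in ORDER_SETS
                if today - validated > REVIEW_INTERVAL]

    for name, unit in overdue_order_sets(date(2007, 11, 1)):
        print(f"Review overdue: {name} ({unit})")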

Most clinical systems on the market include some form of alerting functionality, but whether those alerts are appropriate yet unobtrusive is another matter entirely. These systems frequently include dosing calculators and embedded links to online clinical reference material. One of the most essential aspects of clinical decision support, however, is the ability to report on which alerts are displayed most frequently and what actions clinicians take in response to each.
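
The article does not name a specific reporting tool. As a minimal sketch, assuming an alert log with one record per displayed alert and the clinician’s response, per-alert override rates could be tabulated as follows (the alert identifiers and actions are invented):

    from collections import Counter

    # Hypothetical alert log: (alert_id, action), one record per displayed alert.
    alert_log = [
        ("drug-drug:warfarin-nsaid", "overridden"),
        ("drug-drug:warfarin-nsaid", "accepted"),
        ("dose-range:heparin", "overridden"),
        ("drug-drug:warfarin-nsaid", "overridden"),
    ]

    shown = Counter(alert_id for alert_id, _ in alert_log)
    overridden = Counter(a for a, action in alert_log if action == "overridden")

    # Alerts overridden nearly every time they fire are candidates for tuning
    # or retirement -- the raw material for combating alert fatigue.
    for alert_id, count in shown.most_common():
        rate = overridden[alert_id] / count
        print(f"{alert_id}: shown {count}x, overridden {rate:.0%}")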

One approach in use at many provider organizations is to embed an “exclusion order” within an order set. During order entry, clinical guidelines can suggest that a particular order is recommended based on the patient’s diagnosis; if the provider omits that order, the system prompts the user to specify why it was not entered. Likewise, when a physician overrides an alert, the system can present a follow-up screen directing the physician to document why best practice was not followed, rather than leaving staff to track down each provider and collect the rationale after the fact.
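
The article describes this behavior but not an implementation. A minimal sketch of the prompting logic, with placeholder reason codes that an organization would define through its own governance process, might look like this:

    # Standardized omission reasons; these codes are placeholders, not taken
    # from any actual system.
    EXCLUSION_REASONS = [
        "Contraindicated for this patient",
        "Patient or family declined",
        "Already addressed by another order",
        "Clinical judgment (documented in note)",
    ]

    def check_recommended_order(diagnosis, entered_orders, recommended):
        """Require a coded reason when a guideline-recommended order is omitted."""
        if recommended in entered_orders:
            return None  # recommendation followed; nothing further to document
        print(f"Guideline for {diagnosis} recommends: {recommended}")
        for i, reason in enumerate(EXCLUSION_REASONS, start=1):
            print(f"  {i}. {reason}")
        choice = int(input("Select a reason for omitting this order: "))
        return EXCLUSION_REASONS[choice - 1]  # stored as coded data, not free text

Capturing the response as a coded value rather than free text also supports the alert reporting described above and the standardization argued for in the commentary below.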

Development and ongoing revision of clinical alerts, however, is typically a resource-intensive effort, requiring substantial clinician time to tune and perfect the alerts displayed to the clinical end user.

Commentary

When one reviews the workflows of typical inpatient clinicians, the number of alerts they may be required to view and respond to in a given day can be overwhelming. Moreover, requiring clinical users to comply with practice guidelines within an electronic health record represents a major culture shift. Clinicians frequently must be reminded that their responses to clinical alerts become a permanent part of the patient’s medical record and could be discoverable in litigation if their actions were ever called into question. That possibility is another argument for standardizing responses to clinical alerts and discouraging free-text entries in any segment of the patient’s electronic chart.

Conclusion

Based on the experiences of the provider organizations described in this article, the accuracy and reliability of clinical decision support alerts depend on a number of human and technology factors. The article emphasizes the need to filter the clinician’s alert experience to avoid what is commonly known as “alert fatigue.” It is essential to involve clinicians on the front end of a clinical decision support project, and it is equally important to re-evaluate the relevance of the alerts on an ongoing basis. Ensuring accurate clinical alert functionality is an ongoing process, and it involves far more than simply turning on alerts and expecting clinicians to follow along.

Kevin Connett