Estimating the impact of deploying an electronic clinical decision support tool as part of a national practice improvement project


Review: Ellen K. Kerns, Vincent S. Staggs, Sarah D. Fouquet & Russell J. McCulloh. Estimating the impact of deploying an electronic clinical decision support tool as part of a national practice improvement project. J Am Med Inform Assoc. 2019 Jul 1;26(7):630-636. doi: 10.1093/jamia/ocz011.

Introduction

In today's world, where smartphones are ubiquitous, clinicians also use them in clinical settings to look up information, calculate risk scores, and make decisions. "Mobile device-based electronic clinical decision support (mECDS) tools are increasingly being developed and used to disseminate evidence-based recommendations" to clinicians. Because these mECDS tools are not tied to any EHR, they can be used across different settings.

Research on the impact of these tools on clinical outcomes is minimal; most prior studies focused on distribution, usability, acceptability, or perceptions. Two studies did attempt to evaluate impact: one compared test scores of providers using mECDS tools against those using standard reference tools or memory alone, and another evaluated the effect of adding an mECDS tool to an ongoing antimicrobial stewardship program. The latter measured impact only as a pre/post-intervention comparison and did not account for variation in usage or level of usage.

The American Academy of Pediatrics (AAP) Value in Inpatient Pediatrics (VIP) Network noted variation in clinical practice in the evaluation of infants <60 days of age presenting with fever. To reduce this variation, a nationwide project called Reducing Excessive Variation in Infant Sepsis Evaluation (REVISE) was implemented. A total of 133 sites across the US participated and received a change package with evidence-based recommendations. The package included order sets, an educational presentation, academic detailing materials (posters, handouts), and the PedsGuide app. The PedsGuide app included the mECDS tool called Febrile Infant, which provided physicians with stepwise guidance. The study's objective was to "assess not only the distribution and use of the app, but also to pair such data with large-scale clinical practice and health outcomes data," thereby evaluating whether an association between mECDS tool usage and clinical practice patterns was present.

Materials and Methods

The Febrile Infant app was developed by an interprofessional team with the intent of aligning app content with REVISE outcome metrics. The REVISE study was conducted at 133 sites across the US. Baseline data were collected via chart review of the REVISE core compliance metrics (9/2015 – 8/2016). There was then a 3-month period during which data collection was optional as hospitals worked on implementing the change package and allocating resources to the study. Twelve monthly cycles of post-implementation data were then collected from 12/2016 to 11/2017. The REVISE core compliance metrics in the study were appropriate admission, appropriate length of stay (LOS), urinalysis use, appropriate antibiotic use, and missed serious bacterial infections.

The Febrile Infant app was available from 11/2016 on the Apple App Store and Google Play. The app was introduced to the REVISE sites via webinar in 12/2016 and subsequently promoted via an email listserv. App usage data were collected from 12/2016 to 11/2017 using Google Analytics. Information recorded by Google Analytics included the number of devices on which the app was opened (users), the number of times the app was opened (sessions), the duration of each session, and the number of times each button/screen within the app was touched (events). Sessions were restricted to those touching the febrile-infant-specific app pages, and events to whether the user viewed content related to the 5 compliance metrics.

Google Analytics also captured the latitude and longitude of the cell tower or WiFi hotspot; these data were de-identified and aggregated regionally to the Designated Market Area (DMA). The DMA of each REVISE site was assigned by spatial join with a map of the DMA boundaries: sites falling within a DMA boundary were assigned to that DMA.

Two measures were derived to assess association with the outcome metrics. The first, hits per case, was calculated for each DMA-month by dividing the total number of metric-related screen views by the total number of febrile infant cases reported within the DMA for that month; it served as a rough estimate of monthly app usage density for the DMA. The second, cumulative prior metric hits per site, reflected knowledge gained from app use over time; it was calculated by summing metric-related screen views by users in the DMA across months and dividing by the number of sites in the DMA.
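The DMA assignment and the two derived usage measures can be illustrated with a minimal Python sketch. The paper does not publish its analysis code, so this is only one plausible reading of the described calculations; all file names, column names, and values below are hypothetical, and whether "cumulative prior" excludes the current month is an assumption noted in the comments.

```python
import geopandas as gpd
import pandas as pd

# --- Assign each REVISE site to a DMA via spatial join (hypothetical files) ---
sites = gpd.read_file("revise_sites.geojson")      # one point per site
dma_map = gpd.read_file("dma_boundaries.geojson")  # one polygon per DMA
sites = gpd.sjoin(sites, dma_map[["dma", "geometry"]],
                  how="left", predicate="within")  # site inherits containing DMA

# --- Hypothetical monthly aggregates per DMA ---
views = pd.DataFrame({   # metric-related screen views (events) per DMA-month
    "dma":   ["A", "A", "A", "B", "B", "B"],
    "month": ["2016-12", "2017-01", "2017-02"] * 2,
    "views": [40, 25, 15, 60, 30, 20],
})
cases = pd.DataFrame({   # febrile infant cases reported by REVISE sites
    "dma":   ["A", "A", "A", "B", "B", "B"],
    "month": ["2016-12", "2017-01", "2017-02"] * 2,
    "n_cases": [10, 8, 9, 12, 11, 10],
})
n_sites = pd.DataFrame({"dma": ["A", "B"], "n_sites": [2, 3]})  # sites per DMA

usage = views.merge(cases, on=["dma", "month"]).merge(n_sites, on="dma")
usage = usage.sort_values(["dma", "month"]).reset_index(drop=True)

# Measure 1: hits per case for each DMA-month (app usage density)
usage["hits_per_case"] = usage["views"] / usage["n_cases"]

# Measure 2: cumulative *prior* metric hits per site (knowledge gained over
# time); subtracting the current month keeps only hits accrued before it,
# which is one reading of "prior" -- the review does not specify.
cum_prior = usage.groupby("dma")["views"].cumsum() - usage["views"]
usage["cum_prior_hits_per_site"] = cum_prior / usage["n_sites"]

print(usage)
```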

Results

Of the 133 sites, 10 dropped out due to incomplete or missing data, so the sample analyzed comprised 123 sites in 64 DMAs. The number of cumulative metric hits per site in a DMA was a statistically significant predictor of site performance for three of the five metrics: each increase of 200 cumulative hits per site was associated with a 12% increase in the odds of appropriate admission, a 20% increase in the odds of appropriate LOS, and an 18% decrease in the odds of inappropriate chest x-ray. These corresponded to only small absolute changes from baseline: increases of 2% and 2.8% in the rates of appropriate admission and appropriate LOS, and a decrease of 2.8% in inappropriate chest x-ray, respectively. The number of metric hits per febrile case was also a statistically significant predictor of the same three metrics: each additional 10 metric hits per case was associated with 18% higher odds of appropriate admission, 36% higher odds of appropriate LOS, and 26% lower odds of inappropriate chest x-ray.
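The gap between double-digit changes in odds and small absolute rate changes follows from the odds transformation itself. A short sketch of the conversion, using the reported odds ratios but assumed baseline rates (the actual baselines appear in the paper, not in this review):

```python
def rate_after_or(baseline_rate: float, odds_ratio: float) -> float:
    """Apply an odds ratio to a baseline rate and return the implied new rate."""
    odds = baseline_rate / (1 - baseline_rate)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

# Baseline rates below are illustrative assumptions, not figures from the paper.
examples = [
    ("appropriate admission",     0.85, 1.12),  # OR per +200 cumulative hits/site
    ("appropriate LOS",           0.70, 1.20),
    ("inappropriate chest x-ray", 0.20, 0.82),
]
for metric, baseline, or_ in examples:
    new = rate_after_or(baseline, or_)
    print(f"{metric}: {baseline:.0%} -> {new:.1%} ({(new - baseline) * 100:+.1f} pts)")
```

Under these assumed baselines the odds ratios translate into shifts of only a few percentage points, consistent with the small changes from baseline reported above.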

Discussion

This study identified an association between geographically aggregated mECDS tool usage data and clinical practice data. The results suggested that the app facilitated learning over time: usage decreased over the study period, yet early use remained influential. A novel insight from the study was the identification of a common unit of observation, the DMA, which made it possible to combine two large-scale, disparate data sets. The study also uncovered a possible association between the presence of pediatric emergency physicians and appropriate use of chest x-ray diagnostics (p = 0.056).

Limitations

The analysis was ecological: app usage could only be measured at the DMA level while outcomes were measured at the REVISE-site level, so the analysis had to assume that site outcomes reflected DMA-level app usage. Review of the DMA map also showed app use in areas with no REVISE sites, which raises questions about the accuracy of the geolocation data and whether app use attributed to REVISE sites was actually occurring at non-participating sites. The hits-per-case measure was likely inflated for larger sites because REVISE project guidelines capped febrile infant case collection at 20 per site per month; a site seeing, say, 50 cases in a month would still report only 20, shrinking the denominator and inflating its apparent usage density. Finally, it is possible that REVISE sites that were more adherent to the change package overall also used the app more during the study, confounding the association.

Conclusion

mECDS tools are increasingly being used in practice, so their impact on clinical practice should be evaluated. Despite its limitations, this paper is the first to show an association between mECDS usage and clinical practice. As technology improves, the ability to accurately measure usage and associate it with clinical practice will enable mECDS to advance.

Submitted by Leyla Warsame