A randomized trial of electronic clinical reminders to improve medication laboratory monitoring

From Clinfowiki

Objective

Recommendations for routine laboratory monitoring to reduce the risk of adverse medication events are not consistently followed. We evaluated the impact of electronic reminders delivered to primary care physicians on rates of appropriate routine medication laboratory monitoring.

Matheny ME, et al. A Randomized Trial of Electronic Clinical Reminders to Improve Medication Laboratory Monitoring. J Am Med Inform Assoc. 2008;15:424–429. doi:10.1197/jamia.M2602.




Annual reminders were sent only for patients who had been on a maintenance medication continuously for more than one year and who had not had the appropriate laboratory tests within the past year. The reminders were active during a six-month period (January–June), which may limit the generalizability of the study. It would be interesting to know whether serious adverse effects are more often related to failure to obtain appropriate baseline studies or to failure to obtain maintenance monitoring studies.
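The eligibility rule described above can be sketched as a simple predicate. This is a hypothetical illustration, not the study's actual implementation; the function name, its parameters, and the 365-day approximation of "one year" are all assumptions:

```python
from datetime import date, timedelta
from typing import Optional

ONE_YEAR = timedelta(days=365)  # assumed approximation of "one year"

def reminder_due(med_start: date, last_lab: Optional[date], today: date) -> bool:
    """Fire an annual monitoring reminder only when the patient has been on
    the maintenance medication continuously for more than one year AND has
    had no appropriate monitoring laboratory test within the past year."""
    on_med_over_a_year = (today - med_start) > ONE_YEAR
    lab_overdue = last_lab is None or (today - last_lab) > ONE_YEAR
    return on_med_over_a_year and lab_overdue

# On the medication since March 2002, last monitoring lab July 2002: fires.
print(reminder_due(date(2002, 3, 1), date(2002, 7, 1), date(2004, 1, 15)))  # True
# Started the medication recently: no reminder, even with no lab on file.
print(reminder_due(date(2003, 12, 1), None, date(2004, 1, 15)))  # False
```

Note that both conditions must hold: a recently started medication generates no reminder even when no monitoring lab is on file, which matches the study's restriction to long-term maintenance therapy.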


Design

We enrolled 303 primary care physicians across 20 ambulatory clinics, caring for 1,922 patients who had at least one overdue routine laboratory test for a given medication between January and June 2004. Clinics were randomized so that physicians received either usual care or electronic reminders at the time of office visits, focused on potassium, creatinine, liver function, thyroid function, and therapeutic drug levels.

This was not true randomization at the level of the individual provider. The practice sites were randomized in a stratified manner (to balance site type and the socioeconomic level of the patient population), but the individual providers were not. The outcomes, however, depended on the behavior of individual providers rather than of the aggregate clinic.

The reminders appeared passively in a single window at the top of the summary screen, not as a pop-up dialog box. The font size and color were the same as the information in other windows; only the area surrounding the label ("Reminders") identifying the window was red. A physician could easily have ignored the reminder, a possibility the authors acknowledged.

There were apparently 464 physicians at these sites, and it is not clear why the other 161 were not enrolled. Did the 303 enrolled physicians consent while the others did not? Was there some significant difference between the two groups? This was not examined. Moreover, even though the practice sites were randomly assigned to the two conditions, all of the physicians presumably understood they were being observed and knew the purpose of the study, which would have increased the likelihood that they would order the appropriate tests.

Measurements (Outcomes)

Primary outcomes were the receipt of recommended laboratory monitoring within 14 days following an outpatient clinic visit. The effect of the intervention was assessed for each reminder after adjusting for clustering within clinics, as well as patient and provider characteristics.
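The primary outcome can be expressed as a simple window check. The sketch below is hypothetical and covers only the 14-day definition; it does not reproduce the paper's adjustment for clustering within clinics or for patient and provider characteristics:

```python
from datetime import date, timedelta
from typing import Iterable

def monitored_within_window(visit: date, lab_dates: Iterable[date],
                            window_days: int = 14) -> bool:
    """Primary outcome: was the recommended laboratory test received
    within 14 days following the outpatient clinic visit?"""
    deadline = visit + timedelta(days=window_days)
    return any(visit <= d <= deadline for d in lab_dates)

visit = date(2004, 2, 2)
print(monitored_within_window(visit, [date(2004, 2, 10)]))  # True: within 14 days
print(monitored_within_window(visit, [date(2004, 3, 1)]))   # False: too late
```

A binary per-visit outcome like this is what the study's adjusted odds ratios would then be computed over.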

Is it possible that patients had laboratory work done at a lab outside the Partners system and its central data repository? The authors considered this possibility and did not think it was very likely. If a test was done outside the Partners system, the physician may have been aware of the result and appropriately not reordered it, but would have had no way to record that in the system.

The proxy outcome is whether the patient went to the laboratory and the test was performed, which depends on patient behavior outside the physician's control. An intermediate process outcome would be whether the physician issued an appropriate laboratory order at the time of the encounter; there was apparently no computerized physician order entry system in place in the outpatient clinics that could have captured and analyzed this information.

The true outcomes of interest are a reduction in serious adverse events related to inadequate laboratory monitoring and in the hospitalizations those events cause. These outcomes were not measured; they are very rare events, and detecting a significant difference would require a much larger sample population.


Results

Medication-laboratory monitoring non-compliance ranged from 1.6% (potassium monitoring with potassium-supplement use) to 6.3% (liver function monitoring with HMG-CoA reductase inhibitor use). Rates of appropriate laboratory monitoring following an outpatient visit ranged from 14% (therapeutic drug levels) to 64% (potassium monitoring with potassium-sparing diuretic use). Reminders had no impact on rates of receiving appropriate testing for creatinine, potassium, liver function, thyroid function, or therapeutic drug levels.


Conclusions

We identified high rates of appropriate laboratory monitoring, and electronic reminders did not significantly improve these monitoring rates. Future studies should focus on settings with lower baseline adherence rates and alternate drug-laboratory combinations.

This was a negative study, demonstrating no benefit from this type of passive electronic reminder. Baseline compliance was already high and may have been close to a "ceiling" rate. A more active reminder system (alerts that must be acknowledged) could be considered to push compliance rates closer to, or perhaps beyond, that ceiling; such an active alert could include a button that prints the appropriate laboratory order.

John F. Bober, MD