EHR-enabled Research

From Clinfowiki


Parallel rises in prospective trial costs and available clinical data have fueled interest in improving the efficiency of clinical research. Recent U.S. investment in healthcare information technology has led to near-ubiquitous use of Electronic Health Records (EHRs) and an exponential increase in clinical data[1]. This has encouraged the use of clinical registries in comparative effectiveness research and even in clinical decision-making[2][3][4]. However, while retrospective data analysis will undoubtedly play an important role in the learning health care system, prospective, randomized clinical trials (RCTs), the gold standard for experimental research, will always serve an essential role[5]. RCTs have produced the highest level of evidence in clinical medicine, though at best they represent evidence for 20% of current clinical practices[6]. With the rising costs of clinical trials threatening to crowd out clinical discovery, innovating new methods for conducting affordable randomized trials has been deemed “mission critical”, and the EHR has been identified as a powerful potential tool for prospective research[7]. Though the concept of integrating prospective research into clinical workflow is not new, it is the very recent critical mass of electronic information infrastructure that has made the platforms readily available; Meaningful Use and the rapid adoption of EHRs have intersected with a critical era of clinical research[8]. While regulatory and ethical issues remain dynamic topics, EHR-enabled RCTs represent a “disruptive technology” that lowers costs by leveraging existing infrastructure and clinical data collection[9].


While several uses of EHR-enabled research have been described, one of the most practical has been trial recruitment[10]. Given the wide array of discrete data elements within the EHR, automated identification of potential trial participants requires only simple filtering mechanisms to single out patients as they meet predetermined criteria[11][12]. Researchers have even conducted meta-research on this topic, examining different approaches to recruitment, including EHR-generated patient lists and notifications to clinicians[13]. Evidence of the expected return can be seen in the significant investment made to build robust systems that use both retrospective and prospective data to aid recruitment[14].
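The filtering described above can be illustrated with a minimal sketch. All field names, criteria, and thresholds below are hypothetical, invented for illustration; they are not drawn from any of the cited recruitment studies, which each defined their own eligibility logic.

```python
# Hypothetical sketch: screening discrete EHR data elements against
# predetermined trial eligibility criteria. Field names and thresholds
# are illustrative only, not taken from any cited study.

def is_potentially_eligible(patient: dict) -> bool:
    """Return True if a patient record passes simple inclusion/exclusion filters."""
    inclusion = (
        patient.get("age", 0) >= 18
        and "diabetes_type_2" in patient.get("problem_list", [])
        and patient.get("hba1c", 0.0) >= 7.5
    )
    exclusion = (
        patient.get("pregnant", False)
        or "insulin_pump" in patient.get("active_devices", [])
    )
    return inclusion and not exclusion

# Screen a mock patient population and build a list for coordinator
# review -- automation flags candidates; humans confirm eligibility.
patients = [
    {"id": "A1", "age": 54, "problem_list": ["diabetes_type_2"], "hba1c": 8.1,
     "pregnant": False, "active_devices": []},
    {"id": "B2", "age": 61, "problem_list": ["hypertension"], "hba1c": 6.2,
     "pregnant": False, "active_devices": []},
]
candidates = [p["id"] for p in patients if is_potentially_eligible(p)]
print(candidates)  # ['A1']
```

In practice, such queries run against coded problem lists and laboratory results rather than flat dictionaries, but the underlying logic remains this kind of Boolean filter over discrete data elements.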

Point of Care Clinical Trials

While identification of eligible cohorts is an attractive use of clinical registries and EHRs, some researchers have taken this use one step further to conduct randomized clinical trials at the point of care. Authors of the Thrombus Aspiration in ST-Elevation Myocardial Infarction in Scandinavia (TASTE) trial tapped into the existing workflow and data collection of the Swedish Angiography and Angioplasty Registry (SCAAR) platform to randomize 5,000 patients with ST-elevation myocardial infarctions (STEMI), representing 60% of all STEMI patients referred for PCI in Sweden and Iceland during the study period[15]. The SCAAR registry is part of the Swedish Web-system for Enhancement and Development of Evidence-based care in Heart disease According to Recommended Therapies (SWEDEHEART), a nationwide registry developed by the late Professor Ulf Stenestrand in response to a growing interest in tracking ischemic heart disease. In fact, there had been 100% voluntary adoption of the preceding clinical registry in Sweden, the Register of Information and Knowledge about Swedish Heart Intensive Care Admissions (RIKS-HIA). In this trial, coordinators verbally consented patients after angiographic evidence of a STEMI had been confirmed, and used a randomization module within the registry to assign patients to either thrombus aspiration plus percutaneous coronary intervention (PCI) or PCI alone. Researchers were able to achieve an astounding 100% follow-up rate and streamlined data capture, all at an unheard-of incremental cost of $50 per patient. Authors of this study proposed the term “randomized clinical registry trial” to describe their use of a clinical registry to enable a prospective trial, a methodology that generated an enormous amount of subsequent interest in the efficiency of this type of trial[16].
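Conceptually, the registry-embedded randomization module works as a gated 1:1 assignment. The sketch below is an assumption-laden illustration of that pattern only; the arm names come from the trial description above, but the function, its signature, and the consent gating are invented for illustration and do not describe the actual SCAAR software.

```python
# Illustrative sketch (not SCAAR's actual implementation) of a
# registry-embedded randomization module: once verbal consent is
# documented, the registry assigns the patient 1:1 to a treatment arm.
import random

ARMS = ("thrombus_aspiration_plus_PCI", "PCI_alone")

def randomize(patient_id: str, consented: bool, rng: random.Random) -> str:
    """Assign a consented patient 1:1 to a treatment arm."""
    if not consented:
        raise ValueError("Randomization requires documented consent")
    arm = rng.choice(ARMS)
    # In a real registry, the assignment would be written back to the
    # patient's record so routine data capture doubles as trial follow-up.
    return arm

rng = random.Random(42)  # seeded only so this sketch is reproducible
assignment = randomize("SCAAR-0001", consented=True, rng=rng)
print(assignment in ARMS)  # True
```

The efficiency gain comes from the write-back step: because the registry already captures clinical outcomes as part of routine care, randomization is the only trial-specific action required at the point of care.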

In the United States, researchers with the SAFE-PCI for Women trial similarly benefited from existing registry infrastructure to reduce data acquisition costs in a randomized trial comparing femoral versus radial access for PCI[17]. Use of the CathPCI Registry in this trial was enabled via an NIH-funded Duke-ACCF collaboration built through the National Cardiovascular Research Infrastructure (NCRI). The authors used the CathPCI Registry to populate demographic data into a trial-specific database, which was then supplemented with case report form (CRF) data specific to the trial, resulting in an estimated 65% decrease in site coordinator workload. Both of these registry-based RCTs represent novel uses of clinical registries in research and make important inroads into reducing operational barriers to clinical trial support. Still, these methods did not eliminate all redundant data entry, illustrating the need for further meshing of clinical and research workflow. As clinical workflow increasingly lives in the EHR, conducting research through it has inherent advantages over independent clinical registries.

Many vendor EHRs contain functionality that, with minimal customization, could be used for prospective research. Using the EHR directly for point-of-care randomization has the significant advantage that it already contains most clinical workflow and data. D’Avolio et al. launched the first Department of Veterans Affairs point-of-care clinical trial (POCCT) using the VA’s EHR, the Computerized Patient Record System (CPRS)[18][19]. In this study, an alert was delivered to clinicians during their standard clinical workflow, notifying them of eligible patients at the point of care. As seen in the screen clip from the paper, a clinician prescribing insulin is offered the option to enroll the patient in this RCT comparing two insulin regimens.
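The alert-driven enrollment pattern described above can be sketched as an event handler on order entry. Everything below is a hypothetical illustration of the pattern, not CPRS functionality: the function name, field names, and prompt text are all invented.

```python
# Hypothetical sketch of point-of-care trial enrollment via an EHR alert:
# signing an order triggers an eligibility check, and an alert offers the
# clinician the option to enroll the patient. Names are illustrative only.

def on_order_signed(order: dict, patient: dict, trial_open: bool):
    """Return an enrollment prompt if the signed order makes the patient eligible."""
    if not trial_open:
        return None  # trial closed: never interrupt the clinician
    if order.get("medication") == "insulin" and not patient.get("enrolled", False):
        return (f"Patient {patient['id']} may be eligible for the insulin "
                "regimen trial. Enroll at point of care? [Yes/No/Defer]")
    return None  # no match: clinical workflow proceeds uninterrupted

prompt = on_order_signed(
    order={"medication": "insulin"},
    patient={"id": "VA-123", "enrolled": False},
    trial_open=True,
)
print(prompt is not None)  # True
```

The key design choice this illustrates is that the trigger fires inside an action the clinician is already taking (prescribing insulin), so eligibility screening adds no separate step to the workflow.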

The authors highlight the importance of aligning the trial processes with clinical care and with the functionality of the EHR. As seen in Table 2 of their paper, the authors mapped these processes to one another and tried to avoid customization to maximize generalizability across VA institutions. Furthermore, this study generated automatic documentation to minimize workload for clinicians and study personnel. These concepts should be considered in the list of best practices that will likely emerge from early experiences. The authors do note that they lacked a centralized database with the complete data elements needed for the trial. They also take care to acknowledge that the methods in this trial may well not make sense in other clinical settings, such as mental health care. Additional important operational considerations for EHR POCCTs include research-specific billing, the use of a patient portal for patient-reported outcomes, integration of case report forms into the EHR, and the reliability of existing EHR data[20]; each of these topics merits careful consideration and further study. While the VA POCCT trial has important scalable implications given the scope of the VA health system, as several commentaries have noted, issues of clinical equipoise, ethics, workflow burden, and informed consent are perhaps all the more important because of the scale[21].

Ethics, Consent, Quality Improvement and Research

Informed consent remains a lightning rod in comparative effectiveness research. In a recent NEJM Perspective, Magnus et al. address the OHRP’s reprimand of the authors of the Surfactant, Positive Pressure, and Oxygenation Randomized Trial (SUPPORT)[22][23]. This trial randomized neonates to either low or high oxygen saturations, with both arms’ parameters within the current standard of care[24]. In contrast to the outcry regarding the potential risks of this trial, the authors of the perspective piece argue that this was a situation of clinical equipoise, and that research and randomization do not inherently expose participants to additional risk. Commenters replied that the study’s design might have increased the likelihood that neonates would receive oxygen saturations at the extreme ends of the spectrum, which, despite residing within the standard of care, would be riskier than clinician-directed standard of care. The nuances of this discussion reveal some of the pertinent issues around consent and usual care; we will benefit from continued discussion about operational definitions of clinical equipoise and systematic approaches to clinical scenarios that do not fit this definition. In addition, as the lines between clinical research and quality improvement become increasingly blurred, we need further discussion and agreement, perhaps via the OHRP, on what defines research and what defines quality improvement. In response to the Institute of Medicine’s call for a Learning Healthcare System, defined as a system “in which knowledge generation is so embedded into the core of the practice of medicine that it is a natural outgrowth and product of the healthcare delivery process and leads to continual improvement in care”, Faden et al. outline an ethical framework that protects what is “important to patients” while reducing the barriers to conducting comparative effectiveness research[25].
Faden and colleagues also address the crucial topic of consent in randomized comparative effectiveness studies[26]. Notably, they argue that consent should not necessarily always be required: current practices are often burdensome in ways that prevent research that patients likely value tremendously (e.g., research on improving patient safety), while the protections surround issues that are not important to patients (e.g., informed consent for an intervention that poses no additional risk). These discussions about the ethics of research and quality improvement in the learning healthcare system could not be timelier given the intersection between the EHR and comparative effectiveness research. A recent article outlines a national strategy for this type of research, noting the rising opportunity to study these clinical questions[27].

Quality improvement in healthcare generally refers to efforts to improve processes that affect patient care and, as such, does not fall within the purview of the Institutional Review Board (IRB). Research, on the other hand, is a systematic evaluation that seeks to add to the greater body of knowledge and is generally subject to oversight by the IRB. As the learning healthcare system quickly becomes a reality, increasingly enabled by the EHR, there are growing examples of clinical discovery that do not fit neatly into either quality improvement or research, but straddle the two. Take the example of an effort to improve the recognition of acute myocardial infarction through EHR-based clinical decision support that offers point-of-care suggestions for the diagnostic workup based on a particular patient’s pre-test probability. This could clearly fall into the realm of quality improvement and thus not require IRB approval, and in turn not require the consent of patients who might receive the intervention. On the other hand, one could imagine a trial aimed at the systematic evaluation of a particular clinical decision support tool that would be broadly applicable and easily generalizable to other institutions, and thus fall more toward the realm of research. Independent of its classification, this type of intervention could certainly carry additional risks for patients; one could imagine a clinician claiming that the “computer didn’t say the patient had a heart attack, so I discharged him home”. Yet patients would only potentially give informed consent if the intervention were classified as research; they may be subject to the same potential harms under quality improvement projects. There are good reasons for concern that electronic interventions can cause unintended consequences, even harm – a topic that merits its own discussion[28][29][30].
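A hypothetical decision support tool of the kind imagined above might update diagnostic likelihood using the standard likelihood-ratio conversion: post-test odds equal pre-test odds times the likelihood ratio of the test result. The formula is standard Bayesian diagnostics; the specific probabilities and likelihood ratio below are assumed values for illustration, not figures from any cited system.

```python
# Standard pre-test to post-test probability conversion via odds and
# likelihood ratios; the numeric inputs below are assumptions for
# illustration, not values from any cited decision support system.

def post_test_probability(pre_test_p: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability to a post-test probability."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)   # probability -> odds
    post_odds = pre_odds * likelihood_ratio      # apply the test result
    return post_odds / (1.0 + post_odds)         # odds -> probability

# Example: a 20% pre-test probability of MI and a positive test with an
# assumed positive likelihood ratio of 9 yields ~69% post-test probability.
p = post_test_probability(0.20, 9.0)
print(round(p, 2))  # 0.69
```

Whether such a calculation runs as an unevaluated quality improvement feature or as the subject of a randomized evaluation is precisely the classification question the paragraph above raises.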
A prominent example of this type of harm was the unexpected increase in mortality after the implementation of a computerized physician order entry system at Children’s Hospital of Pittsburgh[31]. The fact that subsequent implementations showed no increase, or even a decrease, in mortality speaks to the importance of operational details in clinical informatics[32][33]. We clearly need to heed these cautionary examples, rapidly evaluate them, and continually refine best practices[34][35][36].


  1. D’Avolio LW. Electronic Medical Records at a Crossroads: Impetus for Change or Missed Opportunity? JAMA. American Medical Association; 2009;302(10):1109-1111
  2. Longhurst CA, Harrington RA, and Shah NH. A 'green button' for using aggregate patient data at the point of care. Health Aff (Millwood). United States; 2014;33(7):1229-35.
  3. Fiks AG, Grundmeier RW, Margolis B, Bell LM, Steffes J, Massey J, and Wasserman RC. Comparative effectiveness research using the electronic medical record: an emerging area of investigation in pediatric primary care. J Pediatr. United States; 2012;160(5):719-24
  4. Luce BR, Kramer JM, Goodman SN, Connor JT, Tunis S, Whicher D, and Schwartz JS. Rethinking randomized clinical trials for comparative effectiveness research: the need for transformational change. Ann Intern Med. United States; 2009;151(3):206-9.
  5. Benson K, and Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med. UNITED STATES; 2000;342(25):1878-86.
  6. Tricoci P, Allen JM, Kramer JM, Califf RM, and Smith SC. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA. United States; 2009;301(8):831-41
  7. Antman EM, and Harrington RA. Transforming clinical trials in cardiovascular disease: mission critical for health and economic well-being. JAMA. United States; 2012;308(17):1743-4
  8. Vickers AJ, and Scardino PT. The clinically-integrated randomized trial: proposed novel method for conducting large trials at low cost. Trials. England; 2009. p. 14
  9. Lauer MS, and D'Agostino RB. The randomized registry trial--the next disruptive technology in clinical research? N Engl J Med. United States; 2013;369(17):1579-81
  10. Simpson LA, Peterson L, Lannon CM, Murphy SB, Goodman C, Ren Z, and Zajicek A. Special challenges in comparative effectiveness research on children's and adolescents' health. Health Aff (Millwood). United States; 2010;29(10):1849-56
  11. Hawkins MS, Hough LJ, Berger MA, Mor MK, Steenkiste AR, Gao S, Stone RA, Burkitt KH, Marcus BH, Ciccolo JT, Kriska AM, Klinvex DT, and Sevick MA. Recruitment of veterans from primary care into a physical activity randomized controlled trial: the experience of the VA-STRIDE study. Trials. England; 2014;15:11
  12. Navaneethan SD, Jolly SE, Sharp J, Jain A, Schold JD, Schreiber MJ, and Nally JV. Electronic health records: a new tool to combat chronic kidney disease? Clin Nephrol. Germany; 2013;79(3):175-83
  13. Grundmeier RW, Swietlik M, and Bell LM. Research subject enrollment by primary care pediatricians using an electronic health record. AMIA Annu Symp Proc. United States; 2007;:289-93
  14. Ferranti JM, Gilbert W, McCall J, Shang H, Barros T, and Horvath MM. The design and implementation of an open-source, data-driven cohort recruitment system: the Duke Integrated Subject Cohort and Enrollment Research Network (DISCERN). Journal of the American Medical Informatics Association. BMJ Publishing Group Ltd; 2012;19(e1):e68-e75
  15. Fröbert O, Lagerqvist B, Olivecrona GK, Omerovic E, Gudnason T, Maeng M, Aasa M, Angerås O, Calais F, Danielewicz M, Erlinge D, Hellsten L, Jensen U, Johansson AC, Kåregren A, Nilsson J, Robertson L, Sandhall L, Sjögren I, Östlund O, Harnek J, and James SK. Thrombus Aspiration during ST-Segment Elevation Myocardial Infarction. N Engl J Med. Massachusetts Medical Society; 2013;369(17):1587-1597.
  16. Fröbert O, Lagerqvist B, Gudnason T, Thuesen L, Svensson R, Olivecrona GK, and James SK. Thrombus Aspiration in ST-Elevation myocardial infarction in Scandinavia (TASTE trial). A multicenter, prospective, randomized, controlled clinical registry trial based on the Swedish angiography and angioplasty registry (SCAAR) platform. Study design and rationale. Am Heart J. United States; 2010;160(6):1042-8
  17. Hess CN, Rao SV, Kong DF, Aberle LH, Anstrom KJ, Gibson CM, Gilchrist IC, Jacobs AK, Jolly SS, Mehran R, Messenger JC, Newby LK, Waksman R, and Krucoff MW. Embedding a randomized clinical trial into an ongoing registry infrastructure: unique opportunities for efficiency in design of the Study of Access site For Enhancement of Percutaneous Coronary Intervention for Women (SAFE-PCI for Women). Am Heart J. United States; 2013;166(3):421-8
  18. D'Avolio L, Ferguson R, Goryachev S, Woods P, Sabin T, O'Neil J, Conrad C, Gillon J, Escalera J, Brophy M, Lavori P, and Fiore L. Implementation of the Department of Veterans Affairs' first point-of-care clinical trial. J Am Med Inform Assoc. United States; 2012;19(e1):e170-6
  19. Fiore LD, Brophy M, Ferguson RE, D'Avolio L, Hermos JA, Lew RA, Doros G, Conrad CH, O'Neil JA, Sabin TP, Kaufman J, Swartz SL, Lawler E, Liang MH, Gaziano JM, and Lavori PW. A point-of-care clinical trial comparing insulin administered using a sliding scale versus a weight-based regimen. Clin Trials. England; 2011;8(2):183-95
  20. Hersh WR, Weiner MG, Embi PJ, Logan JR, Payne PR, Bernstam EV, Lehmann HP, Hripcsak G, Hartzog TH, Cimino JJ, and Saltz JH. Caveats for the use of operational electronic health record data in comparative effectiveness research. Med Care. United States; 2013;51(8 Suppl 3):S30-7
  21. Weir CR, Butler J, Thraen I, Woods PA, Hermos J, Ferguson R, Gleason T, Barrus R, and Fiore L. Veterans Healthcare Administration providers' attitudes and perceptions regarding pragmatic trials embedded at the point of care. Clin Trials. 2014;11(3):292-299
  22. Magnus D, and Caplan AL. Risk, consent, and SUPPORT. N Engl J Med. United States; 2013;368(20):1864-5
  23. Wilfond BS, Magnus D, Antommaria AH, Appelbaum P, Aschner J, Barrington KJ, Beauchamp T, Boss RD, Burke W, Caplan AL, Capron AM, Cho M, Clayton EW, Cole FS, Darlow BA, Diekema D, Faden RR, Feudtner C, Fins JJ, Fost NC, Frader J, Hester DM, Janvier A, Joffe S, Kahn J, Kass NE, Kodish E, Lantos JD, McCullough L, McKinney R, Meadow W, O'Rourke PP, Powderly KE, Pursley DM, Ross LF, Sayeed S, Sharp RR, Sugarman J, Tarnow-Mordi WO, Taylor H, Tomlinson T, Truog RD, Unguru YT, Weise KL, Woodrum D, and Youngner S. The OHRP and SUPPORT. N Engl J Med. United States; 2013. p. e36
  24. Stevens TP, Finer NN, Carlo WA, Szilagyi PG, Phelps DL, Walsh MC, Gantz MG, Laptook AR, Yoder BA, Faix RG, Newman JE, Das A, Do BT, Schibler K, Rich W, Newman NS, Ehrenkranz RA, Peralta-Carcelen M, Vohr BR, Wilson-Costello DE, Yolton K, Heyne RJ, Evans PW, Vaucher YE, Adams-Chapman I, McGowan EC, Bodnar A, Pappas A, Hintz SR, Acarregui MJ, Fuller J, Goldstein RF, Bauer CR, O'Shea TM, Myers GJ, Higgins RD, and SUPPORT Study Group of the Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network. Respiratory Outcomes of the Surfactant Positive Pressure and Oximetry Randomized Trial (SUPPORT). J Pediatr. United States; 2014;165(2):240-249.e4
  25. Faden RR, Kass NE, Goodman SN, Pronovost P, Tunis S, and Beauchamp TL. An ethics framework for a learning health care system: a departure from traditional research ethics and clinical ethics. Hastings Cent Rep. United States; 2013;Spec No:S16-27
  26. Faden RR, Beauchamp TL, and Kass NE. Informed consent, comparative effectiveness, and learning health care. N Engl J Med. United States; 2014;370(8):766-8
  27. Concannon TW, Guise JM, Dolor RJ, Meissner P, Tunis S, Krishnan JA, Pace WD, Saltz J, Hersh WR, Michener L, and Carey TS. A national strategy to develop pragmatic clinical trials infrastructure. Clin Transl Sci. United States; 2014;7(2):164-71
  28. Campbell EM, Sittig DF, Ash JS, Guappone KP, and Dykstra RH. Types of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. United States; 2006;13(5):547-56
  29. Weiner JP, Kfuri T, Chan K, and Fowles JB. "e-Iatrogenesis": the most critical unintended consequence of CPOE and other HIT. J Am Med Inform Assoc. United States; 2007;14(3):387-8; discussion 389
  30. Bernstam EV, Hersh WR, Sim I, Eichmann D, Silverstein JC, Smith JW, and Becich MJ. Unintended consequences of health information technology: a need for biomedical informatics. J Biomed Inform. United States; 2010;43(5):828-30
  31. Han YY, Carcillo JA, Venkataraman ST, Clark RS, Watson RS, Nguyen TC, Bayir H, and Orr RA. Unexpected Increased Mortality After Implementation of a Commercially Sold Computerized Physician Order Entry System. Pediatrics. American Academy of Pediatrics; 2005;116(6):1506-1512
  32. Del Beccaro MA, Jeffries HE, Eisenberg MA, and Harry ED. Computerized provider order entry implementation: no association with increased mortality rates in an intensive care unit. Pediatrics. United States; 2006;118(1):290-5
  33. Longhurst CA, Parast L, Sandborg CI, Widen E, Sullivan J, Hahn JS, Dawes CG, and Sharek PJ. Decrease in Hospital-wide Mortality Rate After Implementation of a Commercially Sold Computerized Physician Order Entry System. Pediatrics. American Academy of Pediatrics; 2010;126(1):14-21
  34. Bates DW, Kuperman GJ, Wang S, Gandhi T, Kittler A, Volk L, Spurr C, Khorasani R, Tanasijevic M, and Middleton B. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc. United States; 2003;10(6):523-30
  35. Samal L, Wright A, Healey MJ, Linder JA, and Bates DW. Meaningful use and quality of care. JAMA Intern Med. United States; 2014;174(6):997-8
  36. Sittig DF, Ash JS, Zhang J, Osheroff JA, and Shabot MM. Lessons from "Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system". Pediatrics. United States; 2006;118(2):797-801

Submitted by N. Lance Downing, MD