A tool that helps more than 1,400 hospitals evaluate the effectiveness of their computerized provider order entry (CPOE) systems from a patient safety perspective is being revised to include the latest formularies, labs, and procedures. The updated platform will also share information on test results with EHR vendors and patient safety organizations.
David Classen, M.D., an associate professor of medicine at the University of Utah, described the changes during an Aug. 29 webinar sponsored by the Agency for Healthcare Research and Quality (AHRQ), which is funding the Safe EHRs project work led by Classen and David Bates, M.D., executive director of the Center for Patient Safety Research and Practice at Brigham and Women’s Hospital in Boston.
Hospitals take the CPOE evaluation as part of the Leapfrog Group Hospital Survey. It is a timed test that gives a hospital a set of patient scenarios, along with a corresponding set of inpatient medication orders, that users enter into their hospital’s CPOE and related clinical systems. Those conducting the test record the warnings or other responses, if any, from their CPOE system, report the results via the CPOE Tool website, and receive scoring and feedback summarizing their performance. The scenarios and test protocols include potential drug-drug or drug-diagnosis interactions, drug-allergy interactions, therapeutic duplication, and dosage errors. Hospitals are scored on their CPOE systems’ ability to alert prescribers to these common, serious medication errors.
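The shape of the exercise is simple to model. The Python sketch below is a hypothetical simplification, not Leapfrog’s actual protocol: the category names come from the description above, but the data structures and scoring are illustrative assumptions. Each scenario pairs a test order with the alert category the system should raise, and the score is the fraction of scenarios where the expected warning appeared.

```python
from dataclasses import dataclass

# Alert categories named in the Leapfrog CPOE test description above.
CATEGORIES = [
    "drug-drug interaction",
    "drug-diagnosis interaction",
    "drug-allergy interaction",
    "therapeutic duplication",
    "dosage error",
]

@dataclass
class Scenario:
    patient_summary: str     # brief clinical vignette
    medication_order: str    # order the tester enters into the CPOE
    expected_category: str   # alert the system should raise

def score_test(scenarios, observed_alerts):
    """Fraction of scenarios where the CPOE raised the expected alert.

    `observed_alerts` maps each entered order to the set of alert
    categories the tester recorded for it (illustrative structure).
    """
    caught = sum(
        1 for s in scenarios
        if s.expected_category in observed_alerts.get(s.medication_order, set())
    )
    return caught / len(scenarios)
```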
The test was first rolled out in 2008, and the findings from early research were alarming. Overall, only 53 percent of potentially fatal medication orders were picked up, and performance varied enormously among hospitals, even among those using the same EHR vendor. “We learned that the variability had more to do with the hospitals themselves than the vendors,” Classen said. “The real impact was how the systems were customized and configured, not which vendor they chose. Newer data continues to show this.” He said these findings should shape how the industry approaches certification: it cannot evaluate a system on the shelf, he stressed, but must consider how the system is operated.
On the plus side, the hospitals that have taken the test have improved in terms of cutting down on adverse drug events. “The research has shown a 43 percent relative reduction in adverse drug events for every 5 percent increase in Leapfrog score, so it found a direct correlation between how hospitals do on the test and their overall rate of adverse drug events,” he said.
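To make the compounding arithmetic of that relationship concrete, the hypothetical Python snippet below applies it naively: each 5-point gain in score multiplies the baseline adverse drug event (ADE) rate by (1 − 0.43). This linear-compounding model is purely an illustration of how a relative reduction works, not the study’s actual statistical model.

```python
def projected_ade_rate(baseline_rate, score_gain_points,
                       relative_reduction=0.43, step=5.0):
    """Naively compound a relative reduction in ADEs.

    Illustrative assumption: the reported 43% relative reduction
    applies independently to each 5-point increase in Leapfrog score.
    """
    steps = score_gain_points / step
    return baseline_rate * (1 - relative_reduction) ** steps

# Example: a hospital with 10 ADEs per 100 admissions improves its score
# by 10 points: 10 * 0.57**2 = about 3.2 ADEs per 100 admissions.
print(projected_ade_rate(10.0, 10.0))
```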
Classen said a new publication on the results should appear in about six months. “The bottom line is that hospitals seem to learn and improve from it, but they don’t learn and improve in every category,” he said. Many years into the test’s availability, hospitals still struggle with critical safety checks, such as detecting drug-diagnosis problems involving pregnancy, and adjusting doses for lab values or age continues to be a challenge. “That is stunning given that Meaningful Use pushed hospitals down this road,” he said. “I would have expected an uptick in all these categories, but they continue to be a big problem.”
The test is now being enhanced with scenarios covering common hospital complications, such as central line infections and deep vein thrombosis prevention. One new capability will be the inclusion of the i-MeDeSA instrument for usability testing of clinical decision support, along with a “wrong patient” order measurement tool developed by Jason Adelman, M.D., chief patient safety officer at Columbia University Medical Center.
Adelman, who also spoke during the webinar, has developed a tool to measure how often clinicians place an order for one patient, quickly retract it, and then immediately place the same order for a different patient. His research found these “wrong patient” orders to be a significant problem, and he and other researchers have been developing pop-ups that prompt clinicians to double-check that they are placing an order for the correct patient.
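The measure works by scanning order logs for this retract-and-reorder pattern. The sketch below is a simplified, hypothetical Python version: the 10-minute windows and the log fields are assumptions for illustration, not the tool’s actual specification.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Window sizes are illustrative assumptions, not the tool's actual spec.
RETRACT_WINDOW = timedelta(minutes=10)
REORDER_WINDOW = timedelta(minutes=10)

@dataclass
class OrderEvent:
    clinician_id: str
    patient_id: str
    order_code: str                      # e.g., a medication code
    placed_at: datetime
    retracted_at: datetime | None = None

def find_retract_and_reorder(events):
    """Yield (retracted, reordered) pairs suggesting a wrong-patient order:
    the same clinician retracts an order shortly after placing it, then
    places the same order for a different patient soon afterward."""
    for a in events:
        if a.retracted_at is None or a.retracted_at - a.placed_at > RETRACT_WINDOW:
            continue
        for b in events:
            if (b.clinician_id == a.clinician_id
                    and b.patient_id != a.patient_id
                    and b.order_code == a.order_code
                    and timedelta(0) <= b.placed_at - a.retracted_at <= REORDER_WINDOW):
                yield a, b
```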
Adelman said that voluntary reporting greatly underestimates actual error rates and that automated tools for identifying errors show great promise, but more research is needed.
Classen said one lesson learned is that because therapies and systems are changing all the time, it is difficult for hospitals to stay current on which alerts are actually firing. “People assumed alerts were still on, but a system upgrade turned them off, and unsafe orders sailed right through,” he said. Another issue, he added, is that people come to rely too heavily on the system’s alerts and stop acting as a “safety net” for each other, because everyone assumes the system is checking. He said he recently queried some residents: if a common alert no longer popped up, would they believe there was a problem with the system, or trust that the system was right? Most told him they would trust the system and assume there was no problem. “It is the same issue with automation and airline pilots. We become too dependent on the system.”
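One defensive pattern this anecdote suggests, sketched hypothetically below in the spirit of the Leapfrog exercise itself, is a post-upgrade smoke test: submit known-unsafe orders for test patients in a non-production CPOE environment and fail loudly if the expected alerts no longer fire. The `cpoe_client` interface and its method are illustrative assumptions, not a real vendor API.

```python
# Hypothetical post-upgrade smoke test for a TEST environment only.
# (test patient, unsafe order, alert category the system should raise)
KNOWN_UNSAFE_ORDERS = [
    ("test-patient-01", "warfarin 5 mg PO daily", "drug-drug interaction"),
    ("test-patient-02", "penicillin 500 mg PO q6h", "drug-allergy interaction"),
]

def alert_smoke_test(cpoe_client):
    """Raise if any expected alert fails to fire after a system change."""
    failures = []
    for patient, order, expected in KNOWN_UNSAFE_ORDERS:
        # Assumed method: returns the set of alert categories raised.
        alerts = cpoe_client.place_test_order(patient, order)
        if expected not in alerts:
            failures.append((order, expected))
    if failures:
        raise RuntimeError(f"Alerts missing after upgrade: {failures}")
```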