The logic in the argument would appear to be unassailable: given that there is inevitably a range in the experienced functional quality of the various electronic health record (EHR), computerized physician order entry (CPOE), and other clinical IT products on the market, the better the quality of a particular clinical IT product, the more fully that product should help hospital organizations achieve better patient safety and care quality, correct? Indeed, such an assumption would seem to be supported by the results of such important industry resources as the regular reports coming out of the Orem, Utah-based KLAS Research, for example.
Yet the results of a new study by researchers at the Falls Church, Va.-based CSC Healthcare paint a far more complex picture of what actually happens once hospitals implement clinical information systems. Despite the strong value of knowing the quality rankings of various products, the CSC researchers recently found only a slight correlation between buying a “quality” EHR/CPOE product and reductions in physician order entry-related medical errors. In other words, the bulk of the difference in actual error reduction stems from a complex knot of processes and issues separate from the brand name of the particular system an organization is implementing or has implemented. Put very bluntly, a hospital can buy a very “high-quality” EHR product and customize it very poorly; or take a “mediocre” product and achieve considerable success with it.
So, here's what happened: between April and August 2008, leaders from 81 U.S. hospitals completed a detailed assessment of their EHR/CPOE systems, using a very specific methodology developed by the CSC researchers. That methodology required a designated team in each participating hospital to perform a self-assessment, beginning with downloading instructions and information profiles for 10-12 “test” patients. The team then downloaded about 50 test orders, instructions, and observation sheets to be used in the assessment. A participating physician entered the test orders for the test patients into the organization's EHR, and observed and noted any guidance provided by the decision support within the product (such as a calculated drug dose, a displayed message or alert, etc.). The team then entered the result obtained for each test order (decision support received or not), and the assessment tool instantly computed an overall score for the percentage of test orders identified, as well as a score for the orders in each adverse drug event category. The entire process takes six hours or less.
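The arithmetic behind the scoring is simple percentage tallying. For readers who want to see it concretely, here is a minimal Python sketch; the function name and data layout are illustrative assumptions, not CSC's actual assessment tool.

```python
from collections import defaultdict

def score_assessment(results):
    """Tally detection rates from a completed assessment.

    `results` is a list of (ade_category, detected) pairs, one per
    test order; `detected` is True if the system offered any decision
    support (alert, advisory, dose guidance) for that order.
    """
    # Overall score: percentage of all test orders that drew decision support.
    overall = 100.0 * sum(detected for _, detected in results) / len(results)

    # Per-category score: same percentage within each adverse drug event category.
    by_category = defaultdict(list)
    for category, detected in results:
        by_category[category].append(detected)
    per_category = {
        category: 100.0 * sum(flags) / len(flags)
        for category, flags in by_category.items()
    }
    return overall, per_category

# Example with three invented test orders in two ADE categories:
overall, per_category = score_assessment([
    ("drug-drug interaction", True),
    ("drug-drug interaction", False),
    ("renal dosing", True),
])
print(f"Overall detection: {overall:.0f}%")   # prints 67%
for category, pct in per_category.items():
    print(f"  {category}: {pct:.0f}%")
```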
The result? Scores for individual hospitals on the successful detection of test orders that would have caused an adverse drug event in an adult patient varied dramatically, from 10 percent to 82 percent of test orders detected. Scores for the top 10 percent of hospitals (six hospitals) ranged from 71 to 82 percent, while scores for the six hospitals with the lowest scores ranged from 10 to 18 percent. The overall mean was 53 percent detection of test orders that would otherwise have caused an adverse drug event.
And, most dramatically, the results, which spanned eight different vendors and were fully “blinded” (in other words, the researchers analyzing the data did not know which hospitals were using which vendor products), showed that no particular vendor product consistently predicted a high test order detection rate. The results were published in an article in the April issue of Health Affairs.
It's all about process
So what does all this mean? Lead researcher Jane Metzger, who is principal researcher in the Waltham, Mass.-based Emerging Practices division at CSC, calls the results “very helpful.” The reality, Metzger says, is that the ways in which any hospital organization implements any EHR product with CPOE will have a tremendous influence on the success of that implemented product in averting medication and other medical errors. “No decision support comes fully implemented out of the box from any vendor product,” Metzger stresses. “What's key here is the configuration required to set up decision support within a hospital's clinical information system in order to avert adverse events. The elements involved include not just alerts, but, for example, advisories, such as an advisory that comes with an order that says, this particular patient has reduced renal function, and here's the patient's latest creatinine level. Or it could be a display of recommended doses of particular medications for certain kinds of patients.”
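To illustrate what that configuration work means in practice, here is a toy Python sketch of the kind of renal-function advisory Metzger describes. Every element of the rule here, the drug list, the creatinine threshold, the message wording, is an invented placeholder for the decisions a hospital team would have to make and maintain; no vendor ships them ready-made.

```python
# Invented drug list and creatinine threshold, purely for illustration.
RENALLY_CLEARED_DRUGS = {"vancomycin", "gentamicin", "metformin"}
CREATININE_THRESHOLD_MG_DL = 1.5

def renal_function_advisory(drug, latest_creatinine_mg_dl):
    """Return an advisory message if the ordered drug is renally cleared
    and the patient's latest creatinine suggests reduced renal function;
    return None if no advisory applies."""
    if (drug in RENALLY_CLEARED_DRUGS
            and latest_creatinine_mg_dl > CREATININE_THRESHOLD_MG_DL):
        return (f"Advisory: this patient has reduced renal function "
                f"(latest creatinine: {latest_creatinine_mg_dl} mg/dL); "
                f"consider adjusting the {drug} dose.")
    return None

# A physician ordering vancomycin for a patient with elevated creatinine:
message = renal_function_advisory("vancomycin", 2.1)
if message:
    print(message)
```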
Now, all this having been said, vendor choice was not completely irrelevant to the situations the CSC researchers studied; in fact, 27 percent of the variation in performance they observed was correlated with vendor choice. As the authors wrote in their Health Affairs article, “There is good statistical evidence to suggest that choice of vendors does have some positive effect on performance.” Meanwhile, teaching status accounted for 10 percent of the variation in performance, while hospital size and affiliation (whether a hospital was part of a system or not) had zero impact. As the authors wrote in the Health Affairs article, “There are multiple possible explanations for the observed correlation between hospital teaching status and performance. These include such factors as research interest and having more staff resources to invest.”
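The article does not specify the researchers' exact statistical method, but a factor's “percent of variation explained” is conventionally the between-group share of total variance (an eta-squared or R-squared style statistic). Here is a minimal sketch of that computation, using invented detection scores for hypothetical vendors:

```python
def share_of_variance_explained(scores_by_group):
    """Fraction of total variance in hospital scores attributable to a
    grouping factor (vendor, teaching status, ...): between-group sum of
    squares divided by total sum of squares (eta-squared)."""
    all_scores = [s for group in scores_by_group.values() for s in group]
    grand_mean = sum(all_scores) / len(all_scores)
    # Total variation of every hospital's score around the grand mean.
    ss_total = sum((s - grand_mean) ** 2 for s in all_scores)
    # Variation of each group's mean around the grand mean, weighted by size.
    ss_between = sum(
        len(group) * (sum(group) / len(group) - grand_mean) ** 2
        for group in scores_by_group.values()
    )
    return ss_between / ss_total

# Invented detection scores (percent) for three hypothetical vendors:
scores_by_vendor = {
    "vendor A": [55, 60, 71, 48],
    "vendor B": [30, 42, 38],
    "vendor C": [65, 70, 58],
}
print(f"{share_of_variance_explained(scores_by_vendor):.0%} "
      f"of variance explained by vendor choice")
```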
So, what should hospital and health system CIOs think about all this? “I would say that, as the Health Affairs article discusses, because successful configuration of clinical decision support is not out of the box, it takes a new process, and groups of people working on this, and learning from other peer hospitals how to do this, and keeping at it,” in order to succeed, says CSC's Metzger. “It's really that process of incorporating clinical knowledge, some logical knowledge, into the system. So the message to the CIO is, who's involved and what are they doing? Is the P and T [pharmacy and therapeutics] committee involved? Do we have a plan for constantly moving forward with decision support? And how often is an order changed as a result of this advice? It's about actively working on those questions.”
What's more, Metzger says, “It's really about the nurses and physicians telling the techno-geeks how to embed technical knowledge so it helps the clinicians avert mistakes.” Among the challenges, she notes, is the mountainous one of providing decision support at the point of care robust enough to actually help physicians avert a majority of potentially avoidable errors, no mean feat given the current state of product functionality. The list of issues shown in figure 1 provides a very good start in terms of elements to look at.
In the end, Metzger says, it's clear there is no easy shortcut to achieving success in this critical area. At the least, the guidelines for meaningful use under the federal American Recovery and Reinvestment Act/Health Information Technology for Economic and Clinical Health (ARRA-HITECH) Act will provide some clarity for clinician and IT leaders going forward.
Healthcare Informatics 2010 June;27(6):74-76