Physician burnout tied to dissatisfaction with EHR implementations can lead to inaccurate and incomplete documentation. That in turn can have financial, reputational and clinical repercussions. Rush University Medical Center in Chicago has spent the last few years applying analytics to address documentation issues.
“When you look at the data, physicians hate the EMR and find documenting in it is a chore,” said Bala Hota, M.D., Rush’s chief analytics officer. “They are at home at night after work, and that is when they do a lot of charting. One thing we have found is that documentation accuracy and completeness really suffers.”
If documentation is not complete, co-morbidities that feed into the risk adjustments for CMS quality measures and other rankings may be left out. Academic medical centers tend to rank lower than community hospitals in some of those rankings, Hota said. “This may be due to actual quality, but our sense is that some of it is due to documentation gaps,” he said. “We have found variation between service lines and providers in quality of documentation and accuracy.”
Rush, a 664-bed academic medical center, has developed an algorithm that uses the hospital encounter, past diagnoses and patient history to flag missed opportunities to document diagnoses that are known to be present and actively managed. “We have built analytics by service line and provider, done a lot of education, and partnered with our clinical documentation improvement program, and we have seen benefits,” Hota said. “That has translated into better capture of co-morbidities across our service lines.”
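The article does not describe the algorithm's internals. A minimal sketch of the underlying idea might compare conditions that appear actively managed in a patient's history against the diagnoses coded on the current encounter; the field names and the medication-to-condition map below are illustrative assumptions, not Rush's actual model.

```python
# Minimal sketch of a "missed documentation" check: conditions that appear in a
# patient's history as actively managed, but are absent from the current
# encounter's coded diagnoses. All names here are illustrative assumptions.

# Hypothetical map from active medications to the chronic conditions they
# imply are being managed.
MANAGED_CONDITION_HINTS = {
    "insulin": "diabetes mellitus",
    "levothyroxine": "hypothyroidism",
    "furosemide": "heart failure",
}

def flag_missed_comorbidities(encounter_dx, past_dx, active_meds):
    """Return conditions likely present and managed but not documented
    on this encounter."""
    documented = {dx.lower() for dx in encounter_dx}
    candidates = {dx.lower() for dx in past_dx}
    # A condition implied by an active medication counts as actively managed.
    for med in active_meds:
        condition = MANAGED_CONDITION_HINTS.get(med.lower())
        if condition:
            candidates.add(condition)
    return sorted(candidates - documented)

missed = flag_missed_comorbidities(
    encounter_dx=["pneumonia"],
    past_dx=["diabetes mellitus", "pneumonia"],
    active_meds=["insulin", "furosemide"],
)
print(missed)  # ['diabetes mellitus', 'heart failure']
```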
Rush, which has worked with SCIO Health Analytics on many of its analytics efforts, can show clinicians the impact better documentation can have. “The expected mortality rate changes by a certain amount based on the missing comorbidity data, and we can give hard evidence with this algorithm we’ve developed, and we have seen meaningful improvements,” Hota said. “The clinicians understand the rationale for clinical documentation improvement and how it impacts them in the long run. Quality is a key part of this. It is not just financial.”
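To see why a missing comorbidity changes expected mortality, consider a toy logistic risk model. The coefficients below are invented for illustration; real risk-adjustment models are far larger.

```python
import math

# Toy logistic risk-adjustment model with invented coefficients.
INTERCEPT = -4.0
COEFFICIENTS = {"heart_failure": 0.9, "renal_failure": 0.7, "diabetes": 0.3}

def expected_mortality(comorbidities):
    """Model-expected probability of death given documented comorbidities."""
    logit = INTERCEPT + sum(COEFFICIENTS.get(c, 0.0) for c in comorbidities)
    return 1.0 / (1.0 + math.exp(-logit))

full = expected_mortality({"heart_failure", "renal_failure", "diabetes"})
missing_one = expected_mortality({"heart_failure", "diabetes"})  # renal dropped
print(f"expected mortality, fully documented: {full:.3f}")
print(f"with renal failure undocumented:      {missing_one:.3f}")
# The gap between these two numbers is the "hard evidence" shown to
# clinicians: undercoding lowers expected mortality, making observed
# outcomes look worse than they are.
```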
As a result of this initiative, Rush has achieved a value-based care program bonus, rather than a penalty, and improved its Medicare star rating from 3 to 5.
Some of the focus is on the numerator, the quality outcomes, but there is also a focus on the denominator, the risk adjustment. “We feel it has paid off on the inpatient side,” he said. “We just joined the Medicare Shared Savings Program (MSSP) in January. We are bringing this savvy and thoughtfulness about the key drivers of the measures, and how you bring analytics to the end users, to our ACO planning. We are looking at the MSSP data that comes in and running it through this modeling so we can make overall assessments of where we can drive that intervention.”
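The numerator/denominator framing maps onto the observed-to-expected ratio behind many risk-adjusted quality measures. A minimal sketch, with hypothetical numbers, shows how better comorbidity capture moves the denominator without any change in actual outcomes:

```python
def observed_to_expected(observed_events, expected_probs):
    """Risk-adjusted outcome ratio: observed events (numerator) over the sum
    of model-expected probabilities (denominator). Below 1.0 is better than
    expected."""
    return observed_events / sum(expected_probs)

# Hypothetical service line: 12 observed deaths among 200 patients. Fuller
# comorbidity documentation raises each patient's expected mortality,
# enlarging the denominator and lowering the ratio.
undercoded = observed_to_expected(12, [0.06] * 200)   # denominator 12.0 -> 1.00
fully_coded = observed_to_expected(12, [0.08] * 200)  # denominator 16.0 -> 0.75
print(undercoded, fully_coded)
```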
Rush also is taking a multi-pronged approach to helping ease the documentation burden on physicians. “This problem is the underbelly of all the EMR implementation we have done nationally, and I am an informaticist, so I get it,” Hota said. Rush is looking at voice-to-text dictation and natural language processing (NLP) capabilities. “We have very advanced analytics we are building out around NLP,” he said. “We have a data lake we have built using Cloudera and Hadoop. It gets a stream of provider notes data in real time that we are running an NLP engine against, and that can drive machine learning models.”
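The article names the data lake stack but not the NLP engine itself. As a stand-in, here is a sketch of the idea: consume a stream of notes and emit structured comorbidity mentions that downstream models can use. The term patterns are assumptions; a production clinical NLP engine would add negation handling, section detection and concept mapping.

```python
import re

# Stand-in for the NLP engine described in the article: scan a stream of
# provider notes for comorbidity mentions. Patterns are illustrative only.
COMORBIDITY_PATTERNS = {
    "heart_failure": re.compile(r"\b(heart failure|chf)\b", re.IGNORECASE),
    "renal_failure": re.compile(r"\b(renal failure|ckd)\b", re.IGNORECASE),
    "diabetes": re.compile(r"\bdiabet\w*\b", re.IGNORECASE),
}

def extract_comorbidities(note_text):
    """Return comorbidity concepts mentioned in a single note."""
    return {name for name, pat in COMORBIDITY_PATTERNS.items()
            if pat.search(note_text)}

def process_note_stream(notes):
    """Consume (note_id, text) pairs as they arrive and emit structured
    features a downstream machine-learning model can train on."""
    for note_id, text in notes:
        yield {"note_id": note_id,
               "comorbidities": sorted(extract_comorbidities(text))}

stream = [("n1", "Pt with CHF and diabetic neuropathy, admitted for pneumonia.")]
for features in process_note_stream(stream):
    print(features)  # {'note_id': 'n1', 'comorbidities': ['diabetes', 'heart_failure']}
```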
Speaking of machine learning, Hota said Rush wants to be an “AI-first” institution. “I know there are a lot of buzzwords around machine learning and AI,” he stressed. “But we take this very seriously. I think the opportunities we have for machine learning are a paradigm shift. The ability to take data we already have and very quickly and iteratively train models that become useful is transformational.
“In the next couple of years there are going to be self-service tools. We feel from a strategic perspective, it is critical for us to have the platform built out. We are constantly iterating on what our services are, and bringing the analytics to the data, rather than the other way around, which is how it has often been done in the past.”
Hota stressed, however, that none of this would be useful unless Rush can integrate it back into the EHR and the workflow. “If we can use the models and output to actually influence physician behavior and be that early warning system for providers, that is where the value is. We are looking at the standards out there in terms of how we orchestrate that integration. For instance, if we have a great sepsis model, how do we put that into the workflow so they get the info they need when they need it?”
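The article does not name the standards under consideration, but CDS Hooks is one widely adopted option for surfacing an external model's output inside EHR workflow. A minimal sketch of the card a hypothetical sepsis-risk service might return when the model score crosses a threshold (the function, patient identifiers and threshold are all assumptions):

```python
# Sketch of a CDS Hooks-style response from a hypothetical external
# sepsis-risk service: an empty card list when risk is low, a warning
# card for the clinician when it is high. Threshold is illustrative.

def sepsis_risk_card(patient_id, risk_score, threshold=0.8):
    """Build a CDS Hooks response body for a sepsis-risk alert."""
    if risk_score < threshold:
        return {"cards": []}
    return {
        "cards": [{
            "summary": f"Elevated sepsis risk ({risk_score:.0%}) for patient {patient_id}",
            "indicator": "warning",
            "detail": "External machine-learning model flags this encounter. "
                      "Consider sepsis screening per protocol.",
            "source": {"label": "Sepsis risk model (external)"},
        }]
    }

print(sepsis_risk_card("12345", 0.86))
```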
Rush is an Epic shop and already has a robust clinical decision support mechanism within the EHR. It offers best-practice alerts based on triggers for key clinical areas. In addition, Rush uses registries for population health, and those registries display the output of risk models, such as high risk for readmission or complication. “What is missing from those is the platform for that next level of analytics,” Hota said: a learning system that is constantly changing and feeding back into the EHR. “In a few years there will be a mature platform for this, but for us to implement now, we would be building a solution that would allow that integration from the external system back into Epic.”
It’s not clear whether that platform would work beside the clinical decision support or supplant it. “For now, based on the use case, there are different solutions,” Hota said. “For the end user it may look the same, as an alert, but on the back end, where the logic for detecting the risk lives, is it in Epic or is it outside in a machine-learning model? We are looking at a hybrid approach called a predictive modeling framework. Some of the work is in the governance around what we use for which use case and making sure the end user doesn’t get overwhelmed by all these different things. It has to be invisible to them.”
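Rush's predictive modeling framework is described only at a high level. One way to picture the governance Hota mentions is a per-use-case routing table that records where each piece of alert logic lives, so the clinician sees a single alert surface regardless of back end. Everything below is illustrative, not Rush's implementation.

```python
# Illustrative governance table: for each use case, a recorded decision about
# whether alert logic runs natively in the EHR or in an external ML model.
ROUTING = {
    "drug_interaction": "ehr_native",     # simple trigger logic stays in Epic
    "sepsis_risk": "external_model",      # ML scoring runs outside, fed back in
    "readmission_risk": "external_model",
}

def evaluate(use_case, patient_context):
    """Route a use case to the governed back end; the caller renders the
    result as one consistent alert either way."""
    backend = ROUTING.get(use_case)
    if backend == "ehr_native":
        # Placeholder for a rule the EHR's own decision support would evaluate.
        return {"alert": patient_context.get("interacting_meds", False)}
    if backend == "external_model":
        # Placeholder for a call out to the externally hosted model.
        score = patient_context.get("model_score", 0.0)
        return {"alert": score > 0.8, "score": score}
    raise ValueError(f"no governance decision recorded for {use_case!r}")

print(evaluate("sepsis_risk", {"model_score": 0.91}))  # {'alert': True, 'score': 0.91}
```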