Jacob Jeppson is a data scientist for the Arch Collaborative at KLAS, a group of healthcare organizations committed to improving the EHR experience through standardized surveys and benchmarking. He recently spoke with Healthcare Innovation about a recent KLAS report on the use of Signal, Epic’s provider-efficiency tracking tool.
Many health systems use Signal data to pinpoint providers who may be struggling and could gain the most from EHR training interventions.
Healthcare Innovation: One of the key points of your report is that while the Signal data has effective and valuable uses, it's not a meaningful or predictive measure to discern whether providers are dissatisfied with the EHR or experiencing burnout or considering leaving the organization. Why is that?
Jeppson: We're often asked if there is a single metric coming from the Signal data that can help identify all the burned-out clinicians. For instance, one could be time in the EHR from 7 p.m. to 7 a.m. The issue with that metric is that some clinics are open later, so legitimate scheduled work can fall inside that window.
“Pajama time,” which is a different calculation, tries to take into account scheduled and unscheduled hours and correlate those pieces. But the problem is that not everybody spending pajama time is going to say they're burned out. Some of those clinicians are happy with the EHR. They're not planning on leaving, and they're not burned out, even though they're spending time in the EHR after hours.
We did publish an academic article that looked at the perception of time, which is a different metric: how much time do I feel like I am spending charting after hours? When that perceived time was over five hours, we saw a meaningful relationship to burnout.
But I like our survey for measuring this because we're directly asking the person if they are burned out, and that ends up yielding an accurate result. They're going to tell you whether or not they're feeling those things, and their perceptions around their EHR use are going to be an important metric. But tying what you measure in the EHR directly to that perception ends up being a difficult problem to solve.
HCI: Are the health systems trying to find something that's going to help them identify people earlier in the process, something that's predictive to get to those people before they have some extended period of feeling burned out or consider going to work somewhere else?
Jeppson: Exactly. That's the million-dollar question. If you have an algorithm that can accurately predict this piece, then you're going to be very capable of doing the interventions, right? You want to first identify somebody who could be at risk.
Signal can identify how you are different from people like you in your system. That ends up being useful in some cases, but what people really want is a predictive tool to say, ‘Okay, now I have a list of people that I know if I go talk to all of them, most of them are going to be burned out.’
HCI: Are most of these health systems looking both internally and looking at peers to compare themselves to?
Jeppson: In terms of the Signal tool, they do get charts that show where you're sitting compared to your peers internally and it also shows you where you sit compared to other Epic peers. Signal gives you ways to dissect the data. That gets difficult again, though. Is this gynecologist going to be the same as that gynecologist? Their workflows may be very different and therefore how their time is spent is inherently going to be different. But understanding outliers can be operationally useful. For instance, you may have voice recognition being used as a tool, and you may want to know who's using it and who's not.
HCI: Are some EHR users worried that the use of Signal will be punitive — that it's kind of a Big Brother watching over their shoulders and dinging them for inefficiencies? And if so, do the health systems work on overcoming that kind of anxiety about it?
Jeppson: I would say that in some cases, it could be punitive. I haven't seen any direct evidence of people using Signal in this way. The people we work with say they make sure that this isn't a punitive conversation. It's more of an investigative conversation — because, again, maybe it's not tracking the right thing for that clinician. Doctors have a wide variety of preferences in terms of how they work. In some cases, they have a lot of autonomy in terms of how to document.
We are looking at the perception of user experience. Sometimes 60 percent agreement with a question is really, really bad; for other questions, it is really, really good. If only 60 percent say the EHR is reliable, that's a bad response, whereas if 60 percent of your doctors are saying the EHR helps efficiency, that would be a remarkable score. What we tried to highlight in the report are high performers that have essentially been able to outperform their peers on several of the metrics we track, and we asked them, qualitatively, what are you doing? They're opening up and saying this is how we're using Signal data to help these training aims, or help our efficiency gains.
The best performers are saying they make sure it's not punitive, which I'm going to assume means that it could be interpreted as punitive for some, especially coming from a central office or authority.
HCI: The KLAS Arch Collaborative has worked to identify these best practices, and among them was identifying areas that require more training or fine-tuning of their own training efforts and modules. That seems like a pretty valuable way to use this data.
Jeppson: What we have found is that somebody's experience with training ends up being a dramatic predictor of their satisfaction in the long term. And if you can improve training, you are also likely to improve that EHR experience. Many of our high performers are great with training, and that's how they're using this Signal tool because it's a great way to identify issues. It is being used as a conversation starter. They say, ‘There’s a new functionality rolled out; we see you're not using it. We’d love to show it to you if you need help using that piece.’ It's really useful.
We have a question in the survey: Does this EHR have the functionality I need? Even with no new functionality added, that score will go up if clinicians are trained properly on it. The EHR is a complicated system. Anytime you upgrade things or try to improve them, it can be dissatisfying for clinicians if they don't understand why the change is happening, or what the new functionality can offer them.
Houston Methodist tracks trainings within the Signal tool. After someone has received training, they can look at the Signal data and ask: Did they get better or worse at the metric we trained on? Is this training performing well or not? That ends up being another application of Signal that has developed.