Intermountain CMIO Stan Huff on the Need for Greater Interoperability: “We’re Killing Too Many People”

Dec. 6, 2018
Longtime Intermountain CMIO Stan Huff, M.D., recently chatted with Healthcare Informatics about all things interoperability, including the different types of data exchange that exist today, the greatest barriers, and pending regulations.

Stan Huff, M.D., chief medical informatics officer (CMIO) at the Salt Lake City, Utah-based Intermountain Healthcare for the past 31 years, has long been a top leader in his field. Having worked on the leadership team of a health system like Intermountain, served as a co-chair of the HL7 Clinical Information Modeling Initiative (CIMI), and previously sat on the ONC Health IT Standards Committee, Huff has a wealth of knowledge from both provider- and standards-focused perspectives.

Huff, who represented Intermountain at a White House meeting on interoperability this week, recently chatted with Healthcare Informatics about all things interoperability, including the different types of data exchange that exist today, the greatest barriers, and how potential pending regulations could shake up the landscape. Below are excerpts from that discussion.

When you look at the interoperability landscape today, how bullish are you on where things stand, broadly speaking? Or rather than bullish, are you more concerned?

I don’t know if I am bullish or not, but I think we are making progress—and it’s significant progress. There is an incredible amount of work to be done. I’m not concerned about the progress; I am happy, but mindful of how much work is left to do to really reap the benefits that people are hoping for.

You’re currently a co-chair of the HL7 Clinical Information Modeling Initiative and a former member of the ONC Health IT Standards Committee. How important is it to figure out the issues around standards before things can progress?

I wish it had a higher priority. Most of the time when people are talking about interoperability now, they are thinking about caring for an individual patient and thinking about sharing information between different systems that have information on that patient. They are usually thinking about EHR [electronic health record]-to-EHR for patient care—they have a very focused idea.

But there are other dimensions. There is interoperability relative to public health, meaning how we share data from an organization to a public health [entity] so that we understand what’s going on with a whole population relative to a particular disease.

There is also research interoperability, so we can share data that’s coming from research activities. And closely related to that is interoperability of clinical trial data and all of the randomized controlled trial data that comes with that.

Then there is interoperability that comes from devices and the data coming from devices, which is a whole field unto itself. So you have to be careful when you talk about interoperability. That is one axis of interoperability: the scope of systems you are communicating with.

The other axis of interoperability has to do with how truly interoperable you are, and there are different levels there as well. One level is the interoperability you get with the HL7 version 2 [standard], where you have a structure and people know how to send messages between systems. And there is a lot of negotiation that happens when you set up an HL7 version 2 interface to say what terminology you are using, and whether you send something as two fields or one field. A lot goes on there, and that’s helped quite a bit when you move to HL7 FHIR [Fast Healthcare Interoperability Resources]—it has a more defined structure and more things specified about terminology use.

And then you can get an even better level of interoperability if you are using the Argonaut [Project] profiles. But even at that Argonaut profile level, you aren’t plug-and-play interoperable. There is still ambiguity in the Argonaut definitions that leads to different implementations by different companies and organizations.

The highest level is what I would call “plug-and-play,” where there is no bilateral negotiation around terminology or anything like that. The standard is explicit enough that it can be tested for conformance, so you can say whether a given system is conformant or not, and the data can be used in the way it was intended. We don’t have any plug-and-play interoperability to speak of right now, and that’s what I’m trying to shoot for.
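To make that gap concrete, here is a minimal, hypothetical sketch (not from the interview) of why profile-level conformance still falls short of plug-and-play: two structurally valid FHIR Observation payloads for the same lab result that a simple decision-support rule cannot treat identically. The payloads, codes, and the rule itself are illustrative assumptions.

```python
# Illustrative sketch: two FHIR Observation payloads for the same serum
# potassium result. Both pass basic structural validation, yet a rule written
# against one may silently miss the other because the code system, units,
# and value representation differ.

observation_site_a = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "2823-3",
                         "display": "Potassium [Moles/volume] in Serum or Plasma"}]},
    "valueQuantity": {"value": 6.1, "unit": "mmol/L",
                      "system": "http://unitsofmeasure.org", "code": "mmol/L"},
}

observation_site_b = {
    "resourceType": "Observation",
    "status": "final",
    # A local code instead of LOINC: still structurally valid FHIR, but not
    # computable by a rule that only knows the LOINC binding above.
    "code": {"coding": [{"system": "http://example.org/local-codes", "code": "K-SER",
                         "display": "Serum potassium"}]},
    "valueString": "6.1 mmol/L",  # result sent as free text rather than a quantity
}

def potassium_is_critical(obs: dict) -> bool:
    """Naive decision-support rule: flag potassium above 6.0 mmol/L."""
    codes = {c.get("code") for c in obs.get("code", {}).get("coding", [])}
    if "2823-3" not in codes:  # rule only understands the LOINC binding
        return False
    value = obs.get("valueQuantity", {}).get("value")
    return value is not None and value > 6.0

print(potassium_is_critical(observation_site_a))  # True: the rule fires
print(potassium_is_critical(observation_site_b))  # False: same result, missed
```

The rule fires for the first payload and silently misses the second, which is the kind of ambiguity that today gets resolved by bilateral negotiation, interface by interface, rather than by the standard itself.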

One of the three biggest motivators for me is patient safety. There is really good and convincing data that shows we are killing 250,000 people per year due to preventable medical errors. And that won’t be solved by “zero harm” programs, or by “sort of” interoperable systems. In the end, “sort of” interoperable systems mean that a person still has to look at things and make a judgment. And people are not perfect information processors. So you need a situation where the data is explicit enough that I can write rules that prevent the death or improper treatment of patients.

And we are not at that level yet. How urgent is it? I think it’s incredibly urgent and you can make an argument that it’s more important than lots of other things we’re spending money on that would have less of an impact on patient care. I work in this area, so yes, I am biased.

But I’m persuaded that it’s worth an investment, and getting to where I want to get to will not be easy. This won’t be something where you make one $20 million investment and then it’s done; it will take five or 10 years, and you will make incremental progress over that period of time. Think of it like a military campaign or a crusade, because it’s that type of timeframe and scale, where you need planning and infrastructure to really accomplish what we want to do in the end—which is save lives, decrease the cost of care, and reduce the burden on clinicians.

Many folks believe that until the business incentives change, stakeholders will not be incentivized to be open with their systems. Do you agree with this and how much incentive exists today?

There isn’t a whole lot of incentive yet. If the patient care and safety issues were sufficient incentives, then this would have been solved a long time ago, because those incentives have been there. People know and understand that we’re not caring for patients in the best way possible. Ultimately, it’s the financial and proprietary considerations that keep us from doing so.

We have to be careful [with incentives], though, because there are unexpected consequences. Going back to when I was on the HIT Standards Committee, we thought that we were doing something useful and good for U.S. healthcare when we set up the meaningful use measures. And while meaningful use solved the EHR adoption issue, what it taught people was how to manage measures, not how to manage quality.

People became incredibly good when it came to managing the measures to get paid and to meet the qualifications, but I don’t think anyone would assert that those things improved the quality of care in any measurable way. So I think we didn’t meet the goal that we were shooting for—providing better quality care at a lower cost.

The ONC annual conference took place last week, and there seemed to be significant conversations around pending regulations such as possibly making interoperability a requirement to stay in Medicare and prohibiting information blocking. How does all of this land for you?

I welcome the change; it’s a good thing to move from meaningful use to promoting interoperability. What I don’t know is whether these specific [rules] being proposed are going to accomplish what [we want]. We thought we were doing the right things back when we were doing meaningful use.

At a high level, I would agree that it would be wonderful to make interoperability a requirement for Medicare participation. But it’s undefined. Going back to those dimensions of interoperability, there has to be an understood and useful level of interoperability that’s required. I haven’t seen the details to know whether what’s being asked for is both achievable and valuable if it were to be achieved. But I do agree with the [overall] direction.

Intermountain is often at the forefront of health and health IT initiatives such as its sponsorship of the Opioid Community Collaborative. How can these learnings be shared so they can improve the digital healthcare ecosystem?

The thing I try to emphasize to people is that if you look at what we are doing, and you take it in aggregate across the country—the things people are applying decision support to—it’s a tiny part of what we could do. And the reason for that is we don’t have interoperability. You can create a good program at Intermountain, or at Kaiser Permanente, or at Mayo Clinic, but the only place it works well is where it was developed. You cannot move it. If you move it, you have to recreate it. Until you have interoperability, I can’t write a rule that works on top of a Cerner system and also on an Epic system, or for that matter works on two different Cerner implementations. This cannot happen until you have those platforms supplying APIs so I can hook my decision support up to their system without rewriting all of the logic in a different technology platform.
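As a rough illustration of what “hooking my decision support up to their system” through a standard API could look like, here is a short, hypothetical sketch that queries a FHIR-style REST endpoint for lab results and applies a simple rule. The base URL, patient ID, access token, and threshold are placeholder assumptions, not details from Intermountain or any particular vendor.

```python
# Minimal sketch of decision support running against a platform's FHIR-style
# REST API. Endpoint, patient ID, and token are hypothetical placeholders;
# a real deployment would use the vendor's endpoint and proper authorization.

import requests

FHIR_BASE = "https://fhir.example-ehr.org/r4"   # hypothetical endpoint
PATIENT_ID = "example-patient-id"               # hypothetical patient
HEADERS = {"Accept": "application/fhir+json",
           "Authorization": "Bearer <access-token>"}

def fetch_potassium_observations(patient_id: str) -> list[dict]:
    """Query the server for serum potassium results (LOINC 2823-3)."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": "http://loinc.org|2823-3"},
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]

def critical_results(observations: list[dict]) -> list[dict]:
    """Flag results above 6.0 mmol/L: the kind of rule that should run
    unchanged on any conformant system, regardless of vendor."""
    flagged = []
    for obs in observations:
        qty = obs.get("valueQuantity", {})
        if qty.get("code") == "mmol/L" and qty.get("value", 0) > 6.0:
            flagged.append(obs)
    return flagged

if __name__ == "__main__":
    results = fetch_potassium_observations(PATIENT_ID)
    for obs in critical_results(results):
        print("Critical potassium:", obs["valueQuantity"]["value"], "mmol/L")
```

The point of the sketch is portability: the same rule logic could, in principle, run against any system exposing the same API and terminology bindings, rather than being rebuilt for each vendor implementation.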

So we are doing good things, and want to continue to do good things, but wouldn’t it be wonderful if what we did, or what the University of Utah is doing with opioids, can be directly moved and used, in the same way people can buy apps for their iPhones in the app store, or any other platform.

The realization is we might be doing 150 things at Intermountain in terms of decision support applications, but there is an opportunity to do 5,000 things, and we will never get to those 5,000 things unless we get to an interoperable platform so that when knowledge is created it can be shared. That’s my real emphasis behind interoperability.
