Interoperability’s Dirty Little Secret: Your Doctor Doesn’t Trust Your Data
by Liz Lewis, CommonWell Health Alliance, and Stephanie Broderick, Clinical Architecture
More than 80% of doctors don’t trust the healthcare information they can access — and there might be a good reason for it. Let’s consider the doctor who was given 270 pages of patient notes to skim through in the five minutes they have between appointments. Or the doctor in North Carolina who assumed that he had complete records on his new patient only to find that 70% of them were trapped in Georgia. With stories like these, who could blame them?
While it can be tempting to dismiss these as individual experiences, research shows that 97% of the information generated by the healthcare sector sits unused. As we confront national interoperability challenges ahead, this remains a relevant sticking point. How do we cast a net wide enough to ensure that all the data the healthcare system needs is collected safely, accurately, and completely, while also ensuring that the people accessing that data can quickly surface what is relevant, rather than being overwhelmed by the firehose of information and abandoning the effort altogether?
A deluge of data
National interoperability initiatives have gotten a boost in recent years. While interoperability capabilities have existed for decades, new legislation, funding sources, and state and national initiatives such as the Chesapeake Regional Information System for our Patients (CRISP) and the Trusted Exchange Framework and Common Agreement (TEFCA) have set off a wave of digital transformation, with everyone from your local doctor to the largest health systems in the country joining health information exchanges (HIEs) and otherwise modernizing the way they share data.
Data collection, access, and exchange must continue, and the pipes connecting all care settings must multiply to meet data sharing needs. Alongside this critical collection work, however, we must find ways to tap those pipes effectively, delivering the right information at the right time to the right people.
A muddled definition delivers garbage data-in
Data usability is both a technical issue and a trust issue, but the solution may be one and the same. Meaningful Use was codified more than 15 years ago as part of the HITECH Act, yet speak to almost any doctor today and you'll hear how frustrating it is to receive a 20-page note from a single patient's hospital visit. That isn't usability; that's covering all bases, and it overwhelms the reader. What's worse is that the resulting burnout has been documented for years. Physician burnout has been rising in the United States, especially in primary care, and the use of electronic health records (EHRs) is a prominent contributor, according to the NIH.
When Clinical Architecture analyzed over a billion lab results for a U.S. congressional report in 2024, the findings were sobering (a minimal validation sketch follows the list below):
● Only 68% had valid LOINC (Logical Observation Identifiers Names and Codes) codes, the international standard for identifying health measurements, observations, and documents.
● Only 66% of quantitative lab units complied with the Unified Code for Units of Measure (UCUM), the standard for representing units of measurement precisely and unambiguously, especially in electronic communication.
● Just 2% of qualitative results were coded to SNOMED CT (Systematized Nomenclature of Medicine – Clinical Terms), a standardized, multilingual vocabulary of clinical terms used by healthcare professionals and EHR systems to facilitate the accurate and consistent exchange of health information.
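To make those gaps concrete, here is a minimal sketch of the kinds of checks a receiving system might run on an incoming lab result. The record shape, the tiny UCUM sample set, and the `validate_lab_result` helper are all illustrative assumptions for this example; a production validator would check against the full LOINC table, a real UCUM grammar parser, and the current SNOMED CT release, not hard-coded lists.

```python
import re

# Illustrative sample of UCUM-valid unit strings; a real system would
# validate against a full UCUM parser, not a hard-coded set.
UCUM_SAMPLE = {"mg/dL", "mmol/L", "g/L", "10*3/uL", "%"}

def validate_lab_result(result: dict) -> list[str]:
    """Return a list of data-quality problems found in one lab result.

    Expects a simplified record like:
      {"loinc": "2345-7", "value": 95, "unit": "mg/dL",
       "qualitative_snomed": None}
    """
    problems = []

    # LOINC codes look like "2345-7": digits, hyphen, one check digit.
    # This is only a shape check; it does not confirm the code actually
    # exists in the published LOINC table.
    loinc = result.get("loinc") or ""
    if not re.fullmatch(r"\d{1,5}-\d", loinc):
        problems.append(f"missing or malformed LOINC code: {loinc!r}")

    # Quantitative results should carry a UCUM-compliant unit.
    if result.get("value") is not None:
        if result.get("unit") not in UCUM_SAMPLE:
            problems.append(f"unit not UCUM-compliant: {result.get('unit')!r}")

    # Qualitative results (e.g. "positive") should be coded to SNOMED CT,
    # whose concept IDs are 6- to 18-digit integers.
    snomed = result.get("qualitative_snomed")
    if result.get("value") is None:
        if not (isinstance(snomed, str) and snomed.isdigit()
                and 6 <= len(snomed) <= 18):
            problems.append("qualitative result not coded to SNOMED CT")

    return problems

# A result with a free-text unit fails, mirroring the UCUM gap noted above.
print(validate_lab_result({"loinc": "2345-7", "value": 95, "unit": "MG/DL."}))
```

Even checks this shallow would catch a large share of the malformed results in the study above before they reach a chart.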
When the information exchanged doesn’t use the same language, care providers are unable to trust that what they’re relying on is complete and accurate. Further, it can lead to inappropriate treatment, missed diagnoses, and insurance denials. That’s not just a technical problem — it’s a clinical risk.
Here's how we can put those findings to work.
Real data usability to prevent garbage data-out
Most physicians worry that they won't be able to find their patients in their systems. Increasingly, the opposite problem arises: there are too many records for a patient, and duplicates can be hard to identify without additional knowledge or context. This means solving for real data usability is no longer just about casting a wider net; it is also about thoughtfully validating the data pool and surfacing the relevant information within it.
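As an illustration of why duplicates are easy to create and hard to spot, here is a minimal sketch of naive record matching on normalized name and date of birth. The record fields and the `dedupe_key` helper are assumptions for the example; real enterprise master patient index (EMPI) matching is probabilistic and weighs many more identifiers.

```python
from collections import defaultdict

def dedupe_key(record: dict) -> tuple:
    """Build a simplistic match key: normalized last name, first initial, DOB.

    Hypothetical record shape: {"first": "Jon", "last": "O'Neill",
    "dob": "1970-03-02"}. Real matching also weighs address, phone, sex,
    and fuzzy string similarity rather than an exact key.
    """
    last = "".join(ch for ch in record["last"].lower() if ch.isalpha())
    first_initial = record["first"].strip().lower()[:1]
    return (last, first_initial, record["dob"])

def find_possible_duplicates(records: list[dict]) -> list[list[dict]]:
    """Group records sharing a match key; each group is a duplicate candidate."""
    groups = defaultdict(list)
    for rec in records:
        groups[dedupe_key(rec)].append(rec)
    return [group for group in groups.values() if len(group) > 1]

patients = [
    {"first": "Jon", "last": "O'Neill", "dob": "1970-03-02"},
    {"first": "Jonathan", "last": "ONeill", "dob": "1970-03-02"},
    {"first": "Mary", "last": "Smith", "dob": "1984-11-20"},
]
print(find_possible_duplicates(patients))  # flags the two O'Neill records
```

Even this toy shows the tension: "Jon" and "Jonathan" only match because the key truncates to an initial. A stricter key misses true duplicates; a looser one over-merges distinct patients. That is exactly why validation, not just collection, has to be part of usability.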
For data to truly improve clinical care, usability efforts must bring together all of the information available on a single patient and combat the garbage-in, garbage-out dilemma through automatic data cleansing. Understanding where data falls short on standards alignment and compliance gives us concrete steps to take, starting with mapping the breadth and depth of the various data standards in play and working out how to homogenize records across them.
Good data management ingests data in all of these varied standards, then cleans, normalizes, de-duplicates, and validates it. Specific functionality like terminology management can take patient records from confusing to clear, as sketched below.
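Here is a sketch of what terminology management can look like in practice, assuming a hypothetical local-to-LOINC code map and a small unit-normalization table. Real terminology services maintain curated, versioned maps across entire code systems; the point here is the shape of the operation.

```python
# Hypothetical local-to-standard maps; a real terminology service curates
# these across whole code systems and keeps them versioned.
LOCAL_TO_LOINC = {"GLU": "2345-7", "HGB": "718-7"}   # local lab codes
UNIT_NORMALIZATION = {"MG/DL": "mg/dL", "mg/dl": "mg/dL",
                      "G/DL": "g/dL", "g/dl": "g/dL"}

def normalize_result(result: dict) -> dict:
    """Rewrite a locally coded lab result into standard vocabulary.

    Input like {"local_code": "GLU", "value": 95, "unit": "MG/DL"};
    output carries a LOINC code and a UCUM-cased unit, flagging anything
    that could not be mapped for human review rather than guessing.
    """
    normalized = dict(result)
    normalized["loinc"] = LOCAL_TO_LOINC.get(result["local_code"])
    normalized["unit"] = UNIT_NORMALIZATION.get(result["unit"], result["unit"])
    normalized["needs_review"] = normalized["loinc"] is None
    return normalized

print(normalize_result({"local_code": "GLU", "value": 95, "unit": "MG/DL"}))
# {'local_code': 'GLU', 'value': 95, 'unit': 'mg/dL', 'loinc': '2345-7',
#  'needs_review': False}
```

The key design choice is to flag unmappable codes for review rather than guess at them; silent best-effort mapping is precisely how garbage-in becomes garbage-out.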
Combining better data with smart user experiences in EHRs surfaces the essentials clinicians need to do their best work, while also helping address the overwhelming administrative challenges physicians face.
The path from massive amounts of patient data to data refinement to data usability is complex but necessary. Let’s not let HITECH and TEFCA™ alone define a usable national health data sharing network.
For trust to be restored, we as an interoperability community must remain focused on elevating data clarity for better usability, earning doctors’ respect instead of their enduring ire.
Liz Lewis is director of product at CommonWell Health Alliance. Stephanie Broderick is SVP, provider solutions at Clinical Architecture.