Why Did Two Very Similar EHR Satisfaction Surveys Produce Two Very Different Results?

Aug. 26, 2015
I became quite intrigued when I read two surveys on electronic health records (EHRs) over the past few weeks that garnered very different results.

First, a 51-page report from online resource organization AmericanEHR and the American Medical Association (AMA) found that, compared to five years ago, more physicians are reporting being dissatisfied or very dissatisfied with their EHR system. The 155-question survey of 940 providers found that roughly half or more of all respondents reported a negative impact when asked whether their EHR system improved costs, efficiency, or productivity. In a similar survey conducted by AmericanEHR five years ago, the majority of respondents said that overall they were satisfied or very satisfied with their EHR system, with 39 percent satisfied and 22 percent very satisfied. In the current survey, the majority of respondents indicated that overall they were dissatisfied with their EHR system, with only 22 percent indicating they were satisfied and 12 percent indicating they were very satisfied.

By contrast, Black Book Market Research's annual EHR satisfaction survey found that 71 percent of all large-practice clinicians said their optimization expectations of top-ranked Black Book EHR vendors were being met or exceeded, according to physician and clinician experience. Eighty-two percent of administrative and support staff reported improved operational and financial performance as well. Comparatively, in 2013, 92 percent of multispecialty groups using electronic records were "very dissatisfied" with the ability of their systems to improve clinical workload, documentation, and user functionalities. More than 27,000 EHR users participated in this year's Black Book polls of client experience, a sweeping five-month study.

Now it’s important to remember a few things when it comes to surveys of this ilk. For one, surveys in general often generate varying results, even on the same topic. In healthcare specifically, where physician mandates are plentiful, there are bound to be diverse opinions on a single matter. Take ICD-10, for instance—some surveys rate the level of physician readiness as “optimistic,” while others find that doctors are quite “uncertain” about the transition. Depending on the physician, the timing, and the resources and size of the practice, you might get all kinds of different answers. Such is the risk you run when you report on surveys. You also have to ask yourself: Are these respondents self-selected in any way? Is there any preconceived bias we should know about? Sometimes, it’s just impossible to get down to the nitty-gritty unless you conduct the survey yourself.

Looking at these two surveys and their results, you can come up with a few reasons why they might have produced different sentiments. For one, scanning over the methodology in the AmericanEHR survey, the press release reads, “AmericanEHR Partners uses a 155-question online survey to collect data on clinicians’ (Physicians, NPs, and PAs) use and satisfaction with EHRs and health information technology. The survey uses skip logic to present individuals with questions that are most relevant to them, and takes an average of 20 minutes to complete. Respondents are allowed to skip questions or indicate that they do not know the answer to the question. The core survey has been in use since 2011.”

It goes on to say, “Surveys conducted by AmericanEHR Partners in conjunction with the American Medical Association, American College of Physicians and American Academy of Family Physicians… Each society was allowed to select the population of their members to receive the survey. Information about EHR use by individual society members was not available. Therefore, the survey went to both users and non-users of EHRs. All respondents completed the same survey.”

What sticks out to me are the sentences, “Each society was allowed to select the population of their members to receive the survey. Therefore, the survey went to both users and non-users of EHRs.” Thinking about what I said earlier regarding self-selected surveys, I immediately question the validity of this survey. For one, AMA President Steven J. Stack, M.D., has been quite outspoken when it comes to EHRs. Just recently, during an AMA-hosted town hall meeting in Atlanta on July 20, Dr. Stack said, “This is not for you to hear me talking to you, but for me to hear you talking to me... Has workflow in your office changed?" What’s more, in January, a similar AAFP report came out finding that doctors are being forced to switch EHRs and that there is widespread dissatisfaction among physicians who have switched.

Now I am not implying that the doctors surveyed in these physician-based associations are liars or are misrepresenting themselves in any way. That being said, if the given society is allowed to select who will be answering these questions, knowing that the group isn’t fond of EHRs in their current form, it would certainly behoove them to find those clinicians who agree with their general sentiment, would it not? That’s why you have to take surveys like this one with a grain of salt—you just don’t know who or what agenda is behind them.

Meanwhile, the Black Book survey polled a larger sample—27,000 EHR users participated in this one compared to fewer than 1,000 in the AmericanEHR survey. Additionally, 6,000 study participants who have not yet fully implemented, or are not yet using, enterprise electronic health records provided insight on budgeting, adoption plans, factors driving EHR decisions, and vendor awareness. That tells me that while Black Book could have polled those non-EHR users on their satisfaction level, it instead did the right thing and asked the right questions of the appropriate people. Comparatively, the AmericanEHR survey asked everyone the same thing, regardless of where they were in terms of EHR implementation.

Furthermore, it should be noted that the Black Book survey looked at larger physician practices, while the AmericanEHR/AMA survey looked at smaller practices, including solo doctor offices. In a recent article on this very topic in Medscape, Doug Brown, managing partner of Black Book, said, "Larger physician organizations are much more satisfied because of their resources and the offerings of larger EHR firms. Smaller practices bought inexpensive and/or free EHRs for meaningful use incentives with little or no support. That is a recipe which generates vocal, critical users.”

Also in the Medscape story was a statement from AMA defending its report. The statement read that the findings are “representative and consistent with what we have heard anecdotally from the vast majority of America's physicians. The Black Book report also more narrowly focuses on more innovative cloud-based electronic health records. Several of the report's findings directly spotlight the barriers and strong dissatisfaction physicians find when using EHRs.”

These might be fair statements from AMA, but at the end of the day I maintain that it’s easiest to “find” those who are angriest and most vocal. I’m also not claiming that EHRs are perfect, or anything close to it. In fact, the Black Book survey itself found significant decreases in satisfaction among users of several clinic-oriented EHR vendors that fell short in regional connectivity attempts (76 percent), implementation and training (77 percent), and customer support (85 percent).

As with most things in life, EHR satisfaction isn’t a black-and-white issue—there is plenty of gray area that needs to be considered. I’m sure that plenty of small practices have real trouble implementing technology, and I hope that can be addressed and changed sooner rather than later. But I also hope that survey conductors do their absolute best to remove as many biases and agendas as possible. After all, the last thing the health IT world needs is more confusion.

As always, feel free to leave comments or follow me on Twitter.
