AI Now? Really?

March 15, 2017

Once again, I am failing Fitzgerald’s test of first-rate intelligence: holding two opposed ideas in the mind at the same time and still retaining the ability to function. It happens frequently. This week the cause is artificial intelligence (AI) for healthcare. I blame HIMSS 2017.

HIMSS always has three types of topics: Perennial Favorites, Freak-outs and the Next Big Thing. Perennial favorites are the usual suspects like population health, interoperability, analytics, telehealth and rev-cycle optimization. Freak-outs are the annual “big scary thing we all need to freak out about immediately.” This year it was cybersecurity, and the freak-out seems mostly justified and a bit overdue. Lastly, we have the coveted “Next Big Thing,” or NBT. NBT is always transformational and will “change everything.” It’s bright and shiny and it’s coming “real soon.” NBT often involves a black box that most of us can’t really understand, one that requires a high degree of trust in the domain experts. For 2017, it’s AI for healthcare – specifically the near-term application of AI as an independent actor when it comes to diagnosis and treatment.

The definition of AI is malleable, but the premise seems straightforward. Healthcare is a vast, dense and complex subject matter. The human mind, for all its power, has limitations. We get tired and distracted. We rush. We forget or we are ignorant. We can only hold a limited number of facts or ideas in our heads at any moment. We are prejudiced and exercise bad judgment. AI can, in theory, mitigate some of these inherent human limitations. Functioning as a kind of “auxiliary brain,” AI will help us think more clearly and completely. Some contend that eventually the roles will reverse and the humans will become the auxiliary brain or maybe just the “muscle” that delivers AI-directed care. Or perhaps the robots will do that part too. I’ll be at the Tiki Bar if you need a human.

All of this makes sense from a purely academic, theoretical perspective. The limits of human capacity and their impact on healthcare outcomes are real and well documented. There is something very, very appealing about the notion that AI could make us smarter and less prone to error. I can easily hold that single idea in my head and still function. I even think it’s plausible and close at hand when I see what can be done today with predictive analytics. It’s when I begin to think about the larger context that my head starts to hurt. Three specific issues make me skeptical about AI for healthcare at this juncture: opportunity cost, high reliability, and zebras.

Healthcare is subject to limited resources, so every strategic choice carries an opportunity cost: “if we do this, then we can’t do that.” I worry the opportunity cost of pursuing AI right now is too high. There are many other important issues and opportunities of higher priority. There is low-hanging fruit when it comes to better care, lower cost and higher satisfaction that does not require highly advanced IT or biomedical technology. Perhaps it’s the former family doc in me, but it seems like we have plenty we could do to improve the care of chronic diseases like diabetes and hypertension. It’s also clear we must improve at recognizing and treating acute conditions like sepsis, heart attacks and strokes. And don’t forget better prenatal care, prevention and wellness. Or palliative and end-of-life needs. This is where the vast burden of illness, suffering and costs lies and where we often fall short on best practices and evidence-based care. AI likely has little of immediate value to offer here and can divert resources and attention from these harder (and frankly less sexy) needs. And make no mistake, AI does require significant resources: hardware, software, people and time.

My second big concern relates to high reliability in healthcare. When my kids roll their eyes and say, “Yeah Dad, I know,” my usual response is, “There’s knowing and there’s doing. They are not the same.” The hallmark of high-reliability organizations (HROs) is that they both know and do consistently. For example, it’s not enough to “know” how to recognize and treat sepsis. You must “do” by delivering the right care in a consistent and reliable way every time. The goal is zero defects in care. Now I can see where AI might help me recognize sepsis sooner or consider some nuance in the treatment based on unusual circumstances. That’s the “knowing” part. But I think it’s a stretch to claim it will make the mechanics of the delivery system, the “doing,” more consistent. That has far more to do with workflow, culture and teamwork, equipment, training and a host of other technical and soft-skills issues. Airplanes and nuclear power plants are safe because they take a comprehensive, evidence-based, well-resourced approach to high reliability. Healthcare needs to do the same. It’s not clear that AI has much to offer here in the near term.

Which brings us to the zebras. There’s an old expression in healthcare: “When you hear hoofbeats, think horses, not zebras.” It’s a clever way of making the point that “common things occur commonly.” Sure, that patient with new hypertension may have pseudopheochromocytoma, an extremely rare cause of high blood pressure: that’s a zebra. But it’s far more likely to be “the horse”: routine essential hypertension. I expect AI will help us remember to consider the zebras and determine if they are worth evaluating, and that’s a good thing. But my understanding is that this is an infrequent problem. Most medical errors occur not because rare diagnoses are missed but because we fail to recognize the commonplace or simply screw up the process of delivering care. AI may help, but I suspect it will be on the margins.

So, I am failing Fitzgerald’s test and feeling like a bit of a Luddite when it comes to AI for healthcare. It’s not that I am opposed to it or can’t see any benefit. But, having been both perpetrator and victim, I also understand how seductive visions of the NBT can be. How lovely to think we could have a “silver bullet” that will cleanly and easily deal with the complex problems of healthcare. But that’s not how it works. Real progress requires that we do the hard work of dealing with the basics first. There’s no technological shortcut, only tech-enablement. Except for a small number of well-funded, leading-edge organizations, AI should not be a strategic priority. Or as one of my colleagues at HIMSS said, “I don’t think we need AI right now. There’s still plenty to be done with normal intelligence.”

Dr. Dave Levin has been a physician executive and entrepreneur for more than 30 years. He is a former Chief Medical Information Officer for the Cleveland Clinic and serves in a variety of leadership and advisory roles for healthcare IT companies, health systems and investors. You can follow him @DaveLevinMD or email [email protected].
