The Real AI Risk Isn’t the Tech: It’s That We Don’t Know What Problem We’re Solving
Recently, a cardiologist approached our IT team with what seemed like a simple and even forward-looking request: connect a promising new AI tool that could help him interpret echocardiograms faster. He had the budget and motivation to make the most of it, but he didn’t see why anyone else in the organization needed to weigh in. After all, it “only affected him.”
That interaction, one of many I've had in my past roles as a chief medical information officer, captures the double-edged sword of healthcare's accelerating relationship with AI. As an industry, we've embraced electronic health records, data exchange, telemedicine, and now artificial intelligence with the hope of fixing our system's chronic inefficiencies. But in the rush to adopt these tools, something essential is being overlooked. It isn't the risk of data breaches or the loss of the human touch, although those are significant concerns too. It's that AI, for all its potential and its already realized applications in healthcare, is not being used to solve problems in a focused or deliberate way.
In short, the real risk of AI in healthcare isn’t rogue algorithms; it’s the vagueness with which we deploy them.
The lack of clarity shows up in the form of point solutions that promise quick wins but often lack strategic direction: an ambient scribe that listens in and transcribes patient visits, an AI that flags abnormalities in diagnostic images, or a chatbot that drafts responses to patient messages. These tools often solve narrow problems and claim to save clinicians' time. Many of them do! However, the ease with which they are procured, sometimes by individual departments or even individual clinicians, means they can sidestep thoughtful governance.
I have seen it firsthand. Many AI tools are designed to streamline specific parts of the healthcare workflow. Yet behind the scenes, they may create downstream operational effects, draw on sensitive patient data, or introduce security or compliance risks. A tool can end up solving a problem no one agreed was a problem while creating several new ones. These aren't just technical oversights. They are symptoms of a deeper strategic issue: poor governance.
AI governance is typically framed as a post-purchase activity: validation, bias monitoring, or risk mitigation. Of course, these are all important. But governance should begin much earlier and with clear-eyed intentionality about why the tool is needed in the first place. What clinical problem are we trying to solve? Who is the beneficiary? How will we measure success? What is the AI vendor doing with our patients’ data?
I often return to a principle I used while designing digital workflows: make it easy to do the right thing. But before you can make the right thing easy, someone has to define what the right thing is. That’s the hard part.
Take the often-cited example of an insurance company that used AI to determine which patients should receive additional home care services after hospital discharge. The algorithm used cost as a proxy for need. On paper, that seemed reasonable: patients who tended to cost more after discharge were presumably at higher risk for readmission and would therefore benefit most from additional home care services. In practice, lower-income patients were offered services less often, not because they needed them less, but because they had historically accessed less follow-up care due to systemic barriers. The model was biased not in its data, but in its assumptions about what problem it was solving.
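To make the proxy problem concrete, here is a minimal, purely illustrative sketch in Python. The numbers are hypothetical and the setup is deliberately simplistic; the point is only to show how equal need plus unequal access to care produces unequal historical cost, and therefore unequal scores:

```python
# Illustrative sketch (hypothetical numbers) of cost-as-proxy bias.
# Assume two patient groups with identical clinical need, but Group B
# historically accessed less follow-up care due to systemic barriers.

underlying_need = {"Group A": 0.60, "Group B": 0.60}  # true readmission risk
care_access     = {"Group A": 1.00, "Group B": 0.50}  # fraction of needed care received

# Historical spending is roughly need * access: equal need, unequal cost.
historical_cost = {g: underlying_need[g] * care_access[g] for g in underlying_need}

# A model trained to predict cost ranks Group A as "higher need,"
# even though both groups are equally sick.
for group, cost in sorted(historical_cost.items(), key=lambda kv: -kv[1]):
    print(f"{group}: cost-proxy score = {cost:.2f}, true need = {underlying_need[group]:.2f}")
```

Any model trained on those cost figures will faithfully reproduce the access gap and report it as a difference in need.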
Without clearly articulating our goals, we risk automating—and sometimes amplifying—inequities and inefficiencies under the guise of innovation. So, how should health systems proceed?
First, ask better questions before buying AI tools. Is there a specific problem you are trying to solve? Do you have the right data to support this use case? Can it be integrated meaningfully into existing clinical workflows? If the answer is no to any of the above, it doesn’t matter how advanced the technology is. It will become another dashboard no one checks, another burden on already overstretched clinicians.
Second, get serious about putting governance into practice. That means establishing cross-functional AI committees or advisory groups that are not limited to technology experts. They should include clinicians, data scientists, ethicists, and operational leaders. These groups should meet regularly, define standards, review tools before implementation, and revisit them after six or twelve months to ask: Is this tool still doing what we thought it would? Is it still solving a clearly defined problem?
Finally, be honest about the limits of AI and get real about who remains accountable. Clinicians are, and must always be, the decision makers. AI can assist, draft, and suggest, but it does not replace human judgment. That's not a shortcoming of AI; it's a design feature of safe healthcare.
If a tool gets it right 99 percent of the time, a doctor still must review its output 100 percent of the time. Why? A known 1 percent error rate is unacceptable when dealing with people's lives, and we do not know which 1 percent will be the exception. But if reviewing and correcting that work takes more time than starting from scratch, then the tool isn't solving a problem; it's creating one.
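The arithmetic is easy to check. The sketch below uses hypothetical times, not measurements from any particular tool, to show where the break-even point sits for a "time-saving" drafting assistant:

```python
# Back-of-the-envelope check (hypothetical numbers) for whether a
# "99 percent accurate" drafting tool actually saves clinician time.

from_scratch_min = 3.0   # minutes to draft the note without the tool
review_min       = 2.5   # minutes to verify every AI draft (100% review)
fix_min          = 6.0   # minutes to rework the drafts the AI gets wrong
error_rate       = 0.01  # the known 1 percent failure rate

# Expected time per note with the tool: every draft is reviewed,
# and 1 percent also require a full rework.
time_with_tool = review_min + error_rate * fix_min

print(f"With tool:    {time_with_tool:.2f} minutes per note")
print(f"From scratch: {from_scratch_min:.2f} minutes per note")
# If the mandatory review alone approaches the from-scratch time,
# the tool is net-negative no matter how accurate it is.
```

With these particular numbers the tool still comes out ahead, but shrink the gap between review time and from-scratch drafting even slightly and the savings vanish.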
AI’s potential in healthcare is real and vast, yet so are the risks of uncritical, unfocused adoption. If we rush to deploy these tools without pausing to ask what problems we’re solving and for whom we are solving them, we may find ourselves with a system that is indeed more automated, but no more intelligent.
About the Author

Craig Joseph, MD
Craig Joseph, MD, FAAP, FAMIA, Chief Medical Officer at Nordic Consulting, is a board-certified pediatrician and clinical informatician with over 30 years of experience in healthcare and IT. He previously worked at Epic, contributing to EHR development and implementation, and later served as CMIO at several major health systems. At Nordic since 2020, he champions human-centered design to enhance clinician and patient outcomes. In 2023, he co-authored “Designing for Health: The Human-Centered Approach.” He is a Fellow of both the American Academy of Pediatrics and the American Medical Informatics Association and remains actively board-certified in both specialties.
