Yasir Tarabichi, M.D., is chief health AI officer at MetroHealth, the Cleveland-based public health system. He is also CMIO at the Cleveland-based Ovatient, which provides care coordination for primary care, urgent care, and behavioral care using a unified tech platform. Dr. Tarabichi sat down with Healthcare Innovation Editor-in-Chief Mark Hagland during ViVE25, taking place this week at the Music City Center in Nashville, to discuss the real state of artificial intelligence adoption in patient care organizations in this moment. Below are excerpts from that interview.
After a long period of hype and high expectations, where are the leaders of patient care organizations right now in terms of really moving forward on AI development?
It depends on where you are as an organization on the innovation curve. The organizations that jumped ahead spent a lot of time, energy, and money figuring it out, and probably helped everyone else save some time that way. My role at MetroHealth is to identify opportunities and guide the organization strategically so we don’t squander resources, and so that we’re investing in and buying resources that work for us. So what is the actual value proposition or ROI [return on investment]? Sometimes, the ROI is that it makes your clinicians better-adjusted. And that’s great, but the organization might say, that’s nice, but can you see more patients?
And within the reimbursement environment, we have to think carefully in terms of ROI. I co-chair the AI advisory committee at MetroHealth with a business partner, as a dyad. We cross-pollinate. So I talk about risk from a clinical perspective; he reminds me about the operational issues: this could hurt us financially, that could hurt us strategically. So those risks are parallel to the clinical ones, but different. So we want to see what’s out there and figure out what we’re solving for. Can we be a little better informed, rather than trying something de novo? We need to pick solutions that serve all those goals.
What are a few of the initiatives you’re working on right now?
We’ve done a bunch of predictive analytics in the clinical space. We’ve built models and evaluated them, and we want to do so in an equitable fashion. Here’s one example: a common issue is access to care in clinics, and a common response is that systems overbook patients, which is honestly a terrible idea. In a zero-sum system, those already behind are most poised to lose. As soon as you say, this individual is at a high risk of not showing up—and they might be a person of color, disadvantaged, etc.—then what do they get if they do show up? A terrible patient experience: they’re upset, the clinician is upset.
I would posit that double-booking patients for clinic appointments is a very bad solution to a problem, because it exacerbates disparities. We’re a community-based safety-net system, and we believe that if you make an appointment, that appointment is yours. And we have all these phone calls, SMSs, patient portal messages going to patients, but some patients simply do not respond. So what can we do? Call them. It turns out that there’s a segment of the population, mostly Black, that has a high rate of no-shows. So if we double-book appointments, it is that group of patients that will tend to be disadvantaged. But they will pick up the phone if we call them.
As a result, we’ve implemented a solution with a standardized pathway, paired with phone calls. And in doing so, we’ve reduced the no-show rate in the African-American community by 15 percent.
In other words, you paired AI-facilitated data analysis with a relatively low-tech action—meaning, telephone calls.
Yes, that’s correct: the question is, how does the technology work in the real world, with our patients on the ground? And we can predict anything, but what does that mean? It doesn’t tell me what I need to do. The solution is not the technology. At this point in time, we’re done being enamored and excited by the tech; we have to make it work. It’s a high-tech, high-touch approach.
How would you characterize this moment in terms of generative AI adoption and development?
I’m probably less excited about where the large language models have landed today; they’ve stagnated. What I can say is that generative AI is best for two things: one is ambient listening, and the other is augmented information retrieval from a busy, terrible EHR [electronic health record]. An example on the Ovatient side is how we’ve handled the use of antibiotics. The classic situation is when a patient comes to a physician with a potential urinary tract infection, and the physician orders a prescription for an antibiotic but says to the patient, “OK, I’ve ordered a prescription for an antibiotic, but wait until your UTI test proves positive to take it, OK?” Well, what does the patient do? They automatically start taking the antibiotic. But with generative AI, as a physician, I can screen the interaction using predictive analytics that can estimate whether a patient’s symptoms match a UTI, in advance of testing.
What’s going to happen in the next few years, particularly around generative AI?
The technology is going to get cheaper and more accessible, and the next step will be to ask why we’re using it. So I think that if you’ve swept up all the information in the EHR and understood the best practices and protocols, then, given the knowledge base of medicine, which was hard to code into protocols, there’s an opportunity to leverage LLMs to move forward in that area. And the generative AI players will knock on that door. And if you can install agentic AI into a patient portal, and it can book an appointment for you, that creates an arms race, with EHR vendors trying to make for a better experience.
An agent could reformat and make things faster for you; it will curate the experience to my liking. I’m looking forward to that and to patients being more empowered. And I also think a lot about access. Access in navigating healthcare is tough, and it sucks. Unless a patient has a full-time coordinator at their side helping them with every step, navigating care is a struggle; that coordination is another opportunity. But agentic AI will have to understand the system. Still, we need to fix the broken healthcare delivery system, too.