At HIMSS25, Thinking About AI and Its Impact on Frontline Clinicians
At HIMSS25, taking place at the Venetian Sands Convention Center in Las Vegas, the discussion at Monday morning's AI Preconference Forum turned to a crucial set of questions around how to engage clinicians and others in the adoption of artificial intelligence (AI) in patient care organizations.
The first panel of the morning, entitled “Navigating AI Integration Through Change Management and Workforce Inclusion,” was moderated by Attila Hertelendy, Ph.D., of Florida International University. He was joined by Spencer Dorn, M.D., M.P.H., M.H.A., of the University of North Carolina at Chapel Hill; Irene Louh, M.D., an adult intensivist at Baptist Health in Jacksonville, Florida; Mark Sendak, M.D., M.P.P., of the Duke Institute for Health Innovation in Durham, N.C.; and Scott Hadaway of ServiceNow.
Hertelendy asked Dr. Dorn about his hopes for AI in terms of improving the work lives and the productivity of frontline physicians, nurses, and other clinicians. “That’s one of the great hopes: we have this magical technology; can we apply it in ways that relieve the burden and the drudgery?” Dorn said. “In many ways, I’m optimistic. But we have to be level-headed and realize that some burden might be relieved, and some new burdens might be added as well.”
“AI is so promising for healthcare, for our workforce and teams,” Dr. Louh said. “The core of being a healthcare provider is that we want to care for our patients and really improve patient health. Over time, healthcare has made that more difficult because of its structure and function, so any way we can really relieve that burden is important. There are a lot of opportunities in leveraging AI, so this is a really exciting time to be in healthcare and healthcare IT.”
“Most of the use cases that I have worked on, putting AI into clinical practice, do try to relieve some of the clinical load for frontline physicians,” Dr. Sendak emphasized. “So one of the first use cases for us was identifying gaps in care for patients with advancing kidney disease and other chronic diseases, trying to help the primary care doc in managing care and making sure folks are getting referrals, prescriptions, and so on, as well as identifying emerging sepsis.”
“How do we create strategies to engage our employees, to prevent skepticism and engage with trust?” Hertelendy asked the panelists.
“Frontline workers should be skeptical of AI, not necessarily cynical, but skeptical; we’ve all been promised so many things in the past,” Dorn said. “I don’t think we should expect clinicians to run to this with open arms. Second, AI is kind of a meaningless term at this point, with so many different technologies being discussed at the same time, so some baseline education could go a long way. And third, aligning around a common goal: why are we engaging with these technologies?”
“I feel there are a few different camps” in her health system, Louh opined. “There’s the camp of people who’ve been sold something that sounds great and are idealistic that it will solve all the world’s ills; and there’s the very skeptical group, who are also burned out on technology, as with the EHR. And I echo Spencer on this: education and awareness are an area where we’ve seen benefit through transparency. We’ve implemented LLMs for draft responses; that’s commonplace now. But it’s important to really level-set with our clinicians and team members so they know that this will take work and partnership. When we create those partnerships with our physicians, nurses, MAs, and staff to really build those models, that will reap rewards. We didn’t go to medical school to do this, so this requires a lot of learning on everyone’s part. And there’s a lot of technology that doesn’t work, so we do need to be skeptical and figure out what works and what doesn’t.”
Responding to a question about the anxiety that many clinicians feel right now, Hadaway said, “Clinicians show up with a huge burden on their backs. And now they have to talk to an AI that they may believe is smarter than they are or has access to more information. And it does feel like a black box. And we have to be able to provide transparency” into how AI really works.
“Are you hearing concerns about job loss?” Hertelendy asked. “Let’s take a step back,” Dr. Sendak said. “I am confident that we’re looking at a nine-figure shortfall in our organization. But the question is not going to be, will AI take my job, but instead, will my job be eliminated, with AI used to fill the gap when people are eliminated? I’m married to a frontline primary care physician. We’re in a dire shortage of behavioral healthcare services,” among other shortages, he noted.
“There’s another piece, and it gets minimized,” Louh said. “We have a nursing shortfall in this country; we have a physician and provider shortfall in this country. And in certain ways, we don’t have a choice. It is real: people are worried about losing their jobs. And change is hard for people. But can we think about AI in a way that really solves some of these problems? At the end of the day, we’re all human, and we need the investment and the architecture to solve this.”
“I think less about replacing healthcare workers, though there is a risk for certain highly repetitive tasks that machines can approximate; it’s more likely that we’ll all continue to work, but the nature of our work will change,” Dorn noted. He went on to say, “One of my favorite studies from JAMA last year found that models can outperform physicians, but it turns out that most physicians were using the large language models like search engines, and they’re not actually search engines. So we need to help people understand that this is a different class of technologies; having some basic literacy education would help.”
“And how do you create space for your team members who are burdened, and where does that fit in our organization?” Louh said. “About two months ago, we retrained all our nurses on our EHR, which we had been live on for about two and a half years. We wanted to help them level up how they use the EHR. It required space, time, and money. It was very useful and helpful, but it required C-suite-level engagement. And it decreased documentation time for our nursing staff and made them happier; they understood the tools better. We need to do that with regard to AI. Just take the basic predictive model for sepsis: what’s it for? What’s it not for? How do you use it, and how do you critically think about what you’re seeing? Those kinds of concepts are really important.”
“How can we build solutions for our frontline clinicians?” Sendak asked. “It’s unrealistic to me to think that every primary care doctor should be doing independent due diligence on algorithms. There’s a behavioral health crisis among our youth, and vetting algorithms is not something that frontline clinicians should be doing on top of that. I’ve seen a positive ripple effect, where we’ll create an algorithm for a particular use case, and then other groups will adopt similar strategies. And that is classic innovation strategy. At a national level, we’re seeing a massive digital divide, with maybe a few dozen organizations, such as Duke, UNC, and New York Presbyterian, that are in a network and are advanced. But how do we help safety-net hospitals, critical-access hospitals, and federally qualified health centers adopt technology? And how do we help leaders make decisions to help their frontline caregivers?” Helping patient care organizations across the U.S. healthcare system to adopt AI effectively will be crucial, he emphasized.