One of the highlighted speakers at HIMSS25, the annual conference of the Healthcare Information & Management Systems Society, was Dennis Chornenky, CEO of the Washington, D.C.-based Domelabs AI, an advisory and technology firm that provides advanced AI governance services and solutions and deploys governance infrastructure for agentic AI. On Monday, March 3, Chornenky spoke about agentic AI during the AI Preconference Forum.
Following the conference, Healthcare Innovation Editor-in-Chief Mark Hagland spoke with Chornenky to drill down a bit more into the subject of agentic AI. Below are excerpts from their conversation.
Many people are unclear about what agentic AI is; how would you define it?
The words “agentic” and “agency” are important to focus on; there’s a lot of confusion around them. Some folks are trying to define agentic AI as being different from AI agents, but agentic AI simply refers to AI methodologies that make use of AI agents. It’s just a broad term; the two terms are totally interchangeable.
The emergence of the concept of agentic AI seems somewhat recent. Is that correct?
Actually, the terms have been around for years; they’ve just become more popular recently, because we’ve entered a time when we can leverage large language models, LLMs, as core components of AI agents. An AI agent leveraging large language models can accomplish many things previously envisioned, but which couldn’t really succeed earlier. It’s really about giving a machine a goal, and giving it the capability to try out different methods and make use of different resources to achieve that goal. If a process is very static, that’s not really an AI agent; it’s more of an AI workflow. With AI agents, there’s a lot more agency and variability in how they might achieve a goal; that’s also why they introduce new risks.
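The workflow-versus-agent distinction can be made concrete in a few lines of code. The following is a minimal sketch only, in which every name (the `llm_call` placeholder, the toy `TOOLS` registry, `run_agent`) is invented for illustration and does not correspond to any real vendor API: a workflow runs fixed steps, while an agent chooses its next action at each turn.

```python
# Minimal sketch of the workflow-vs.-agent distinction. All names here
# (llm_call, TOOLS, run_agent) are invented for illustration, not a real API.

def llm_call(prompt: str) -> str:
    # Placeholder for any large language model; the canned reply keeps
    # the sketch runnable end to end.
    return "FINISH: (model output would appear here)"

# A static AI *workflow*: the steps are fixed in advance.
def summarize_workflow(document: str) -> str:
    return llm_call(f"Summarize this document:\n{document}")

# An AI *agent*: given a goal, it chooses among tools and iterates.
TOOLS = {
    "search_records": lambda q: f"records matching {q!r}",
    "draft_email": lambda text: f"draft: {text}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model picks the next action; this variability is the
        # "agency" -- and the new risk -- described above.
        decision = llm_call(
            f"Choose a tool from {list(TOOLS)} or reply FINISH:<answer>.\n"
            + "\n".join(history)
        )
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        tool, _, arg = decision.partition(" ")
        history.append(f"{tool} -> {TOOLS[tool](arg)}")
    return "stopped: step budget exhausted"

print(run_agent("Find and summarize yesterday's lab results"))
```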
What differentiates it in many ways from generative AI is that rather than simply creating content or ideas, it can take action to achieve specific goals. It can also be much more multimodal, taking in different types of data, whether visual, audio, financial data sets, or healthcare data sets. And it can involve multimodality in its outputs, creating more diverse outputs than ChatGPT, which primarily produces text. These are the early steps we’re seeing towards multimodality. For example, most LLMs cannot create PowerPoint presentations themselves, whereas an AI agent will have more capability to create a PowerPoint deck, or to create an email and send it to someone. And so we’ll have to decide on the scope or latitude of freedom the agents should have to achieve goals.
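That “scope or latitude of freedom” is, in practice, a policy decision that can be encoded in software. Here is a hedged sketch of one way it might look, assuming an invented `ActionPolicy` class and made-up action names: an allow-list for low-risk actions plus a human-approval gate for anything with outside effects, such as actually sending an email.

```python
# Hedged sketch of "scope or latitude of freedom": an allow-list of actions
# plus a human-approval gate for anything with outside effects. The action
# names and the ActionPolicy class are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class ActionPolicy:
    allowed: set = field(default_factory=lambda: {"draft_email", "make_slides"})
    needs_approval: set = field(default_factory=lambda: {"send_email"})

    def authorize(self, action: str, human_ok: bool = False) -> bool:
        if action in self.needs_approval:
            return human_ok  # the agent may act only with human sign-off
        return action in self.allowed

policy = ActionPolicy()
print(policy.authorize("draft_email"))                # True: drafting is allowed
print(policy.authorize("send_email"))                 # False: sending is gated
print(policy.authorize("send_email", human_ok=True))  # True: human approved
```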
How will the introduction of agentic AI change what’s developed on the clinical side?
It’s important to keep in mind that the development of really effective AI agents will be a bit slow; it poses a complex design challenge, along with challenges around understanding risk. It may be a while before we see a real uptick in adoption. Right now, no major health systems are fully using AI agents, though there have been some pilots. But I think we will start seeing packages of capabilities and modalities that prove efficient and useful and drive value. Some areas will be very ripe for agentic AI; what we’ll need to see there is probably the right set of design choices in terms of which tasks and workflows are best suited to a particular combination of technologies. When we think of agents, we should be thinking about combinations of technologies, and a particular combination may be needed to execute precise tasks and to integrate processes efficiently.
The other dimension involves pairing human interactions with machines. There’s so much variability that humans can handle that will need to be programmed into agents. So initially, the challenge will be effectively pairing an agent with a human being. And the agents will need good memory to capture and standardize certain behaviors that the human wants from them: to write certain emails when it’s appropriate, for example, or to refrain when it’s not. What I don’t want is to have to train the agent every single day; if I have to keep telling it the same nuances, that’s like having an intern who never learns the job. So the memory component is very important; the other component is precision.
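The memory component Chornenky describes can be illustrated with a very small sketch: a persistent preference store, so that a correction made once survives every later session. The file name, keys, and helper functions below are invented for illustration; a production agent would use a proper user profile or memory service rather than a local JSON file.

```python
# Sketch of the "memory component": a tiny persistent preference store so a
# correction made once survives across sessions. The file name and keys are
# invented; a real agent would use a proper profile or memory service.

import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # hypothetical location

def remember(key: str, value: str) -> None:
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    memory[key] = value
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def recall(key: str, default: str = "") -> str:
    if not MEMORY_FILE.exists():
        return default
    return json.loads(MEMORY_FILE.read_text()).get(key, default)

# Day one: the user corrects the agent once...
remember("email_tone", "brief and formal; no exclamation points")
# ...and every later session starts from that preference, with no retraining.
print(recall("email_tone"))
```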
The fundamental flaw of LLMs is that they’re imprecise. An LLM is basically a heuristic model: what might a human say next? You can ask the same question five times and sometimes get five very different answers. Ask how many “r’s” there are in the word “strawberry,” and many LLMs will frequently get the answer wrong. It’s not a precise mathematical model; it’s a model for predicting things. And so in mission-critical areas such as care delivery, precision is key, so we need additional development, and for AI agents, that component will be very important. You need that reliability and consistency.
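The imprecision point, and one common mitigation, can be sketched in a few lines. The sketch below simulates a model’s sampling variability with random choice (no real LLM is called) and applies self-consistency voting: ask the same question several times and keep the majority answer. It then contrasts that with a deterministic tool call, which returns the same correct count every time.

```python
# Sketch of LLM imprecision and one common mitigation, self-consistency
# voting: sample the same question several times and keep the majority
# answer. The "model" here is simulated with random choice; a real
# deployment would sample an actual LLM.

import random
from collections import Counter

def simulated_llm(question: str) -> str:
    # Same question, varying answers -- mimicking sampling variability.
    return random.choice(["3", "3", "3", "2", "10"])

def majority_vote(question: str, samples: int = 5) -> str:
    answers = [simulated_llm(question) for _ in range(samples)]
    winner, count = Counter(answers).most_common(1)[0]
    return f"{winner} ({count}/{samples} agreement)"

question = 'How many r\'s are in the word "strawberry"?'
print(majority_vote(question))   # usually "3", but only probabilistically

# Contrast: a precise, deterministic tool an agent could call instead.
print("strawberry".count("r"))   # 3, every time
```

Voting raises reliability against random variation but cannot rescue a model that is systematically wrong, which is one reason pairing agents with deterministic tools matters so much in mission-critical settings like care delivery.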
And your organization is helping to lead VALID AI, an execution accelerator for generative AI in healthcare.
Yes, we describe VALID AI as a collaborative of 50+ leading health systems and payors, working to advance responsible adoption of AI in healthcare. VALID provides peer collaboration for senior leaders and develops resources, programs, and best practices for AI governance, cybersecurity, and efficient AI adoption.