AI Vendors Weigh in on Governance and Regulatory Issues
Key Highlights
- Healthcare organizations face significant resource and infrastructure challenges in monitoring AI tools at scale.
- Recent guidance from the Joint Commission and CHAI emphasizes AI governance, safety event reporting, and vendor management to promote responsible AI use.
- Industry leaders advocate for building scalable, standardized monitoring systems and express skepticism about regulatory carve-outs, favoring support and tooling to build trust.
Health systems and startups alike are having to navigate tricky governance and regulatory waters as they implement new artificial intelligence-based tools. A recent webinar put on by consulting and services firm Manatt Health featured executives from two startups created to help health systems with AI governance. The speakers noted that many health systems may have the expertise to monitor machine learning models, but they don't yet have the infrastructure and capabilities to do it at scale.
The Manatt Health webinar on policy trends began with Randi Seigel, J.D., a Manatt partner, giving some background on the current state of AI governance, including the recent Joint Commission/CHAI guidance.
After discussing some nascent but stalled attempts at AI legislation in Congress, Seigel described some models that are being developed by associations. For instance, the National Association of Insurance Commissioners has established a model bulletin around payers’ use of AI that's been adopted by a large number of states. “They also recently put out a report that talked about how different payers are engaging with artificial intelligence and how they've established their governance process,” she said. The Federation of State Medical Boards also has put out a statement around responsible and ethical incorporation of artificial intelligence, and this has been adopted in part by at least two state medical boards.
Seigel then described how the Joint Commission and the Coalition for Health AI (CHAI) released their proposed guidance on best practices for AI adoption in the healthcare sector. “It covers recommendations related to AI policies and governance structure, patient privacy and transparency, data security, data use protections, ongoing quality monitoring, reporting of safety events, risk and bias assessment, and education and training,” she said. “And the guidance sets forth provisions that healthcare providers may want to include in their contracts with third-party vendors to comply with privacy and data security standards, as well as gives some recommendations for post-deployment surveillance and monitoring responsibilities as part of vendor procurement and contracting.”
In addition, the guidance recommends that healthcare organizations implement a process for voluntary, confidential and blinded reporting of AI safety events to either the Joint Commission or patient safety organizations, Seigel said. It also lays out best practices for AI governance, including how to do risk-based management of third parties and how to assess both internally developed and purchased tools.
One of the panelists, Troy Bannister, founder and CEO at Onboard AI, noted that only a small percentage of hospital systems have the resources to stand up monitoring that is comprehensive, real-time and responsive to the risks that may emerge.
“When the Joint Commission and CHAI published that first guideline, the No. 1 pushback from the hospitals was ‘we cannot stand up monitoring for every AI tool. That is a huge lift for us.’ I think if you marry that with where the industry is with AI, it's predominantly low-risk use cases. The hospitals are not starting with the highest-risk use case they can find. They're starting with chart review, ambient scribe, radiology triage — things that have a human in the loop, that have professionals looking at every output and providing feedback on every output,” Bannister said. “I think over the next five to 10 years, we're going to see these use cases crawl up that risk curve as we find more business value and clinical outcome improvements and we build more trust around AI performing better than humans. But we're just not there yet.”
Bannister’s Onboard AI describes itself as building the infrastructure that lets healthcare organizations and AI developers meet in the middle with structured assessments, private validation, and continuous monitoring.
Noting that CHAI recently announced a partnership with NIST, Bannister said, “We think there's going to be something similar to HITRUST in the next three to five years, where vendors will have the onus to work to get this credential, and they can bring that credential to the hospital at the point of sale, and skip a bunch of the manual work that's being done today.”
Mark Sendak, M.D., M.P.P., is co-founder and CEO at Vega Health, a startup that builds on his experience at the Duke Institute for Health Innovation and his work helping launch and run the Health AI Partnership, a national collaborative. “You have the narrative of there's no standard; thus there's a void. Some entity needs to be designated the authoritative voice to define the standard. I would say that folks building and implementing these models have known for quite a number of years how to evaluate and monitor these models,” he said. “We just don't have the scalable infrastructure and capabilities to do it. From the literature, from the research community, we know how to monitor most of these tools. Some of them require manual effort, especially with large language models, and manual adjudication, but my point is it's not the standard that's missing. It's actually the expertise, it's the infrastructure, it's the data systems to be able to do it at scale for every solution that's used in every health system.”
Manatt’s Seigel described how Sen. Ted Cruz has introduced the SANDBOX Act (an acronym for Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and eXperimentation), which would mandate that the director of the White House Office of Science and Technology Policy create a regulatory sandbox program allowing companies that are working on AI products to request a waiver or modification of certain regulatory provisions.
The panelists seemed cool to that idea. Sendak recalled challenges getting school boards to accept the idea of emergency use authorization during COVID. “I think it's really hard to embrace the premise that carve-outs of regulations somehow give you an opportunity to build trust with an innovation,” he said. “Our approach with Vega Health is let's just give people the support and tooling that they need to feel confident and actually get direct line of sight into how these products are performing in their systems. So I would say — big picture — I’m really skeptical that the idea of carving out federal regulation and regulatory approval somehow promotes innovation. I think it can actually put people in a very defensive posture when considering how to use the tools.”
Sendak said the regulatory stance he has taken most recently favors development of a CLIA [Clinical Laboratory Improvement Amendments]-like model, in which a standard set of practices is agreed upon, but the industry then relies on a distributed, federated network of organizations to build the internal capabilities to do quality control and quality assurance of AI at scale within all of their organizations. “That’s going to require significant private sector engagement,” he noted.
About the Author

David Raths
David Raths is a Contributing Senior Editor for Healthcare Innovation, focusing on clinical informatics, learning health systems and value-based care transformation. He has been interviewing health system CIOs and CMIOs since 2006.
