UCSF to Develop AI Monitoring Platform for Clinical Care

May 29, 2024
Platform aims to bridge the gap between the rapid evolution of AI technologies and the need for robust, ongoing assessment of their efficacy, safety, and equity

The UCSF Division of Clinical Informatics and Digital Transformation (DoC-IT) and UCSF Health plan to develop a real-time, continuous, and automated artificial intelligence (AI) monitoring platform for clinical care. 

Funded with a $5 million gift from Ken and Kathy Hao, the Impact Monitoring Platform for AI in Clinical Care (IMPACC) aims to bridge the gap between the rapid evolution of AI technologies used by clinicians and the essential need for robust, ongoing assessment of their efficacy, safety, and equity. 

Julia Adler-Milstein, Ph.D., chief of the UCSF Division of Clinical Informatics and Digital Transformation (DoC-IT), and Sara Murray, M.D., M.A.S., chief health AI officer at UCSF Health, will lead the collaboration.

Although new AI technologies are assessed for safe integration into clinical environments before deployment, health systems still need a way to promptly identify issues in their real-world performance once the tools are in use.

IMPACC will seek to fill this need by shifting from planned, periodic, manual monitoring of a focused set of measures to real-time, continuous, automated, and longitudinal monitoring across a broad measure set with specified criteria for escalation to human review and intervention.

“By building IMPACC, we will take a major leap forward in how we analyze AI’s performance in healthcare,” said Murray in a statement. “As we deploy new AI technologies, this novel, scalable platform will provide our health system with direct and actionable insights into ongoing performance, ensuring not only the effectiveness of these new tools but also safety across the system and benefit for patients.”

In addition to monitoring both performance and impact on targeted outcomes over time – such as whether an AI tool improves clinical outcomes for patients – IMPACC will be used to inform healthcare leaders on decisions about scaling, refining, or turning off a tool for their systems. Specifically, it will report on whether a tool is achieving its intended results or if it requires improvement, as well as flag if a tool is potentially dangerous or risks worsening health disparities, prompting immediate action when necessary. 

After development and testing, IMPACC will be piloted at UCSF Health on an initial set of current AI tools. This effort will be in collaboration with the UCSF Health AI Oversight Committee, which makes recommendations about the safety and efficacy of tools and whether they should be deployed more broadly across the health system. The team also will explore building a dashboard to allow patients to track when AI has been used in their care.

The philanthropic gift “comes at a critical juncture as the healthcare industry more broadly integrates AI into clinical practice,” Adler-Milstein said in a statement. “Through IMPACC and this collaborative effort, we are poised to improve patient care at UCSF while advancing the science of how to assess AI tools in real-world use.”
