UCSF, Partners Building Platform to Accelerate Development of Regulated AI Solutions
UC San Francisco’s Center for Digital Health Innovation (CDHI) is partnering with several companies to establish what it calls a confidential computing platform with privacy-preserving analytics to accelerate the development and validation of clinical algorithms.
UCSF is collaborating with vendors Fortanix, Intel and Microsoft Azure on a platform that will provide a “zero-trust” environment to protect both the intellectual property of an algorithm and the privacy of healthcare data. CDHI’s proprietary BeeKeeperAI will provide the workflows to enable more efficient data access, transformation, and orchestration across multiple data providers.
In a statement, Michael Blum, M.D., associate vice chancellor for informatics, executive director of CDHI and professor of medicine at UCSF, explained the rationale behind the collaboration. Gaining regulatory approval for clinical artificial intelligence (AI) algorithms requires highly diverse and detailed clinical data to develop, optimize, and validate unbiased algorithm models. Algorithms used in delivering healthcare must perform consistently across diverse patient populations, socioeconomic groups, and geographic locations, and must be equipment-agnostic. Few research groups, or even large healthcare organizations, have access to enough high-quality data to accomplish these goals.
“While we have been very successful in creating clinical-grade AI algorithms that can safely operate at the point of care, such as immediately identifying life-threatening conditions on X-rays, the work was time-consuming and expensive,” Blum said. “Much of the cost and expense was driven by the data acquisition, preparation, and annotation activities. With this new technology, we expect to markedly reduce the time and cost, while also addressing data security concerns.”
The organizations will leverage the confidential computing capabilities of Fortanix Confidential Computing Enclave Manager, Intel’s Software Guard Extensions (SGX) hardware-based security capabilities, Microsoft Azure’s confidential computing infrastructure, and UCSF’s BeeKeeperAI privacy-preserving analytics to calibrate a proven clinical algorithm against a simulated data set.
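The core idea behind this stack is that the algorithm and the data are only ever combined inside a hardware-protected boundary, which each party verifies (via attestation) before releasing its asset, and from which only aggregate results escape. The toy sketch below illustrates that flow in plain Python; it is a conceptual illustration only, since real confidential computing rests on SGX hardware enclaves and signed attestation quotes, and all class, function, and field names here (including the hemoglobin threshold rule standing in for a transfusion model) are illustrative assumptions, not Fortanix, Azure, or BeeKeeperAI APIs.

```python
import hashlib
import hmac

class ToyEnclave:
    """Toy stand-in for a hardware enclave: runs a model on data so that
    neither party sees the other's asset, and only aggregates leave."""

    def __init__(self, measurement: bytes):
        # "Measurement" stands in for the enclave's attested code hash.
        self._measurement = measurement

    def attest(self) -> bytes:
        # A real enclave returns a hardware-signed attestation quote;
        # here we simply return a digest of the measurement.
        return hashlib.sha256(self._measurement).digest()

    def run(self, model, records):
        # Model and data are only combined inside the boundary.
        predictions = [model(r) for r in records]
        # Only an aggregate metric escapes, never records or model internals.
        positives = sum(predictions)
        return {"n": len(records), "positive_rate": positives / len(records)}

# Data owner verifies the enclave's identity before releasing data to it.
enclave = ToyEnclave(b"validation-workload-v1")
expected = hashlib.sha256(b"validation-workload-v1").digest()
assert hmac.compare_digest(enclave.attest(), expected)

# Algorithm owner's model (a hypothetical threshold rule) meets the data
# only inside the enclave boundary.
model = lambda record: record["hemoglobin"] < 7.0
records = [{"hemoglobin": 6.1}, {"hemoglobin": 13.2}, {"hemoglobin": 6.8}]
result = enclave.run(model, records)
```

The design point is the asymmetry of what crosses the boundary: ciphertext and attestation evidence go in, and only pre-agreed aggregates come out, which is what lets an algorithm owner and a data owner collaborate without trusting each other.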
A clinical-grade algorithm that rapidly identifies patients needing blood transfusion in the Emergency Department following trauma will be used as a reference standard against which validation results are compared. The team will also test whether the model or the data are vulnerable to intrusion at any point. Future phases will utilize HIPAA-protected data within a federated environment, enabling algorithm developers and researchers to conduct multi-site validations. The ultimate aim, in addition to validation, is to support multi-site clinical trials that will accelerate the development of regulated AI solutions.
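The federated multi-site validation described above typically works by having each site evaluate the algorithm against its own protected data and share only aggregate statistics, which a coordinator pools into overall performance metrics. The sketch below shows that pattern with confusion-matrix counts; it is a minimal illustration under stated assumptions, and the function names, site data, and pooling scheme are hypothetical rather than anything from BeeKeeperAI or the UCSF study.

```python
def local_counts(predictions, labels):
    """Confusion-matrix counts computed inside one site's boundary.
    Only these four integers ever leave the site, never patient records."""
    tp = sum(p and y for p, y in zip(predictions, labels))
    fp = sum(p and not y for p, y in zip(predictions, labels))
    fn = sum((not p) and y for p, y in zip(predictions, labels))
    tn = sum((not p) and (not y) for p, y in zip(predictions, labels))
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn}

def pooled_metrics(site_counts):
    """Coordinator sums counts from all sites into overall metrics."""
    total = {k: sum(c[k] for c in site_counts)
             for k in ("tp", "fp", "fn", "tn")}
    sensitivity = total["tp"] / (total["tp"] + total["fn"])
    specificity = total["tn"] / (total["tn"] + total["fp"])
    return sensitivity, specificity

# Two hypothetical sites evaluate the algorithm on their own data;
# only the count dictionaries travel to the coordinator.
site_a = local_counts([1, 1, 0, 0], [1, 0, 0, 0])
site_b = local_counts([1, 0, 0, 1], [1, 1, 0, 1])
sens, spec = pooled_metrics([site_a, site_b])
```

Pooling raw counts rather than per-site rates keeps the combined sensitivity and specificity exact even when sites contribute very different numbers of patients, which matters when validating across populations of uneven size.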