Health Systems Join Microsoft-Led Consortium on Responsible AI

March 13, 2024
Trustworthy & Responsible AI Network (TRAIN) will share best practices related to the safety, reliability and monitoring of AI algorithms, and the skill sets required to manage AI responsibly

Microsoft has pulled together a consortium of health systems called the Trustworthy & Responsible AI Network (TRAIN), which aims to operationalize responsible AI principles to improve the quality, safety and trustworthiness of AI in health. 

Members of the network include AdventHealth, Advocate Health, Boston Children’s Hospital, Cleveland Clinic, Duke Health, Johns Hopkins Medicine, Mass General Brigham, MedStar Health, Mercy, Mount Sinai Health System, Northwestern Medicine, Providence, Sharp HealthCare, University of Texas Southwestern Medical Center, University of Wisconsin School of Medicine and Public Health, Vanderbilt University Medical Center, and Microsoft as the technology enabling partner. 

Additionally, the network is collaborating with OCHIN, which serves a national network of community health organizations with solutions, expertise, clinical insights and tailored technologies, and TruBridge, a partner and conduit to community healthcare, to help ensure that every organization, regardless of resources, has access to TRAIN’s benefits.

The work of TRAIN appears to be complementary to that of the Coalition for Health AI (CHAI), which includes representatives from over 1,300 member organizations, including hospital systems, tech companies, government agencies and advocacy groups. CHAI aims to develop best practices for the testing, deployment and evaluation of AI systems, and many of its leaders are also involved in TRAIN.

“As a co-founder and board member of the Coalition for Health AI, I am excited to see health systems coming together to operationalize CHAI’s principles for Responsible and Trustworthy AI,” said Nigam Shah, MBBS, Ph.D., chief data scientist of Stanford Healthcare, in a statement.

“I am excited to partner with my colleagues from our diverse group of health systems and Microsoft in the development and implementation of technologies and capabilities that make health AI more trustworthy,” said Michael Pencina, Ph.D., chief data scientist for Duke Health and co-founder and board member for CHAI, in a statement. “We look forward to leveraging the Coalition for Health AI’s best practice guidelines and guardrails to build practical tools that make responsible AI a reality among healthcare delivery organizations in service to all our patients.”

TRAIN said its members will collaborate to improve the quality and trustworthiness of AI in healthcare by:

• Sharing best practices related to the use of AI in healthcare settings, including the safety, reliability and monitoring of AI algorithms, and the skill sets required to manage AI responsibly. Data and AI algorithms will not be shared between member organizations or with third parties.

• Enabling registration of AI used for clinical care or clinical operations through a secure online portal.

• Providing tools to enable measurement of outcomes associated with the implementation of AI, including best practices for studying the efficacy and value of AI methods in healthcare settings and leveraging of privacy-preserving environments, with considerations in both pre- and post-deployment settings. Tools that allow analyses to be performed in subpopulations to assess bias will also be provided.

• Facilitating the development of a federated national AI outcomes registry for organizations to share among themselves. The registry will capture real-world outcomes related to efficacy, safety and optimization of AI algorithms.

In a statement, Vanderbilt University Medical Center’s Peter Embí, M.D., noted that even the best healthcare delivery today faces many challenges that AI-driven solutions could substantially address.

“However, just as we wouldn’t think of treating patients with a new drug or device without ensuring and monitoring their efficacy and safety, we must test and monitor AI-derived models and algorithms before and after they are deployed across diverse healthcare settings and populations, to help minimize and prevent unintended harms,” added Embí, professor and chair of the Department of Biomedical Informatics and senior vice president for research and innovation at VUMC. “It is imperative that we work together and share tools and capabilities that enable systematic AI evaluation, surveillance and algorithmovigilance for the safe, effective and equitable use of AI in healthcare. TRAIN is a major step toward that goal.”

 
