AI for Diagnostics Tops ECRI’s Patient Safety Challenge List

Other key concerns include reduced access to rural healthcare and emergency department boarding, which contributes to worse patient outcomes
March 12, 2026
4 min read

To mark Patient Safety Awareness Week, which runs March 8-14, ECRI and the Institute for Safe Medication Practices (ISMP) published a report that identifies the 10 most critical patient safety challenges anticipated to impact the healthcare industry in 2026, with “Navigating the AI Diagnostic Dilemma” topping the list.

The nonprofit ECRI, one of the largest U.S.-based patient safety organizations, said that the Top 10 Patient Safety Concerns list is informed by insights from senior executives from across the healthcare landscape—including integrated health systems, children’s hospitals, rural community health centers, and national associations.

In choosing AI used for diagnostics as its top safety concern, ECRI notes that AI is still an evolving technology that raises issues related to reliability, transparency, privacy, liability, and ethics, and “users should not treat it as a replacement for clinical expertise. Placing too much trust in an AI model to diagnose patients without factoring in clinician expertise can lead to misdiagnosis—the very problem AI was intended to solve.”

Despite AI’s potential to improve diagnostic accuracy by automating data retrieval, decreasing cognitive load, reducing cognitive biases, and providing clinicians with information to help guide their decisions, ECRI notes that in some circumstances the technology can contribute to diagnostic errors. The report gave some examples:
• AI models can perpetuate biases present in the data used to train them. Such biases can result in incorrect diagnoses and may exacerbate healthcare disparities.
• A lack of transparency related to the data used to train the AI model and the development and testing of underlying algorithms can result in diagnoses based on outdated, insufficient, or incorrect information.
• Issues with an AI system’s operation and performance can result in hallucinations (i.e., incorrect, nonsensical, or nonexistent outputs) or system brittleness (i.e., AI’s inability to consider situations that fall outside of its training data), both of which can contribute to misdiagnosis. This is especially dangerous because AI systems are often trained to give answers to every question—and users may not realize the answers are wrong.
• Research has shown that over time, over-reliance on AI can erode people’s critical thinking skills. This has raised concerns that clinicians who regularly rely on AI to help diagnose patients will lose valuable diagnostic skills, and that clinicians in training may fail to develop these skills entirely.

The report recommends that health systems establish AI usage policies, guidelines, and procedures for staff that outline clear roles and responsibilities for the governance, implementation, oversight, documentation, and monitoring of AI technologies.

ECRI also suggests that health systems ensure that staff are trained on the proper use of AI systems, particularly those that assist in diagnosis, and inform clinicians of the systems’ capabilities and limitations. Health systems should also require staff to document instances in which AI was used for diagnostic purposes and how it affected the clinical diagnostic process.

The report recommends disclosing the use of AI to patients and obtaining informed consent before using generative AI in patient diagnosis or uploading patient information to an AI system. Health systems should include opt-out clauses in consent agreements, ECRI adds.

Another recommendation is to foster a just culture and encourage staff to speak up if issues with AI-based technologies occur. Health systems should take concerns related to the operation and use of AI systems seriously and take steps to investigate and address them, the report says.

Here is the full ECRI Top 10 Patient Safety Concerns topic list for 2026:

1. Navigating the AI Diagnostic Dilemma
2. Reduced Access to Rural Healthcare Increases Health Risks and Disparities
3. Increasing Rates of Preventable Acute Diseases in Communities and Healthcare Settings
4. Effects of Federal Funding Cuts on Healthcare Operations and Patient Safety
5. Lack of Recognition and Reporting of Harm Events
6. Structural and Systemic Barriers Inhibit Equitable Pain Management for Women
7. Persistent Workforce Shortages Continue to Burden Staff and Restrict Access to Care
8. The Impact on System Improvement When a Culture of Blame Hinders Learning
9. Emergency Department Boarding Contributes to Worse Patient Outcomes
10. Persistent Gaps in Manufacturer Packaging and Labeling Design Continue to Undermine Medication Safety Efforts

About the Author

David Raths

David Raths is a Contributing Senior Editor for Healthcare Innovation, focusing on clinical informatics, learning health systems and value-based care transformation. He has been interviewing health system CIOs and CMIOs since 2006.

 Follow him on Twitter @DavidRaths
