Chatbot Misuse Tops ECRI’s Health Technology Hazard List

Patient safety organization says chatbots can offer valuable assistance but can also deliver misleading information that could result in patient harm
Jan. 21, 2026
3 min read

Now in its 18th year, the Top 10 Health Technology Hazards report from nonprofit patient safety organization ECRI identifies AI chatbots as the top concern for 2026. No. 2 on this year’s list is unpreparedness for a “digital darkness” event, or a sudden loss of access to electronic systems and patient information.

ECRI says that chatbots that rely on large language models (LLMs) — such as ChatGPT, Claude, Copilot, Gemini, and Grok — produce human-like and expert-sounding responses to users’ questions. It notes that the tools are neither regulated as medical devices nor validated for healthcare purposes but are increasingly used by clinicians, patients, and healthcare personnel. More than 40 million people turn to ChatGPT daily for health information, according to a recent analysis from OpenAI.
 
ECRI says that chatbots can provide valuable assistance, but they can also provide false or misleading information that could result in significant patient harm. Thus, ECRI advises caution whenever using a chatbot for information that can impact patient care. The organization says that rather than truly understanding context or meaning, AI systems generate responses by predicting sequences of words based on patterns learned from their training data. They are programmed to sound confident and to always provide an answer to satisfy the user, even when the answer isn’t reliable.
 
“Medicine is a fundamentally human endeavor. While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals,” said Marcus Schabacker, M.D., Ph.D., president and chief executive officer of ECRI, in a statement. “Realizing AI’s promise while protecting people requires disciplined oversight, detailed guidelines, and a clear-eyed understanding of AI’s limitations.”

Chatbots can also exacerbate existing health disparities, according to ECRI’s experts. Any biases embedded in the data used to train chatbots can distort how the models interpret information, leading to responses that reinforce stereotypes and inequities.

“AI models reflect the knowledge and beliefs on which they are trained, biases and all,” added Schabacker. “If healthcare stakeholders are not careful, AI could further entrench the disparities that many have worked for decades to eliminate from health systems.”

The full ECRI Top 10 Health Technology Hazards for 2026 in ranked order are:
1. Misuse of AI chatbots in healthcare
2. Unpreparedness for a “digital darkness” event, or a sudden loss of access to electronic systems and patient information
3. Substandard and falsified medical products
4. Recall communication failures for home diabetes management technologies
5. Misconnections of syringes or tubing to patient lines, particularly amid slow ENFit and NRFit adoption
6. Underutilizing medication safety technologies in perioperative settings
7. Inadequate device cleaning instructions
8. Cybersecurity risks from legacy medical devices
9. Health technology implementations that prompt unsafe clinical workflows
10. Poor water quality during instrument sterilization

About the Author

David Raths

David Raths is a Contributing Senior Editor for Healthcare Innovation, focusing on clinical informatics, learning health systems and value-based care transformation. He has been interviewing health system CIOs and CMIOs since 2006.

Follow him on Twitter @DavidRaths
