Physician Leaders Look at How AI Might Transform Physician Practice

March 23, 2022
Three physician leaders take a hard look at the evolution of artificial intelligence in physician practice, arguing that evidence-based standards need to be applied to AI and machine learning in care delivery

In a “Viewpoint” article published online in JAMA Network, three physician leaders look at how artificial intelligence (AI), still early in its development, might transform physician practice and care delivery, arguing that rigorous clinician review must be applied as AI and machine-learning techniques and tools are brought into medical practice.

In “Preparing Clinicians for a Clinical World Influenced by Artificial Intelligence,” Cornelius A. James, M.D., Robert M. Wachter, M.D., and James O. Woolliscroft, M.D., look at the current state of affairs around AI and machine learning, and what AI development will require of physician leaders. The physicians believe that the advance of AI and machine learning is inevitable, and that, “Importantly, equipping clinicians with the skills, resources, and support necessary to use AI-based technologies is now recognized as essential to successful deployment of AI in health care. To do so, clinicians need to have a realistic understanding of the potential uses and limitations of medical AI applications. Overlooking this fact risks clinician cynicism and suboptimal patient outcomes.”

In the article, they write that, “A decade after the transformation of health care from a pen-and-paper–based record system to an EHR-based enterprise, AI represents entering a new era of potentially practice-changing technology. Billions of dollars are being invested in health care AI and related research. Hundreds of AI-based start-ups have been founded, and the digital giants (such as Apple, Microsoft, Google, and Amazon) are investing heavily. Anticipating that AI is entering the health care mainstream, the National Academy of Medicine report Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril recommended engaging and educating the community regarding data science and AI.”

The article’s authors zero in on the problems that have been identified with regard to sepsis-related algorithms, writing that, “Despite relatively weak evidence supporting the use of AI in routine clinical practice health care settings, AI models continue to be marketed and deployed. A recent example is the Epic Sepsis Model. While this model was widely implemented in hundreds of US hospitals, a recent study showed that it performed significantly worse in correctly identifying patients with early sepsis and improving patient outcomes in a clinical setting compared with performance observed during development of the model. Studies like this highlight the need for rigorous reporting standards and review of AI products,” they write.

“A robust federal approval process such as the process used for pharmaceuticals is necessary. At the institutional level, a process analogous to the Clinical Laboratory Improvement Amendments certification for laboratory studies could help ensure that not only are the criteria for adoption and implementation of algorithms rigorous but also that ongoing assessments of their applicability and accuracy occur. However, consensus as to the information required to make clinical implementation decisions is lacking. Therefore, standardized, reliable, evidence-based reporting guidelines for AI clinical trials and relevant studies used to evaluate the utility of AI technologies needs to be developed. Absent such reporting standards, clinician trust in and appropriate use of AI-based technologies will be hindered.”

In the end, the authors write that “AI will soon become ubiquitous in health care. Building on lessons learned as implementation strategies continue to be devised, it will be essential to consider the key role of clinicians as end users of AI-developed algorithms, processes, and risk predictors. It is imperative that clinicians have the knowledge and skills to assess and determine the appropriate application of AI outputs, for their own clinical practice and for their patients. Rather than being replaced by AI, these new technologies will create new roles and responsibilities for clinicians.”

Dr. James is affiliated with the Departments of Internal Medicine and Pediatrics at the University of Michigan; Dr. Wachter, with the Department of Internal Medicine at the University of California, San Francisco; and Dr. Woolliscroft, with the Departments of Internal Medicine and Learning Health Sciences at the University of Michigan. Wachter is well known in U.S. healthcare as the author of the 2015 book The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age.
