JASON Report: Hype About AI in Healthcare Might Be Justified This Time

Jan. 22, 2018
Predicting the impact of artificial intelligence on healthcare is occupying a lot of technology experts these days, including JASON, a federal advisory group made up of independent elite scientists.

Covering the annual conference of the Radiological Society of North America last November, Healthcare Informatics editor-in-chief Mark Hagland saw a surge in emphasis on artificial intelligence and machine learning. “The emphasis was evident in keynote addresses and presentations, and absolutely evident in the exhibit halls, where large banners and bold signage proclaimed a new era of innovation around new forms of supportive intelligence,” he wrote.

Everyone is trying to figure out whether AI is being over-hyped. Google CEO Sundar Pichai was recently quoted as saying AI is more profound than electricity or fire. That’s pretty profound.

Among those trying to sort out the hype is JASON, a federal advisory group of independent elite scientists brought together by the Mitre Corp., which recently released a 69-page report analyzing AI in healthcare. (A few years ago a JASON task force took a critical look at healthcare interoperability and proposed a new approach involving modern web technologies and APIs.)

JASON was asked to consider how AI could shape the future of public health, community health, and health care delivery. The report’s premise is that the hype about AI might be justified now because three forces have primed our society to embrace new approaches that may be enabled by advances in AI: 1) frustration with the legacy medical system; 2) ubiquity of networked smart devices; 3) acclimation to convenience and at-home services like those provided through Amazon and others.

Regarding the use of AI in clinical practice, JASON notes that “the process of developing a new technique as an established standard of care uses the robust practice of peer-reviewed R&D, and can provide safeguards against the deceptive or poorly validated use of AI algorithms.”

But the report notes that using AI diagnostics to replace established steps in medical standards of care will require far more validation than using them to provide supporting information that aids in decisions. JASON recommends that more work be done to prepare AI results for the rigorous approval procedures needed for acceptance into clinical practice. Testing and validation approaches need to be created to evaluate how AI algorithms perform under conditions that differ from their training set.
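
The kind of cross-condition validation the report calls for can be illustrated with a short sketch. The example below is not from the JASON report; it assumes scikit-learn, synthetic data standing in for two hospital sites, and a hypothetical 0.80 AUC threshold, and simply shows a model trained on data from one source being re-evaluated on data collected elsewhere.

# A minimal sketch, not taken from the JASON report: train a classifier on
# data from one site, then check whether it still meets a performance
# threshold on data collected under different conditions. The site names,
# synthetic data, and 0.80 AUC threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def validate_across_sites(train_X, train_y, site_datasets, min_auc=0.80):
    """Train on one site's data and report AUC on each held-out site."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(train_X, train_y)
    results = {}
    for site_name, (X, y) in site_datasets.items():
        scores = model.predict_proba(X)[:, 1]
        auc = roc_auc_score(y, scores)
        results[site_name] = {"auc": round(auc, 3), "meets_threshold": auc >= min_auc}
    return results

# Synthetic stand-ins for a training hospital and a second site whose data
# distribution is deliberately shifted.
rng = np.random.default_rng(0)
train_X, train_y = rng.normal(size=(500, 10)), rng.integers(0, 2, 500)
site_b = (rng.normal(loc=0.5, size=(200, 10)), rng.integers(0, 2, 200))
print(validate_across_sites(train_X, train_y, {"site_b": site_b}))

In practice, the threshold and the choice of held-out populations would come from the kind of formal approval procedures the report says still need to be defined.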

The JASON report also focuses on how AI will be used to power health-related mobile monitoring devices and apps. “Mobile devices will create massive datasets that, in theory, could open new possibilities in the development of AI-based health and health care tools.”

It recommends the development of data infrastructure to capture and integrate the data generated by smart devices in order to support AI applications. It also suggests regulations requiring approaches that ensure privacy and transparency in how those data are used.

JASON members stress that it is important to understand the limitations of AI methods in healthcare. “There is potential for the proliferation of misinformation that could cause harm or impede the adoption of AI applications for health. Websites, apps, and companies have already emerged that appear questionable based on information available.”

It recommends the development and adoption of transparent processes and policies to ensure reproducibility for large-scale computational models. “To guard against the proliferation of misinformation in this emerging field, support the engagement of learned bodies to encourage and endorse best practices for deployment of AI applications in health.”

Another limitation the report highlights is that “while AI algorithms such as deep learning can produce amazing results, work is needed to develop confidence that they will perform as required in situations where health and life are at risk. This is independent of the hope that there will be the kind of continued improvements that have occurred in image recognition or various aspects of natural language processing. The issues here are more pragmatic.”

They note that no matter how carefully the training data has been assembled, there is a risk that it will not match closely enough what is encountered in the real world. Another observation is that not all errors are equally important (or unimportant).
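
The point that not all errors matter equally can be made concrete with a small, hypothetical example (again, not drawn from the report): a cost function that penalizes a missed diagnosis far more heavily than a false alarm, with the specific weights chosen only for illustration.

# A minimal, illustrative sketch: weight errors unequally when scoring a
# diagnostic model. The 10:1 cost ratio is an assumption for illustration,
# not a figure from the JASON report.
from sklearn.metrics import confusion_matrix

def weighted_error_cost(y_true, y_pred, fn_cost=10.0, fp_cost=1.0):
    """Penalize missed cases (false negatives) more than false alarms."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return fn * fn_cost + fp * fp_cost

# Two models with the same raw error count, very different weighted costs.
y_true  = [1, 1, 1, 0, 0, 0]
model_a = [0, 1, 1, 0, 0, 0]   # one false negative (missed case)
model_b = [1, 1, 1, 1, 0, 0]   # one false positive (false alarm)
print(weighted_error_cost(y_true, model_a))  # 10.0
print(weighted_error_cost(y_true, model_b))  # 1.0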

The JASON group will continue to look at some important questions involving AI in healthcare, including:

• What are the highest-value areas (for example, reducing the cost of expensive treatments, preventing mortality or morbidity in disproportionately affected populations, improving productivity through better health, or focusing on risk mitigation where the affected population is large) in which artificial intelligence could contribute quickly and efficiently?

• What are the considerations for the data sources needed to support the development of artificial intelligence programs for health and health care? For example, what data quality, breadth, and depth are necessary to support the deployment of appropriate artificial intelligence technology for health and healthcare?

• What are the potential unintended consequences, including real or perceived dangers, of artificial intelligence focused on improving health and health care? What are the potential risks of artificial intelligence inadvertently exacerbating health inequalities?

• What workforce changes may be needed to ensure effective broad-based adoption of data-rich artificial intelligence applications?
