AI and Radiology: A Hard Slog Now, Leading to Sustained Advancements Later?

Dec. 12, 2021
Radiologists, radiographers, and informaticists are discovering that the path to adopting artificial intelligence in imaging is harder and more complex than first imagined

Numerous educational sessions at the RSNA Annual Conference, held Nov. 29-Dec. 3 once again at Chicago’s McCormick Place (after the all-virtual 2020 meeting, forced by the COVID-19 pandemic), were devoted to issues around the adoption of artificial intelligence (AI) and machine learning (ML).

And the themes were fascinating, even as different panels and presentations examined different elements and angles of the subject, whether the application of AI algorithms to radiological diagnostics, workflow optimization, or the radiographic management of diagnostic imaging procedures themselves.

When it came to ethics in the adoption of AI, David B. Larson, M.D., MBA, a professor of radiology (pediatric radiology) in the Department of Radiology at Stanford University, where he also serves as vice chair for education and clinical operations, made several key points during a session on Monday, Nov. 29 entitled “Ethics of AI in Radiology.” Dr. Larson argued that those applying AI and machine learning to radiological practice must adhere to seven ethical obligations. Drawing on the 2013 Hastings Center Report, he said that all those involved in applying AI to any clinical area must do the following:

1.  Respect the rights and dignity of patients

2.  Respect clinician judgments

3.  Provide optimal care to each patient

4.  Avoid imposing nonclinical risks and burdens on patients

5.  Address health inequalities

6.  Conduct continuous learning activities that improve the quality of clinical care and healthcare systems

7.  Contribute to the common purpose of improving the quality and value of clinical care and healthcare systems

Those “Seven Obligations” also appear in an article that Dr. Larson cited during the RSNA session, a scholarly report he coauthored with several fellow clinicians and researchers. Entitled “Ethics of Using and Sharing Clinical Imaging Data for Artificial Intelligence: A Proposed Framework,” the article was published in the June 2020 issue of the clinical journal Radiology.

The ethics of AI and machine learning adoption were one important subject of discussion, but the clinical, technical, operational, strategic, and policy aspects of AI also drew numerous discussions. Indeed, if anything was clear at RSNA 2021, it was the sheer heterogeneity of the issues involved.

One of the best discussions I saw was moderated by Paul Chang, M.D., on Tuesday afternoon, November 30. As I wrote later last week, “One of the most stimulating panels at the conference—which was held this year, as in years past, at Chicago’s vast McCormick Place Convention Center—was entitled ‘The Business of AI in Radiology: A Cost, a Long-term Investment, or an Immediate Business Opportunity?’ Held Tuesday afternoon, November 30, the panel was moderated by Paul J. Chang, M.D., a professor of radiology at the University of Chicago health system in Chicago. Dr. Chang’s fellow panelists were Nina Kottler, M.D., M.S., associate medical director at the El Segundo, Calif.-based Radiology Partners national radiology group practice; Hari Trivedi, M.D., assistant professor of radiology and co-director of the HITI Lab at Emory University; Luciano Prevedello, M.D., M.P.H., of The Ohio State University Wexner Medical Center; and Mona G. Flores, M.D., global head of medical AI at the Santa Clara, Calif.-based NVIDIA Corporation. Dr. Flores appeared virtually, while everyone else was present in person at McCormick Place.”

I wrote that “Dr. Chang initiated the discussion by making a number of statements. After several years of early adoption of artificial intelligence, he said, AI has moved into the early stages of the ‘Gartner Hype Cycle,’ the classic depiction of which includes the following stages: ‘innovation trigger; peak of inflated expectations; trough of disillusionment; slope of enlightenment; plateau of productivity: appropriate consumption.’ Right now, AI adoption in radiology, he said, is living through the ‘trough of disillusionment, after five years.’ Indeed, he said, ‘There’s nothing new under the sun when it comes to new technology. We over-hype, over-promise, and under-deliver, and fall into the trough of disillusionment. We eventually learn appropriate consumption.’ And, he added, with regard to the trough of disillusionment, signs include ‘lots of VC [venture capital] action, lots of investment, but a lack of significant consolidation.’”

Dr. Chang’s comments were absolutely on point: a certain level of confusion tinged with discouragement has set in, as it has become clear that adopting AI in diagnostic imaging and in radiology practice is turning out to be more complex and challenging than some might have believed early on. Indeed, many will recall how, about five years ago, technical experts at one vendor organization attempted to simply dump literally millions of measurements and data points into a gigantic database, in the hope that the resulting dataset could be used in diagnostic imaging work. As it turned out, the project was a complete failure: those data points proved not to be useful in actual radiological diagnostic work.

If projects such as that one have ended in failure, just where will the early AI adoption work bear fruit? One member of that panel, Dr. Kottler of Radiology Partners, shared her understanding of the answer. As she said during the panel discussion, “I am in a private practice. We have a national onsite radiology practice. We have about 3,000 radiologists and do about 10 percent of the radiology in the U.S. We’ve invested in AI and, in one case, we created our own NLP [natural language processing] AI algorithm, in 2017, deployed it in 2018, have deployed it to about 1,200 radiologists so far.”

Further, Kottler said, “We’ve developed an NLP platform. These tools are clinical tools. You have to go do training to work with it. Radiologists need feedback on how well they’re using it. The NLP tool has gone through millions of reports. We generally pilot something first. Our pilots are big because our practice is big. We did partner with a vendor, and have seven of their FDA-cleared algorithms. We have millions of exams going through their tool. [We’re] piloting another NLP algorithm that provides a summary of findings.” She added that, “At the time we started, vendors were exploring image interpretation-type tasks, including detection, and maybe diagnosis. But that for us was not a great use case. With NLP and doing things we’re not good at, initiating a workflow, bringing things together, it was a 90-percent ROI.”
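Radiology Partners has not published the internals of its NLP tooling, so the following is only a minimal, hypothetical sketch in Python of the general pattern Kottler describes: a rule-based pass over report text that flags follow-up-worthy findings and routes them toward a worklist. All names and patterns here are illustrative assumptions, not the practice’s actual algorithm.

```python
# Hypothetical sketch only; Radiology Partners' actual NLP algorithm has not been published.
# It illustrates, in miniature, a rule-based pass over radiology report text that flags
# follow-up-worthy findings and routes them toward a follow-up worklist.
import re

# Toy patterns for findings that often warrant follow-up; a production system would rely
# on a trained language model and far richer clinical logic than keyword matching.
FOLLOW_UP_PATTERNS = {
    "pulmonary_nodule": re.compile(r"\b\d+(\.\d+)?\s*mm\s+(pulmonary\s+)?nodule", re.IGNORECASE),
    "aneurysm": re.compile(r"\baneurysm\b", re.IGNORECASE),
}

def flag_findings(report_text: str) -> list[str]:
    """Return the names of any follow-up-worthy findings detected in the report text."""
    return [name for name, pattern in FOLLOW_UP_PATTERNS.items()
            if pattern.search(report_text)]

if __name__ == "__main__":
    report = "IMPRESSION: 7 mm pulmonary nodule in the right upper lobe."
    flagged = flag_findings(report)
    if flagged:
        # In a real deployment, this step would open a case in a follow-up worklist and
        # feed usage data back to the interpreting radiologist, as Kottler describes.
        print(f"Route to follow-up worklist: {flagged}")
```

Even a toy version makes the point of her comments: the value lies in initiating workflow and “bringing things together,” rather than in image interpretation itself.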

The issues involved extend beyond radiology practice into radiologic technology and radiography. On Monday morning, Nov. 29, the Associated Sciences Consortium (ASC) of the Radiological Society of North America (RSNA), which is sponsored by the International Society of Radiographers & Radiological Technologists (ISRRT), presented a session entitled “Artificial Intelligence in the Hands of Medical Imaging and Radiation Therapy Professionals Part II: Getting Ready for a Future with AI.” The discussion brought to the fore the potential role of radiographers/radiation technologists and radiation therapists in the forward evolution of the broad discourse around AI adoption, in this case in actual diagnostic imaging processes themselves. Among the speakers was Caitlin Gillan, MEd, manager of education and practice in the Joint Department of Medical Imaging at University Health Network in Toronto, Ontario, Canada, and a radiation technologist by training. Gillan stated that, “For AI to work, it must involve repetitive, rule-based tasks. And it should provide value for effort for humans based on frequency. And, in terms of quality assurance, machine actions need to be reviewed before reaching patients.” What’s more, Gillan said, the use of AI for any particular action must “pass the Turing Test: a human should not be able to distinguish between a human who performed a task and a machine that performed it.” On that score, she said, most AI-performed actions still have a ways to go to be truly satisfactory. Among the tasks being addressed in her organization are treatment planning and dosimetry.
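Gillan’s quality-assurance point, that machine actions need to be reviewed before reaching patients, amounts to a human-in-the-loop gate. University Health Network has not described its tooling, so the short Python sketch below is purely illustrative, with hypothetical names, of how a machine-generated treatment plan might be held until a qualified human signs off.

```python
# Hypothetical sketch only; UHN has not published its review tooling.
# It illustrates Gillan's QA principle: a machine-generated action (here, an
# auto-generated dose plan) is held until a qualified human reviewer approves it.
from dataclasses import dataclass

@dataclass
class ProposedPlan:
    patient_id: str
    generated_by: str        # e.g., "auto-planning model v2" (illustrative name)
    approved: bool = False

def release_to_treatment(plan: ProposedPlan, reviewer_approved: bool) -> ProposedPlan:
    """Release a plan only after a human reviewer has explicitly approved it."""
    if not reviewer_approved:
        raise ValueError("Machine-generated plan must be reviewed before reaching the patient.")
    plan.approved = True
    return plan

# Usage: the model proposes, the radiation therapist or dosimetrist disposes.
draft = ProposedPlan(patient_id="ANON-001", generated_by="auto-planning model (hypothetical)")
released = release_to_treatment(draft, reviewer_approved=True)
print(released.approved)  # True only after human sign-off
```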

So, across imaging, issues are emerging that speak to the complexity of applying AI-derived algorithms both to radiological practice and to radiography. On that score, Dr. Kottler’s comments in the Nov. 30 panel seemed particularly on point. The reality is that the adoption of AI will inevitably require teams of radiologists and others to drill down into the possibilities, develop solutions that work for their clinical and technical teams, and then continuously analyze and evaluate the results, in order to determine what actually works. In other words, some initiatives of the past few years, in which data people (in one case, at a large software vendor) attempted to mass-dump huge numbers of measurements into data lakes, hoping that those abstract measurements and data could help in diagnostic efforts, have simply not panned out.

But one hopeful sign is this: as more and more clinicians, data scientists, and others get involved in developing these algorithms for radiology and radiography, case studies will emerge over time that are helpful and that can provide templates for other teams. And that is inevitably the truth of so much that happens in healthcare: early progress in any area that requires data-based inputs will involve hard slogs at first, followed by the emergence of successful case studies that can be replicated over time. There’s simply no “plug and play” when it comes to AI, in radiology or in any other medical discipline—and clinician and informaticist leaders are learning that now, in real time.
