At RSNA 2021, a Focus on the Ethics of AI in Radiology

Dec. 7, 2021
At RSNA 2021, several educational sessions were devoted to the discussion of broad issues around the ethics of adopting artificial intelligence and machine learning in radiologic practice and radiography

Several educational sessions at the RSNA Annual Conference, held November 29 through December 2 at Chicago’s McCormick Place Convention Center, and sponsored by the Oak Brook, Ill.-based Radiological Society of North America, focused on the ethics of applying artificial intelligence (AI) and machine learning (ML) to radiological practice and operations in healthcare.

During a session held on Monday, November 29, entitled “Ethics of AI in Radiology,” David B. Larson, M.D., MBA, a professor of radiology (pediatric radiology) in the Department of Radiology at Stanford University, where he also serves as the vice chair for education and clinical operations, made several key points. Dr. Larson outlined seven ethical obligations that he said must guide the application of AI and machine learning to radiological practice. All those involved in applying AI to any clinical area, per the 2013 Hastings Center Report, must do the following:

1. Respect the rights and dignity of patients

2. Respect clinician judgments

3. Provide optimal care to each patient

4. Avoid imposing nonclinical risks and burdens on patients

5. Address health inequalities

6. Conduct continuous learning activities that improve the quality of clinical care and healthcare systems

7. Contribute to the common purpose of improving the quality and value of clinical care and healthcare systems

Those “Seven Obligations” are also cited in an article that Dr. Larson referenced during the RSNA session, a scholarly report that he had coauthored with several fellow clinicians and researchers. Entitled “Ethics of Using and Sharing Clinical Imaging Data for Artificial Intelligence: A Proposed Framework,” the article was published in the June 2020 issue of the clinical journal Radiology. Larson’s coauthors were David C. Magnus, Ph.D., Matthew P. Lungren, M.D., M.P.H., Nigam H. Shah, MBBS, Ph.D., and Curtis P. Langlotz, M.D., Ph.D. The abstract of the article states clearly that, “In this article, the authors propose an ethical framework for using and sharing clinical data for the development of artificial intelligence (AI) applications. The philosophical premise is as follows: when clinical data are used to provide care, the primary purpose for acquiring the data is fulfilled. At that point, clinical data should be treated as a form of public good, to be used for the benefit of future patients. In their 2013 article, Faden et al argued that all who participate in the health care system, including patients, have a moral obligation to contribute to improving that system. The authors extend that framework to questions surrounding the secondary use of clinical data for AI applications. Specifically, the authors propose that all individuals and entities with access to clinical data become data stewards, with fiduciary (or trust) responsibilities to patients to carefully safeguard patient privacy, and to the public to ensure that the data are made widely available for the development of knowledge and tools to benefit future patients. According to this framework, the authors maintain that it is unethical for providers to ‘sell’ clinical data to other parties by granting access to clinical data, especially under exclusive arrangements, in exchange for monetary or in-kind payments that exceed costs.
The authors also propose that patient consent is not required before the data are used for secondary purposes when obtaining such consent is prohibitively costly or burdensome, as long as mechanisms are in place to ensure that ethical standards are strictly followed. Rather than debate whether patients or provider organizations ‘own’ the data, the authors propose that clinical data are not owned at all in the traditional sense, but rather that all who interact with or control the data have an obligation to ensure that the data are used for the benefit of future patients and society.”

In the article, Larson and his coauthors note that “[T]he advent of AI technology has given greater urgency to the question of who should control and profit from deidentified clinical data. Experience grappling with these and other related questions at our institution has led us to develop an ethical framework to guide our use and sharing of clinical data for the development of AI applications.”

Building on all this, Larson emphasized elements in the 2013 Hastings Center Report that he said must help to guide radiologists and others going forward. “Patients, researchers, clinicians, administrators, and payers are all obligated to work together on ethical practices,” he said. Very specifically, he underscored the fundamental ideas that “No single entity is entitled to directly profit” from the development or adoption of AI-based algorithms in radiologic practice; while at the same time, the “dissemination of the public good” should be the focus of all such work. “We also need to think about data as property, even though data is not a traditional form of property; it’s not divided or consumed, and can be easily replicated at full fidelity by multiple parties,” he said. Indeed, he added, “Ownership of data is an imprecise concept. It would be better to refer to rights to control access and use of data and rights to a share of profit.”

Further, Larson noted that “Many patients with many attributes leads to data, leads to information, leads to generalizable knowledge and tools. A lot of what we’ve seen in terms of the value of AI is to develop these value-creating activities and tools. The data is the raw materials. Value-adding activities such as discovery, design, and development create higher-level information and knowledge.”

With regard to interactions with third parties, Larson said that the questions remain, “Who should be given access to the data, and who should be allowed to profit? Everyone who participates needs to act as a responsible data steward,” he said. “They take care of the data and have the same obligations as providers. It is ethical to share data, if: privacy is safeguarded; receiving organizations accept their role as data stewards; all parties adhere to data agreements;” and the parties involved “don’t share data further,” meaning they decline to identify patients, and share data solely to benefit patients and medicine. Nor, he added, should patients profit in any way. “Data is a public good,” he underscored.

During that same session, Julius Chapiro, M.D., Ph.D., spoke on the topic of “Academic-Industrial Partnerships in AI: Why, When, and How?” Dr. Chapiro asked, “What is an academic-industrial partnership? Creating solutions and answering challenging clinical questions using shared high technology resources from both academic institutions and medical industry.” He emphasized that the combination of individuals with medical backgrounds, academic (Ph.D.) backgrounds, engineering backgrounds, and informatics backgrounds, can help to create turbo-charged teams that can solve problems together and leverage existing capabilities.

Chapiro said the guiding principles involved must be to: “mutually understand the mission and culture of each partner; mutually understand the objectives, benefits, and timelines” of the parties involved; “understand each other’s capabilities and the legal framework” of whatever arrangements are developed; and “respect each other’s limitations.” He added that “There will be consolidation of players in the field,” referencing the large number of potential collaborations possible between academic medicine and industry. At the same time, he noted, the traditional pattern of business consolidation, in which larger corporate organizations acquire and absorb smaller ones, is not what is actually happening in the AI arena in medicine. Instead, he said, “Swarms of smaller fish are eating the big fish, in the AI area.”

Radiographers’ role in AI adoption

On Monday morning, the Associated Sciences Consortium (ASC) for the Radiological Society of North America (RSNA), which is sponsored by the International Society of Radiographers & Radiological Technologists (ISRRT), presented a session entitled “Artificial Intelligence in the Hands of Medical Imaging and Radiation Therapy Professionals Part II: Getting Ready for a Future with AI.” The discussion brought to the fore the potential role of radiographers/radiation technologists and radiation therapists in the forward evolution of the broad discourse around AI adoption, in this case, in actual diagnostic imaging processes themselves.

Among the speakers were Canadian radiographer Caitlin Gillan and Norwegian radiographer Håkon Helge Hjemly. Caitlin Gillan, MEd, manager of education and practice in the Joint Department of Medical Imaging at University Health Network in Toronto, Ontario, Canada, and a radiation technologist by training, said that “Our practice environments lend themselves to AI. We are used to using technology—DICOM, etc. Also, our environment is data rich, with imaging data, dosimetric data, operations data (scheduling, etc.), and incredibly under-leveraged patient reporting data.” Further, she said, “For AI to work, it must involve repetitive, rule-based tasks. And it should provide value for effort for humans based on frequency. And, in terms of quality assurance, machine actions need to be reviewed before reaching patients.” What’s more, Gillan said, use of AI for any particular action must “pass the Turing Test: a human should not be able to distinguish between a human who performed a task and a machine that performed it.” On that score, she said, most AI-performed actions still have a ways to go to be truly satisfactory. Among the AI-supported tasks underway in her organization are treatment planning and dosimetry. Among the areas that she and her colleagues are currently working on is the “clinical integration of machine learning for curative-intent radiation treatment of patients with prostate cancer.” With regard to AI-facilitated dosimetry processes, she foresees advantages such as more adaptive planning; time for complex or unique patients; and higher throughput. Importantly, she noted, AI-generated data can be fed back into the system for analysis. In all this, she said, radiation therapists and radiographers need a seat at the table as the discussions around AI adoption move forward in radiology; they can contribute with regard to education, engagement, research, and advocacy.

Also speaking on that panel was Håkon Helge Hjemly, director of policy at the Norsk Radiografforbund—the Norwegian Society of Radiographers (Oslo), and the current president of the European Federation of Radiographer Societies (EFRS). “The number and complexity of imaging procedures are increasing rapidly. Demands placed on imaging professionals are reaching unsustainable levels,” Hjemly told the audience. “What can be done? How about a CT scanner with intelligent imaging inside? Is that where things are going? CT for dummies? We should be cautious and educate the vendors and developers of diagnostic imaging technologies,” he said, emphasizing that radiographers need to be fully empowered to help set healthy boundaries around where AI-based algorithms can be applied to radiography—and where they can’t. “A profession is often recognized by its own set of ethical guidelines—local, national, international,” he said. And “Professional ethics delineates how broader ethical standards, such as responsibility, integrity, fairness, transparency, and avoidance of harm, apply in practice. Ethics guidelines need to be regularly revised over time. Because many choices made in radiology service are critical, trust is required. Radiographers are the interface between imaging technology and patients, and patients rely on and trust us to do the right thing. Patients need to feel trust.”

What’s more, Hjemly said, “AI software rarely provides any human-understandable explanation or justification for its predictions. This makes explaining the underlying logic of a particular system very difficult, how decisions were made, whether there are errors and how these might have occurred.” He emphasized that, while AI-based algorithms have potential to be usefully applied to diagnostic radiography and radiotherapy, professional radiographers need to be engaged in the discussion as to which algorithms should be developed, and how they should be applied.
