Survey of Oncologists Finds Agreement, Concerns Over AI Use

March 28, 2024
While 76 percent of respondents noted that oncologists should protect patients from biased AI tools, only 28 percent were confident that they could identify AI models that contain such bias

A recent survey of more than 200 oncologists by researchers at Dana-Farber Cancer Institute found broad agreement on how artificial intelligence can be responsibly integrated into some aspects of patient care. Respondents also expressed concern about medico-legal issues as well as how to protect patients from hidden biases of AI.

The survey, described in a paper published in JAMA Network Open, found that 85 percent of respondents said that oncologists should be able to explain how AI models work, but only 23 percent thought patients need the same level of understanding when considering a treatment. Just over 81 percent of respondents felt patients should give their consent to the use of AI tools in making treatment decisions.

When the survey asked oncologists what they would do if an AI system selected a treatment regimen different from the one they planned to recommend, the most common answer, offered by 37 percent of respondents, was to present both options and let the patient decide.

When asked who has responsibility for medical or legal problems arising from AI use, 91 percent of respondents pointed to AI developers. This was much higher than the 47 percent who felt the responsibility should be shared with physicians, or the 43 percent who felt it should be shared with hospitals.

While 76 percent of respondents noted that oncologists should protect patients from biased AI tools – which reflect inequities in who is represented in medical databases – only 28 percent were confident that they could identify AI models that contain such bias.

"The findings provide a first look at where oncologists are in thinking about the ethical implications of AI in cancer care," said Andrew Hantel, M.D., a faculty member in the Divisions of Leukemia and Population Sciences at Boston-based Dana-Farber Cancer Institute, in a statement. Hantel led the study with Gregory Abel, M.D., M.P.H., a senior physician at Dana-Farber. "AI has the potential to produce major advances in cancer research and treatment, but there hasn't been a lot of education for stakeholders – the physicians and others who will use this technology – about what its adoption will mean for their practice," Hantel added.

"It's critical that we assess now, in the early stages of AI's application to clinical care, how it will impact that care and what we need to do to make sure it's deployed responsibly. Oncologists need to be part of that conversation. This study seeks to begin building a bridge between the development of AI and the expectations and ethical obligations of its end-users."

Hantel noted that AI is currently used in cancer care primarily as a diagnostic tool – for detecting tumor cells on pathology slides and identifying tumors on X-rays and other radiology images. However, new AI models are being developed that can assess a patient's prognosis and may soon be able to offer treatment recommendations. This capability has raised concerns over who or what is legally responsible should an AI-recommended treatment result in harm to a patient.

"AI is not a professionally licensed medical practitioner, yet it could someday be making treatment decisions for patients," Hantel said. "Is AI going to be its own practitioner, will it be subject to licensing, and who are the humans who could be held responsible for its recommendation? These are the kind of medico-legal issues that need to be resolved before the technology is implemented."

"Our survey found that while nearly all oncologists felt AI developers should bear some responsibility for treatment decisions generated by AI, only half felt that responsibility also rested with oncologists or hospitals," added Hantel. "Our study gives a sense of where oncologists currently land on this and other ethical issues related to AI and, we hope, serves as a springboard for further consideration of them in the future."
