At RSNA22, Dr. Siddhartha Mukherjee Predicts How AI Will Evolve in Diagnostics

Nov. 28, 2022
Renowned physician and author Dr. Siddhartha Mukherjee delivered the plenary address at RSNA 2022, predicting how artificial intelligence will evolve to support diagnostics and therapeutics

Renowned physician and author Siddhartha Mukherjee, M.D., DPhil, delivered the plenary address on Monday, Nov. 28, at RSNA 2022, the annual conference of the Radiological Society of North America, being held this week at Chicago’s vast McCormick Place Convention Center. The 4,188 seats of the Arie Crown Theater were nearly filled when Dr. Mukherjee, introduced by RSNA President Bruce G. Haffty, M.D., walked onto the stage. Mukherjee spent most of the hour delivering his presentation, with the final quarter given over to a fireside chat between the two physicians.

Mukherjee, an assistant professor at Columbia University and a practicing oncologist and hematologist at Columbia University Medical Center, has spent decades involved in research around the diagnosis and treatment of cancer. He received the 2011 Pulitzer Prize for General Nonfiction for his 2010 book The Emperor of All Maladies: A Biography of Cancer; last month, he published his latest book, The Song of the Cell. He spoke on the topic “A Peek Into the Future of Biomedical Transformation.”

At the outset of his lecture, Mukherjee said, “To start with, I want to talk about deep learning; that means deploying learning algorithms that imitate human learning. How do we learn?” he asked. “Can machines learn like us? Can machines learn medicine?” He referenced the April 3, 2017, article that he published in The New Yorker at the request of that magazine’s editors. He pointedly noted that the title the editors gave the article was “A.I. VERSUS M.D.” But he quickly added, “Interestingly, that’s a false title. I don’t think that ‘versus’ is the right word. Much of what I’ll tell you is about ‘with,’ not ‘versus.’” He then referenced the philosopher Gilbert Ryle, who, long before the birth of modern AI, made a distinction between ‘knowing that’ and ‘knowing how.’ “‘Knowing that,’” Mukherjee said, paraphrasing Ryle, “is knowing a series of facts; ‘knowing how’ is putting those facts together to produce learning. So, for example, knowing that means knowing that a bicycle consists of a set of parts. Knowing how involves learning how to ride a bicycle. What you didn’t do was hand your children a manual for bicycle riding, with step-by-step instructions. What you did was show them, or let them learn, how to ride a bicycle.” The set of questions that Ryle explored, Mukherjee went on, concerned how and why rules about inputs and outputs work, and whether artificial intelligence can actually mimic how human intelligence works. The answer, he noted, will be very important for the future of medicine.

Mukherjee showed his audience two sets of slides that are amusing but also thought-provoking: one set paired photos of a dog with photos of a blueberry muffin, some of them extremely similar to the eye; the other paired photos of another dog with photos of a mop, again with striking visual commonalities. He also showed the audience a simple photo of a dog and a cat looking at each other, noting that “We still don’t know exactly how it is that our brains distinguish between a dog and a cat.” And, he added, the “dog-versus-muffin” and “dog-versus-mop” tests were actually used by researchers at DARPA (the Defense Advanced Research Projects Agency) to try to develop theories about perception and cognition.

Pivoting off that, Mukherjee plunged into the deep end of the clinical research pool, leading his audience through a series of examples involving cancer detection and research around different types of cancers, and reflecting on whether and to what extent artificial intelligence-derived algorithms might be leveraged, and are already in fact beginning to be leveraged, to support cancer research. Among the areas he cited as being perfectly suited for algorithmic development were mammography, assessment of the risk of pancreatic cancer in high-risk patients, and exploration of the recurrence of prior cancers. An absolutely key use of AI will be to support the development of parallel “second opinion” clinical decision support, he said, noting that there could be countless situations in which a physician would be developing her or his diagnosis while, at the same time, running an AI algorithm to see what clinical conclusions machine learning might produce in the same clinical case. “You could imagine you making a diagnosis while the AI makes a diagnosis, and then you compare notes,” he said. “And a second opinion could take place at the same time. And so the algorithm could ask you whether you’re sure of your diagnosis.”

Continuing with clinical examples, he highlighted work that has been done to culture cancerous cells in petri dishes in order to study them; he noted that cancers actually do not grow easily in a laboratory environment, and that they are in fact “very dependent on factors secreted in the micro-environment.” But that research, he added, could help unlock some of the secrets around how cancers evolve in the body. In any case, he went into significant detail on several clinical case studies in which machine learning has been used to support the analysis of oncological therapies.

“Here is the most important question,” Mukherjee continued, “one being asked by all of us: what is the endpoint? The best endpoint is lives saved. So if you detect early cancers, there are many reasons to focus on numbers of lives saved, because you don’t know whether the downstaged cancer or less invasive cancer would end up killing the person or not. There are countless trials in which the endpoint might be a loose or weak endpoint. Obviously, the measure of lives saved is difficult, because it involves decades of research. I and others have gone to the FDA [Food and Drug Administration] and have said that very few companies can do a lives-saved trial; it will have to be an NIH-funded and -sponsored trial. I’d like all of us to think about how we create the definitive study.” And he concluded on an optimistic note, stating that he believes a great deal of progress will be made in all these areas in the next several years.

In response to a question from Haffty during their 15-minute fireside chat at the end of the session, Mukherjee said that he believed that healthcare consumers are becoming more and more aware of their autonomy, and are demanding to be treated as intelligent, thoughtful individuals, with the right to be given robust information on diagnoses, research, therapies, and everything around their patient care.
