One Radiologist and AI Expert Shares His Perspectives on the Road Ahead

Jan. 30, 2022
David B. Larson, M.D., a professor of radiology at Stanford University, shares his perspectives on the challenges and opportunities in building artificial intelligence algorithms in radiology

As Healthcare Innovation noted in a December 6 report, “Several educational sessions at the RSNA Annual Conference, held November 29 through December 2 at Chicago’s McCormick Place Convention Center, and sponsored by the Oak Brook, Ill.-based Radiological Society of North America, focused on the ethics of applying artificial intelligence (AI) and machine learning (ML) to radiological practice and operations in healthcare. During a session held on Monday, November 29, and entitled ‘Ethics of AI in Radiology,’ David B. Larson, M.D., MBA, a professor of radiology (pediatric radiology) in the Department of Radiology at Stanford University, where he also serves as the vice chair for education and clinical operations, made several key points. Dr. Larson stated several moral obligations in the application of AI to radiological practice, saying that those applying AI and machine learning must adhere to seven ethical obligations. All those involved in applying AI to any clinical area, per the 2013 Hastings Center Report, must do the following:

1. Respect the rights and dignity of patients

2. Respect clinician judgments

3. Provide optimal care to each patient

4. Avoid imposing nonclinical risks and burdens on patients

5. Address health inequalities

6. Conduct continuous learning activities that improve the quality of clinical care and healthcare systems

7. Contribute to the common purpose of improving the quality and value of clinical care and healthcare systems”

As our December 6 article noted, Dr. Larson, along with coauthors David C. Magnus, Ph.D., Matthew P. Lungren, M.D., M.P.H., Nigam H. Shah, MBBS, Ph.D., and Curtis P. Langlotz, M.D., Ph.D., had included those seven recommendations in their June 2020 Radiology article entitled “Ethics of Using and Sharing Clinical Imaging Data for Artificial Intelligence: A Proposed Framework.” The authors had written: “In this article, the authors propose an ethical framework for using and sharing clinical data for the development of artificial intelligence (AI) applications. The philosophical premise is as follows: when clinical data are used to provide care, the primary purpose for acquiring the data is fulfilled. At that point, clinical data should be treated as a form of public good, to be used for the benefit of future patients. In their 2013 article, Faden et al argued that all who participate in the health care system, including patients, have a moral obligation to contribute to improving that system. The authors extend that framework to questions surrounding the secondary use of clinical data for AI applications. Specifically, the authors propose that all individuals and entities with access to clinical data become data stewards, with fiduciary (or trust) responsibilities to patients to carefully safeguard patient privacy, and to the public to ensure that the data are made widely available for the development of knowledge and tools to benefit future patients. According to this framework, the authors maintain that it is unethical for providers to ‘sell’ clinical data to other parties by granting access to clinical data, especially under exclusive arrangements, in exchange for monetary or in-kind payments that exceed costs. The authors also propose that patient consent is not required before the data are used for secondary purposes when obtaining such consent is prohibitively costly or burdensome, as long as mechanisms are in place to ensure that ethical standards are strictly followed. Rather than debate whether patients or provider organizations ‘own’ the data, the authors propose that clinical data are not owned at all in the traditional sense, but rather that all who interact with or control the data have an obligation to ensure that the data are used for the benefit of future patients and society.”

Further, in that article, Larson and his coauthors noted that “[T]he advent of AI technology has given greater urgency to the question of who should control and profit from deidentified clinical data. Experience grappling with these and other related questions at our institution has led us to develop an ethical framework to guide our use and sharing of clinical data for the development of AI applications.”

Earlier this month, Healthcare Innovation Editor-in-Chief Mark Hagland caught up with Dr. Larson for a conversation in which he elaborated further on the adoption of AI in radiology more broadly. Below are excerpts from that interview.

Is the adoption of AI in radiology about where you might have expected it to be?

Yes, it is. As soon as the technology became available, there was the thinking that this was going to revolutionize radiology, and do it quickly. There were even hyperbolic predictions that it would replace radiologists; but many realized that it wouldn’t happen that way. And in fact, we’re so early on in this technology that there’s a long way to go. We are starting to see a number of applications being developed, but the applications are pretty basic. And what we’re starting to have to work through are all the challenging, kind of boring, blocking-and-tackling challenges.

Everyone saw some early stumbles; for example, senior leaders at one healthcare IT vendor had believed that they could essentially dump millions of data points and images into a large data lake and make the result useful to diagnostic work in radiology, but that simply didn’t work out. What do you think about some of the early stumbles, and what’s been learned from them?

My background is in quality and safety; that’s been the focus of my career. In terms of what’s been learned, what’s happened has been in line with my expectations. What the field is learning in general is that all the tedious parts of managing organizations, managing complex IT systems, and integrating AI into the clinical environment are hard. And some people knew that. This is not sexy. Those of us who work in the environment have our hardhats on, and we’re fixing problems every day. And I think people who didn’t have as deep a knowledge of how these operations work are coming to that knowledge.

It’s always been hard, and it’s going to continue to be hard; but it’s doable. It’s just not going to be lightning-fast. This is potentially transformative technology. But the day after people had the idea of self-driving cars, we didn’t have self-driving cars all over the world. So if you think logically about all the problems that need to be solved in order to integrate and automate this, with quality and safety assured, and all the interactions with referring providers and patients, all of these elements mean a lot of moving parts; it was never going to be easy.

And it seems that the leaders of those patient care organizations that are having success aren’t trying to “boil the ocean,” but instead, are applying AI and ML to specific practical problems. Does that perception align with what you’re seeing?

Yes, and I don’t know how else it would be done. Even a radiologist is trained one problem at a time; I don’t know why we’d think that a machine would take a different approach. And that will continue to happen. Every problem has to be defined, solved, monitored, and integrated, and someone’s got to monitor the problem-solving algorithm. So that’s engineering, and then incorporating an algorithm into a portfolio of algorithms. And eventually, we’ll move from dozens to hundreds and thousands of algorithms, and at that point, it will become systems management. If you step back and think about the evolution of systems and the normal process, what might otherwise take multiple decades will happen in a matter of years. But there’s still a lot of work to do to systematize processes.
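Dr. Larson’s “portfolio of algorithms” framing can be made concrete with a small sketch. The Python below is purely illustrative, assuming hypothetical lifecycle stages drawn from his “defined, solved, monitored, and integrated” phrasing; none of the names reflect a real product or any institution’s actual tooling.

from dataclasses import dataclass, field
from enum import Enum

# Hypothetical lifecycle stages mirroring the "defined, solved,
# monitored, and integrated" framing. Names are illustrative only.
class Stage(Enum):
    DEFINED = "defined"        # clinical problem scoped
    SOLVED = "solved"          # model trained and validated
    INTEGRATED = "integrated"  # wired into the clinical workflow
    MONITORED = "monitored"    # under ongoing performance surveillance

@dataclass
class AlgorithmEntry:
    name: str
    clinical_task: str   # e.g., "pneumothorax detection on chest x-ray"
    owner: str           # accountable clinical or IT lead
    stage: Stage = Stage.DEFINED

@dataclass
class Portfolio:
    entries: dict = field(default_factory=dict)

    def register(self, entry: AlgorithmEntry) -> None:
        self.entries[entry.name] = entry

    def advance(self, name: str, stage: Stage) -> None:
        self.entries[name].stage = stage

    def awaiting_surveillance(self) -> list:
        # Integrated algorithms no one is yet monitoring: the
        # systems-management gap that emerges at scale.
        return [n for n, e in self.entries.items()
                if e.stage is Stage.INTEGRATED]

The design point is the last method: once a portfolio grows to hundreds of entries, finding what is deployed but unwatched becomes a systems-management query rather than a human memory exercise.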

What should CIOs, CTOs, and other healthcare IT leaders understand about all of this?

There needs to be a realization of what the value is of the thing that we actually do in imaging. And the thing that we do is that we acquire images and interpret those images, and we provide meaning and context. Historically, the IT person’s job has been to get the images and data to humans to support them. Gradually, the automated systems will not just be about moving the information, but increasingly about processing the information in a way that gets you closer to interpretation, at least to measurement and diagnostic assessment. It will incorporate actual clinical knowledge and expertise into the IT systems. And we’ll need clinical partnerships and governance, and even more discipline around issues like quality control. And there’s a lot of tedious work to do. But as these are created one at a time, it provides an opportunity for us to add value in ways that weren’t possible or efficient before. And that may not be aligned with the payment system, which may not keep up. So local leaders will have to decide to what extent they’ll move forward, even if the payment isn’t there yet. I would say there’s a good chance that many will wait until they’re paid for it; those who move forward may have to view this as an investment early on. And the investments aren’t just hardware and software; there’s management infrastructure, personnel, and governance processes involved as well. And those will most likely happen one institution at a time.

So the reality is that the entire ecosystem will have to be developed, and one of the challenges for the local IT leader will be: at what point do I make those investments? Part of that is really stepping back and looking at where value is actually provided, and it’s not necessarily where it’s paid for. We’re in the information business, and valuing information is a hard thing to do. So I think the main element of driving value is: to what extent is the information requested and used? The better aligned you are with the users of the information, the more you’ll know where your value is. So it’s about establishing that close connection to your valued customers. Rather than doing a financial ROI, the exercise will be more about having some degree of understanding of the value being provided, and making the investment based on what people are saying works for them and what they need, rather than requiring that it be fully justified every step of the way.
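Dr. Larson’s distinction in this answer, between moving information and processing it toward interpretation, can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s API: segment_lesion stands in for whatever validated model an institution deploys, and the thresholding inside it is a placeholder, not a real segmentation method.

import numpy as np

def segment_lesion(image):
    # Placeholder for a deployed segmentation model; a real system
    # would call a validated algorithm here, not a threshold.
    return image > image.mean()

def measure_lesion(image, pixel_spacing_mm):
    # The step described above: turning raw pixels into a structured,
    # interpretable measurement rather than just routing them along.
    mask = segment_lesion(image)
    area_mm2 = float(mask.sum()) * pixel_spacing_mm ** 2
    return {"finding": "lesion", "area_mm2": round(area_mm2, 1)}

# Example: a synthetic 512 x 512 image with 0.5 mm pixels.
image = np.random.rand(512, 512)
print(measure_lesion(image, pixel_spacing_mm=0.5))

The output of measure_lesion is the kind of structured result, a finding plus a unit-bearing measurement, that sits between image transport and full interpretation.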

It will take a while for the use cases to emerge, then?

Yes, that’s a classic diffusion-of-innovation problem, and that’s almost certainly how this information will diffuse. I think there will be local examples where people do things well, and others where they stumble. But the published literature is already rich, the informal networks are very strong, and there are conferences at which ideas are presented. We’re really talking about the learning healthcare system. It will be fun to watch as things emerge.

How do you see things evolving over the next few years?

We’ve hit on a lot of it here already in our discussion. We’ll have an increasing realization of the difficulty of these problems; we’ll have to solve the hard issues, such as figuring out where these algorithms should live. How do you monitor their effectiveness and figure out how much of a difference they’re making? How do you justify them? Governance will be very big: how do you make sure that they’re being incorporated appropriately into care? It will increasingly become a problem of coordination and complexity, as we have autonomous algorithms layered on autonomous algorithms. As they become increasingly intertwined, the complexity will blow up. And how do you manage all that complexity when you go from dozens to hundreds to thousands of algorithms? In a matter of a few years, we’ll start to see that, and you’ll start to see different inhibitors of the management of complexity. Right now, the main question is how we use the technology to solve problems; but each of the limiting factors, the rate-limiting steps, will become the focus over the next few years. So we need to be patient and realize this will be pretty monumental. Over the course of the next ten years, the difference it will make will be pretty dramatic. And we won’t ever want the humans to be pushed out; the question is how to support the people rather than replace the people.
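The monitoring question raised here, knowing whether a deployed algorithm is still making a difference, becomes a scale problem at hundreds of algorithms. Below is one minimal, hypothetical approach: an automated per-algorithm agreement check against radiologists’ final reads. The field names, baseline, and tolerance are illustrative assumptions, not an established standard.

from statistics import mean

def agreement_rate(cases):
    # Fraction of recent cases where the algorithm's output matched
    # the radiologist's final read (a crude effectiveness proxy).
    return mean(1.0 if c["ai_finding"] == c["radiologist_finding"] else 0.0
                for c in cases)

def flag_for_review(cases, baseline, tolerance=0.05):
    # Flag the algorithm for governance review if live agreement drops
    # more than `tolerance` below its validation baseline.
    return agreement_rate(cases) < baseline - tolerance

# Example: an algorithm validated at 94% agreement; of two recent cases
# one disagrees, so live agreement is 50% and the check fires.
recent = [
    {"ai_finding": "pneumothorax", "radiologist_finding": "pneumothorax"},
    {"ai_finding": "normal", "radiologist_finding": "pneumothorax"},
]
print(flag_for_review(recent, baseline=0.94))  # True

Running a check like this per algorithm is trivial at a dozen entries and a genuine coordination problem at a thousand, which is exactly the shift from engineering to systems management that Dr. Larson describes.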
