At RSNA, Experts Warn of Danger from Malicious Use of AI Against Radiology

Nov. 29, 2022
At RSNA22, a trio of experts from different disciplines warned attendees about the potential dangers of AI being used to attack clinician-developed algorithms or produce false images in radiology

On Monday morning, Nov. 28, at Chicago’s McCormick Place during RSNA22, the annual meeting of the Radiological Society of North America, a trio of experts from different disciplines examined the prospects for cybercrime affecting the field of radiology and artificial intelligence development in radiology and across healthcare. They issued dire warnings about the potential for harm to patients and patient care organizations as AI adoption expands in healthcare, with criminals potentially turning AI algorithms against patient care itself.

Monday morning’s session was entitled “Artificial Intelligence and Cybersecurity in Healthcare,” and the three experts who spoke were Shandong Wu, Ph.D., Benoit Desjardins, M.D., Ph.D., and Richard Staynings. Staynings is a healthcare technology and cybersecurity strategist, thought leader, expert witness, and chief security strategist at the New York City-based Cylera; Desjardins is a professor of radiology at the Hospital of the University of Pennsylvania (Philadelphia) and a practicing non-invasive radiologist; and Wu is an associate professor in radiology, biomedical informatics, bioengineering, intelligent systems, and clinical translational science at the University of Pittsburgh, and director of the Intelligent Computing for Clinical Imaging (ICCI) Lab at the Center for Artificial Intelligence Innovation in Medical Imaging (CAIIMI) at Pitt. The three spoke in sequence.

Staynings began his remarks by noting that “Medical devices make up about 75 percent of connected endpoints in hospital and medical group settings, and are largely unmanaged by IT. And we’re relying more and more on them, and they’re adding more value to patient care, but all of these advances are expanding the attack surface,” he said. “Telehealth, work from home, and advances in AI-based radiological medicine, for example, are all expanding the threat surface. This is making healthcare a very easy target for hackers, as if we didn’t already know that was true.”

Indeed, Staynings noted, “Criminals are becoming more and more eager to attack healthcare organizations, and paying ransoms is making this a very lucrative business. A couple of messages: one is investing in adequate cybersecurity; the second is preparing for inevitable attacks. But ransom payments are fueling an acceleration in the attacks. UCSF Health paid $1.4 million last year to retrieve their data; and Scripps Health was taken offline for three or four weeks this year, and it cost them $112.7 million in lost revenue and direct restoration costs, excluding fines, notifications, etc., as well as the class action lawsuits from their patients, who had to drive up to Los Angeles for patient care,” he added. “What’s more, we’ve all been following in the news what happened to CommonSpirit Health recently; that attack affected millions of patients.” And, he added, “The cost of a data breach in U.S. healthcare rose 9.4 percent this year, to $10.1 million.”

Now, when it comes to artificial intelligence, it’s a split-screen situation, Staynings told the audience. “Obviously, there are a lot of good advances, driving patient outcomes, driving efficiency, driving many good things,” coming out of AI adoption. But one intense area of concern is the emergence of “deep fakes”—when criminals and others are able to create false images, text, or audio. Such activity will inevitably harm healthcare, he emphasized—a theme that Wu later expanded on.

Importantly, Staynings added, “Offensive AI is highly sophisticated and malicious. It merges into its environment, adapts, and is very covert. It’s being used extensively by attackers. Our security tools are unable to detect AI-based attacks today. They’re faster, more effective, and more subtle, and they understand context. An attack will insert itself using conversation into an existing email chain, using language a person you know might use. ‘Please open the attachment,’ etc. Traditional security tools are impotent. It’s already been used in the wild, in banking, for example. AI is being used to undermine fundamental aspects of trust, which leads us to question whether we can trust a medical record or trust what a colleague has given us.” In that regard, he said, “We will need to leverage defensive AI to help us.”

Next, Dr. Desjardins spoke. He reiterated important points regarding cyberattacks, emphasizing how damaging they’ve become in healthcare. “The SANS Institute did a survey of hackers, asking them how long it took them to extract information from a system,” he reported. “They said it’s within five hours that they were able to extract everything. Contrast that with the fact that it takes about 250 days to detect a breach and 320 days to contain a breach. So there’s a massive, massive discrepancy there. We need much faster detection than what’s available.”

Further, Desjardins said, “Cyberattacks include the traditional network intrusion, involving reconnaissance, breaking the perimeter, pivoting to the private network, scanning the private network, and compromising it. Then there are DDoS attacks, involving an army of computers (botnets) recruited by a central command-and-control center, sending countless requests to overwhelm an information system. Another type of attack is related to malware. There are over 1 billion malware programs out there, and a half-million new ones being created every day, some of which are variations. Trojans, fake software. Four enterprises are hit by ransomware attacks every minute. And the biggest form of attack is phishing; 3 billion phishing emails are received every day, and phishing is the top cause of data breaches. Traditional defensive methods tend to be overwhelmed by the volume and speed of attacks,” he added. “It takes less than five hours to grab an organization’s data. But attacks are detected 235 days later and fixed 80 days after that.”
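
To make the scale problem concrete, here is a minimal sketch of the flood pattern a botnet produces: countless requests from recruited machines aimed at one target. The window and threshold are illustrative assumptions, not figures from the session.

```python
# Minimal sketch: flagging a flood-style (DDoS-like) pattern by counting
# requests per source IP within one time window. The threshold is an
# illustrative assumption, not a tuned production value.
from collections import Counter

def flag_flooding_sources(requests, threshold=1000):
    """requests: iterable of (timestamp, source_ip) pairs from one window."""
    counts = Counter(ip for _, ip in requests)
    return [ip for ip, n in counts.items() if n > threshold]

# Example: one bot sends 50,000 requests while a normal client sends 30
sample = [(0, "10.0.0.5")] * 50000 + [(0, "10.0.0.7")] * 30
print(flag_flooding_sources(sample))  # ['10.0.0.5']
```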

Importantly, AI can help in all this, Desjardins said. “It works automatically to detect attacks, including zero-day attacks. And it can deal with massive attacks at scale. There’s reusability: you train your models and don’t have to relearn everything from scratch. And AI programs can detect abnormalities on the network extremely fast. Every AI model has four layers,” he continued. “First, there is a data layer with general datasets; then a feature layer, with important features extracted from data; an intelligent layer, which can develop and combine models and evaluate effectiveness; and an application layer.” And in that context, “AI can be used to train systems to detect abnormal network traffic patterns.”
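
As a concrete illustration of that last point, here is a minimal sketch of training an unsupervised model on normal traffic and flagging deviations. The traffic features and numbers are invented for illustration; they are not from the session.

```python
# Minimal sketch: unsupervised anomaly detection on network traffic features.
# The features ([packets/sec, mean payload bytes, distinct ports]) are assumed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Data/feature layers: synthetic summaries of 1,000 normal flows
normal_traffic = rng.normal(loc=[100.0, 500.0, 5.0],
                            scale=[10.0, 50.0, 1.0],
                            size=(1000, 3))

# Intelligence layer: fit the model on normal behavior only
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# Application layer: score new flows; the second one is scan-like
new_flows = np.array([[102.0, 510.0, 5.0], [900.0, 40.0, 300.0]])
print(model.predict(new_flows))  # [ 1 -1 ] -- the -1 flags the anomaly
```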

Wu then went on to talk specifically about how cybercrime leveraging AI might actually be used to harm radiological practice. “The emergence of deep fakes is opening the door now to the creation of synthetic medical images—fake images,” he noted. “And there are several ways in which images can be manipulated to make them false. There’s image inpainting, in which one reconstructs missing regions in an image; and also object removal, image restoration, image retargeting, compositing, and image-based rendering. And it is at the level of the PACS [picture archiving and communication system] network that data and images could be accessed and manipulated.”
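
To make the inpainting idea concrete, here is a minimal sketch using OpenCV’s classical inpainting on a synthetic stand-in for a scan; the shapes are purely illustrative, and the deep-learning inpainting Wu describes is far more convincing.

```python
# Minimal sketch: inpainting reconstructs a masked region from its
# surroundings -- the same operation that could quietly erase a finding.
import cv2
import numpy as np

scan = np.full((256, 256), 128, dtype=np.uint8)  # synthetic stand-in image
cv2.circle(scan, (128, 128), 20, 255, -1)        # a bright "lesion"

mask = np.zeros_like(scan)
cv2.circle(mask, (128, 128), 25, 255, -1)        # mark the region to erase

# Fill the masked region from surrounding pixels; the "lesion" disappears
tampered = cv2.inpaint(scan, mask, 3, cv2.INPAINT_TELEA)
```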

When it comes to AI-developed algorithms inside patient care organizations, there are three types of attacks possible, Wu went on to say. “There are ‘white-box attacks,’ in which the attacker has access to both the data and the model parameters. They can design very powerful attacks. ‘Black-box attacks’ are those in which the attackers don’t have access to the model parameters; ‘gray-box attacks’ involve a mix of elements of white- and black-box attacks.”
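
For a sense of what a white-box attack looks like, here is a minimal sketch of the fast gradient sign method (FGSM), a textbook example the speakers did not name: with access to the model’s gradients, a perturbation too small to notice can flip the prediction.

```python
# Minimal sketch: FGSM, a classic white-box adversarial attack that assumes
# full access to the model's parameters and gradients.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return an adversarial copy of `image`; epsilon bounds the change."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel in the direction that most increases the loss
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```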

How can patient care organizations defend against adversarial attacks on the algorithms they develop? “We must secure the data, through at-rest encryption, in-motion security, digital signatures of scanners, and digital watermarks,” Wu said. “Machine learning algorithms can detect adversarial noise; and through adversarial training, we can train AI models with pre-generated adversarial samples,” for example.
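
A minimal sketch of that adversarial-training idea, reusing the hypothetical fgsm_attack() helper from the previous sketch: each batch is augmented with adversarial copies so the model learns to classify those correctly as well.

```python
# Minimal sketch: one adversarial training step. fgsm_attack() is the
# hypothetical helper defined in the white-box sketch above.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    adv = fgsm_attack(model, images, labels, epsilon)  # pre-generated attacks
    batch = torch.cat([images, adv])
    targets = torch.cat([labels, labels])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch), targets)  # learn on clean + attacked
    loss.backward()
    optimizer.step()
    return loss.item()
```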

Meanwhile, when it comes to diagnostic images that have been manipulated to become false, Wu said that the potential danger is great: “We did a study: three experienced radiologists were largely confused by fake images for lung cancer diagnosis. Five experienced breast imaging radiologists visually identified between 29 and 71 percent of the fake mammogram images.” Given that alarming result, he urged that “We will need multidisciplinary teams” of colleagues, including clinicians, IT professionals, and AI experts, in order to fight back against the adversarial diagnostic image manipulation that could soon become a reality—a reality that could endanger the health and even lives of patients, and wreak havoc inside radiology practices, hospitals, and health systems.
