Guest Blog: When Business Masquerades As Social Conscience

Oct. 25, 2016

Based on recent news and the headline of this article, you are likely expecting a discussion of the irresponsible actions of MedSec and Muddy Waters, the organizations that outed St. Jude Medical by publicly disclosing vulnerabilities in the medical devices it makes.

Certainly this is not something I condone or support as the right path to an acceptable end: it raised fears in the people using those devices, handed the criminal element harmful information, and quite possibly caused irreparable financial harm to St. Jude, perhaps before the identified issues were even verified. I would argue, however, that the blame for this situation falls on a much broader cast than the characters represented in this one episode.

For years we have known about and debated what to do about the insecurity of medical devices. The government bears a significant responsibility for this situation because, unlike the various other actors involved, it alone has a mandate to take action to protect consumers when it becomes aware of a situation that places their safety at risk. The Department of Homeland Security has conducted tests and provided irrefutable proof that multiple security flaws plague medical devices. Others have tested and hacked different devices and publicly disclosed their findings at well-known hacker events like DEF CON and Black Hat. Providers have repeatedly asked the Food and Drug Administration (FDA), as well as their vendor suppliers, for help solving this problem before someone is harmed. But after all the appeals, all the evidence, and all the debate, nothing concrete has been done.

So what of MedSec’s and Muddy Waters’ actions? Was it a public service? Was it financially motivated? Was it somehow Robin Hood-esque in its intentions to save the masses from the evil wrongdoers? It doesn’t matter. At the end of the day it was irresponsible, and it potentially put both providers and consumers at risk. What is ironic about this situation is that it is reminiscent of what we experienced during the ’90s with social-activist hackers who would reverse engineer or attack systems, find flaws, and then publish them on the internet, creating havoc for anyone using those systems as they scrambled to find a fix.

All too often we hear, “we hacked the system and found X, we told the vendor, who ignored us, so for the greater good we published it online to embarrass them and show that we care.” Whoa, you showed me you care by publishing it where? Even when you knew there was no fix available? This doesn’t add up. I can assure you that those of us engaged in the pursuit of protecting systems and data didn’t feel the warmth of their actions. So how does this happen?

The answer is simple, although many will tell you it is a complex problem. It has a name. It’s called ambivalence. The technologists and manufacturers who make the devices argue that fixing the problem could stifle innovation and increase the cost of development; it is not in their self-interest to change. The providers complain (but are helpless to affect the market) because they don’t have choices, and when caught between using an insecure device to save a person and not using it at all, they will always opt to use it, heal, and accept the risk. We want them to. So it is not in their self-interest, or ours, to force change by refusing to use insecure devices. The government studies, tests, and debates, but doesn’t take action; too much legislation is bad, so acting is not in its self-interest, even though sometimes the market needs to be regulated. So we have ambivalence, conflicting interests, and a lack of action, all of which lead to insecure medical devices and at-risk consumers. One of the negative side effects of ambivalence is that some become frustrated, or seek to take advantage of the situation and exploit fear, to bring about change.

What is most frustrating about this issue is that everyone involved knows the problem, understands the risk, and knows what needs to be done. I’ve had countless conversations with providers on how to manage the risk. They are spending time and money to provide whatever level of protection they can with imperfect solutions.

I was recently asked, in two separate conversations with CIOs who also happen to be medical doctors, whether we will ever see a solution. They had no faith that manufacturers would respond on their own, citing that it is not in their financial interest to do so. They restated their frustration that they lack sufficient choices to effect change by selecting only those products that are secure.

As I reflected on our discussions, I realized we were just contributing to the ambivalence around this issue. We need to stop. Cybersecurity IS a patient safety issue and cybersecurity controls should be required criteria for certification of medical devices that connect to a caregiver’s network or to a patient. Standards should be created for developing and implementing medical devices that assure the consumer their safety is being addressed properly. Medical devices should be required to pass independent tests as part of a certification process, before being approved for sale. Manufacturers should be required to provide ongoing support during the life of these devices to maintain their integrity as other software vendors do. And the FDA should issue standards for testing and certifying medical devices prior to approval for sale.

Unfortunately, we cannot eliminate undesirable behavior, but maybe we can change the environment so that incidents like the one we saw with St. Jude Medical no longer seem reasonable. Responsible testing of systems and applications is a critical component of good security, and there is a responsible way to go about it and a responsible way to manage what we learn from it.
