On Monday, July 20, moments after he had delivered a keynote address to the CHIME Lead Forum-Denver, at the Sheraton Downtown Denver, Mac McMillan, CEO of CynergisTek, the Austin, Tex.-based consulting firm, sat down with HCI Editor Mark Hagland to talk about IT data security.
In his opening keynote address at the Forum, sponsored by the Ann Arbor, Mich.-based College of Healthcare Information Management Executives (CHIME) and by the Institute for Health Technology Transformation (iHT2, a sister organization of Healthcare Informatics through our parent company, the Vendome Group LLC), Mac McMillan had laid out in the clearest possible terms for his audience of IT executives the growing cybersecurity dangers threatening patient care organizations. Among the key areas of concern he had discussed were “increased reliance”; “insider abuse”; “questionable supply chains”; “device-facilitated threats”; “malware”; “mobility”; “identity theft and fraud”; “theft and losses”; “hacking and cyber-criminality”; “challenges emerging out of intensified compliance demands”; and a shortage of chief information security officers, or CISOs.
McMillan did reference the massive data breach at UCLA Health, which had occurred just a few days before the CHIME Lead Forum took place (it was announced on July 17). And that event was a starting point for his post-keynote exclusive interview with Hagland. Below are some excerpts from that interview.
You briefly mentioned the UCLA Health breach in your comments just now. Without over-emphasizing that one breach, what do you think we should all take from that incident, going forward?
While it’s not unusual for organizations not to encrypt data internally (in fact, the majority of patient care organizations still do not encrypt internally), what these breaches suggest is that perhaps it is becoming time to internally encrypt data within the EHR [electronic health record]. Maybe two or three years ago, that wouldn’t have made sense, but maybe we need to change our thinking about that. Not knowing all the facts of that case, I would think about several things. One, how are we protecting the data? Are we architecting our environments to segregate patient information from other information? And when we communicate information internally, are we encrypting it at rest or not? I think a lot of those historical assumptions do need to change.
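As a rough illustration of what encrypting patient data at rest could look like, here is a minimal sketch using the Python cryptography package. The record fields and inline key generation are illustrative only; a real deployment would pull keys from a key-management service or hardware security module, not generate them alongside the data.

```python
# Minimal sketch: field-level encryption at rest for a patient record,
# using the Python "cryptography" package (pip install cryptography).
# Field names and key handling here are hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice: fetched from a KMS/HSM
cipher = Fernet(key)

record = {"mrn": "123456", "diagnosis": "hypertension"}

# Encrypt sensitive fields before they are written to storage.
encrypted = {k: cipher.encrypt(v.encode()) for k, v in record.items()}

# Decrypt only at the point of authorized use.
decrypted = {k: cipher.decrypt(v).decode() for k, v in encrypted.items()}
assert decrypted == record
```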
The thought before was that if you have the network segmented properly and you have your patient information within your data center, with physical protections, then you probably don’t need to encrypt it; you’re relying on other protective measures to compensate.
So you do think patient care organizations should look at encrypting patient records at rest within the EHR, then?
Yes, I do. I think we need to have a serious discussion about that. I think we also need to look at how we architect our environments. Do we need to put those systems inside an internal firewall, so that you only get to them if you need to? We’re assuming that everyone inside is trusted. But let’s say I’m a hacker, and I compromise you externally and obtain credentials; now the door is open to me. And here’s the kicker: why is it so easy for hackers to acquire credentials? We’re not even encrypting our passwords internally; we’re not protecting the credentials of people with elevated privileges; and we’re not using two-factor authentication for people with elevated credentials.
And let’s say I’m a hacker and I get in and compromise your environment and start looking for passwords. If all your passwords are encrypted, now I’ve got to decrypt your passwords; if I’m able to do that, it’s game over again. But if there’s a second factor associated with that password, a PIN or a soft token on my phone, then you’re still secure. Think about what hackers are trying to do: they’re looking to gain elevated privileges that allow them to make changes in the environment, to turn things off, turn things on, and so on. If they can’t get those elevated privileges, they’re done. They can’t exploit you, or it’s going to be incredibly hard to do so.
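To make those two controls concrete, here is a minimal sketch of storing only a salted hash of each password and checking a time-based one-time code (TOTP, RFC 6238) as a second factor for privileged accounts. It uses only the Python standard library; all names and the shared secret are illustrative, and a real system would use a vetted library (e.g., pyotp) and a memory-hard hash such as argon2 or bcrypt.

```python
# Sketch of the two controls described above: salted password hashing
# plus a TOTP soft-token check for accounts with elevated privileges.
import base64, hashlib, hmac, os, struct, time

def hash_password(password, salt=None):
    """Derive a salted hash; only this, never the password, is stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored):
    return hmac.compare_digest(hash_password(password, salt)[1], stored)

def totp(secret_b32, step=30, digits=6):
    """Current TOTP code (RFC 6238) for a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Login should succeed only if both factors check out.
salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print("current soft-token code:", totp("JBSWY3DPEHPK3PXP"))
```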
And guess what? If you’re monitoring behaviors and activity, sooner or later you should notice something going on. For instance, registry settings should never change unless someone with elevated privileges allows them to change; if I’m monitoring, I’ll notice that. Often, hackers will disable auditing activity. The minute someone disables auditing, or IDS (intrusion detection) gets disabled, that should be obvious. Whenever there’s a change in the environment, like a security setting, it should be registered, and somebody should be checking it out.
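A toy watcher can illustrate the point: snapshot a security configuration, then flag any drift for a human to check out. The file path and polling interval below are hypothetical; production tools (auditd, OSSEC, Windows event forwarding, a SIEM) do this at scale and across many signal types.

```python
# Toy change-watcher: fingerprint a security setting file and alert
# on any change. Path and interval are hypothetical.
import hashlib, time
from pathlib import Path

WATCHED = Path("/etc/security/audit.conf")   # hypothetical setting file

def fingerprint(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

baseline = fingerprint(WATCHED)
while True:
    time.sleep(60)
    current = fingerprint(WATCHED)
    if current != baseline:
        # Any change to a security setting should be registered and
        # investigated, including "improvements" nobody claims.
        print(f"ALERT: {WATCHED} changed; verify against change control")
        baseline = current
```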
And part of that involves making one’s team and organization alert to patterns, of course.
I used to teach this: whenever you see your environment suddenly get better, you’d better check, because when hackers get in, as soon as they break in, they fix something and install their own back door. They don’t want some knucklehead coming up behind them and ruining their party. So you go to your change control log and you look at your monitoring tools, and if you can’t figure out who’s making changes, you’d better go check. With some of those other hacks that have occurred, when most organizations talk about them, they’ll say, we noticed some activity, but we didn’t think anything of it at the time. The point is that any anomalous behavior has to be noticed. Computers don’t turn things on or off by themselves; if it happens, it’s because somebody did it.
So in the end, I would say that we really have to do three things: we have to do behavioral monitoring, we have to move towards higher levels of encryption, and we have to act proactively and strategically, going forward.