Where Is Network Segmentation Headed? One Industry Expert Has a Good Idea

Feb. 20, 2018
Impact Advisors’ John Robinson shares his perspectives on the new thinking around network segmentation, including around micro-segmentation and software configuration—and urges CIOs, CISOs, and other healthcare IT leaders to move quickly away from old tactics in this critical area

Among the numerous critical elements in healthcare data and IT security now gaining attention, at increasingly granular levels, is the set of issues around information system network segmentation. Network segmentation, as a concept, is far from new, including in healthcare; indeed, very broad network segmentation strategies have been an element in overall data and IT security plans at many U.S. patient care organizations for years. But the ongoing acceleration in cyberattacks on patient care organizations, most often ransomware and other malware intrusions delivered via phishing emails sent to staff members, is pushing the discussion forward.

Specifically, industry experts are urging CIOs, CISOs, CTOs, and other healthcare IT leaders in patient care organizations to think about new, more sophisticated forms of network segmentation, including “micro-segmentation.” What is micro-segmentation? One industry expert, John Robinson, a senior advisor with the Naperville, Ill.-based Impact Advisors consulting firm, has a good handle on the topic. The North Ridgeville, Ohio-based consultant, who specializes in strategic technology consulting, has been with Impact Advisors for nearly two years. Previously, he had spent time at Dell Health Consulting, and prior to that, at the MetroHealth integrated health system in Cleveland, and at Catholic Health Initiatives in Denver. Robinson spoke recently with Healthcare Informatics Editor-in-Chief Mark Hagland about these issues, as Hagland interviewed industry experts for the upcoming Special Report on Cybersecurity. Below are excerpts from their interview.

When you look at the subject of network segmentation at a 40,000-foot-up level, what are the biggest issues, from your perspective?

From a senior management perspective, the biggest issue is that nobody's really clear what it is. There are so many variations on the theme: network segmentation, micro-segmentation, security segmentation, network partitioning. There are a million names for essentially the same thing.


Among those terms, which one or two are best, or most understood, in your view?

The most understood, and the one that has the potential to become the standard term here, is micro-segmentation. But it’s a misnomer. It’s what I would call tentacle segmentation, really. Micro-segmentation has a nice ring to it. What that really is, is a technical approach that makes network security more flexible, by applying software-defined policies, rather than manual configuration.

How many IT security professionals in patient care organizations are still manually configuring their network segmentation?

The vast majority of healthcare organizations are still back in the manual configuration phase, trying to address rapidly evolving threat vectors with a manual methodology that just can’t keep up. You can’t type fast enough, basically, to do manual configuration in order to keep up with the threat vectors that are accelerating on a daily basis.

And the new wave in this area is software configuration, correct? What’s involved in software configuration, and how does it make a difference?

Creating a software-defined network allows you to apply policies, processes, and procedural rules to the traffic and data on the network itself, as opposed to manual configuration, where you are still manipulating software, but where you're still essentially twisting wires. So this is not an alternative to manual configuration. You still need to electronically twist the wires, as it were, to keep your basic physical infrastructure chugging along, but you apply software definitions to that network so that you're looking not at physical attributes of connectivity, but at the data flowing across that physical infrastructure, and applying policies and rules to that data, to make sure it goes where you want it to go, and doesn't go where you don't want it to go.

What are the key differences between software-configured and manually configured network segmentation?

With software-configured network segmentation, you can start with "I'm not going to let anybody in," and then loosen from there, whereas with physical configuration, you're starting off allowing everyone to connect.
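The default-deny model Robinson describes can be sketched in a few lines of Python. This is an illustration only, with invented rule and group names, not any vendor's policy engine: everything is denied unless an explicit allow rule matches, which is the inverse of the default-allow posture of manually configured networks.

```python
# Minimal default-deny policy check (illustrative; all names are invented).
# Software-defined segmentation starts from "deny all" and adds explicit
# allows; physical/manual segmentation effectively starts from "allow all".

ALLOW_RULES = [
    # (source_group, destination_group) pairs that are explicitly permitted
    ("or_scheduling", "surgery_db"),
    ("purchasing", "erp_system"),
]

def is_allowed(source: str, destination: str) -> bool:
    """Return True only if an explicit allow rule matches; deny by default."""
    return (source, destination) in ALLOW_RULES
```

Under this model, any flow not named in the rule set, such as `or_scheduling` trying to reach `erp_system`, is simply denied without anyone having to write a blocking rule for it.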

In other words, it’s like when a department store lets shoppers in one shopper at a time.

Right, and when they direct that shopper directly to a specific TV. However, there are some ‘gotchas’ there that have nothing to do with technology. You need to have, as an IT leader, a really good understanding of what you’ve got [in terms of information systems]. You need to know where all your users are, you need to know about all of your applications, and you need to understand who needs to connect to what. And that’s not easy.

In other words, you have to start with an overall strategy?

Yes, that’s right. In my mind, there’s no such thing as a tactical plan to address security at this level; it has to be strategic. You need to have this really intimate understanding of your environment, before you begin. Tactical responses are all, on the order of ‘X is happening, let’s do this.’ That’s like watching penguins on a beach: if something flies over the beach, all the penguins watch it fly over. Or if you’ve ever watched first-graders play soccer, that’s how most healthcare organizations respond to a security event.

So, put another way, you have to decide where your moats are going to be?

That’s what I would call legacy thinking about security. Let’s say you’ve got a hospital leadership team of 15 people, with all their areas of responsibility. If you were to ask those 15 people what’s most important, my guess is that you’d get 20 answers. The reality is that importance is a perception. If I’m running the OR, then my surgery scheduling is far more important to me than purchasing. But if I’m running purchasing, well, you can’t run your OR unless I can buy you stuff. And if you take that approach, you end up with basically everything being important, and ultimately, nothing being important.

So rather than breaking the environment down by function, as you’ve just described, you basically need to organize the security environment around your data center, because that’s where all your jewels are. And within the data center, rather than breaking it down into an applications VLAN, a management VLAN, and so on, put everything together in what I would call operational groups (finance, HR, etc.), and then within that grouping, create a policy-based environment to allow access to that group. It’s just a different way of thinking; it doesn’t change what’s in your data center; it’s a different way of structuring your data center.
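The operational-group structure Robinson sketches can be illustrated as follows. This is a hedged toy model, with invented group, application, and role names: applications are bucketed by operation rather than by VLAN, and one policy per group decides who may reach anything inside it.

```python
# Toy model of "operational groups" (all group/app/role names invented).
# Instead of VLAN-per-function, applications are grouped by operation,
# and a single policy governs access to each group.

OPERATIONAL_GROUPS = {
    "finance": {"general_ledger", "accounts_payable"},
    "hr": {"payroll", "benefits_portal"},
}

# Which user roles may reach each operational group.
GROUP_POLICY = {
    "finance": {"finance_staff", "cfo"},
    "hr": {"hr_staff"},
}

def can_access(role: str, application: str) -> bool:
    """Allow access only when the role is permitted for the operational
    group containing the application; unknown applications are denied."""
    for group, apps in OPERATIONAL_GROUPS.items():
        if application in apps:
            return role in GROUP_POLICY[group]
    return False
```

Note that the deny-by-default fallback for unknown applications is what makes the "you have to know what you've got" point bite: any system missing from the grouping simply becomes unreachable.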

And this is where people fall down—it’s really in understanding what’s in that data center. My bet is, if you were to come into any hospital and say, show me a list of the applications you run in your data center, they would actually struggle. They do not have the foundational components of an application catalogue, or a configuration management database, that says who does what, when, and what they’re allowed to do. Until you do that, all these fancy security technologies are going to be difficult to implement, and you’ll spend a lot of money delivering a security solution, because you don’t really have a full picture of your environment, so you don’t really know when you’re done.
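One way to see why the application catalogue is foundational: compare what the catalogue says you run against what is actually observed on the network. A minimal sketch, with invented application names, of that gap check:

```python
# Illustrative gap check (application names invented): systems observed
# on the network that have no entry in the application catalogue are
# exactly the systems you cannot write segmentation policies for.

CATALOGUE = {"general_ledger", "payroll", "surgery_scheduling"}
OBSERVED_ON_NETWORK = {"general_ledger", "payroll", "legacy_lab_feed"}

# Set difference: running systems with no owner or policy on record.
uncatalogued = OBSERVED_ON_NETWORK - CATALOGUE
```

Here `uncatalogued` comes back as `{"legacy_lab_feed"}`: a system chugging along in the data center that no policy-based segmentation effort can account for until it is catalogued.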

What are your thoughts and perspectives on how to handle the core EHR [electronic health record], in the context of these newer ideas about network segmentation?

Let’s say you’ve got a highly integrated EHR environment, as with Epic, Cerner, or any of the big EHR vendors. The challenge there is that you’ve put all your eggs into one rather significant basket. There are very good reasons to do that, but from a security standpoint, it’s a bit of a nightmare. So in order to provide the level of patient care you want to provide, via a highly centralized EHR, you have to allow users from all across the organization to access that functionality, which is these days usually controlled by a Citrix access layer or a virtualization access layer. And that’s where you can apply some degree of control, in that access or virtualization access layer.

That provides a policy-ish kind of layer between the users and the core. If I know that this virtual terminal is in labor and delivery, then by applying a software-defined policy, I should never see someone using that terminal accessing patient accounts. You do have a bit of granularity there. It’s not as good as it should be, because you’re starting with a centralized EHR, but you can at least minimize the risk exposure.
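Robinson's labor-and-delivery example can be sketched as a location-based policy at the access layer. All terminal IDs, locations, and module names below are invented for illustration; the point is only the shape of the rule, not any real Citrix or virtualization API:

```python
# Sketch of an access-layer policy keyed on terminal location
# (terminal IDs, locations, and EHR module names are invented).

TERMINAL_LOCATION = {
    "term-042": "labor_and_delivery",
    "term-107": "billing_office",
}

LOCATION_ALLOWED_MODULES = {
    "labor_and_delivery": {"clinical_charting", "med_administration"},
    "billing_office": {"patient_accounts"},
}

def module_allowed(terminal: str, module: str) -> bool:
    """A terminal may open only the EHR modules allowed for its location;
    unknown terminals are denied outright."""
    location = TERMINAL_LOCATION.get(terminal)
    if location is None:
        return False
    return module in LOCATION_ALLOWED_MODULES.get(location, set())
```

So a labor-and-delivery terminal can open clinical charting but is refused patient accounts, which is exactly the "should never see" condition from the interview expressed as policy rather than wiring.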

In other words, essentially, you can break up the EHR, in the context of a segmentation strategy.

Yes, that’s right, you can. The challenge is, there’s no free lunch here. If you start to partition your EHR environment with an eye to security, then you create operational problems, because at the end of the day, you want all these bits of the EHR to communicate with each other. So that creates problems at the end of the line.

What is the ideal strategy for the EHR, in the context of all of this?

That’s really a good question. I’m not sure that there actually is an ideal. I think that what we have to come to is a grand compromise of operational sustainability and functional flexibility. It’s one of those things where you can’t have all of one or all of the other. You have to make it as secure as you can, while keeping it functional. Because total security would mean pen and paper. But per your example of the hospital being down for weeks, that’s a management problem, not a technical problem. The technology exists to prevent that, through appropriate use of backup, of business continuity strategies, and through a commitment and investment in your core infrastructure that says, I know there will be vulnerabilities. Look at Meltdown and Spectre, the two core vulnerabilities in the CPU chips in computers. In the end, you need to mature your approach, to realize that security is a business imperative, and not just something that IT needs to do to keep the place safe.
