On Feb. 19, Michael H. “Mac” McMillan, co-founder and CEO of the Austin, Tex.-based consulting firm CynergisTek, and chair of the Privacy & Security Policy Task Force of the Healthcare Information & Management Systems Society (HIMSS), presented an important update on federal healthcare data security mandates, as part of a webinar sponsored by Toronto-based Asigra. He brings more than 30 years of combined intelligence, security countermeasures, and consulting experience, in both government and private-sector roles. He has worked in the healthcare industry since his retirement from the federal government in 2000, and has written widely about data and information security topics for a variety of healthcare audiences.
Just prior to the webinar, in which he presented information and insights on the final Omnibus Rule under the American Recovery and Reinvestment Act of 2009 (ARRA) and its implications for healthcare data and information security, McMillan spoke with HCI Editor-in-Chief Mark Hagland about the crucial issues facing the healthcare industry at this time. Below are excerpts from that interview.
Please share some basic details about the recently published final Omnibus Rule, and how that revision will affect providers going forward.
The Omnibus Rule came out of HHS on Jan. 25, and one of the things it changed under HIPAA [the Health Insurance Portability and Accountability Act of 1996] was the final rule on breach notification, eliminating the harm provision and replacing it with a new formula that presumes a breach unless you can prove otherwise. So now, for any incident where a breach might have taken place, you have to start from the position that you have a breach until you prove otherwise.
What will the penalties be for breaches under the final Omnibus Rule?
Penalties range from informal measures, such as a compliance action plan or a resolution agreement, all the way to fines and civil or criminal prosecution.
Are only hospitals and medical groups covered, or all providers?
It applies to all covered entities and business associates: anyone who handles PHI [protected health information].
What is the level of preparedness for this in the industry?
It’s not high. And I just received the results from the 115 audits that OCR [the federal Office for Civil Rights] performed last year, which I’m currently analyzing.
Where are the three or four biggest gaps where people are falling down?
One is knowing exactly where their data is; two is having conducted an accurate and thorough risk assessment of where the risks in their environment are. Three is the level of protection in place, through things like encryption or DLP [data loss prevention]. And the fourth is having a good handle on the vendors they work with, particularly as many hospitals move data out into the cloud.
Obviously, the first key element is having a good understanding of where your risks are. And that entails knowing where your data is created, where it’s stored, and where it’s being sent; understanding the various technical controls and processes around each of those operational uses of your information; identifying where there is potential for exploitation or breach; and then addressing those things.
So probably the first thing I say to people is that you need to conduct a thorough risk assessment to identify the risks that need to be addressed. The second thing is truly understanding the resource commitment and having a plan for creating a secure environment; most folks don’t do that well yet. They don’t think of security as a business program; they think of it as a regulatory requirement. Instead of something they need to literally plan for strategically, they think of it as something to worry about if they’re audited.
The third thing is having good accountability and awareness of your environment; security is built around preparedness, detection, and reaction, right? And we’re talking about the detection and reaction elements. Do we have the knowledge and awareness to be able to detect things that could lead to a breach and aren’t consistent with our policies and expected behavior? And do we have the ability to react to those things and stop them?
So when you’re talking about managing breaches, you’re really talking about knowing where your risks are, having the proper controls in place to prevent those things from happening, and having the ability to detect and react. It’s really those three things, and most organizations don’t do any of the three well.
Most organizations still don’t have a CISO [chief information security officer] yet, correct?
Yes, you’re right, most organizations haven’t planned for this and haven’t yet put the resources into it.
So staffing and resourcing are important, right?
Yes, when you look at breaches, the gaps usually come down to people, dollars, and technology. On the technology side, we need proper technical controls. And we don’t have people with the right level of expertise to perform the jobs we’re asking them to perform, yet we’re still trying to do it internally. A lot of organizations will take someone out of IT or compliance or even nursing, with no security experience or expertise. We talk to people filling the chief information security role in hospitals all the time, and it’s their first security job: they’ve never been sent to training, have no certification, and have no background other than having worked in healthcare. They’re just ill-equipped to do the job.
What do you think would be an ideal background for this role? One challenge, of course, is that the leaders at most hospitals don’t feel they can afford to staff the CISO position.
The bottom line is, we can slice this thing any way we want. But it’s pretty clear: the surveys from the past five years, our own experience of the past 12 years working in healthcare, and these audits all tell the same story. People are not approaching this responsibility as part of their program, are not resourcing it adequately, are not ensuring that they have the right people to manage it properly, and are not investing in the technology needed to do it correctly. As a result, we haven’t seen any slowdown in breaches; the rate in fact is accelerating. And it won’t change until one of two things happens: either the federal government really tightens up enforcement, using fines, or consumers respond in the form of civil lawsuits. We really haven’t had any of the civil lawsuits come to fruition yet. Let those suits come to settlement, and let those settlements be significant, and that will probably get the industry’s attention faster than regulation would.
Now, one of the things the Omnibus Rule did change relates to enforcement, because the process OCR has had to use has been a very slow one. Typically, a breach has been reported, and OCR has had to go through a laborious investigation and then a resolution; the average has been 24 months from the time a breach is reported to a final outcome, and everyone agrees that’s too long. So one of the things they’ve done is give OCR greater latitude to go to outcome without resolution. In other words, take a case where a breach is reported, let’s say a stolen laptop, and as part of the report the organization admits that the laptop was not encrypted and had PHI on it. OCR is now empowered to say: you’re admitting that the laptop was lost or stolen and wasn’t encrypted, so you’re guilty; here’s your fine. That cuts it down to a much more expedited process for fining people. That might actually change things as well.
In most cases, historically, it took so long to reach resolution that you didn’t even remember when the breach had happened. You see the breach reported, and there are articles in the newspaper; then you hear nothing for two years, until there’s a resolution and OCR publishes it on its website, which it’s required to do.
It’s far too long a cycle, then?
Right. But now let’s play that forward to where we are today, and let’s assume that same breach happens today. An organization will have a breach and, literally a month later, OCR will fine them, and the impact will be greater.
What would your advice be for CIOs in all this?
The first piece of advice I would give them is to seek outside expertise in evaluating your program. People keep telling each other, and I think it’s because of cost, that you don’t need anyone from the outside to evaluate your program, and that’s really bad advice. Even in the federal government, when I had a big staff, I always used outside entities to evaluate my security program; it’s like you, as a writer, having someone outside proofread your draft. Second, make sure you use people with credible credentials when they advise you or help you address your security. You wouldn’t take your Lamborghini to a Volkswagen dealer! Use the right people to do the right job. And last but not least, understand that you are going to have to invest in security. You’re spending millions on IT, and yet you quibble over thousands for security; it’s ridiculous.
And the other element in this is that you need to change the discussion of security into a discussion of good stewardship of business assets. If you look at the Moody’s study, the average hospital right now operates on a 2.5-percent operating margin, which means that for every dollar that goes off the bottom line, it has to bring in 40 dollars to replace it. So take the average cost of a breach reported in the surveys, which is $2.4 million: if I suffer a breach at that cost, I have to bring in $96 million on the top line to replace that revenue, right? Start thinking about security from a business perspective. It costs money to do it incorrectly or badly. You’re a much more efficient manager or steward of the hospital’s resources when you spend $150 to encrypt that laptop than when you pay a $2 million fine. Don’t be penny-wise and pound-foolish.
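To make the arithmetic behind those figures explicit, here is a quick editorial restatement of the two numbers McMillan cites, the 2.5-percent operating margin and the $2.4 million average breach cost:

\[
\text{revenue multiplier} = \frac{1}{0.025} = 40, \qquad \$2.4\ \text{million} \times 40 = \$96\ \text{million}.
\]

In other words, at a 2.5-percent margin, each dollar of bottom-line loss requires 40 dollars of new top-line revenue, so a $2.4 million breach translates into $96 million in replacement revenue, exactly as stated above.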