He also feels protecting the integrity of the infrastructure is only part of the problem: "It's not only technology; it's the incident response team. If you do a postmortem after a loss of power, it's an opportunity to find out if you've got a rickety platform that may need to be replaced."
Bob Pappagianopoulos, chief information security officer at Partners, believes there has been a shift toward understanding the importance of recovery. "When you got into budget discussions in the 1990s it was always, 'Well, if we've got extra money, we can do something about DR.' Now, it's more like, 'We better do something just in case.'"
Partners' approach to DR involves two alternate sites that are both primary. "Most people have all their eggs in one basket, and then they'll have a backup basket," says Pappagianopoulos. "What we have chosen to do is to split it, so if we have 43 critical applications, half of our critical applications are up and running in one and half in the other. That way, if we have a disaster in one site, we only have to recover half our critical applications. We put a little logic into this."
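The split-primary approach Pappagianopoulos describes can be illustrated with a short sketch. This is purely hypothetical code, not Partners' system; the site names and the round-robin assignment are invented for the example.

```python
# Hypothetical sketch of the split-primary design described above: critical
# applications are divided between two active data centers, so a disaster at
# either site leaves only half of them to recover.

def assign_sites(apps, sites=("site_a", "site_b")):
    """Alternate applications across the two primary sites."""
    assignment = {site: [] for site in sites}
    for i, app in enumerate(apps):
        assignment[sites[i % len(sites)]].append(app)
    return assignment

# The article's 43 critical applications, split roughly in half.
apps = [f"app{n}" for n in range(1, 44)]
placement = assign_sites(apps)
print(len(placement["site_a"]), len(placement["site_b"]))  # 22 21
```

The point of the design is that either half can keep running while the other is recovered, rather than one primary site failing over to a cold backup.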
Partners defines different levels of criticality. For top-tier clinical applications, the recovery window is two to four hours, with the goal of driving it even lower. For the next level, applications that are still clinical but not tied to direct patient care, the window is four to eight hours; anything else critical is eight to 12 hours. Applications not deemed critical are 24 hours and beyond. Each business unit has a business continuity plan that must be updated every three months. "Most of the time there are no changes," says Pappagianopoulos. "You just have to make sure of that."
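These tiers amount to a recovery-time-objective (RTO) table. The following is an illustrative sketch only, assuming invented tier names; it is not Partners' actual tooling.

```python
# Illustrative RTO table for the criticality tiers described above.
# Tier names and the helper function are hypothetical, not Partners' system.

RTO_TIERS = {
    "clinical-direct-care": (2, 4),     # direct patient care: 2-4 hours
    "clinical-other":       (4, 8),     # clinical, not direct care: 4-8 hours
    "other-critical":       (8, 12),    # anything else critical: 8-12 hours
    "non-critical":         (24, None), # 24 hours and beyond (open-ended)
}

def max_recovery_hours(tier: str):
    """Upper bound of the recovery window, or None if open-ended."""
    low, high = RTO_TIERS[tier]
    return high

print(max_recovery_hours("clinical-direct-care"))  # 4
```

A table like this is what a business continuity plan would be checked against during the quarterly review the article mentions.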
At Partners, e-mail is considered a critical application. The alternate data center supports a subset of users, including information systems and clinical people, who will continue to have access to e-mail in an emergency. "We have redundancy across the network, so it's more than likely, unless the hospitals themselves get hit, we should be able to stay up and running with subsets of users that have e-mail."
Partners also has an interesting alternative to using outside disaster recovery vendors: in recent years, it has hired people with disaster recovery experience.
For the physical plant in Boston, natural disasters definitely played a part in siting the data centers: one of them is right on the water. The organization initially thought that housing the data center on the top floor would be an effective way to guard against hurricane flooding. "Unfortunately, by putting it on the top floor of a 10-story building with a flat roof, you now have to worry about water coming down," says Pappagianopoulos. The organization solved this problem by installing three membranes of roof structure above the data center. At Partners, as in so many disaster recovery plans, real estate was also a factor in the decision to go to the 10th floor. "There's a lot of things that you think you have the answer to until you try to execute them," he says. "We put it on the 10th floor instead of the eighth because the 10th floor had the height we needed."
Pappagianopoulos says he feels that hospitals in the Northeast need water detection systems and backup power that can sustain the entire data center, not just part of it. "A lot of people err in only doing a portion of it and they always get messed up. You need to make sure you invest a lot of money in power." Snow is another threat because it can impact the ability to get to the data center. In the event of a blizzard, Partners has the capability to run 80 to 90 percent of its data center functions remotely, and backups run automatically. "We don't have to worry about things like changing tapes, for example, because that's automated," he says. "We can connect to every server in the data center and also connect to all of our storage outside the data center. At some point, we want to get to a lights-out data center where you wouldn't need operating staff at all."
There have been lessons learned along the way. Pappagianopoulos' main advice? "As soon as you put in a critical application, you should build it into the disaster plan from Day One," he says. Partners made that part of its questionnaire and approval plan for new applications. "If they check off critical application, they automatically have to put a disaster recovery plan in place to get approval for the application. So we're stopping the spigot at the beginning instead of coming back after the fact."
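The approval gate described above can be sketched as a single rule: an application flagged critical cannot be approved without a disaster recovery plan on file. The function and field names below are hypothetical, for illustration only.

```python
# Hypothetical sketch of the approval gate described above: a new application
# marked "critical" is rejected unless a DR plan is attached.

def can_approve(application: dict) -> bool:
    """Approve only if non-critical, or critical with a DR plan on file."""
    if application.get("critical") and not application.get("dr_plan"):
        return False
    return True

print(can_approve({"name": "LabResults", "critical": True, "dr_plan": None}))  # False
print(can_approve({"name": "LabResults", "critical": True, "dr_plan": "v1"}))  # True
```

The design choice is to enforce the rule at intake ("stopping the spigot at the beginning") rather than auditing for missing DR plans after deployment.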
Other lessons concern insurance and vendors. Insurance policies need to be carefully examined to make sure that, in the event of a disaster, there is enough coverage to fund recovery. Pappagianopoulos believes building great relationships with primary vendors is another key factor. "If there's a disaster and other hospitals in your area are impacted, you want to be high in the food chain to get replacement equipment. You can build a great partnership, but at some point you want to be contractually protected."
He says that in hospitals, the biggest challenge is making sure the business owners understand the importance of a recovery plan because they have other things on their minds. "The biggest challenge is getting people's attention to do that — and then keeping it evergreen."