Weathering the Storm

Three years ago, Hurricane Katrina wreaked havoc on New Orleans, and particularly on the city's hospitals, forcing patients to relocate to football stadiums and physicians to administer treatment without access to medical records. The destruction the city faced was well-documented and had a scarring effect on those living in the area. Fortunately, the storm had one positive effect: it was a much-needed catalyst for local healthcare organizations to do better the next time.

This past September, when Gustav, a Category 2 hurricane, barreled toward the city, Ochsner Health System was better prepared. The disaster recovery steps put into place over the past few years played a significant role as patients were treated and EMRs stayed up and running throughout the storm.

“We had a real-life experience that taught us the value of getting our disaster plan in place,” says Lynn Witherspoon, M.D., vice president and CIO at Ochsner, a seven-hospital academic health system based in New Orleans. “Aside from the one brief interruption of some services in our Elmwood facility, things went on pretty normally,” he says. “The ability to evacuate two significantly sized hospitals and turn around and do it all over again in the course of about five days is pretty incredible. We're very pleased with the performance of our platforms and systems.”

Disaster preparedness and recovery became a big priority after Katrina. Since that time, Ochsner has created a formal disaster command center, adopting the approach established by the Washington, D.C.-based Federal Emergency Management Agency. The health system also added several power generators and increased redundancy in its wide-area network, Witherspoon says.

This type of planning, says Jonathan Thompson, vice president of client services at Minneapolis-based Healthia Consulting, is critical and should be a model for all health systems. “I can't tell you how many times we've had an organization tell us they have a disaster plan, and it's a gigantic book created by a consultancy that came in and spent a lot of money to develop a deliverable that's really never been touched.”

Lately, however, the tide seems to be changing, Thompson says, as there is increased awareness around disaster preparedness and recovery. “It's not just an application or system or network infrastructure going down,” he says. “There's a greater risk of additional, bigger issues that can not only create system downtime, but affect where your people are, your building infrastructure, and the way you access the systems to recover.”

Two of the most serious issues CIOs must deal with during a natural disaster are loss of power and impaired access to IT systems, and the two are closely intertwined: keeping IT systems up requires power, and while most hospitals have emergency power capabilities, sometimes that isn't enough.

After losing a piece of its internal power-generation capability, which led to difficulty in keeping systems cool during Katrina, Ochsner added several generators during its recovery process. Since then, the IT team has also remained vigilant about refueling to ensure the generators could keep running. This step, Witherspoon says, proved pivotal during Gustav, as Ochsner never lost any of its systems in the main campus data center.

Keeping the data center running and ensuring that users can access IT systems is critical, as was demonstrated again during the recent storm, Witherspoon says. Ochsner, which has three uninterruptible power supply (UPS) units protecting the data center's electrical platform, experienced several dozen intermittent losses of commercial power in the computer room and was forced to shift to generator power. The UPS units worked reliably the entire time, says Witherspoon, who advises keeping up maintenance on the units and having at least a half-dozen spot coolers or portable air conditioning units on hand.
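
How Ochsner actually monitors its UPS units and generator transfers is not described in the article, but the basic discipline is easy to sketch. The following Python fragment is a minimal illustration, assuming a UPS managed by the open-source Network UPS Tools (NUT) suite; the UPS name, polling interval and runtime threshold are hypothetical, and a real deployment would page the on-call engineer rather than print to the console.

#!/usr/bin/env python3
"""Minimal sketch of a UPS status poller, assuming a NUT-managed UPS.

The UPS name, polling interval and runtime threshold are hypothetical; the
article does not describe Ochsner's actual monitoring tooling.
"""
import subprocess
import time

UPS_NAME = "datacenter-ups@localhost"   # hypothetical NUT UPS identifier
POLL_SECONDS = 30
MIN_RUNTIME_SECONDS = 600               # warn when under 10 minutes of battery

def read_ups_vars(ups: str) -> dict:
    """Return the variables reported by the NUT `upsc` client as a dict."""
    out = subprocess.run(["upsc", ups], capture_output=True, text=True, check=True)
    variables = {}
    for line in out.stdout.splitlines():
        key, _, value = line.partition(":")
        variables[key.strip()] = value.strip()
    return variables

def main() -> None:
    while True:
        status = read_ups_vars(UPS_NAME)
        on_battery = "OB" in status.get("ups.status", "").split()
        runtime = int(float(status.get("battery.runtime", "0")))
        if on_battery:
            print(f"ALERT: UPS on battery, roughly {runtime}s of runtime left")
        if runtime and runtime < MIN_RUNTIME_SECONDS:
            print("ALERT: battery runtime below threshold; confirm generator transfer")
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()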

However, Witherspoon notes, it takes more than just sufficient power to keep things running. “In terms of the integrity of the data center and keeping systems up, the other piece that's critical is that end users are able to get to those services,” he says. Prior to Katrina, Ochsner had not Web-enabled its core clinical platforms, and as a result, clinicians had to be physically connected to the network. In the ensuing years, enabling Web access to all of its clinical platforms became a top priority. “That proved to create a lot of versatility as both doctors and patients were evacuated and found themselves in unusual places where no one could access medical records.

“Since Katrina, we have extensively increased the redundancy in our wide area network,” which, he says, “worked brilliantly during this storm.” Ochsner was able to collaborate with the three Internet providers it uses to provide a redundant backup network through which the IT staff was able to support the entire operation, including PACS.
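
The article does not say how Ochsner's IT staff verifies that all three provider links stay healthy, but a rough sketch of that kind of check might look like the following; the provider labels, interface names and probe address are all hypothetical, and the script assumes a Linux host where each provider terminates on its own interface.

#!/usr/bin/env python3
"""Minimal sketch of a WAN-link health check across redundant providers.

Provider labels, interface names and the probe address are hypothetical; the
article only says Ochsner's backup network spans three Internet providers.
"""
import subprocess

LINKS = {                     # hypothetical interface-per-provider mapping
    "provider-a": "eth1",
    "provider-b": "eth2",
    "provider-c": "eth3",
}
PROBE_TARGET = "8.8.8.8"      # any reliably reachable address will do

def link_is_up(interface: str) -> bool:
    """Ping the probe target out of one specific interface (Linux `ping -I`)."""
    result = subprocess.run(
        ["ping", "-c", "3", "-W", "2", "-I", interface, PROBE_TARGET],
        capture_output=True,
    )
    return result.returncode == 0

def main() -> None:
    healthy = [name for name, iface in LINKS.items() if link_is_up(iface)]
    for name in LINKS:
        print(f"{name}: {'up' if name in healthy else 'DOWN'}")
    if not healthy:
        print("ALERT: no WAN path available")
    elif len(healthy) < len(LINKS):
        print("WARNING: running with reduced WAN redundancy")

if __name__ == "__main__":
    main()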

According to Thompson, access to patient data is paramount during situations such as the one Ochsner experienced. “With hurricanes and floods, you're talking about having to pick up your patients and move them to another location. Then, IT becomes kind of an afterthought,” he says. “But the reality is, what is the plan to get to that critical data for those patients during that kind of a scenario?” he asks. “In that setting, it's less about the technology and more about the process, and identifying what are the top 20 things that I need to know about this patient.”

For the clinicians at Ochsner, being able to access platforms through the Internet enabled them to more effectively provide care, both during Katrina and Gustav. “The electronic record, and making it easily available via the Web, has really revolutionized our ability to respond to patient needs very quickly no matter where they find themselves,” Witherspoon says. “Getting our clinics up and running without having to worry about where the paper charts are has been hugely helpful in getting the services back online. If anyone still doubts the value of EMRs, they really need to understand that in these kinds of circumstances, access to patient care information can indeed be a life or death issue.”

Witherspoon also credits the management team's quick response, the twice-daily briefings that were held, and copious communication with making the experience much smoother.

Mercy under water

At Mercy Medical Center in Cedar Rapids, Iowa — which was hit with some of the worst flooding in the city's history this past summer — it was executive leadership and solid planning that helped keep operations afloat during the unexpected natural disaster.

Mercy, a 370-bed facility, is located about 10 blocks from the Cedar River. But according to Jeff Cash, vice president and CIO, it is far from where anyone thought the water would ever reach. “We kind of always thought we might get hit with a tornado, which is very real in our part of the country,” he says. Cash says the hospital's proximity to the New Madrid fault made a minor earthquake also seem a more likely occurrence. “We sure never predicted that we'd have a flood here.”

However, the facility was hit — and hit hard — by flood waters, forcing an evacuation of 176 patients. And it wasn't just the flood waters that were the problem; Mercy also had to deal with ground water making its way into the hospital, as well as pressure from the sewage systems. To make matters worse, all of this occurred while Cash was on vacation in Europe with his family. He was notified at 4 a.m. that the hospital was being evacuated, and as he scrambled to get a flight back home, the team at Mercy took charge, opting to close the hospital to new patients and transfer existing ones.

“The reason they chose (to) evacuate was that, as the water continued to come into the basement area, we had lost power from the city and weren't sure when that was going to come back,” says Cash. With the water rising to more than 5 ft. outside the front doors, the staff was concerned that the water would get into the switch distribution systems, keeping the facility from accessing adequate electrical power. Luckily, the water had crested at that point, and did not end up impacting the switch distribution. But perhaps even more fortunate was the fact that Cash's team was able to manage a situation that no one could have anticipated.

“The fact that my team had enough equipment, enough leadership and enough planning in place to be able to get through this without losing access to any of our critical systems was a good testament to the planning we had done,” says Cash, who got back in time to lead the recovery operation. “It made me feel good to know that they're capable of doing that even without me here. For me, as the CIO, it's one of those things that's a once-in-a-lifetime opportunity, you hope, to (be able to) test what you've done and see if all is going to go well.”

One key component was the ability for staff members to maintain communications with each other, even on emergency power. They used badges from Vocera Communications and phones operated by Cisco Systems' Unified Communications Manager, an IP telephony call-processing system (both companies are based in San Jose, Calif.). The phones were operated through servers based in each of Mercy's two data centers.

Mercy has one data center on the first floor and one on the ground floor. While the latter was able to withstand the flood waters, the area around it was destroyed, prompting hospital leaders to evacuate and rely solely on the first floor facility. “The redundancy of having a call manager in both of our data centers is what kept our internal communications going,” Cash says, adding that the system allowed the facility to provide services despite losing a data center. “The call managers' portability allowed us to relocate departments after the flood on a very quick turnaround basis,” he says.
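
Cisco IP phones can typically register against an ordered list of call managers and fail over on their own, so the sketch below is not Cisco's mechanism; it only illustrates the principle Cash describes, preferring the primary data center and falling back to the second. The hostnames are hypothetical, and the SIP port is used purely as a crude reachability probe.

#!/usr/bin/env python3
"""Minimal sketch of primary/secondary failover between two call-manager hosts.

Hostnames are hypothetical; the article only states that Mercy runs a Cisco
Unified Communications Manager server in each of its two data centers.
"""
import socket

CALL_MANAGERS = ["ucm-dc1.example.org", "ucm-dc2.example.org"]  # hypothetical
PROBE_PORT = 5060        # standard SIP port, used here only as a liveness probe
TIMEOUT_SECONDS = 3

def reachable(host: str, port: int) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT_SECONDS):
            return True
    except OSError:
        return False

def pick_call_manager() -> str | None:
    """Return the first reachable call manager, preferring the primary."""
    for host in CALL_MANAGERS:
        if reachable(host, PROBE_PORT):
            return host
    return None

if __name__ == "__main__":
    active = pick_call_manager()
    print(f"active call manager: {active}" if active else "ALERT: no call manager reachable")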

Another critical piece of Mercy's preparation came in the business continuity plan that had been developed prior to the flood. The plan, Cash says, “helped us prepare for a vast majority of relocations and further redundancy.” According to him, Mercy had run failover tests on the communication systems and call manager.

“One of our goals was to make our data centers as portable as possible, with the intention that if something like this came up or if we need to move a data center in the future, we'd be able to do that,” Cash says. To keep the data centers modular, the staff has worked to make each cabinet self-contained, replacing traditional servers with blade server technology. This, says Cash, makes it easier to get down to a smaller number of cabinets that can be transported if necessary.

“We moved to a fiber-based architecture for all of our network switching, and we've put in a large storage area network that's redundant between both of our data centers,” Cash says. In the event that Mercy's staff has to move a cabinet, all they have to do is remove the power and the network connection. Once the cabinet is moved to an alternate location, they would simply plug it back into a network connection and online capabilities would be restored, he says.

Steps like this, he says, should be in place at any facility, and not just those in geographical areas that carry increased risks of certain natural disasters. “I think you have to take the natural disaster component out of it and know that any disaster may cause you to either evacuate or lose a portion of your data center for a period of time, and you have to know how to prepare yourself for that,” Cash says.

Thompson stresses that it isn't just physical disasters or weather-related events that pose threats; something as simple as a virus could create a widespread inability to access application data. “Every hospital organization should have a plan in place that's actionable, maintained and regularly tested on a quarterly basis,” Thompson says. “Having a redundant data center and the ability, from a technology standpoint, to cut over to a previous version or another site is going to resolve a lot of those issues,” he says. “I think I would say that where organizations falter is in the process of having an actionable plan that's at the forefront.”

Mercy has taken that a step further by ensuring that even the EMR system has a sufficient back-up plan. According to Cash, the facility's medical record system from Westwood, Mass.-based Meditech now includes a physician portal from PatientKeeper (Newton, Mass.) that is plugged into the application. If Internet service goes down during an event, Cash says, PatientKeeper is able to re-establish the EMR off a back-up server by taking a point-in-time snapshot and restoring it, enabling clinicians to access patient records.

“With the PatientKeeper system, we have the ability for the doctors who are taking care of our patients to synchronize a smartphone or a PDA to their system, so in the event that we should lose that and doctors still want to get access to EMRs, the data would still be available in their PDAs as part of their database,” Cash says. The system played a pivotal role in enabling Mercy's physicians to access the full medical histories of all 176 evacuated patients, he says.
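
PatientKeeper's snapshot mechanism is proprietary and not detailed in the article, but the underlying idea, taking a consistent point-in-time copy of a live database and serving it read-only from a standby system, can be illustrated in a few lines. The sketch below uses SQLite's backup API purely for illustration; the file names are hypothetical, and this is not how Meditech or PatientKeeper actually store or replicate records.

#!/usr/bin/env python3
"""Minimal sketch of a point-in-time snapshot and read-only restore.

Uses SQLite's backup API for illustration only; the file names are
hypothetical and bear no relation to PatientKeeper's or Meditech's design.
"""
import sqlite3
import time
from pathlib import Path

LIVE_DB = Path("emr_live.db")        # hypothetical live database file
SNAPSHOT_DIR = Path("snapshots")

def take_snapshot(live_path: Path, snapshot_dir: Path) -> Path:
    """Copy the live database into a timestamped snapshot file."""
    snapshot_dir.mkdir(exist_ok=True)
    target = snapshot_dir / f"emr_{time.strftime('%Y%m%d_%H%M%S')}.db"
    live = sqlite3.connect(live_path)
    snap = sqlite3.connect(target)
    try:
        live.backup(snap)            # consistent point-in-time copy of the live data
    finally:
        snap.close()
        live.close()
    return target

def open_read_only(snapshot_path: Path) -> sqlite3.Connection:
    """Open a snapshot read-only, the way a downtime or backup server might."""
    return sqlite3.connect(f"file:{snapshot_path}?mode=ro", uri=True)

if __name__ == "__main__":
    snapshot = take_snapshot(LIVE_DB, SNAPSHOT_DIR)
    standby = open_read_only(snapshot)
    print(f"snapshot written to {snapshot}")
    print("standby copy readable:", standby.execute("PRAGMA integrity_check").fetchone())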

“We've tried to do things that are self-sustainable in terms of business continuity,” Cash says. “By creating this downtime database, we were able to leverage a redundancy solution with faster performance, and now we have an Oracle (Redwood Shores, Calif.) database that we can run clinical and analytics tools against if we want clinical information.”

As both Cash and Witherspoon can attest, when it comes to disaster preparedness, every base must be covered, and facilities can never be too prepared in the event of an emergency.

With all the dangers posed to hospitals, it is critical that CIOs “develop a living, actionable plan that can evolve as products are implemented and updated,” Thompson says. “Disaster recovery has been and continues to be an afterthought. Of the organizations that say they have a plan, I would challenge them on the actionable nature of it and the ability to ask anybody in the organization, ‘Where is the plan, what's your role in the plan, and what do you do first?’”

Healthcare Informatics 2008 November;25(11):32-38
