This past June, Iowa was hit by some of the worst flooding in the state’s history, and right in the thick of it was the 370-bed Mercy Medical Center. More than 4,000 homes in the area had to be evacuated, and the hospital was forced to move 176 patients to nearby facilities. But despite water levels rising so high that sandbags had to be piled up outside the doors, physicians were never left in the dark: the facility’s network, EMR and communication systems stayed up through the entire ordeal. For Jeff Cash, Mercy’s vice president and CIO, it was the ultimate test of his staff’s preparedness.
KH: Going back to the electronic records, how were you able to ensure that clinicians had access even as staff and patients were being transported?
JC: Our EMR is Meditech, and on top of it we’ve plugged in a physician portal that we bought from Patient Keeper. Although we have redundancy with Patient Keeper here in the hospital, there was a period of time when we didn’t have Internet service, and there was a period of time when we were concerned we would lose electrical distribution inside the facility because of the flooding. So Patient Keeper was able to reestablish our EMR off of one of our backups. They took a point-in-time snapshot and restored it at their facility in Boston, so we were able to let physicians continue to access our EMR, and they would have been able to do that even if we didn’t have any facilities at all.
Bear in mind, we had 176 patients we had to transfer outside the facility, and although some paperwork and medical record information went with those patients, their full history was still in our system. We needed to give physicians access to patients’ records, so we had Patient Keeper activate that portion for us from the facility in Boston. So we had a number of physicians who were able to log in to that site to take care of our Iowa-based patients by using our Web portal in Boston, interestingly enough.
That part of the backup and recovery worked extremely well. Those are the two areas we thought we were going to need, and as it turned out, that was correct. Going forward, we’ll always have provisions in place to host a copy of our hospital EMR, as well as our consumer portal, offsite from our facilities, just in case we should go through something like this again.
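The specifics of Patient Keeper’s hosted failover aren’t public, but the effect Cash describes, physicians transparently reaching a restored copy when the on-site system is unreachable, can be illustrated with a simple primary-then-fallback lookup. The endpoints and lookup call below are hypothetical placeholders, not the vendor’s actual API:

```python
# Minimal sketch of the kind of failover the portal relied on: if the
# on-site EMR endpoint is unreachable, fall back to the copy restored
# at the vendor's remote site. URLs and the record lookup are
# hypothetical placeholders, not Patient Keeper's real API.
import urllib.request

PRIMARY = "https://emr.hospital.example/api/patients"   # on-site portal (hypothetical)
FALLBACK = "https://dr.vendor.example/api/patients"     # offsite restored copy (hypothetical)

def fetch_record(patient_id: str, timeout: float = 5.0) -> bytes:
    """Try the on-site system first; fall back to the offsite copy."""
    for base in (PRIMARY, FALLBACK):
        try:
            with urllib.request.urlopen(f"{base}/{patient_id}", timeout=timeout) as resp:
                return resp.read()
        except OSError:
            continue  # endpoint down or unreachable; try the next one
    raise RuntimeError(f"no EMR endpoint reachable for patient {patient_id}")
```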
KH: I think it’s safe to say that the next time this happens, you guys will be ready.
JC: I think we will. The other thing we did that we’ve gotten some press for is that we’re on the Qwest SONET ring here in the community. The SONET ring has entrances that come in at opposite ends of our hospital to bring in all of our data services — long distance and local telephone lines and Internet service. We have 20 outside clinics that we support as part of our hospital network, and of course the data that feeds their primary care EHRs comes from the hospital. To stay connected with those communities, we thought it was important to be on the SONET ring so that if we had a fiber cut somewhere, we wouldn’t lose access to the Qwest network.
Well, the one part of that which you might call the Achilles’ heel was that we had all the SONET equipment in the hospital at a below-ground level, and it ended up being down near our electrical switch distribution gear. So we started having the same problem with water coming in through the walls, etc., in that area as well. We were afraid we were going to lose that outside communication, so we called Qwest and said we needed them to move their SONET gear and demarc from the basement up to the first-floor data center, and they were a little shocked. We ended up being down for about an hour and a half while that happened.
They were here in a matter of 30 minutes with a couple of their engineers, and they actually went over the sandbags and waded through the water all the way down to the basement, found that they were able to disconnect everything, reattached all the fiber, brought it up in one of our elevators, put it back in the data center and connected it back up, all within a matter of a couple of hours. And that brought us back on the network, so we still had access to all of our outside communications. That was a tremendous effort by them to put it back together, and now that we’ve gone through that pain, we’re going to keep that equipment in an above-ground location.
KH: How else were you able to maintain communications?
JC: Well, another thing was that during that outage, we used cell phones. We have a contract with US Cellular; most of our managers and supervisors have US Cellular cell phones, with directories printed up. And we have a backup set of 20 phones with dedicated telephone numbers that we’d pass out in the event that we ever truly lost all of our telephones for whatever reason. Those numbers were already sent out to emergency support providers — police, ambulance, etc. — so they would know how to get hold of us in a pinch if they had to.
And we had US Cellular install an in-building antenna system in the hospital so that we have our own repeater system with their antennas. We put all that on emergency power about two years ago, with the expectation that if we ever had some type of disaster like this and we had to fall completely back to only cell phones as communication, at least we’d have a good antenna distribution system and they’d work, because typically you don’t get good coverage inside a hospital.
So that was kind of a saving grace for us for the period of time when we had to relocate that demarc. At that point, it was really the only way we had to get in touch with our offsite providers.
KH: That seems to be a common theme I’ve heard when speaking to CIOs who have been through a disaster situation, that cell phones have played a pivotal role. Was it a big priority for you to make sure you had sufficient access points inside the facility?
JC: Yes, and it was important that we did that, because some of our carriers had antennas that would have traditionally supported us but were taken out by the flood as well. So we wouldn’t necessarily have had very good cell phone penetration in the hospital if we hadn’t had those. Since then, we’ve created contracts with iWireless, T-Mobile, Verizon and AT&T to do the same thing, so we’ll have access to great cell coverage in the hospital.
KH: Do you have any final takeaways for CIOs based on your experience?
JC: The last thing I would pass on in terms of disaster recovery — and this is something we’ve always planned for with our physicians — is that with the Patient Keeper system, doctors can synchronize a smartphone or a PDA with Patient Keeper. So in the event that we lose power and the doctors still need access to the EMR, the data on our inpatients would still be available on their PDAs as part of that local database. It’s kind of a backup alternative we have. As an example, if we’re going to do a system upgrade on Patient Keeper and we have to take it down for a brief period to do the upgrade and bring it back up, most of them will synchronize their PDAs against Patient Keeper before it goes down. So they’ll still have all the patient data they need while it’s down, and they’ll just resynchronize when it comes back up.
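Patient Keeper’s actual handheld sync isn’t documented here, but the workflow Cash describes (snapshot locally before a planned outage, read from the local copy while the server is down, resynchronize afterward) can be sketched roughly as follows; the fetch_inpatients() call and the cache format are hypothetical:

```python
# Rough sketch of the "synchronize before downtime" pattern: pull the
# current inpatient census into a local store before the server goes
# down, then read from the local store while it is unavailable.
# fetch_inpatients() stands in for the portal's sync call (hypothetical).
import json
from pathlib import Path

CACHE = Path("inpatient_cache.json")

def fetch_inpatients() -> list[dict]:
    """Placeholder for the server-side sync call (hypothetical)."""
    raise NotImplementedError("replace with the real portal/sync client")

def sync_before_downtime() -> None:
    """Snapshot current inpatient data to the device before a planned outage."""
    CACHE.write_text(json.dumps(fetch_inpatients()))

def read_patient(patient_id: str) -> dict | None:
    """Serve reads from the local snapshot while the server is unavailable."""
    records = json.loads(CACHE.read_text()) if CACHE.exists() else []
    return next((r for r in records if r.get("id") == patient_id), None)
```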
But all of that same data already exists in our Meditech system, so we really have two fully independent and redundant repositories of patient data to withstand an emergency like this. We have it in Patient Keeper, we have it in Meditech, and the third copy is whatever the physicians have synchronized to their PDAs if they need it. And then by having Patient Keeper in Boston, we have an extension of our ability to serve up those records in an emergency like the one we just went through. I think we pretty much have that part covered.
KH: I think so too. It seems like your plan is extremely thorough.
JC: It is. A lot of physician portals integrate directly with your hospital information system, and that’s how they pull the data for the physicians, but we took a different approach. We were the development environment that Patient Keeper used when they created the concept of what they call a downtime database. One of the things we asked our physicians when we put in the physician portal was, what are the things that are important to you? One of the things they said was, you’re going to make all this data electronic and you’re going to ask us to use a computer in order to use it, but when you do system upgrades on Meditech, you usually do them during the day. We’re in the hospital rounding, so how can you help us fix that? So we had Patient Keeper design a separate data repository of all the patient data a physician would ever need access to. We back-loaded it with a seven-year history out of Meditech, and we continue to build on that. What Patient Keeper does is, every five minutes it goes in and pulls all of the changed or new information from Meditech and pre-populates its own database. So they really are two completely independent systems that talk to each other. We could lose either system and still not lose access to the full patient record.
Really, that was just a byproduct of putting in Patient Keeper and having them run their own database, as opposed to putting them in and having them pull their data in real time from the Meditech system. Because that’s what a lot of physician portals do, and that’s the way we originally bought Patient Keeper: you log into Patient Keeper, and if you want to see patient data, it reaches into Meditech, pulls it out and renders it for the physician in a browser. That was a nice solution, but it didn’t give us the redundancy we were looking for.
We have a portable database that’s fully populated with a seven-year history of patient data and is no more than five minutes out of date for the physicians to use. If that were down, they could still go into the physician browser in Meditech and get access to the same data, even though it’s a different interface.
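Neither Meditech nor Patient Keeper publishes the mechanics of that feed, but the five-minute pull Cash describes is a standard incremental sync. A minimal sketch of the idea, with a hypothetical fetch_changes_since() standing in for the Meditech extract and an illustrative SQLite schema for the downtime repository:

```python
# Sketch of the five-minute incremental pull described above: ask the
# source EMR for anything new or changed since the last sync, upsert it
# into the separate "downtime" repository, and advance the watermark.
# fetch_changes_since() and the schema are hypothetical placeholders.
import sqlite3
import time

def fetch_changes_since(watermark: float) -> list[tuple[str, str, float]]:
    """Placeholder: return (patient_id, record_json, updated_at) rows
    changed in the source system since `watermark` (hypothetical)."""
    raise NotImplementedError("replace with the real source-system extract")

def sync_cycle(db: sqlite3.Connection, watermark: float) -> float:
    rows = fetch_changes_since(watermark)
    with db:  # commit the whole batch atomically
        db.executemany(
            "INSERT INTO patient_record (patient_id, payload, updated_at) "
            "VALUES (?, ?, ?) "
            "ON CONFLICT(patient_id) DO UPDATE SET "
            "payload = excluded.payload, updated_at = excluded.updated_at",
            rows,
        )
    # New watermark: latest change just copied, or the old one if none.
    return max((r[2] for r in rows), default=watermark)

def run_forever(path: str = "downtime.db", interval: int = 300) -> None:
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS patient_record ("
        "patient_id TEXT PRIMARY KEY, payload TEXT, updated_at REAL)"
    )
    watermark = 0.0
    while True:
        watermark = sync_cycle(db, watermark)
        time.sleep(interval)  # every five minutes, per the interview
```

The watermark-style upsert is what keeps the two repositories independent: the downtime copy only needs the source to be reachable during the brief pull itself, and either side can serve the full record on its own.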
KH: So obviously, being able to customize a solution to meet your needs has been very beneficial for your team. What about business continuity — is that a big priority in choosing applications?
JC: Yes, we’ve always tried to do things that are self-sustainable in terms of business continuity. By creating this downtime database, we were able to leverage it as a redundancy solution with faster performance, and now we have an Oracle database that we can run clinical and analytics tools against if we want.
That’s one of the reasons we migrated to the Cisco phone system, so we didn’t have to buy two PBXs. We could have two $3,000 to $5,000 servers, one in each data center, and the system would continue to run. To support that, we did a complete rebuild of our internal data network, in the sense that on all of our nursing floors we now have fiber connections from all of the switches in the clinical care areas back to both data centers, because that’s how the voice traffic travels for the telephone system now (over fiber and back through the network). We didn’t want a situation where we could lose one data center and not be able to use the other because the floors weren’t connected to it, for example. So we have kind of a triangular fiber network between both data centers and all of the endpoints running through the whole facility now.
We’re just better prepared all around.