At Cleveland Clinic, Embedding Data Analytics Into the Core Culture

May 1, 2015
Eric Hixson, Ph.D., is helping to lead a group of data professionals who are deepening and broadening Cleveland Clinic's journey into strategic data analytics work to support clinical performance improvement.

Eric Hixson, Ph.D., is senior program administrator in the Business Intelligence Department, which operates within the Medical Operations division at Cleveland Clinic, the integrated health system based in Cleveland. Hixson helps lead a team of about 85 data professionals who perform numerous functions for the entire Cleveland Clinic. Hixson and his colleagues handle data warehousing and data management, and they maintain a structured data repository, called the Clarity Repository, which sits behind the electronic medical record and facilitates data mining, data acquisition, and report generation from information originating in the EMR.

Hixson spoke recently with HCI Editor-in-Chief Mark Hagland regarding some of the current work that his team has been engaged in. On May 19, Hixson will deliver a presentation entitled “Analytics Strategy: Enablement, Innovation, Transformation,” in which he will discuss his team’s work at Cleveland Clinic, and its implications, as part of the Health IT Summit in Boston, sponsored by the Institute for Health Technology Transformation, or iHT2 (a sister organization of Healthcare Informatics under our corporate umbrella, the Vendome Group, LLC). Below are excerpts from their interview.

Your team is engaged in a whole range of analytics work for your colleagues at Cleveland Clinic. Tell me a bit about the Clarity Repository, to begin with?

Certainly. The Clarity Repository is a structured data repository that sits behind the electronic medical record. It permits data mining, data acquisition, and report generation from information originating in the EMR, in a way that makes it easier on the analysts and also insulates the live online system clinicians are using for direct patient care from reporting activity and analysis. It doesn’t slow that system down. The EMR is optimized for direct patient care; the repository is optimized for reporting and data mapping.

What are the latest things you’re working on?

We’re where a lot of organizations are right now, in that we have access to more and more information and data from multiple domains: information and data that are patient-generated, machine-generated, and clinical operations-generated; and the organization is increasingly viewing that data as an asset. So what we’re increasingly challenged to do as an organization is to identify the value of that asset and leverage it to the maximum extent, to really influence decision-making and provide care at lower cost and higher quality. So the challenge is getting our arms around the breadth of what’s available and then identifying the use case for the information. Just because I have access to a dozen more domains of information, and we can analyze anything we can get hold of, that’s nice; but if I don’t also have a meaningful question to answer and a conversation or decision to influence, then it’s interesting academically, but it has limited operational value.

So our focus really is on understanding what we have, and organizing ourselves to be able to provide that to users, either in a self-service-type environment or via pre-processed reporting, but doing it in a way that will further their use of information, not just putting it into a report. We talk about analytics execution, and this is really built on a data culture that’s been developing for a long time and has been nurtured by senior management. But executing successfully means focusing now on what we need to do. What are the important areas that need information? And where does data fill a gap, to influence decisions, direct or indirect, about patient care? So our focus really is on the outcome I’m trying to influence, not so much on the product itself. What does it do, rather than what is it? That is our focus.

What have been the biggest challenges and opportunities on this journey so far?

One of the biggest challenges has simply been the scale and breadth of what we’re working on. We now have access to an incredible amount of information. So how do we manage it in a cost-effective and efficient way, so that when individuals ask questions, we can optimally facilitate their use of data, in as close to a self-service mode as possible? We have different internal audiences, with different analytics capabilities. So even in the context of a self-service environment, we’ve really got a continuum of users, of sophistication of questions, and of sophistication in the ability to use the answers. So we’re trying to do this efficiently, using tools that fit into their workflow. So that’s one large challenge.

Another challenge is actually a function of our success, in that the organization is collectively becoming much more data-savvy; people are asking very good, sophisticated questions that require sophisticated analysis. And keeping up with that demand is a challenge, but it’s the right kind of challenge to have.

This seems partly like an IT governance question, and partly like an IT operations question, correct?

I would agree, but the two questions are inseparable. You don’t get good operation and use of data without good governance, and you don’t get good governance without good operational management. So I view those as a necessary pairing. And the execution flows down the organization from the executive level, and that vertically aligns: what are we here to do? Our organization is considering very significant expenditures in terms of infrastructure, so how is that optimally going to be deployed, and how will we demonstrate value in terms of hardware, software, and staff? You don’t do that well without governance. And if you don’t have the expertise and people and tools to do it, you have governance, but nothing to deliver. So structure and strategy are inseparable.

What are a few of the most leading-edge things you’re doing right now?

I would say that is in the predictive analytics domain, where we are operationally implementing predictive models around key processes and outcomes. Examples include, in the patient care continuum, the likelihood of patient discharge outcomes (mortality and post-acute care needs), and integrating the likelihood of those outcomes back into the clinical workflow. So it introduces new facts in an environment and format where the clinical staff can incorporate those facts as they make decisions on patients’ behalf, at the point of care or prior to it. And this follows on some of our prior work in operational process measures, where we’re measuring clinical processes in near-real time, with the assurance that the information is being presented back to the clinical staff when they can act on it, rather than later.

Our predictive analytics work is following the same strategy: it is operationally driven and aligned with end users’ workflow, making sure it’s meaningful rather than just interesting.

Can you provide a couple of examples of some of the output your team has been producing?

Sure. We’re predicting lengths of stay for patients anticipating surgery. That’s allowing us to anticipate which patients will be higher-acuity, and also to anticipate what our hospital occupancy and census will be, up to eight weeks in advance. That’s been in use for about a year; the surgical forecasts have been in use for about 18 months, actually. And we’re forecasting total OR volumes eight weeks in advance, and using that data for operational purposes such as staffing and changes in utilization, as well as anticipating increases or decreases in volumes and being able to plan for that, rather than being surprised by it after the fact.

And a second example of the work we’ve been doing is in the area of elective hip and knee surgery patients. We’ve implemented a model that a group here developed, looking at the likelihood of discharge disposition, specifically discharge to post-acute services. Again, up to 60 days prior to scheduled surgery, we’re able to score a patient based on pre-hospital factors, so that the surgeon meeting with the patient can take those factors into account, and can start to engage care coordinators, who are case managers, to prepare for post-acute services. So the activity is happening, and steps are put in place, even before the patient’s day of surgery.

What have been some of your biggest learnings in those areas?

The biggest learning, which I will stress in my presentation in Boston, is that successful implementation is more dependent on the communication plan than on technical functionality. It is important not to simply engage clinical and administrative staff with a new gee-whiz analytic or reporting capability; rather, it is important to identify particular needs that can be filled. Finding the appropriate information to do that, and then figuring out how we can do better, is key here. Then the math takes care of itself, and the reports take care of themselves. When we’re really focusing on what matters, and people understand what value something has for them and for their leadership, that’s what’s important. What value does a report have for a caregiver, and what value does it have for the person I’m accountable to, my manager, my senior leadership? Now we’ve aligned incentives and execution, and we’re really focusing on what’s important. At the end of the day, you want it to be impactful, and that’s the critical piece.

Do you have any explicit advice for others engaging in this kind of activity?

I think engaging the clinical caregivers directly is a must. I don’t see patients every day; they do. And sometimes, we can come up with really novel solutions that aren’t necessarily that helpful. But when we focus on what the real problems are, not only are we not wasting their time or ours, we’re also developing relationships that build, so that as we address one issue, they’ll come up with other opportunities and challenges to address. So now you have those relationships, and with clinical and informatics and analytics people working together, it’s a team effort, rather than something that administration has foisted on us.

In the next couple of years, what will happen in your area at Cleveland Clinic?

In the next couple of years, we’ll increasingly be using data and analytics in an effective way; we’ll be better leveraging that as an asset; we’ll be providing better care. And ultimately, we’ll be doing a better job of meeting our employees’ and patients’ needs, in a more effective manner.
