Hospital readmissions have been estimated to cost the U.S. healthcare system in excess of $26 billion; yet reducing avoidable readmissions remains a heavy lift for provider organizations across the country. Health plan leaders in many regions have been working with hospitals, medical groups, and health systems on this complex, multifactorial problem, but the challenges are many.
Meanwhile, the leaders at the Durham-based Blue Cross Blue Shield of North Carolina, which serves more than 3.8 million members in that state, made a commitment over a year ago to leverage artificial intelligence (AI) to attack the readmissions problem firmly and comprehensively. After the organization’s CIO created an “Innovation Garage,” members of that “garage team” developed CarePath, which BCBSNC executives describe as “an advanced deep learning factory approach for creating predictive models that identify target populations at risk for hospital readmissions.” That approach, they note, “enables a more focused, personalized patient intervention that is implemented during the transition from the hospital to the home.” And the predictive analytical model they’ve built “applies a readmission risk score to members currently undergoing inpatient procedures,” with members further prioritized by probability of readmission, low engagement with their primary care physicians (PCPs), and use of eight or more medications.
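The prioritization logic described above can be sketched in a few lines. To be clear, this is a hypothetical illustration: the field names, weights, and thresholds are invented assumptions, not BCBSNC's actual implementation, and the real risk score comes from a deep-learning model rather than a hand-written rule.

```python
# Hypothetical sketch of the member-prioritization criteria described above.
# Field names, weights, and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class InpatientMember:
    member_id: str
    readmission_risk: float    # model-predicted probability of 30-day readmission
    pcp_visits_past_year: int  # engagement with primary care
    active_medications: int    # current medication count

def outreach_priority(m: InpatientMember) -> float:
    """Higher score = earlier outreach by a case manager."""
    score = m.readmission_risk
    if m.pcp_visits_past_year == 0:  # low PCP engagement raises priority
        score += 0.2
    if m.active_medications >= 8:    # polypharmacy (8+ medications) raises priority
        score += 0.2
    return score

members = [
    InpatientMember("A", 0.45, 0, 9),
    InpatientMember("B", 0.60, 3, 2),
]
# Sort current inpatients so case managers reach the highest-priority members first.
worklist = sorted(members, key=outreach_priority, reverse=True)
```

Note that under this kind of rule, a moderate-risk member with no PCP engagement and heavy medication use can outrank a higher-risk member who is already well connected to primary care, which matches the targeting philosophy the BCBSNC team describes.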
Using these various data elements, the Innovation Garage team members have been able to predict the risk of readmission for individuals even as they’re being treated during their current inpatient hospital stays. The results? The health plan’s care management team’s engagement success rate rose from 12 percent to 57 percent within a year, and BCBSNC staff have established far more connected and productive interactions with the care management teams in medical groups and hospitals.
A team of professionals, led by Mitchell Quinn, built the CarePath factory, and then proceeded to build deep-learning predictive models. Using those models, BCBSNC’s Hospital to Home readmissions model was put into production by the plan’s healthcare case management team in December 2020. The pilot began with a team of nine RN case managers and two clinical pharmacists. BCBSNC leaders are planning to have firm numbers soon in terms of overall preventable hospital readmissions averted. For their pioneering work in this critical area, the editors of Healthcare Innovation named the Blue Cross Blue Shield of North Carolina team the number-two winning team in the publication’s 2021 Innovator Awards Program.
After their team’s designation as the number-two-winning Innovator Awards team, Healthcare Innovation Editor-in-Chief Mark Hagland interviewed ten of the BCBSNC leaders responsible for their health plan’s important readmissions reduction initiative. The ten leaders are: Ralph Perrine, director of IT strategy and transformation; Keith Duprey, senior strategic advisor; Jo Abernathy, CIO; Joe Bastante, chief technology officer; Mitchell Quinn, AI applied research scientist; Suzanne Jacobs, manager, Innovation Garage; Peter Blankenship, data scientist; Roberta Capp, M.D., M.H.S., vice president of clinical operations and innovations; Natosha Anderson, B.S.N., R.N., CCM, director of healthcare development & program management; and Josh Gredvig, senior data scientist. Below are excerpts from that extended interview.
Can you tell me about the origins of this initiative?
Ralph Perrine: We always knew that predictive capabilities would make a giant leap forward through AI. We wish we could predict better; that’s the nature of insurance, predicting risk. That’s why we decided to double down on AI.
Mitchell Quinn: When I came to BCBSNC three years ago, I recognized that they had this massive data set. I didn’t have a lot of experience in healthcare, but had had long experience in big data and deep learning. I recognized here the untapped potential for building these models using these tools. So it began as, I was given the time to explore a lot of those avenues, and we tried things like predicting the next claim and so on; eventually, after talking to the business centers, the use cases started to materialize, around when and how things could be predicted, and who needed to know. Then we started working on more meaningful events, including 30-day readmissions. It’s a proxy measure in terms of the members who can most benefit from the care the nurses are able to give. The timeliness of the prediction is important here: we do things in real time here now, and the nurses reach out when patients are still in the hospital. It’s one thing to predict something well, but another to apply it.
Roberta Capp, M.D.: We had been using a segmentation algorithm based on risk factors since 2008, and risk factors continued to be added over time. That created a population that the case managers went after, but it was a much broader population than we could impact. So our goal was to use a technology and methodology to identify risk factors. We know so much more today about hospital readmissions; and only 20 percent of hospital readmissions are averted through care management. So if you simply say, ‘I’ll give you everybody who’s at high risk,’ you’ll have far more members than your case managers can actually impact. So the unique piece of this is to go an extra layer, figure out who those high-risk members are, and match them to case managers. And one thing that we found a year ago, when we were using the segmentation model and went out and talked to providers, who are much closer to the members than we are, was that the overlap between our segmentation and their lists was only 5 percent, which was bad. What we were doing was going after the very high-cost members, whom care management can do very little to impact. Now, we’re getting the rising-risk members, so the overlap is 60-80 percent.
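The overlap measurement Dr. Capp describes amounts to a simple set intersection between the plan's targeted list and a provider's own list. A minimal sketch, with invented member IDs purely for illustration:

```python
# Hypothetical sketch of the list-overlap measurement Dr. Capp describes.
# Member IDs are invented for illustration.
plan_list = {"m1", "m2", "m3", "m4", "m5"}      # members the plan's model targets
provider_list = {"m3", "m4", "m5", "m6", "m7"}  # members the provider's team targets

shared = plan_list & provider_list
overlap_pct = 100 * len(shared) / len(plan_list)
print(f"Overlap: {overlap_pct:.0f}%")  # 3 of 5 targeted members shared -> 60%
```

A low overlap, as in the 5 percent figure Capp cites for the old segmentation model, signals that the plan and the providers are pursuing largely different populations.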
Suzanne Jacobs: Mitch was doing really amazing work in developing this technology, but it took a strong partnership with our clinical management leaders to get the results.
Jo Abernathy: For IT professionals in corporate environments, much of our work is prescribed for us. And we’re a highly regulated industry, with state and federal mandates, and a slew of upgrades and maintenance tasks. Sometimes you can be completely consumed by all that, never giving your technology professionals room to come up for air. We didn’t really have the construct in our company that allowed us to do that. So starting the Innovation Garage was really important. Huge kudos to Ralph and Joe B for recognizing how promising AI was. We thought we ought to get a couple of people involved to work on this.
Can you speak to your department’s contribution?
Abernathy: The Innovation Garage is inside IT. We didn’t even have the budget for it. We’re actually still wrestling with paying for it, but we wanted to show people what we could do. It was the team that was so enthusiastic. But there’s some natural hesitancy around AI, and whether it’s worth the investment. We also had a niche team from the UK who came in, who specialize in the ethics of AI. So we had to get past the discomfort of it. But the team really sold the idea.
Joe Bastante: We actually met with the key people in every area. We said, let’s brainstorm: what can we solve with AI? We had many opportunities. But in some cases, we either lacked sufficient data or the appropriate use case. But it wasn’t so simple as simply coming up with a model and finding a partner; it involved working with the entire company for a full year.
Suzanne Jacobs: We actually had an instance where the technology was outpacing our ability to correctly use it. The initial reaction of many was to be concerned about the possibility of bias, in terms of the use of AI, so we had to address that concern, and to make the program clinically sound. So Ralph created the Care Path Medical Council. It’s not just the awesomeness of the model itself; we had to build the whole process around it.
What steps were involved in making the model clinically refined?
Capp: We were a part of the entire process in terms of the data elements; it’s not until you see the clinical data come through that you determine this is a good model for case management, or not. We know that care management can impact behavior change. Are members engaged with a primary care provider, or are they using the ED, or getting admitted over and over again? So we asked Peter and his team to pull data on how many visits members have had with behavioral health, with their PCPs, with EDs, and so on. Those who had engaged with their PCPs were already well case managed. Those not engaged were those we wanted to target; we also were able to determine, based on severity, what level of interventions was appropriate.
Quinn: It really was an iterative process. We could predict readmissions risk very well; we then started to refine that to determine when members were really suitable for engagement with us. Then we started to refine on when we need to engage with the member—what’s the most impactful event? This is where this program differs from a lot of traditional techniques. We do the prediction based on when the member is admitted; we produce a score for them. A lot of models on the payer side are applied after discharge, when it’s too late to create a highly impactful outreach. And we knew we could predict 30-day readmissions; could we engage them while they’re still inpatient for the things they were likely to be readmitted for? We did a lot of analysis around that.
Natosha Anderson, B.S.N., R.N., CCM: I think Roberta and Mitch have already provided you with the substance of this. Here’s the ‘how’: we set up pilots with a group of our clinical staff and nurses, to create that feedback loop. And Mitch and Peter were fantastic partners to work with on that.
What have been the biggest challenges so far?
Blankenship: Basically, my work served as a case study of how to replicate this model on a different population hosted on a different database. So it was an opportunity to learn how flexible CarePath could be, and how it could be rebuilt for different populations. One challenge was adapting the code to the different cases it had to solve. Fortunately, Mitch was there for us. It’s a pretty state-of-the-art modeling technique, so it’s complex, and you really have to pay attention to the details to rebuild it correctly on different systems. And with any machine learning model, it’s a black box: in presenting it to the nursing staff, they have to trust it.
Quinn: The other challenge has had to do with going from a batch-mode operation, where there’s claims lag, to working in real time. We’re now producing lists at night for nurses to use to call in the morning. So the challenge is to make this more event-driven and more real-time.
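The batch-to-event-driven shift Quinn describes can be sketched as two contrasting flows. This is an illustrative assumption of the general pattern, not BCBSNC's actual pipeline: the function names, the placeholder model, and the threshold are all invented, and the real system scores members with the deep-learning CarePath models.

```python
# Hypothetical sketch contrasting the nightly batch flow with an event-driven
# flow triggered by an admission event. All names and values are invented.

def score_member(member_id: str) -> float:
    """Placeholder for the deep-learning readmission risk model."""
    return 0.5  # illustrative constant, not a real prediction

# Batch mode: score everyone admitted yesterday, then produce a call list
# for nurses the next morning (subject to claims lag).
def nightly_batch(admitted_yesterday: list) -> list:
    return sorted(admitted_yesterday, key=score_member, reverse=True)

# Event-driven mode: score each member the moment an admission event arrives,
# so nurses can reach out while the member is still inpatient.
def on_admission_event(event: dict) -> None:
    risk = score_member(event["member_id"])
    if risk >= 0.3:  # illustrative outreach threshold
        notify_case_manager(event["member_id"], risk)

def notify_case_manager(member_id: str, risk: float) -> None:
    print(f"Outreach: member {member_id}, risk {risk:.2f}")
```

The design point, as Quinn notes, is timeliness: the same model applied after discharge arrives too late for the most impactful outreach, while scoring on the admission event lets the case manager intervene during the inpatient stay.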