One question you often hear about artificial intelligence in healthcare is how long it will be before we see widespread adoption. During a recent panel discussion, John Halamka, M.D., M.S., president of the Mayo Clinic Platform, responded with an often-cited quote from author William Gibson: “The future is already here; it is just not evenly distributed yet.”
Halamka said right now at Mayo Clinic, 14 algorithms are run on every 12-lead ECG taken. “We print the results of all the predictive models on the ECG itself — typical ECG rate, rhythm, intervals. We actually can tell you your ejection fraction. Do you have pulmonary hypertension? Do you have hypertrophic cardiomyopathy? Will you have A-fib three years from now? It's on the ECG itself, so there's no cognitive burden. There's no button to push, app to download, or distraction outside of the EHR.”
Halamka said that “over the next six quarters, we're going to see more and more of this kind of thing being brought into the EHR workflow itself, so that this human augmentation, which is exactly the right term, is just simply going to be there.”
The Sept. 26 meeting put on by Permanente Medicine also included Ed Lee, M.D., executive vice president of information technology and chief information technology officer of The Permanente Federation. He also serves as an associate executive director of The Permanente Medical Group in Northern California.
Lee said he likes to refer to AI as augmented intelligence rather than artificial intelligence, because he thinks of this technology as a set of tools that assist and augment a physician's ability to care for patients. “It's a lot like other ways that we support physicians with clinical decision support tools; AI just happens to be more advanced and more complex than other types of decision support,” he said.
Lee described a number of ways they are using AI at Kaiser Permanente, including use cases related to natural language processing, computer vision, and predictive analytics. “With natural language processing, we're analyzing e-mails from our patients and categorizing e-mails based on topics that the patient is writing about, and that gives us the ability to ensure that the most appropriate member of the care team is addressing the message and each team member is practicing at the top of their scope. And of course, this helps our patients get timely responses to their health concerns,” he said.
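To make the pattern concrete, here is a minimal sketch of that kind of triage: a classifier assigns each incoming message a topic, and the topic determines which member of the care team sees it first. The categories, training examples, and routing table are hypothetical placeholders, not Kaiser Permanente's actual system.

```python
# Illustrative sketch only: a minimal topic classifier for routing patient
# messages, loosely analogous to the triage Lee describes. The categories,
# training examples, and routing table are hypothetical, not Kaiser Permanente's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples of patient messages (placeholder data).
messages = [
    "Can I get a refill on my lisinopril?",
    "My knee is swollen and painful after a fall.",
    "What time is my appointment next Tuesday?",
    "I need my metformin prescription renewed.",
]
topics = ["medication_refill", "clinical_symptom", "scheduling", "medication_refill"]

# Hypothetical mapping from topic to the care-team member best placed to respond.
route_to = {
    "medication_refill": "pharmacist",
    "clinical_symptom": "physician",
    "scheduling": "front_office",
}

# TF-IDF features plus a linear classifier: a simple, common baseline.
triage_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
triage_model.fit(messages, topics)

new_message = "Could someone renew my blood pressure medication?"
predicted_topic = triage_model.predict([new_message])[0]
print(f"Topic: {predicted_topic} -> route to: {route_to[predicted_topic]}")
```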
“We're also looking at computer vision, analyzing diabetic retinal images. We know that diabetes is a leading cause of blindness, and considering the number of diabetic patients we have, using a tool that helps us determine whether a patient does or doesn't have diabetic retinopathy can allow us to identify retinopathy sooner, which gives us the best chance to prevent someone from going blind.”
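A rough sketch of that screening step might look like the following, assuming a generic image classifier with a two-class head for referable versus non-referable retinopathy; the architecture, weights, and decision threshold are placeholders rather than the tool Lee describes.

```python
# Illustrative sketch only: screening a retinal fundus photograph with a generic
# image classifier. The architecture, weights, and threshold are placeholders;
# this is not Kaiser Permanente's retinopathy tool.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing applied to the fundus photograph.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical two-class head: no referable retinopathy vs. referable retinopathy.
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
# model.load_state_dict(torch.load("retinopathy_classifier.pt"))  # assumed trained checkpoint
model.eval()

image = Image.open("fundus_photo.jpg").convert("RGB")  # assumed input image
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
    prob_referable = torch.softmax(logits, dim=1)[0, 1].item()

# Flag the patient for ophthalmology referral above a hypothetical operating point.
print("Refer to ophthalmology" if prob_referable > 0.5 else "No referable retinopathy detected")
```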
Finally, in predictive analytics, Kaiser Permanente has been developing a number of algorithms to help it stratify COVID-positive patients so that it can better anticipate which patients are at highest risk for developing more severe symptoms. “We also have our advanced alert monitoring program, which helps us keep an eye on our hospitalized patients in real time, and predicts which patients are at risk for deteriorating and may require being transferred to the ICU,” Lee explained. “This gives us the opportunity to intervene before our patients get sicker. And in the case of our advanced alert monitoring program, we've estimated that we're saving hundreds of lives a year, and that's actually a fairly conservative estimate. With all of these examples, AI is augmenting the care of our physicians and our teams, and when combined with clinical judgment, we create the potential for significant improvement in outcomes for our patients as well as efficiencies for our clinicians and our health system as a whole.”
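The deterioration-alerting idea can be sketched in the same way: a supervised model scores routinely collected vitals and labs, and patients above an alert threshold are surfaced to the care team. The features, synthetic data, and threshold below are assumptions for illustration, not the advanced alert monitoring program itself.

```python
# Illustrative sketch only: a deterioration-risk score built from routinely
# collected vitals and labs, in the spirit of the monitoring Lee describes.
# The features, synthetic data, and alert threshold are assumptions, not
# Kaiser Permanente's actual model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Placeholder features: heart rate, respiratory rate, SpO2, lactate, WBC count.
X_train = rng.normal(size=(500, 5))
y_train = rng.integers(0, 2, size=500)  # 1 = deteriorated within 12 hours (synthetic label)

risk_model = GradientBoostingClassifier().fit(X_train, y_train)

# Score current inpatients and surface the highest-risk ones to the care team.
X_current = rng.normal(size=(20, 5))
risk_scores = risk_model.predict_proba(X_current)[:, 1]
ALERT_THRESHOLD = 0.8  # hypothetical operating point, tuned in practice for alert burden
for patient_idx in np.where(risk_scores >= ALERT_THRESHOLD)[0]:
    print(f"Bed {patient_idx}: risk {risk_scores[patient_idx]:.2f} -> notify rapid response team")
```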
Halamka noted that all this work begins with well-curated data. That involves EHR data, imaging, telemetry, and patient-reported outcomes data, organized longitudinally and then made available to investigators using what he calls an AI factory. “That's what Mayo did; it is very hard work. We actually took a couple of years to cleanse the data,” he said. “We then organized the data in such a way that it's not episodic, it's longitudinal. And we de-identified it so there would not be a lot of tricky IRB, human-subjects, or privacy issues around the use of the data, but it is still stored in a secure cloud container with tools on top of it for all of our clinicians and all of our investigators at Mayo Clinic to create models.”
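A toy version of that curation step, collapsing episodic encounter records into a de-identified, longitudinal view per patient, might look like this; the column names and hashing scheme are assumptions for the example, not Mayo Clinic's pipeline.

```python
# Illustrative sketch only: collapsing episodic encounter records into a
# de-identified, longitudinal view per patient. Column names and the hashing
# scheme are assumptions for the example, not Mayo Clinic's pipeline.
import hashlib
import pandas as pd

encounters = pd.DataFrame({
    "mrn": ["12345", "12345", "67890"],            # direct identifier
    "patient_name": ["A. Smith", "A. Smith", "B. Jones"],
    "encounter_date": ["2021-03-01", "2022-07-15", "2021-05-20"],
    "ejection_fraction": [55, 48, 62],
})

# Replace the direct identifier with a salted hash and drop names entirely.
SALT = "project-specific-secret"  # placeholder; real pipelines manage keys securely
encounters["research_id"] = encounters["mrn"].apply(
    lambda m: hashlib.sha256((SALT + m).encode()).hexdigest()[:12]
)
deidentified = encounters.drop(columns=["mrn", "patient_name"])

# Organize longitudinally: one ordered timeline per patient, not isolated episodes.
deidentified["encounter_date"] = pd.to_datetime(deidentified["encounter_date"])
longitudinal = deidentified.sort_values(["research_id", "encounter_date"])
print(longitudinal.groupby("research_id")["ejection_fraction"].apply(list))
```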
He stressed that creating a model isn't sufficient; one needs to test the model, validate the model, and do so with data from outside the training set. “We've also established a variety of collaborations nationally and internationally to test the models to make sure they're fair, unbiased, and fit for purpose.”
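A minimal sketch of that discipline is to score the model on data from outside its development cohort and compare performance across subgroups, flagging large gaps; the model, data, and subgroup definitions here are synthetic placeholders.

```python
# Illustrative sketch only: evaluating a model on data from outside its
# development cohort and comparing performance across subgroups. The model,
# data, and subgroup column are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Train at "site A" (synthetic stand-in for the development cohort).
X_train, y_train = rng.normal(size=(800, 6)), rng.integers(0, 2, size=800)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# External validation at "site B", with a demographic column for subgroup checks.
feature_cols = [f"f{i}" for i in range(6)]
external = pd.DataFrame(rng.normal(size=(400, 6)), columns=feature_cols)
external["label"] = rng.integers(0, 2, size=400)
external["sex"] = rng.choice(["female", "male"], size=400)

scores = model.predict_proba(external[feature_cols].to_numpy())[:, 1]
print("Overall external AUROC:", round(roc_auc_score(external["label"], scores), 3))

# Stratified performance: large gaps between subgroups are a red flag for bias.
for group, idx in external.groupby("sex").groups.items():
    print(f"AUROC ({group}): {roc_auc_score(external.loc[idx, 'label'], scores[idx]):.3f}")
```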
Lee noted that AI work on risk prediction is important because it can affect “not only individual patients with the output of what we predict; it can, in fact, affect entire populations and entire communities. That's the power of what we've been discussing here, where we can positively contribute to the health of many, many patients. With the limited number of resources that we have across the healthcare industry, we need to focus our efforts on how do we get the biggest bang for our buck. We know that hospital readmission is costly for the health system, and it's costly for the patient. Sepsis is a case where a patient can get severely ill and potentially pass away. How can we predict that type of outcome and prevent that from happening?”
Halamka was asked about data interoperability challenges and how they relate to AI progress. One of the challenges to be overcome, Halamka said, isn't really technology-related: it is the ability to bring multiple organizations together to collaborate for the benefit of society. He described a few promising approaches that are emerging. One is an open-source product from Verily called Terra. “It says Kaiser can put its data in this secure container; Mayo can put it in that secure container. Mayo and Kaiser can't see each other's data, yet we can do algorithm development across both. And there are four or five other technologies that are enabling this kind of collaboration without necessarily requiring centralization or giving up control,” he said. “So I actually feel very positive about where we're headed. We've done the interoperability where interoperability is needed for structured data, and for the unstructured stuff, we have figured out secure computing technologies that foster discovery without compromising privacy or reputation.”
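The general pattern Halamka points to, learning across institutions whose raw data never leaves its own secure environment, can be sketched with a simple federated averaging loop: each site trains locally and only model parameters are shared and averaged. This illustrates the idea only; it is not how Terra or the other technologies he mentions are actually implemented.

```python
# Illustrative sketch only: the general pattern of learning across institutions
# whose raw data never leaves its own environment, shown here as simple
# federated averaging of a logistic-regression model on synthetic data. This
# illustrates the idea, not how Terra or similar technologies are implemented.
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_training(weights, X, y, lr=0.1, epochs=5):
    """One site's local step: gradient descent on its own data only."""
    w = weights.copy()
    for _ in range(epochs):
        preds = sigmoid(X @ w)
        w -= lr * (X.T @ (preds - y)) / len(y)
    return w  # only parameters leave the site, never raw patient rows

# Synthetic stand-ins for two institutions' private datasets.
true_w = np.array([1.5, -2.0, 0.5, 0.0])
site_data = []
for _ in range(2):
    X = rng.normal(size=(300, 4))
    y = (sigmoid(X @ true_w) > rng.random(300)).astype(float)
    site_data.append((X, y))

# Federated averaging: each round, sites train locally and a coordinator
# averages the resulting weights; the datasets themselves are never pooled.
weights = np.zeros(4)
for _ in range(20):
    local_weights = [local_training(weights, X, y) for X, y in site_data]
    weights = np.mean(local_weights, axis=0)

print("Federated weights:", np.round(weights, 2), "vs. true weights:", true_w)
```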
Lee was asked about a group at KP focused on quality assurance and algorithms. “Whenever we put out these tools, we need to make sure that they're still doing what we intended them to do,” he stressed. “Maybe they will work for the subset of the population that the algorithm was developed upon, or maybe it works in the beginning, but as more data is collected and more information is available, the algorithm really should evolve and continue to develop. We're making sure that we're providing the most equitable care possible and making sure the algorithms are still doing what they're intended to do. That really involves validation, revalidation, and then revalidating again. If you don't look, you'll never find bias. You need to make sure that that's built into the process in which these algorithms are used, developed and continue to be maintained. As we continue to develop many of these tools here, that's how we're looking at it. We're making sure that it doesn't stop once things are put out there. That's just the beginning.”
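One way to picture that kind of recurring check is a scheduled job that re-scores recent patients and compares performance, overall and by subgroup, against agreed thresholds; the metrics, thresholds, and subgroup labels below are placeholders, not Kaiser Permanente's actual QA process.

```python
# Illustrative sketch only: a scheduled revalidation check on a deployed
# algorithm, in the spirit of "validation, revalidation, and then revalidating
# again." Metrics, thresholds, and subgroup labels are placeholders, not
# Kaiser Permanente's actual QA process.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

MIN_AUROC = 0.75         # hypothetical minimum acceptable performance
MAX_SUBGROUP_GAP = 0.05  # hypothetical limit on performance gaps between groups

def revalidate(scored_patients: pd.DataFrame) -> list[str]:
    """Compare recent performance, overall and by subgroup, against thresholds."""
    findings = []
    overall = roc_auc_score(scored_patients["outcome"], scored_patients["score"])
    if overall < MIN_AUROC:
        findings.append(f"Overall AUROC {overall:.2f} below {MIN_AUROC}; review for drift/retraining")

    by_group = scored_patients.groupby("subgroup")[["outcome", "score"]].apply(
        lambda g: roc_auc_score(g["outcome"], g["score"])
    )
    gap = by_group.max() - by_group.min()
    if gap > MAX_SUBGROUP_GAP:
        findings.append(f"Subgroup AUROC gap {gap:.2f} exceeds {MAX_SUBGROUP_GAP}; check for bias")
    return findings

# Synthetic month of production scores standing in for real monitoring data.
rng = np.random.default_rng(3)
recent = pd.DataFrame({
    "score": rng.random(1000),
    "outcome": rng.integers(0, 2, size=1000),
    "subgroup": rng.choice(["A", "B"], size=1000),
})
print(revalidate(recent) or ["No issues flagged this cycle"])
```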