During the Digital HIMSS21 Conference on Aug. 9, John Halamka, M.D., president of the Mayo Clinic Platform, described both the great potential for artificial intelligence in healthcare and the type of public/private collaboration that will be required to work through equity and transparency issues.
Halamka, who leads AI efforts at Mayo Clinic, said there is currently a “perfect storm” for innovation around AI and machine learning: policy and technology forces are converging.
Yet transparency about the efficacy of algorithms remains a critical issue. “We need as a society to define transparency of communication, to define how we evaluate an algorithm’s fitness for purpose,” Halamka said. “One issue is that the algorithms are only as good as the underlying training data. And yet we don't publish statistics with each algorithm describing how it was developed, or where it's fit for purpose. You can literally look at the literature published in the last week and you will see articles in JAMA and JAMIA and the New York Times, all describing the need for addressing bias, ethics and fairness in AI. It's a top-of-mind issue.”
He made an analogy with nutrition labels on food containers. “Shouldn't we, as a society, demand a nutrition label on our algorithms, saying this is the race, ethnicity, the gender, the geography, the income, the education that went into the creation of this algorithm? And here’s a statistical measure of how well it works for a given population,” Halamka said. “That's how we get to maturity.”
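A minimal sketch of what such a “nutrition label” might look like as a structured record. All field names, model names and numbers below are hypothetical, invented for illustration; they are not from any real Mayo Clinic model.

```python
# Hypothetical "nutrition label" for an algorithm: who was in the training
# data, plus a statistical measure of performance per population.
model_card = {
    "model": "example-risk-score",  # hypothetical model name
    "training_data": {
        "n_patients": 50_000,
        "race_ethnicity": {"White": 0.62, "Black": 0.18, "Hispanic": 0.12, "Other": 0.08},
        "gender": {"female": 0.54, "male": 0.46},
        "geography": {"urban": 0.70, "rural": 0.30},
    },
    # "a statistical measure of how well it works for a given population"
    "performance_by_group": {
        "overall": {"auc": 0.91},
        "female": {"auc": 0.92},
        "male": {"auc": 0.90},
    },
}

def fit_for_purpose(card, group, min_auc=0.85):
    """Crude fitness check: is a result even reported for this group,
    and does its AUC clear a chosen threshold?"""
    perf = card["performance_by_group"].get(group)
    return perf is not None and perf["auc"] >= min_auc

print(fit_for_purpose(model_card, "female"))     # True
print(fit_for_purpose(model_card, "pediatric"))  # False: group never evaluated
```

The second check illustrates Halamka's point: a label makes absence of evidence visible, since a population that was never evaluated simply has no entry.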
But getting there will not be easy. Halamka described a recent meeting at Mayo Clinic with Fei-Fei Li, an expert on AI from Stanford. “I asked her, hey, is there a tool I can just download from GitHub that will tell me about the bias of an algorithm? No. Is there a tool I can download that will describe the heterogeneity of the data? No. Is there an algorithm that could describe fairness in the way we treated a patient? No.”
These are things a consortium will have to define, he added, saying the FDA is probably not the right group to look at efficacy or bias. “I believe that this will fall to a public/private collaboration of government, academia and industry,” much like the movement around data standards, he noted. “I think it's going to happen very soon.”
He also said that to overcome concerns about data privacy or re-identification of data, we're going to see the development of federated machine learning approaches where, for instance, Mount Sinai can keep its data, Stanford can keep its data, Mayo can keep its data, yet algorithms are developed across all three institutions in a privacy-preserving and ethical way.
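The core idea of federated learning can be sketched in a few lines: each site runs training on data that never leaves its walls and shares only model parameters, which a coordinator averages (federated averaging). The sites and data below are toy stand-ins, not the actual institutions' data or Mayo's implementation.

```python
# Toy federated averaging for a 1-D linear model y = w * x.
# Raw (x, y) rows stay at each site; only the weight w is shared.

def local_update(w, data, lr=0.1):
    """One local pass of gradient descent on squared error."""
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def federated_round(global_w, site_datasets):
    """Each site refines the shared weight locally; only the
    resulting weights travel back to be averaged."""
    local_ws = [local_update(global_w, data) for data in site_datasets]
    return sum(local_ws) / len(local_ws)

# Hypothetical per-site data, all consistent with y = 2x.
sites = {
    "site_a": [(1.0, 2.0), (2.0, 4.0)],
    "site_b": [(0.5, 1.0), (1.5, 3.0)],
    "site_c": [(3.0, 6.0)],
}

w = 0.0
for _ in range(50):
    w = federated_round(w, list(sites.values()))
print(round(w, 2))  # converges to 2.0 without pooling any raw data
```

Real deployments add secure aggregation and differential privacy on top of this basic loop, since shared weights can themselves leak information, but the privacy-preserving structure Halamka describes is the same: data stays put, models move.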
The key to maintaining momentum in AI is transparency, Halamka stressed. “So what did Mayo Clinic do? We developed an algorithm for detection of what I'll call a weak heart pump, low ejection fraction, from a 12-lead ECG. We can actually come up with a complex conclusion based on that simple test. Well, that was retrospectively validated on different data sets with an AUC [area under the curve] of 0.93, with 1 being perfect. So 0.93 is really a high measure of success,” he said. “We then did a prospective randomized controlled trial across 60,000 people and stratified it by race, ethnicity, age and gender to look at how this algorithm actually performed in the real world, prospectively. And the answer there is our AUC was 0.92, so pretty good. And we published that in Nature Medicine a few weeks ago, saying to the world, look, here is how this thing actually works in the field. And that's the kind of transparency we need.”
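For readers unfamiliar with the metric Halamka cites, AUC is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney formulation), so 0.5 is chance and 1.0 is perfect. The sketch below computes it that way, including a stratified version in the spirit of the trial's breakdown by race, ethnicity, age and gender; the labels, scores and groups are made-up toy values.

```python
# AUC as the probability a positive outscores a negative (ties count half).

def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auc_by_group(labels, scores, groups):
    """Stratified AUC: recompute the metric within each subgroup."""
    out = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        out[g] = auc([labels[i] for i in idx], [scores[i] for i in idx])
    return out

# Toy data: 1 = condition present, scores are model outputs.
labels = [1, 1, 0, 0, 1, 0, 1, 0]
scores = [0.9, 0.55, 0.3, 0.4, 0.7, 0.6, 0.85, 0.2]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(auc(labels, scores))                    # 0.9375 overall
print(auc_by_group(labels, scores, groups))   # per-group AUCs
```

Reporting the per-group numbers alongside the overall one, as Mayo did prospectively, is what turns a single headline AUC into the kind of transparency the article argues for.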