Possibilities and Pitfalls: Leaders in the Trenches Look at the Real State of AI Now

March 9, 2022
On day two of the ViVE Conference, a panel of leaders working in the trenches to apply AI algorithms to patient care and applied research discussed the challenges and opportunities.

On Tuesday, March 8, on the second full day of the ViVE Conference, being held this week at the Miami Beach Convention Center, leaders in the world of artificial intelligence (AI) and machine learning (ML) held a discussion of some of the most important trends in the rapidly evolving AI/ML world in U.S. healthcare. Entitled “Show Me an AI-Enabled Early Warning Sign,” the session’s description read as follows: “Today, the potential for artificial intelligence (AI) to affect healthcare in profound ways is undeniable. Early adopters are experimenting with AI, seeking answers to safely and equitably scale technologies for the best outcomes. Healthcare executives are hungry for an array of AI applications to impact community health and prevent future pandemics. Join this team of high-stakes tech connoisseurs as they reveal the many contexts of AI application including predictive and prescriptive care delivery, use of real-time data, public health surveillance, and capacity of health delivery organizations to prevent, manage, and give early warnings in public health.”

Arundhati Parmar, editor-in-chief of MedCity News, moderated the panel. She was joined by John Brownstein, Ph.D., chief innovation officer at Boston Children’s Hospital and professor of biomedicine at Harvard Medical School; Alissa Hsu Lynch, global lead for medical technology strategy at Google Cloud; Balaji Ramadoss, Ph.D., founder and CEO of Edgility Inc.; and Ines Vigil, M.D., M.P.H., general manager and senior vice president of provider solutions at Clarify Health.

After brief self-introductions, Parmar opened the substantive discussion. “We’ve just gone through a public health crisis. With the benefit of hindsight, how can AI help to serve as an early warning system for public health?” she asked.

“My team’s work at Boston Children’s has been to support early warning systems,” said Brownstein, whose doctorate is in epidemiology. “We produced one of the earliest alerts about the virus coming out of Wuhan. There was a lot of data we could have been pulling from; there’s a lot we’ve learned in the past couple of years that would give us deep insights on early warnings. It’s exposed a lot of vulnerability. And we had almost no testing, and a broken public health system, at the beginning of the pandemic.”

Inevitably, Brownstein said, “What we’re doing now is great, but we’re seeing massive data gaps among states. There is the potential of AI to mine both traditional data sources and unconventional data sources, such as wastewater data. Providing visibility into what’s happening via data mining could be incredibly valuable. Ultimately, the methods are one question, but data set integration is lacking. Without that substrate, our methods won’t mean much if we can’t get our databases connected.”

“Can you give me a specific example of how AI is being used now?” Parmar asked. “The work we’ve been doing with Harvard has really emphasized the importance of AI,” Google Cloud’s Lynch said. “Google has invested in AI for many years, in everything from Google Search to Google Maps. But in healthcare, the first specific effort has involved applying AI to vast amounts of data to help improve our response to sepsis. Sepsis is an extreme immune response to infection, and one of the leading causes of death,” she emphasized.

Because of that, Lynch said, “At Google Cloud, we partnered with Emory University to create an algorithm that predicts the onset of sepsis four to eight hours in advance, with 80-percent accuracy. We’ve also been working on an algorithm for diabetic retinopathy. We developed an algorithm, with CE mark approval [a certification required for certain types of products sold inside the European Union]. It’s actually being used in areas where there are shortages of doctors, like rural areas of India and in clinics in Thailand. Expanding access to better care for more people is one of the great potentials of AI.”

“AI is itself a macro-trend,” Ramadoss said. “But the other macro-trend is this fight between centralization and decentralization. We’ve got an amazing amount of data, and [people and organizations are] creating centralized databases, but operationalizing that data in a decentralized way. We think of algorithms involving placing the right patient in the right bed. The data is there; the data is clean. Every health system has rich data. We apply about 40 different features and functions,” he said of his company’s AI work. “That’s the input side. On the output side, it’s the same thing: how do you apply AI to discharging patients? AI is one tool in your tool set. In the discharge process, what are the eight things, and how can you sequence those eight things? Alissa, your example about sepsis involves the same thing; the same is true when it comes to heart failure and palliative care. How do we zoom out and zoom in?”

A perfect example of how leveraging AI could improve patient care, said Vigil, who has a background in both medicine and public health, is applying algorithms to estimating a patient’s individual chances of survival versus death before they agree to a particular surgical operation. “Surgeons will often tell patients, your risk of death is less than 1 percent,” she said. “But inevitably, that 1 percent is based on broad averages; what if that’s not actually your personal risk of death? I’ve used machine learning and AI to de-identify unique characteristics of individuals in order to identify potentially higher risks for individuals.” Further, she said, “I’ve also used social determinants of health” to help gauge individual patients’ risk in specific situations, “because sometimes, the highest risk for a patient will be lack of caregiver support in the home, for example. So AI and machine learning can be used in very practical ways.”

Low-hanging fruit cited

“There’s a lot of low-hanging fruit in terms of when a patient might be admitted, or their length of stay,” Brownstein said, “and all those elements are being applied now” through the use of algorithms. “We also have an algorithm being applied to radiology to predict normal versus not-normal images.” That said, he emphasized, “a lot of work” is required to develop all such algorithms. “Just being able to get the data that represents the patient” can pose significant challenges. “We’re a dual-EMR system with both Cerner and Epic. For clinical care work, it takes a long time to develop the algorithms, whereas for operational efficiency, it’s easier.”

“This stuff is really hard,” Vigil said. “To expect the average clinician to keep up with material in the literature and then ask them to trust an algorithm, make a decision for that patient, and then rely on what they’ve said,” is very challenging. Indeed, she added, “Making data practical, usable, and trusted is a huge issue” for practicing physicians, and is one of the barriers that must be overcome in order to make the use of artificial intelligence algorithms a daily reality in patient care.

On the other hand, Vigil noted, “I sit here as a huge proponent of data in these settings. An article published by Harvard [University] in 2017 stated that the half-life of medical knowledge is now 18-24 months. There’s no way that clinicians can keep up” with constant advances in medical knowledge; “they need data to do their work. But I want to crack the nut around building trust, and the key is transparency.”

Compounding the trust issue among physicians, Lynch noted, is that “The reality is that many organizations don’t have large data scientist organizations, and they don’t know how to build a machine learning model, which is why they come to us. So that’s why they start with operational issues. And as you mentioned, John, the clinical pathway development path can be very long. And people are still learning about software as a medical device.”

Parmar turned to Ramadoss and said, “A moment ago, you said that the data is there, and that it’s clean. Yet I’ve heard the term ‘data janitor’ multiple times. So what did you mean by your statement?”

“If you peel the health system, you’ll find good batches of very good data,” Ramadoss replied. “AI models often don’t work. But when you peel the onion of going into a specific patient placement or condition, the data can help clarify. And using those [data sets] will create trust—but you need to get to those small nuggets of data clarity.”

Turning to the subject of health equity in relation to the development of AI algorithms, Parmar asked the panel, “Can you talk about how you are pushing to get good-quality, diverse data, to promote health equity?”

“At Google, we absolutely recognize that data sets can reflect, reduce, or reinforce unfair data biases,” Lynch said. “So we have developed responsible AI principles that have been adopted by other organizations. And we’ve developed an AI and machine learning governance process. And here’s one example, for our new Pixel 6 camera: that team wanted to develop a more equitable camera. Pictures are a big part of how we see each other and the world, and historically, bias in camera technology has overlooked people of color. We partnered with image-makers who are celebrated and known for depicting communities of color. They increased the number of images of people of color that were used to train the camera technology. And they made improvements in a feature called Real Tone, with improved face detection that is better at recognizing different skin tones in images. Auto-brightness is an issue, too, and even stray light tends to wash out people of color. So, the Pixel 6, everybody go out and get it!” she said lightly, with a smile.

“We’re still not there yet, with the systematic collection of data at the person level, and applying AI and machine learning to it,” Vigil said. “I work with Clarify Health, which is working with all sorts of data, and is putting together a data set that you can start to use. And deriving statistically significant meaning from data requires a very large data set.” The work to truly come to understandings about what the data means will take a while to mature, she said.

“Obviously, AI can help expose inequities, and can be used to fill in the gaps,” Brownstein said. “Our team runs vaccines.gov, a nationwide data set. And 95 percent of the population lives within 5 miles of a vaccination site, but there are vaccine deserts, where about 30 million people live too far away. And as with everything else, the pandemic has magnified existing issues. So in addition to vaccines.gov, we launched VaccinePlanner with a partner group, and we’ve been mapping vaccine access. And using AI tools, mapping out for local health departments where to place vaccination sites.”

With regard to moving forward to improve the health of communities, Vigil said that “It is extremely hard to convince a clinician to do something when you don’t provide transparency. You leave someone like me, who translates data for clinicians, with both hands tied behind my back, if I can’t tell you the why or the how or the what. To impact populations to achieve improved health outcomes and cost, without improved transparency, is extremely hard.”

And, responding to that statement, Parmar asked, “What counts as transparency?”

“Not all clinicians are trained to work with and use data,” Vigil replied. “It is very difficult to have that team of data scientists, researchers, analysts, and clinicians working to translate data into action. And when I first got involved in AI and machine learning, it was important to understand the difference between human-guided work, and not. It’s important to have clinicians involved in the team. And the transparency comes in two ways: one, be transparent about your calculations, about your biases. Start with knowing how big the data set was that the algorithm was trained on. What’s big enough? Clinicians want to know, to be responsible advocates for patients.”

Ultimately, Vigil said, “We’re in this wonderful moment in healthcare where we really have the potential to make clinical practice easier for clinicians. But it takes a village to deliver care. And how can we use any form of workflow, data, or insight improvement that can ease the burden on care teams? If we can help take a challenge off their plates, there will be a lot of room to practice excellent medicine.”
