The Ethics of AI: Experts Sort Through the Welter of Issues
Can potentially damaging forms of conceptual bias be eliminated as the leaders of healthcare organizations move forward to create artificial intelligence (AI) and machine learning algorithms intended to improve effectiveness and outcomes, both clinical and financial, in U.S. healthcare? That was the fundamental question probed in a session entitled “Eliminating Bias and Inequity in Machine Learning and AI,” presented on Monday, August 9, during the AI and Machine Learning Forum at HIMSS21 in Las Vegas. The session took place in the Bandol Room at the Wynn Resort, across the street from the Venetian Sands Convention Center.
Ryan Sousa, chief data officer at Seattle Children’s, an integrated health system encompassing patient care, research, and a foundation, was joined for the discussion by Mainul Mondal, founder of the San Francisco-based Ellipsis Health, a solutions provider in the behavioral healthcare sector that has pioneered the first AI-powered, speech-based vital sign to quantify and manage depression and anxiety symptoms. The discussion was led by Karen Silverman, CEO and founder of the San Francisco-based Cantellus Group consulting firm and outside counsel for HIMSS (the Chicago-based Healthcare Information & Management Systems Society).
Early on in the discussion, HIMSS’s Silverman asked Sousa and Mondal, “What are the biggest real-world consequences you worry about when it comes to bias around AI and machine learning?”
“In the case of Children’s, the children,” Sousa answered simply. “If you’re someone who’s operating outside of healthcare, and you make a mistake” in terms of how you develop or implement AI or machine learning algorithms, he said, “you lose revenue or orders. But in healthcare, you’re going to cause harm. And how we treat a child in the ICU will have long-term impacts. So it’s about trying to create the best possible outcome over time” for patients.
“I agree,” Mondal said. “Ellipsis has one of the largest data sets in mental health. So there’s a lot of responsibility in that.”
“When you think about where the bias arises, is it in the data sets, in the algorithms, or in the minds of the people collecting the data sets and developing the algorithms?” Silverman asked.
“Our mission is to redesign mental health from the bottom up,” Mondal emphasized. “But you have to define what mental health is. And so you gather folks who have diverse backgrounds and skill sets, and who have different perspectives. And look at depression and anxiety, which are on the rise. So you have to find ways to collect different data sets that could be of value.”
And, Sousa said, “There are so many people in healthcare who want to do the right thing. The issue isn’t people with bad intentions; people have good intentions. And about four years ago, we were having a lot of issues with burnout. So we started developing a model of the different ‘temperatures’ of various groups in the hospital with regard to emerging burnout. And then, what do you do with the data? What if you found out that a supervisor is causing those issues? And even today, with the different predictive models, the ethics come up. But that moment about three or four years ago was a big eye-opener for us.”
Setting up ethical guardrails in order to achieve supportable outcomes
“When I’m not general-counsel-ing for HIMSS,” Silverman said, “I’m advising boards on risk and governance around risk. And you raise a good point, because now, we’re talking about data and actionable information, and how ethics can help set some guardrails, adjacent to data policy and to cybersecurity questions, yet at the same time, distinct. So what are the new kinds of risks that machine learning poses that are different from the other types of digital transformation?”
“They’re one and the same,” Sousa emphasized. “I don’t think you can talk about digital transformation without talking about analytics. Amazon wrote software that actually created algorithms,” he said; earlier in the AI and Machine Learning Forum, Sousa had noted the several years he had spent as a data analytics leader at Amazon. “So it’s a really good question: when you start talking about predictive analytics, it’s very different from in-the-moment or retrospective analytics. It’s about the future now; and so we have to fight cognitive bias. And it’s not so much about what the census was three weeks ago; it’s more like risk management. If the census is above this point, what will we do? If it’s below this point, what will we do? It’s about playing out scenarios and acting on them.”
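Sousa didn’t spell out those scenarios onstage, but the idea translates directly into a pre-agreed playbook keyed to a predicted census. The sketch below is a minimal, hypothetical Python illustration; the thresholds, census figures, and actions are all invented for the example.

```python
# Hypothetical scenario playbook for a predicted patient census,
# illustrating the "if the census is above/below this point" planning
# Sousa describes. Thresholds and actions are invented examples.

PLAYBOOK = [
    # (lower bound, upper bound, planned action)
    (0, 180, "Flex down staffing; release float-pool nurses."),
    (180, 220, "Normal operations; no action needed."),
    (220, float("inf"), "Open surge beds; call in on-call staff."),
]

def plan_for_census(predicted_census: int) -> str:
    """Return the pre-agreed action for a predicted census value."""
    for low, high, action in PLAYBOOK:
        if low <= predicted_census < high:
            return action
    raise ValueError(f"No scenario covers census={predicted_census}")

if __name__ == "__main__":
    for census in (150, 200, 240):
        print(census, "->", plan_for_census(census))
```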
“In healthcare, a lot of the research is done in well-off communities,” Mondal noted. “So in healthcare, we have an opportunity to make sure that the data flowing into studies is of a certain type. It matters where you get the data from. So one of the things I always try to push for is fairness, and things like running sensitivity analysis. And what I haven’t seen so far is a study that works for specific demographics. So I’m hoping we see a lot more of that.”
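Mondal didn’t describe Ellipsis Health’s internal tooling, but the kind of per-demographic sensitivity analysis he alludes to can be sketched in a few lines. The following is a minimal illustration assuming scikit-learn and pandas; the column names, toy data, and 10-point gap threshold are all hypothetical.

```python
# Minimal sketch of a per-subgroup sensitivity (recall) check, one way
# to run the kind of fairness analysis Mondal describes. The column
# names and the flagging threshold are hypothetical.
import pandas as pd
from sklearn.metrics import recall_score

def sensitivity_by_group(df: pd.DataFrame, group_col: str = "group") -> pd.Series:
    """Recall (true-positive rate) per demographic subgroup.

    Expects columns: y_true (0/1 labels), y_pred (0/1 predictions),
    and a demographic column named by group_col.
    """
    return df.groupby(group_col).apply(
        lambda g: recall_score(g["y_true"], g["y_pred"])
    )

if __name__ == "__main__":
    # Toy data standing in for real screening results.
    df = pd.DataFrame({
        "y_true": [1, 1, 0, 1, 1, 0, 1, 1],
        "y_pred": [1, 0, 0, 1, 1, 0, 0, 0],
        "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    })
    per_group = sensitivity_by_group(df)
    print(per_group)
    # Flag the model if subgroup recall diverges too far (threshold is arbitrary).
    if per_group.max() - per_group.min() > 0.10:
        print("Warning: recall gap across groups exceeds 10 points.")
```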
“Are there particular tools or methods that you prefer to use to identify or mitigate bias?” Silverman asked. “I know there are a lot of different techniques that people are using.”
“You need a framework, and you have to care about how you label data,” Mondal stated. “Does something work out for the Hispanic community, for example? And who do we hire, and what are their backgrounds? And what’s the mission?”
“In every industry I’ve been in, it’s been the same story: you can hire the right people and use the right technology, and still end up not doing the right thing,” Sousa testified. “We’ve all studied cognitive bias. And having data scientists is important. But even your data scientists are at risk, from a cognitive-bias perspective. So one of the things we’ve done is to try to make it more about the system. If we’re doing a predictive modeling project, we do a checklist and develop a score. And my approach is very much: automate everything until you can’t. And depending on the level of risk of bias, we might go to a review group that we have, to analyze it. There’s a great book out there called Weapons of Math Destruction [by Cathy O’Neil]; it really opens your eyes to these issues.”
“And I’d add The Alignment Problem [by Brian Christian]; it’s another great exploration of how good people can end up making bad decisions,” Silverman said.
“It’s very humbling; any of us could fall victim to that,” Sousa said.
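Sousa didn’t share Seattle Children’s actual checklist onstage, but the checklist-and-score triage he described a moment earlier can be sketched simply: each risk item answered “yes” adds to a score, and high-scoring projects escalate to a human review group. The items, weights, and threshold below are all invented for the purpose of the example.

```python
# Hypothetical bias-risk checklist for a predictive modeling project,
# modeled on the checklist-and-score triage Sousa describes. Each
# "yes" adds its weight to the risk score; projects at or above the
# threshold go to a human review group. All items, weights, and the
# threshold are invented for illustration.

CHECKLIST = {
    "Model output affects individual patient care": 3,
    "Training data under-represents some patient groups": 3,
    "Labels come from subjective human judgments": 2,
    "No per-subgroup performance evaluation was run": 2,
    "Model decisions are hard to explain to clinicians": 1,
}
REVIEW_THRESHOLD = 5

def risk_score(answers: dict[str, bool]) -> int:
    """Sum the weights of all checklist items answered 'yes'."""
    return sum(weight for item, weight in CHECKLIST.items() if answers.get(item))

if __name__ == "__main__":
    answers = {item: True for item in CHECKLIST}  # worst case: all yes
    score = risk_score(answers)
    print(f"Risk score: {score}")
    if score >= REVIEW_THRESHOLD:
        print("Escalate: send project to the bias review group.")
```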
“Yes,” Silverman responded. “The instinct right now is to talk about technology that’s still being developed, and to talk about problems with the technology. The challenge will come when the technology is fully developed but produces results that are still at odds with mission or vision. Those are decisions we’ll have to make in and around these issues: when the data is fine, but what we’re getting contravenes some other principle. Have you encountered that?”
“At Ellipsis, we produce confidence intervals,” Mondal said. “And we look at how we’ve performed using different models created by a diverse set of data scientists; and we produce a set of scores from that.”
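Mondal gave no detail on how those confidence intervals are computed; one common, generic approach, sketched below purely as an assumption rather than as Ellipsis Health’s actual method, is a percentile bootstrap over a model’s held-out predictions.

```python
# One generic way to produce the kind of confidence intervals Mondal
# mentions: a percentile bootstrap over a model's held-out accuracy.
# This is a sketch of a standard technique, not Ellipsis Health's method.
import numpy as np

def bootstrap_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for accuracy."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = len(y_true)
    accs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample with replacement
        accs[i] = np.mean(y_true[idx] == y_pred[idx])
    lo, hi = np.quantile(accs, [alpha / 2, 1 - alpha / 2])
    return lo, hi

if __name__ == "__main__":
    # Toy held-out labels and predictions.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0] * 10)
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1, 1, 0] * 10)
    lo, hi = bootstrap_ci(y_true, y_pred)
    print(f"Accuracy 95% CI: [{lo:.2f}, {hi:.2f}]")
```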
“When we roll out the models, it’s just the beginning,” Sousa emphasized. “So you do your best work, but six times out of ten, you still get it wrong. And you have to do constant follow-up. At Amazon, early on, when we were doing recommendations, we were looking at cutting costs, and over time, the software got better than the people. And we ended up making a recommendation based on an algorithm, to a woman, to purchase books on spousal abuse; she and her husband shared an account, so we ended up in a terrible situation. So you have to carefully examine your models, for all sorts of reasons.”
Looking at an ongoing journey
Thinking in terms of an ever-unfolding evolution is important, Mondal said. “Healthcare is a continuous journey,” he emphasized. “And when you’re talking to physicians and providers, you have to examine the skill sets you’re using. And when you leave the physician or psychologist, often, there’s a black hole. And so you need engagement; and the provider is made more aware through that iterative process. So it’s a data journey.”
“And things are becoming more and more virtualized; it’s like in retail, where things became a lot more virtual,” Sousa added. “In terms of the state of the art in retail, it was all about market-basket analysis. We’ve all heard that people who purchase diapers also purchase beer. And we knew the whole experience of certain consumer types. But over time, it became a segment of one: everybody has a unique experience. And that’s where healthcare is now; we have tools that can help you create an experience that’s truly unique for every patient. It won’t happen overnight, but it offers an opportunity. And like everything, it’s like splitting the atom: good things can be done with it, and bad things can be done with it. But it provides an opportunity to improve care.”
“Like everything, it’s a journey,” Mondal reiterated.
“But this is the patient trust element, right? What internal processes have you deployed to make sure you and your teams stay on the right side of the equation as much as possible?” Silverman asked.
“I hate to keep coming back to Amazon, but there was a lot going on at the time,” Sousa noted. “And we decided to focus on promoting based on margin, not just consumer preferences. And the entire retail industry went nuts. And I know that our team is very focused on protecting patient privacy. So you’ve got to have the people and the culture to do it right. And I do worry whether some of the non-traditional organizations getting into healthcare will have the same safeguards to protect patient privacy.”
“Yes, and it has to be an affirmative act,” Silverman agreed. “So you need some explicit governance standards.”
“Right on,” Mondal concluded. “Technology is not a problem to be solved. When we started Ellipsis, it was about hiring clinicians. And we interviewed tons and tons of patients, to understand their values. So it again goes back to culture.”