How Medicaid Directors Are Thinking About AI
"The greatest risk of generative AI that I see is that we just don't deploy this in a way that meaningfully improves health outcomes for marginalized populations." -- Christopher Chen, M.D., M.B.A., medical director for Medicaid at the Washington State Health Care Authority
During an Oct. 25 National Academy of Medicine Workshop on Generative AI and Large Language Models in Health and Medicine, Christopher Chen, M.D., M.B.A., medical director for Medicaid at the Washington State Health Care Authority (HCA), spoke about the potential and risk of generative AI in the Medicaid space.
Chen helps guide clinical policy and strategy at the agency, and supports initiatives in health information technology, telehealth, quality, and health equity. He also serves as chair for the National Medicaid Medical Directors Network.
Chen began by noting that some of HCA’s health IT priorities involve getting IT resources to people who've been traditionally left out of digital modernization. In one of those initiatives, HCA is partnering with Epic on providing a state-option EHR for providers that were left out of HITECH funding, including behavioral health providers, rural providers, and tribal providers. “We’re also working on developing a community information exchange to support resource referral for health-related social needs, as well as integrated eligibility,” he said. “It was seen as a really important social determinants play for us in trying to get to a 20-minute online application for Medicaid, SNAP, cash and food assistance and childcare benefits for clients.”
“When I think about generative AI, there are lots of exciting possibilities to offer clients culturally attuned and tailored education, and help navigating and accessing what can be a really complex system of benefits,” Chen said. “There was a New York Times article that described how difficult it is to be poor in America and how much of an administrative burden we impose on our patients. For states, there's a significant potential to make government more efficient, and to access alternate sources of unstructured data to develop really meaningful insights on quality of care and use new tools to combat myths and disinformation.”
“But when I think about the risks of generative AI, it's a little bit overwhelming,” he added. “Medicaid clients are often not represented in the data sets that algorithms are trained on. As a result of barriers in accessing care, some of their providers are still on paper. Additionally, regulatory considerations that disproportionately affect the population we serve have a stronger influence, such as tribal sovereignty over data and privacy considerations around SUD [substance use disorder] data.”
For example, he said, there are meaningful risks to privacy for clients who have a lower level of health literacy, and also lack real or meaningful controls of their personal data. “Another concern that I have is how is this going to affect our ability to act as stewards of public dollars? Medicaid medical directors really take seriously our role to be stewards of public resources and adhere to standards of evidence-based medicine. We've seen the increasing prevalence of assertions of medical necessity on the basis of real or not-real studies. And that's a concern.”
Chen said he also is concerned that their status as public entities means that Medicaid agencies won't be able to take advantage of the potential of AI. “I think that there's an inherent tension between the nature of our work as a public agency, and the transparency that's required, and the black box in some of the algorithms in artificial intelligence, which are not auditable or explainable,” he explained. “And the greatest risk of generative AI that I see is that we just don't deploy this in a way that meaningfully improves health outcomes for marginalized populations. History is filled with instances where technology doesn't benefit all equally. I think there's often an assumption that a rising tide lifts all boats without recognizing that some boats are floating at the top and some boats are at the bottom of the ocean. And how do we intentionally address disparities?”

So how is the HCA planning around AI? “We're very early in our journey, but at the Health Care Authority we have established an artificial intelligence ethics committee,” Chen said. “This work is led by our chief data officer, Vishal Chaudhry. The scope of our work is focused on our role as a regulator, purchaser and payer, putting our clients at the center of our work and complementing a lot of other efforts in healthcare. This committee is sponsored by our data governance and oversight committee and is tasked with developing and maintaining an AI ethics framework. We've been inviting experts to come speak to our group. We've been looking at the AI Bill of Rights, the NIST standards, and focusing on the ethical considerations around equitability, transparency, accountability, compliance, trustworthiness and fairness. Our committee is chartered to grow artificial intelligence expertise so that the agency can create transparent and consistent rules for its use, advance health equity, and respect tribal sovereignty when it's applicable.”
Most of the agency's experience so far is with predictive AI, but it has seen some emerging use cases for generative AI. “Our committee also works really closely with our state Office of the Chief Information Officer. I just want to advocate for us as a community to work to solve the big problems that drive disparities in our health outcomes. We've had many, many innovations in technology across the industry over the last few years, and yet as a country, our life expectancies have been decreasing as a result of crises in behavioral health and substance use. How do we target these tools to solve those big problems? We need to really meaningfully engage patients in these kinds of conversations.”