Two executives from UNC Health in North Carolina recently detailed how their health system is defining and operationalizing a “responsible AI” framework.
Speaking during a Sept. 26 WEDI webinar on best practices in artificial intelligence in healthcare were Rachini Ahmadi-Moosavi, chief analytics officer, and Ram Rimal, manager of data science engineering at UNC Health.
“When we really started to think about AI and developing it was back in 2016. And that was a concerted effort to ensure that we can bring these kinds of capabilities and really advance the needs of our health system,” Ahmadi-Moosavi said. “While we might have partnered with other companies, other vendors to bring in AI technology, like Optum, and their computer-assisted coding technology, to help us, we also developed our own.” Use cases, she said, include case duration accuracy and better sepsis detection. “With that advent, however, the need for ensuring that we are doing that build responsibly and we are providing the best possible solutions to our healthcare system — whether we build it ourselves or we purchase it from a vendor — comes into question.”
Rimal explained why and how UNC Health developed and implemented a responsible AI framework. He noted that you can have problems of bias and discrimination built into an algorithm. “An algorithm just reflects what you have in data and if you have a really robust responsible AI system, you are going to ask certain questions and make sure there are certain checks and balances. You can always make sure that there is less bias and discrimination in the algorithm you build,” he said.
Having a framework is one way the health system can do its due diligence to ensure that its AI is increasingly responsible, Rimal said. "If we want to increase our efficacy and safety, we really need to think about responsible AI more and more." He said the lives of everyone coming to the healthcare system are valuable. "We really need to make sure that everything that we do is effective and safe," he added.
Rimal is a data scientist by training, and he said one of the struggles they have is how to talk about AI models. "How are you communicating about the model so that the end user — it might be your patient, it might be your clinician, it might be your customer — understands what you did, and how can you make sure that they can trust the model? If you don't have a robust process, it's really difficult to make sure that these things are followed consistently. If you want to make sure that these processes are transparent, you need the right framework."
It is also important to know who was at the table when the decision was made. "If we build a model for sepsis, we would want to know who's going to use the sepsis model from the get-go so that we can hear their concerns and their questions as we are building the model," he said, "and to do that consistently, we need some kind of framework, and responsible AI will help us to get there."
Rimal said UNC Health needs to think about data security and the ethical duty it has as a healthcare delivery team. “For that, we need to make sure that our patients are really comfortable with how we are sharing that data, how we are applying an algorithm, and what kind of ethical considerations we have as we are building or deploying those models,” he said. “To win the patient trust, we really need to revisit some of our privacy, security and ethical rules even more, and having a consistent framework like responsible AI is going to help us.”
He spoke about the development of a custom sepsis model in Epic a few years ago as an example of their work. They sought to understand who they needed in the conversation. “We had expanded the scope from one team to multiple technical teams and clinical experts so that when we were building a model we had an iterative process,” Rimal said. “We not only had modeling considerations to make models better, but we also had issues from the workflow perspective.”
Fast forward to 2023, he said, and with all the challenges and all the conversations they are having around AI use at UNC Health, they decided to form a systemwide multidisciplinary group to make decisions around using AI responsibly. “We have experts from IT and finance, and we have an ethicist at the table,” Rimal said. “We have lawyers, human resources, hospital administrators, and clinicians. All of them are part of that conversation.” Bringing people with different voices and having those conversations in one place was really important, he said. Plus, any vendor that is going to implement an AI solution will go through the same framework. “I know that when we are talking with a vendor, it will be really hard to have this conversation about responsible AI,” he noted. “When we ask how they built their model and what is their training population, that kind of conversation sometimes goes into intellectual property protection. But we are committed to partnering with our vendors so that we have enough information to make the decision on the responsible AI front. And we have started that process.”
Ahmadi-Moosavi responded to a follow-up question about data privacy and security related to AI. "Due to our collaboration with the research side of healthcare, we built out an enterprise data warehouse many years ago," she said. "We have built in multiple layers of data protection and data security to evaluate how we take care of our information that's flowing from source systems to all the consumption layers that we provide to our greater analytics community, in the same way that we have the right kind of access rights and provisioning for our data science team."
The question about security, she said, is part of a much broader conversation around data management. “Our data governance council helps to address that. We partner with data security, as well as privacy and legal to ensure that all of those aspects are considered in totality when we think about the usage of data for anything like AI or any other outcome that we're trying to drive.”