Why URAC Sees Urgency in Creating an AI Accreditation in Healthcare

May 9, 2025
Shawn Griffin, M.D., URAC’s president and CEO, recently spoke with Healthcare Innovation about plans to launch an AI accreditation in the third quarter of 2025.

In April accreditation organization URAC announced that its Health Care AI Accreditation is on track for a third-quarter 2025 launch. An advisory committee, which is helping to develop the standards, includes experts from Verily, AHIP, Northwell Health, Aidoc, Pfizer, and other organizations. Shawn Griffin, M.D., URAC’s president and CEO, recently spoke with Healthcare Innovation about why accreditation in AI is so important. 

Healthcare Innovation: Could you talk a little bit about the challenges from a health system perspective as they're implementing AI and the role a standardized accreditation could play here?

Griffin: I'm fortunate that I spent a couple decades as a chief medical information officer in a couple different health systems, bringing in technologies and implementing them, and getting physicians’ buy-in. There are a couple of things that make AI sort of different but sort of the same. What we are seeing is that AI is moving so quickly that some of the old rules need updating for the new things that are going on. Years ago, if I was going to implement an order set I'd have the physicians be involved. We'd look at the evidence and then we'd implement it. But that order set was going to be the same two months from now as it was today when I put it in. With AI, it's not the same tomorrow as it was today. 


With ambient AI for documenting visits, we have thousands of people today who are going into exam rooms, and there's another ear listening to what's going on in there, and patients may not be informed, and clinicians may not understand liability implications. I've seen organizations say they have thousands of uses for AI in the organization. But what are the guard rails? It's moving so quickly that you need to have some guard rails before this gets too far, before this moves too fast. We think that there's a place for us as a trusted, independent organization to come in with standards and say, what are the best practices around liability, oversight, patient consent, all of those sort of things, and that's what we're working on here.

HCI: From the health system’s perspective, would achieving the accreditation be about building trust with patients and other stakeholders? Or would the framework just give them a place to start? 

Griffin: There are organizations that are doing a great job of this, and there are other organizations that are sort of stumbling into this. We want to bring together those best practices. We think this is too critical for us to learn one mistake at a time. We really think that getting people together to put out standards, having an accreditation program that is guided by them, and having somebody independently check it is vitally important. We see accreditation as like an independent financial auditor. We're a quality auditor who comes in and checks to make sure that you're doing things in the right way, not to catch you doing something wrong, but to inform you and educate you so that you're doing it right every single day that you're doing it.

HCI: Are there other organizations developing frameworks for assessment of AI development? We've written about the Coalition for Health AI, which is developing independent assurance labs for machine learning models. But are there also other groups trying to create frameworks?

Griffin: I don't know of anybody else creating an accreditation. There are many organizations that are endorsing principles, and we are taking in those principles as being informed expert opinions. But it's one thing to say we endorse these principles; it’s another thing to say that someone can come in and check to make sure we're actually following them.

The Coalition for Health AI’s initial efforts are around testing labs, and those testing labs are wonderful things. To me, a testing lab for an AI tool is very much like pre-market drug testing. You did your work in the lab. We're going to come in to check it in more of a real-world environment, bring in some extra data and validate that your tool does what it says. But that's just the start. To me, that's like saying we've got a sharper scalpel. Well, a sharper scalpel doesn't make a better surgeon.

There are probably 20 to 30 different organizations that are putting out principles. That would be great if this was going to get implemented in a few years. But this is going on now. With URAC, this is not the foxes guarding the chickens. This is an independent organization that comes in and says, we're going to create this program with multiple stakeholders to protect patients, to protect clinicians, and to establish best practices, and then test that those best practices are actually being followed.

HCI: We've interviewed folks at UCSF who've created an impact monitoring platform for clinical care to monitor AI over time for efficacy, safety, and fairness, and they created an executive director position to oversee the AI monitoring in clinical care. Is that the kind of thing we're likely to see more health systems create?

Griffin: Well, I've been a CMIO at a little community hospital and I've been in a CMIO role at a large academic medical center, and the resources are completely different. We accredit everything from a little corner pharmacy to a multi-state health plan. When we talk about best practices, we're going to say that you need to have clinical oversight, you need to have technical oversight, but we're not going to say you have to have a chief medical information officer or those sort of things, because the best practices for a little organization should be the same as best practice for a big organization, even if the capabilities are different. I've talked to a number of great organizations that are doing a fantastic job, and they should contribute to the knowledge base for how an organization like theirs does this, but very clearly, not everybody can follow an organization like that. 

HCI: Did the committee take into consideration when health systems are deploying AI modules from third party vendors? Does that raise other issues about how oversight should apply?

Griffin: Actually, when we got together our advisory committee, one of the first things they brought up is third-party vendors, some of whom may be technically very talented, but don't really work in healthcare. So issues of contract transparency, definitions of liability, those sort of things come into play. Also, we’ve got some states that are going to write regulations and legislation and other states that won’t, and many organizations these days operate in multiple states. Our advisory committee brought up that we probably need some best practices around contract language and liability shifting and transparency and those sort of things. Because some of these AI tools are coming out of your own lab, and some of them are coming from a vendor. 

HCI: So this is all set for launch in the third quarter?

Griffin: That is our plan. It's a whole bunch of work, and the committee is meeting very regularly. They are rolling up their sleeves. We're very proud of who we've got around the table, but I have a tight timeline on doing this. This is not something that can wait three years to do, and that's because, as I said, it's in use today without this sort of oversight. So we think that putting guard rails in place sooner rather than later is important.
