Qualified Health’s Kedar Mate, M.D., on Digital Governance for Generative AI
Kedar Mate, M.D., has made the leap from being president and CEO of the nonprofit Institute for Healthcare Improvement to becoming co-founder and chief medical officer of an ambitious startup called Qualified Health, which has received $30 million in seed funding to partner with health systems on the infrastructure for generative AI. Mate recently spoke with Healthcare Innovation about what health systems need in order to scale up their AI initiatives and the role Qualified Health expects to play in the ecosystem.
While he was at IHI, Healthcare Innovation interviewed Mate about efforts to overcome barriers to progress on health equity.
Here is a little more background on Qualified Health: Its investors include SignalFire, Healthier Capital, Town Hall Ventures, and Frist Cressey Ventures, along with participation from Intermountain Ventures, Flare Capital Partners, and healthcare and technology sector angel investors.
The company’s CEO is Justin Norden, M.D., M.B.A., who teaches the Generative AI and Medicine course in the Department of Biomedical Informatics Research at Stanford Medicine. He was previously CEO of Trustworthy AI, a company focused on algorithm safety and trust, which was acquired by Waymo (Google's self-driving car company). He was a partner at GSR Ventures leading AI-in-healthcare investments, worked on the healthcare team at Apple, and helped start the Stanford Center for Digital Health.
Healthcare Innovation: What are some of the key stumbling blocks that health systems are running up against that may require more infrastructure to support scaling up generative AI solutions? And what is it that Qualified Health is building? Is it a software solution?
Mate: We are building core infrastructure technology at the enterprise level, rather than a million point solutions, because we don't think point solutions can scale inside health systems. The thesis of the company is to build a one-stop shop for healthcare delivery systems, so that they get high-value, AI-supported workflow assistant and workflow augmentation tools that generate real value for the system and improve clinical service delivery and operational activities. And at the same time, they do that in a digitally governed, evaluated and monitored system, so it's not being done in an unsafe manner. It's being done in a well-organized, well-managed and well-governed environment.
Healthcare needs this technology. It's not really optional. You can go at it slowly, or you can go at it quickly, I suppose, but that's just the pacing. It's an inevitability in the ecosystem, just like it is in almost every segment of our economy. Healthcare is noticeably slower in its adoption than other industries. And that's probably not wrong or bad, because the nature of the data in healthcare is private. It needs to be secure. Leakage of that data is very risky and would cause significant challenges. So there are a lot of good reasons why healthcare has gone a bit slower in its adoption of generative AI technologies. That’s part of the reason why Qualified exists — to create an information ecosystem that is safe, well-managed, well-governed and well-monitored, so that when healthcare systems deploy generative AI, they can know if there's a violation of a guardrail or if there's an issue with information leakage.
Another thesis is that automation is going to be fundamental to how we drive improvements in productivity. Your market competitiveness will be determined in part by how you adopt AI in the future. If you adopt it sooner and better than the system across the street, then you're going to have a better opportunity to corner aspects of the market in the future.
HCI: I understand Qualified Health is helping health systems get their data sets ready for generative AI and working with them on governance and monitoring. And then there is also an application layer. What kinds of things are we talking about in the application layer?
Mate: The way I like to talk about it is making sure patients get great care, and then making sure the providers get paid for it.
Unfortunately, patients suffer significant gaps in their care. There are communication breakdowns, there are breakdowns around transitions of care, and people get lost to follow-up. These technologies can scour the entire data archive of a health system and identify where patients have been missed and where care gaps exist. Then we can systematically work on closing those gaps, so patients can get better care, health systems can get reimbursed for that better care, and the care becomes more seamless.
So those are the two ways we've thought through our application layer: going specialty by specialty to identify the biggest sources of care gaps and where patient care is deviating from best-practice guidelines.
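To make the care-gap idea concrete, here is a minimal sketch, in Python, of the kind of lost-to-follow-up check Mate describes. The PatientRecord schema, the 90-day grace period, and the find_lost_to_follow_up helper are all hypothetical simplifications for illustration; a production system would work against EHR data and specialty-specific guidelines.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Simplified, hypothetical patient record; a real system reads from the EHR.
@dataclass
class PatientRecord:
    patient_id: str
    last_visit: date
    follow_up_due: date | None   # set when a clinician orders follow-up

def find_lost_to_follow_up(records: list[PatientRecord],
                           today: date,
                           grace_period: timedelta = timedelta(days=90)):
    """Flag patients whose ordered follow-up is overdue beyond a grace period."""
    return [
        r for r in records
        if r.follow_up_due is not None
        and today > r.follow_up_due + grace_period
        and r.last_visit < r.follow_up_due      # no visit since the order
    ]

# Example: two patients, one of whom has missed an ordered follow-up.
records = [
    PatientRecord("A001", date(2024, 1, 10), date(2024, 3, 1)),
    PatientRecord("A002", date(2024, 6, 5), None),
]
for patient in find_lost_to_follow_up(records, today=date(2024, 9, 1)):
    print(f"Care gap: {patient.patient_id} missed follow-up due {patient.follow_up_due}")
```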
HCI: Turning to the governance aspect, are a lot of health systems taking a traditional governance model that they had in place for other things and trying to apply it here? And is there a mismatch in that?
Mate: The initial approach has been to say “we have a governance committee for our technologies and now we have a subcommittee focused on generative AI or AI solutions writ large.”
We call that analog governance: governance for an analog era, for analog technologies. But we need truly digital governance. These AI tools demand digital governance because of their nature. The underlying foundational models evolve over time, and that will cause either augmentation or degradation of the performance of the specific application in use at your institution. And you might not know about that unless you are regularly monitoring the performance of those algorithms. You might have hundreds of thousands of hits to an LLM happening every day, maybe even millions of hits to an element in a bigger system. You won't know whether the performance is degrading over time with a committee that meets once a month, right? By the time you've made a change, 30 days of bad stuff has happened in your system. So you need digital tools, a little bit like cybersecurity.
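What continuous, automated monitoring could look like, as opposed to a monthly committee review, is sketched below. This is a minimal illustration, not a description of Qualified Health's actual product; the baseline score, the rolling window, and the judge_response scorer referenced in the comments are all assumptions made for the example.

```python
import statistics
from collections import deque

# Hypothetical continuous monitor: scores a sample of live LLM responses
# against a rubric and alerts when rolling quality drifts below baseline.
class DriftMonitor:
    def __init__(self, baseline_score: float, tolerance: float = 0.05,
                 window_size: int = 500):
        self.baseline = baseline_score          # score from initial validation
        self.tolerance = tolerance              # allowed drop before alerting
        self.window = deque(maxlen=window_size) # rolling window of recent scores

    def record(self, score: float) -> None:
        """Add the score (0.0-1.0) of one evaluated model response."""
        self.window.append(score)

    def check(self) -> bool:
        """Return True if performance has degraded beyond tolerance."""
        if len(self.window) < self.window.maxlen:
            return False                        # not enough data yet
        rolling_mean = statistics.mean(self.window)
        return rolling_mean < self.baseline - self.tolerance

# Example wiring into the request path (judge_response and
# page_governance_team are hypothetical hooks):
monitor = DriftMonitor(baseline_score=0.92)
# for each sampled request/response pair:
#     monitor.record(judge_response(request, response))
#     if monitor.check():
#         page_governance_team()
```

The point of the design is the cadence: evaluation happens on every sampled call rather than once a month, so degradation surfaces in hours instead of after 30 days.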
Cybersecurity generally polices your information environment by keeping malicious actors out. You also need internally facing cybersecurity to make sure that internal actors are not using tools in ways you don't intend, not maliciously, but simply because they don't know better. They may be dropping a patient's history into ChatGPT without knowing that doing so exposes that history to public use in the future and might leak PHI. So we need tools that are HIPAA-compliant, that protect PHI, and that by their design filter out bias. You also need digital monitoring tools that constantly monitor the performance of the 50 to 100 AI applications operating in your organization and make sure they don't degrade over time.
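Below is a minimal sketch of the internally facing guardrail Mate describes: screening an outbound prompt for PHI before it can reach an external LLM. The regex patterns are illustrative assumptions only; real systems typically rely on trained clinical de-identification models rather than pattern matching.

```python
import re

# Illustrative PHI patterns only; production systems typically use trained
# clinical NER/de-identification models, not regexes.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "date_of_birth": re.compile(r"\bDOB[:\s]*\d{1,2}/\d{1,2}/\d{2,4}\b", re.IGNORECASE),
}

def redact_phi(prompt: str) -> tuple[str, list[str]]:
    """Replace suspected PHI with placeholders; return findings for the audit log."""
    findings = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, findings

# Example: screen a prompt before it leaves the institution's boundary.
clean_prompt, findings = redact_phi(
    "Summarize history for patient MRN: 12345678, DOB: 04/12/1957."
)
if findings:
    # Log to the governance audit trail rather than silently passing through.
    print(f"Redacted PHI categories: {findings}")
print(clean_prompt)
```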
HCI: Are there some health systems that are doing some cutting-edge things on their own that go beyond that traditional governance model that you described? I’m thinking of UCSF Health as one example. They developed an Impact Monitoring Platform for Clinical Care, which is a real-time AI monitoring platform to evaluate efficacy and safety of AI tools deployed at UCSF Health.
Mate: I think there are a lot of organizations that have built platforms for their own internal purposes to try to evaluate their tools. UCSF is one of them. Duke came out with a paper recently in which they described their tooling around this. There have been a handful of these stories, but very few of them are designed for enterprise-level scale. They're not intended to look across entities or to work with multiple health systems. They're built for purpose within their internal institutional environment, which is important, and it's wonderful that they're doing it. They fully recognize the issue that we also recognize. Our hypothesis is that there are a lot of organizations that can't do that on their own, and even the ones that can will struggle to maintain it forever. So Qualified exists both to serve those that can't build it themselves and to help those that can't maintain it over the long run.
HCI: Is there a worry that smaller community health systems, physician groups, and FQHCs that want to deploy these tools will struggle to keep up with this kind of governance and infrastructure, and that this will lead to AI haves and have-nots?
Mate: I do think there’s a big risk of us moving into having AI-enabled institutions and ones that are not. That is a challenge. And I don't think it's just a challenge of safety net institutions vs. non-safety net; I think it's a problem that exists even among health systems.
HCI: Sure. There are rural, two-hospital health systems that don't have a lot of money for IT infrastructure.
Mate: That's right. That's part of the reason we set Qualified Health up as a public benefit corporation: to address that underserved set of markets and to create pricing structures that make it possible. We can build our company to serve that part of the market, in addition to the large urban academic centers and the for-profit institutions.
HCI: Is Qualified Health currently working with a handful of health system customers and piloting some of these tools?
Mate: Two handfuls. We've just signed our 10th customer, and these are pretty different health systems. Some of them are big, some of them are small. We have a group specialty medical practice. So it's not all hospitals alone. And we've got massive 30-plus-hospital organizations. We've got big academic centers. Geographically, they're all over the country. A little bit of that is opportunistic, but a little bit of that is by design as well, so that we can demonstrate that what we can do will be robust across a lot of different kinds of organizational environments.
HCI: Is there a role for multi-stakeholder organizations like the Coalition for Health AI (CHAI) or certifying bodies like the Joint Commission and URAC?
Mate: Yes, we're part of CHAI. Nirav Shah, one of our co-founders, is a big part of the policy wing of CHAI, and I was part of the National Academy of Medicine group that created the AI code of conduct, which was essentially trying to set a standard. We released that paper about a month ago to describe what safe, high-value AI governance should look like, what guardrails are needed, and what a code of conduct should contain. These are good and important efforts, and they will still need to be translated into meaningful technologies that can accomplish this job of digital monitoring, digital governance and digital evaluation of the AI tools being created. I think there will always be a role for regulators and standard setters, and then there will be a role for organizations like ours that help translate those standards into active technologies that health systems can use to do the job.
About the Author

David Raths
David Raths is a Contributing Senior Editor for Healthcare Innovation, focusing on clinical informatics, learning health systems and value-based care transformation. He has been interviewing health system CIOs and CMIOs since 2006.
