Steven Lin, M.D., founder and executive director of the Stanford Health Care AI Applied Research Team, is excited about the potential for artificial intelligence in healthcare, but he wants to see a greater focus on primary care.
Speaking during December’s Primary Care Transformation Summit, Lin noted that only 3 percent of FDA-approved AI tools are intended for primary care. The vast majority are in specialty areas: more than 50 percent of the tools are in radiology, 20 percent in cardiology and 8 percent in neurology. “That’s where the field has really focused its research and development, and we are really missing out on the largest potential end user group for all of AI in healthcare, which is primary care,” he said.
Despite being in charge of an AI research group at Stanford, Lin stressed that he doesn’t have an AI background at all. “My perspective going into this field has always been one of a frontline primary care physician, and also I have clinical operational leadership roles at Stanford. So it's through that lens of frontline primary care delivery that I'm looking at how all of this evolved over the last couple of years,” said Lin, who is also vice chief for technology innovation in the Division of Primary Care and Population Health at Stanford Health Care.
The challenge he faced as the medical director of the faculty practice at Stanford was that so many providers were burning out and quitting medicine altogether. “I was looking for solutions to keep them practicing, keep up the joy of medicine. One of the opportunities that came up was an opportunity to partner with Google on developing an ambient AI medical scribing technology for relieving documentation burden,” he recalled. “That was my first project in artificial intelligence and I thought, I don't know anything about AI, but if this is what AI is, if it's about making sure that we can deliver better care with happier doctors, then I'm all in.”
As Lin looked around at what was happening in the field of healthcare AI, he thought three things were missing.
One was a lack of focus on primary care, which actually delivers 52 percent of all care in the U.S. — more than all other specialties combined. “There was a mismatch there, and I thought that I needed to raise awareness of the importance of primary care in the development of AI,” Lin said.
The second one involved implementation. “There was a lot of cool stuff happening in the data science sphere, increasingly sophisticated models that were being built, but a general lack of focus on how to actually implement those models in real-world clinical settings,” Lin said.
The third was around diversity, equity and inclusion. Research activities are heavily concentrated around a very short list of affluent geographies and academic medical centers, and are not really involving the community perspective or the patient voice. “That's why I created my research group at Stanford to address all three of those,” he said.
His group is focused on many different applications of AI and it distributes them into various work streams. One particular category is around clinical decision making — tools that physicians use one-on-one at the bedside with an individual patient for an individual encounter. Usually these are tools that can help with diagnosis, for example, or with making better decisions around chronic management of a disease condition.
Another broad bucket is around population health use cases, looking at an entire cohort of patients that a health system is responsible for and identifying those who are at highest risk of a preventable emergency department visit or hospitalization, trying to provide better quality care but also lower the costs of care.
A third related application is around value-based care in risk-bearing arrangements where you really have to care about the quality of care for a population of patients, he said.
A fourth one is around transitions of care. “Whenever patients move from one healthcare setting to another — the floor to the ICU or outpatient to inpatient — that's where a lot of the gaps in quality occur,” Lin said. “So we are looking at tools that make that care coordination better, and to ensure that patients are not lost during those transitions.”
“The final big bucket that we work on is around reducing administrative burden for providers, involving clinical documentation, chart review, prior authorizations — all of these clerical tasks that really shackle physicians to the EHR and strangle their practices,” he said. “We want to get doctors back to the practice of seeing patients, and these are the tools that help them do that.”
One emerging application of AI that Lin’s group is focused on is the increase in patient messages that has occurred since the COVID pandemic began. “At Stanford, for example, even though we have a relatively small clinical footprint in primary care, every single day we get 5,000 messages from patients, and they're all messages that we need to get to at some point in our day, but don't have time for, and the system is not built to handle,” he explained. “Here’s where you can apply a large language model like ChatGPT, for example, to draft replies to patient messages that physicians can review and edit and send back, and hopefully save time and decrease the cognitive burden of having to respond to all of these messages on top of all of the work that they're doing already.”
Another key area is chart documentation. There are now dozens of companies that have produced ambient AI medical scribes that can listen in on the conversations that physicians are having with patients and generate notes for the physicians to review and edit. That saves a significant amount of time. “I think that most patients don't know that for every one hour physicians are spending in front of patients, we’re spending two additional hours in front of the computer doing stuff like reviewing charts and writing notes,” Lin said. “Chart documentation is the second highest burden in terms of EHR time on physicians, so being able to apply AI to do that allows physicians to get back to what they really love doing, which is talking to patients and seeing patients face to face without worrying about all of that.”
Lin said that primary care physician organizations and stakeholder groups need to be talking more with the industry and academic leaders who are building these tools, helping them understand what the primary care use cases are and why they're different from the specialty care use cases.
“If we're really interested in unleashing the power of AI for the broadest population of patients, we have to think about the human-centered portion of things,” Lin explained. “I think it is also equally important, if not more important, that these tools have to take into account the very complex and ultimately social-based interactions that underpin all of care delivery in primary care. So we're a very relationship-centered specialty, and there is the perception that AI can get into the middle of that very important therapeutic relationship. So how do you design AI in a human-centered way that's not striving to replace a human healthcare provider, but rather to augment their abilities to take care of patients? How do you ensure that it's designed in a way that doesn't have the AI intrude and disrupt that very important, almost sacred relationship between the patient and their primary care provider, but do so in a way that doesn't increase the cognitive burden on the physicians and makes it easier for providers to concentrate all of their energies on the patients in front of them, and not all of the noise that's happening in the background? That's what I mean by human-centered. And that's an incredibly important thing to consider: pairing the human-centered design with the right use cases for the right problems.”
Another question that needs to be addressed is how you discuss AI-based tools with patients.
“There's no question in my mind that being transparent and being able to explain to patients that AI is involved with their care is important,” Lin said. “I think the real question is, how do you do that in a way that doesn't overly alarm folks, but is also done in the spirit of transparency so that we're not hiding anything about how these decisions are being made? It’s a really interesting discussion. I think that we're gonna find out a lot more about the best practices of how physicians are communicating with patients with AI in the middle, as more of these use cases are actually emerging, and we'll be able to see exactly what is the best way of making sure that patients are informed.”
Physicians need to be informed, too, he stressed. “Sometimes they're not even aware that the AI is happening in the background. And so it really is not just incumbent upon individual providers and patients to have that dialogue, but on systems to think about what is their governance approach to AI. What is the policy around the use of AI in patient care? All of those are moving targets, and actually the next couple of years I think will be very exciting for us to figure all of that out.”