At Stanford Medicine Children’s Health, Ongoing Gains Leveraging AI

March 21, 2025
Natalie Pageler, M.D., the organization’s chief health informatics officer, discusses the organization’s progress in using AI

At Stanford Medicine Children’s Health, leaders are continuing to move forward to advance their work in artificial intelligence (AI) and machine learning (ML). One of the organization’s senior leaders helping to guide that effort is Natalie Pageler, M.D., MEd, the organization’s chief health informatics officer and a pediatric critical care physician at Stanford Medicine Children’s Health, and division chief for clinical informatics at Stanford Medicine.

Recently, Healthcare Innovation Editor-in-Chief Mark Hagland spoke with Dr. Pageler about the ongoing work that she and her colleagues have been engaged in in the AI area. Below are excerpts from that interview.

Just for context, about what percentage of your time is dedicated to clinical practice versus your administrative work helping to lead colleagues around AI initiatives?

At this point, it’s about 20 percent clinical and 80 percent administrative. I’m still practicing as a pediatric critical care physician and have been practicing clinically since 2010.

From your perspective as a clinician informaticist, what makes AI different in the clinical pediatric space?

There are three different, interrelated categories involved here. First, is the algorithm appropriate for the target population? Were the algorithms developed on adults, and will they be appropriate for children or not? There are similar issues with medications being tested and approved in adults. The same principles apply in AI, and it is important to make sure that algorithms developed with adult data apply in pediatrics. Also, there’s so much variability across the age spectrum in pediatrics: a neonate might be incredibly different from a teenager, for example. It is important to test algorithms across the entire age spectrum.

The second issue is that, to get to better, more appropriate algorithms, we need pediatric data, and that in itself is an incredible challenge. For one thing, there are additional regulatory requirements around acquiring pediatric data, which means that pediatric data is often excluded from projects because it is harder to obtain from a regulatory standpoint. And because disease frequency is lower in children, we’re starting with much smaller data sets anyway.

The third significant consideration in pediatrics is the complicated relationship between the child and their guardian. How do you get permission to use that data: from the child? Their guardian? A teenager? Should parents release the data? So that consent piece becomes a major issue. Furthermore, data from the child and the guardian sometimes gets intertwined in the medical record. For example, for a child, the language recorded for the patient may refer to the parent’s language rather than the child’s.

There is also an opportunity to apply AI to help manage the complicated child/guardian relationship. We have used AI to identify the individual being addressed: is it the parent I’m talking to, or the teenager, when interacting through our portal, for example? For the most part, we want the parent to be involved, and the child wants the parent involved. But for some very vulnerable populations among teenagers, they may need access to confidential care, so we need to separate out those scenarios. And that’s where AI can be helpful.

For example, in California, we have state laws that say that teenagers have the right to consent to care around pregnancy prevention, STD treatment, drug and alcohol treatment and mental health treatment, and very specific laws around what the parent can see without the patient’s permission. We have to protect the teenager’s rights. We have to look at a note and say, 95 percent of the information will be benign and important for the guardian to know, but 5 percent might require different protections.

Are there clinical areas where it becomes more difficult to develop algorithms? And what about sepsis, which leaders in patient care organizations have found turns out to be a very difficult area in which to advance?

You’re correct, sepsis is a very challenging area. The difficulty involves developing the algorithm, and then how you respond to that algorithm. And no algorithm is perfect in terms of prediction and advisement for intervention. And developing an algorithm specific enough to design a process and an intervention is even more complicated in pediatrics, because the number of children developing sepsis is already so much smaller; and you have neonates to teenagers, with a huge spectrum of levels of physical development. And where most adults will get fevers because of infection, a neonate’s temperature will drop.

So, whenever we’re looking at high-stakes but low-frequency events, like sepsis, that will be the holy grail and really hard to go after. As we get to standardized processes, documentation, and physiology, we’ll make progress. But there’s so much lower-hanging fruit where we can improve patient care and operations. High-stakes, low-frequency topics can be saved for later.

What are some of the low-hanging-fruit areas?

Patient engagement has been such a major topic and is so critical for us, and we need to communicate effectively with teenagers and parents. So, we’ve developed a natural language processing (NLP) algorithm to determine where there’s confidential content. And we’ve developed a template for physicians to optimize documentation, to make sure that we are able to share most notes with the patient and family, because they’re appropriate notes. We’ve also done a lot of work on the patient portal, to allow teenagers to gradually take over more responsibility for their own care with providers. We developed an NLP algorithm here to evaluate messages from teenagers in the patient portal. We shared the algorithm with Rady Children’s Hospital in San Diego and Nationwide Children’s Hospital in Columbus, Ohio—and we found that, in most cases, the parents were using the portals we had set up for the teenagers. So, we’ve worked with vendors to get the accounts set up correctly. And of course large language models are everywhere, and we’re testing large language models to determine whether you’re talking to the patient or the guardian. That allows us to communicate better with patients and families.

Could you share another example?

One is looking at patients with acute kidney injury in the hospital and trying to determine whether they might go on to have chronic kidney disease, so that you could refer them to an appropriate follow-up clinic. Another one is using large language models to review our clinical incident reports and identify clinical themes. For example, we’ve looked at clinical incident reports in obstetrics to determine areas where maternal hemorrhage has taken place, and we’re using that to determine where there might be process issues, and using the data to intervene earlier.

How will this evolve forward over the next few years?

There’s so much work to be done. And there are multiple layers. How can machine learning support care? And how do we engage our staff and patients? There’s so much opportunity in the operational and clinical spaces that we’ll continue to pursue. And how do we create our systems and continue to advocate for appropriate regulations? We’re thinking through how we develop these systems, appropriately evaluate them, and update them, to improve care. There’s so much work to be done in the process, regulatory, and policy spaces to meaningfully introduce and improve these tools.

Might you be able to share any advice for those just beginning the journey?

Have a good governance process over the implementation of these tools; create a process for evaluation, implementation, post-implementation evaluation, and the ongoing sustainability of programs. And think about the high-value, low-risk areas for your first interventions. That’s the smart way to go about it. There’s so much opportunity in relatively low-risk spaces; look at those opportunities first, and venture into the higher-risk spaces after you’ve gotten more experience. Make sure you have the resources available to calibrate the tools you’re using in your own organization, or work with a vendor to optimize tools for your organization.
