Top 10 Tech Trends: Natural Language Processing
The problem poses a virtual Gordian knot for clinical informaticists in healthcare: how to balance physicians' fundamental need to use documentation to create and preserve the patient narrative, both for the doctor who initially sees the patient and for the subsequent doctors involved in that patient's care, with the need to extract a host of other information from the physician note for reimbursement, outcomes quality, population health, and other worthy purposes.
Given the proliferation of use cases being layered on top of the physician note beyond its fundamental patient care purposes, is it any wonder that U.S. physicians are becoming increasingly restive? Indeed, no one seems willing to predict whether the number of purposes that doctors are required to fulfill through their documentation will decrease anytime soon, even as the doctors themselves increasingly ask for task simplification and improved workflow support.
What to do? Enter natural language processing, 2.0 and beyond. The most pioneering teams in healthcare are pushing forward the envelope of the best tool available for solving the physician-note purpose-balancing dilemma, natural language processing, in order to balance all those needs more fully and, just maybe, make physicians' lives easier while improving care delivery in the process.
For a peek into the future, just ask Christopher Longhurst, M.D. and Jonathan Palma, M.D., what’s possible. Longhurst, the CMIO, and Palma, medical director of analytics, both at Lucile Packard Children’s Hospital at Stanford University, in Palo Alto, Calif., have been working on a type of natural language processing that they’re referring to as “text analytics.” And what might “text analytics” mean? “The way I think about it,” Longhurst says, “is that NLP is probably a subset of analytics; but whereas NLP requires a sort of a priori set of knowledge, whether ICD-9 or SNOMED, in order to extract data elements from a text, text analytics is the broadest way of looking at extraction. NLP expresses a specific approach, and text analytics is the general concept around extracting data from free-text data.”
So, for example, Palma says, “If you want to extract the list of possible diagnoses from a text document, you could do that with an NLP approach, where you’re trying to identify ICD-X codes in the document; or you could take a more inferential approach.” In other words, text analytics, or whatever anyone else might call it, will be a process in which the data extraction tool is able to cast a wider, more general net. As Palma notes, “Chris and I are working on the ability to identify patients who have a hospital-acquired condition, such as a central line infection. Nobody clicks a text box saying, ‘This patient has a central-line infection,’ and there’s no ICD-9 code for central-line infection, but those are things that are captured in free-text documents.” So the potential is there, he insists, for broader searches that can support population health management, value-based purchasing, and any number of other valuable purposes, while, potentially, moving the physician note back closer to its core clinical purposes.
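The contrast Longhurst and Palma draw can be illustrated with a minimal sketch. The tiny code vocabulary, the helper names, and the example note below are all hypothetical, invented for illustration; a real NLP pipeline would draw on a full terminology such as ICD-9 or SNOMED and far more sophisticated linguistics. The point is only the shape of the two approaches: one matches an a priori vocabulary, the other infers a condition from co-occurring cues that no single code or checkbox captures.

```python
# Hypothetical miniature vocabulary mapping ICD-9-style codes to phrases.
# Real systems use full terminologies (ICD-9, SNOMED) and richer matching.
CODE_VOCAB = {
    "999.31": "infection due to central venous catheter",
    "486": "pneumonia",
}

def nlp_extract(note: str) -> list[str]:
    """Vocabulary-driven NLP: return codes whose phrase appears in the note."""
    text = note.lower()
    return [code for code, phrase in CODE_VOCAB.items() if phrase in text]

def text_analytics_flag(note: str) -> bool:
    """Broader inferential scan: flag a possible central-line infection when
    a line/catheter mention co-occurs with an infection cue, even though
    nobody clicked a box and no exact code phrase appears."""
    text = note.lower()
    line_terms = ["central line", "central venous catheter", "picc"]
    infection_terms = ["infection", "febrile", "positive blood culture"]
    return (any(t in text for t in line_terms)
            and any(t in text for t in infection_terms))

note = ("Day 4 post-op. PICC in place. Patient febrile overnight; "
        "positive blood culture drawn this morning.")
print(nlp_extract(note))          # no verbatim code phrase -> []
print(text_analytics_flag(note))  # co-occurrence inference -> True
```

The vocabulary-driven pass misses this note entirely because no code phrase appears verbatim, while the inferential pass flags it, which is exactly the gap between NLP as a "specific approach" and text analytics as "the general concept around extracting data from free-text data."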
COLLABORATIVE POTENTIAL IN CLAIMS DATA?
The potential for deriving data from patient information might even be greater in situations in which an enterprise encompasses both the provider and payer functions, as at the 20-plus-hospital University of Pittsburgh Medical Center (UPMC) health system. There, Pamela Peele, Ph.D., chief analytics officer in the UPMC Insurance Services Division, is leading a team of data analysts who are using their advanced NLP and other tools to appropriately extract information from claims data to support proactive health risk assessment and population health and care management, for UPMC patients who are also UPMC health plan members. The potential to transform claims data into information useful for improving patient care is indeed there, Peele insists. And if such initiatives as UPMC’s are successful, perhaps some of the needs weighing down the physician note might ultimately be shifted away from the note, too.
Coming back to physician documentation, the need to transform the documentation process on behalf of physicians is becoming more urgent as more uses for the EHR get piled on top of already-existing ones, clinical informaticist leaders agree. Indeed, says S. Trent Rosenbloom, M.D., M.P.H., an associate professor at Vanderbilt University and a practicing internist and pediatrician at Vanderbilt University Medical Center (both in Nashville), “I think that the last thing people think about in any EHR/EMR system is how well it supports physician documentation across the system.” The key to optimizing physician documentation, Rosenbloom says, will be to develop EHRs that allow physicians maximum flexibility to document in ways that work for them, while also extracting the data needed for all those other purposes. Advanced versions of NLP, Rosenbloom believes (he prefers the term “text processing” to “text analytics”), will be critical elements in that prescription.
Colin Banas, M.D., CMIO at VCU Health System in Richmond, Va., probably sums up the sentiments of many physician informaticists when he says, “I do have high hopes for natural language processing, because the sacred element of the note is to help us deliver care and to help our colleagues deliver care. And if NLP is part of that equation because it preserves the patient narrative, then I’m all for it, as part of a long-term solution that can help me and everyone else.”