There are great reasons for using transcription. It is highly efficient and creates an information-rich, tremendously effective vehicle for human communication. Historically, however, transcription has carried the reputation that human abstractors are needed to transform the data into computer-readable codes for secondary uses such as analytics, reporting, decision support, billing, research and quality initiatives. Abstraction is effective, but costly and time-consuming. If the existing documentation is not complete enough to support these uses, the abstractors must engage with the clinicians to “fill in the blanks,” which further increases costs and delays.
Can we reap the benefits of transcription’s unique ability to support clinical care while also supporting the various secondary uses of the data and still avoid the costs and delays of human abstraction? Transcription is ultimately a transaction between the person creating the documentation and the downstream consumers of that documentation. Even document creators eventually consume their own documentation. We need to make sure that both ends of the process have the support they need to succeed. Today’s technologies finally make that possible, and a number of institutions are already embarking on “end-to-end transcription.”
With end-to-end transcription, they are building systems that display a patient’s lifetime history of problems, medications, allergies and other clinical information. This information is collated and neatly organized into an easily reviewed summary, even though much of that data was originally recorded solely in the free-text dictations. Each problem, medication and allergy is tied to each underlying document in which it was found, so that the caregiver can quickly and easily verify the information and view it in clinical context.
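The collation described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation; the `ProblemEntry` structure, the tuple format and the example document IDs are assumptions, though the SNOMED CT codes shown are real concepts.

```python
from dataclasses import dataclass, field

@dataclass
class ProblemEntry:
    """One row in the collated summary, with provenance links back to each source document."""
    code: str                                        # e.g., a SNOMED CT concept ID
    description: str
    source_docs: list = field(default_factory=list)  # IDs of dictations where the problem was found

def collate(extractions):
    """Merge per-document extractions into one de-duplicated summary list.

    `extractions` is a list of (doc_id, code, description) tuples, as an
    NLP engine might produce while processing each free-text dictation.
    """
    summary = {}
    for doc_id, code, description in extractions:
        entry = summary.setdefault(code, ProblemEntry(code, description))
        entry.source_docs.append(doc_id)
    return list(summary.values())

# The same problem found in two different dictations collapses to one entry
# that still points back to both documents for verification in context.
problems = collate([
    ("doc-001", "44054006", "Type 2 diabetes mellitus"),
    ("doc-017", "44054006", "Type 2 diabetes mellitus"),
    ("doc-017", "38341003", "Hypertension"),
])
```

Keeping the `source_docs` list on every entry is what lets the caregiver click through from the summary to the underlying dictation and verify the information in clinical context.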
With end-to-end transcription, they are building dashboards with graphs, lists and drill-down capabilities that drive strategic planning and continuous improvement efforts. These efforts are powered by the free-text transcription data. Dashboards show the clinical quality of care, clinician adherence to care guidelines and performance against meaningful-use metrics (such as how often patients with chronic obstructive pulmonary disease had their lung function evaluated with spirometry, which patients were missed and which doctors were most and least likely to document adherence to the guidelines). Dashboards also support population health and ACO activities, such as identifying patients with chronic diseases and stratifying them by severity in order to assess risk and target interventions.
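A dashboard metric like the spirometry example reduces to a simple aggregation over coded visit data. The sketch below is illustrative only; the flat dict-per-visit format and field names are assumptions about what the coded transcription data might look like once extracted.

```python
from collections import defaultdict

def adherence_by_clinician(visits):
    """Per-clinician adherence rate for one guideline metric:
    of each clinician's COPD patients, what fraction had spirometry documented?

    `visits` is a list of dicts with keys 'clinician', 'has_copd' and
    'spirometry_documented', assumed to be derived from the coded dictations.
    """
    counts = defaultdict(lambda: [0, 0])  # clinician -> [adherent, eligible]
    for v in visits:
        if v["has_copd"]:
            counts[v["clinician"]][1] += 1
            if v["spirometry_documented"]:
                counts[v["clinician"]][0] += 1
    return {c: adherent / eligible for c, (adherent, eligible) in counts.items()}

rates = adherence_by_clinician([
    {"clinician": "Dr. A", "has_copd": True,  "spirometry_documented": True},
    {"clinician": "Dr. A", "has_copd": True,  "spirometry_documented": False},
    {"clinician": "Dr. B", "has_copd": True,  "spirometry_documented": True},
    {"clinician": "Dr. B", "has_copd": False, "spirometry_documented": False},
])
```

The same eligible-versus-adherent pattern generalizes to the other metrics mentioned: swap the eligibility test (the chronic disease) and the adherence test (the documented intervention).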
Dashboards show the quality of documentation to support appropriate coding and billing (particularly in an ICD-10 era), such as identifying which clinicians most often fail to document the stage of a patient’s chronic kidney disease or the specific type of congestive heart failure. Dashboards also provide a list of visits for which excisional debridement was almost certainly performed, but the documentation only supports coding for simple debridement. Each such under-coded visit can represent thousands of dollars in lost revenue. Even over-coding can often be identified, helping to reduce the risk of accusations of fraud in a Recovery Audit Contractor (RAC) audit.
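The chronic-kidney-disease example above is essentially a filter over coded visits: the condition is documented, but the detail needed for specific ICD-10 coding is not. A minimal sketch, with visit fields that are assumptions for illustration:

```python
def ckd_stage_gaps(visits):
    """Flag visits where chronic kidney disease was documented
    but its stage was not -- a specificity gap for ICD-10 coding.

    Each visit dict carries boolean flags assumed to be derived
    from the coded free-text dictation.
    """
    return [
        v["visit_id"]
        for v in visits
        if v["ckd_documented"] and not v["ckd_stage_documented"]
    ]

gaps = ckd_stage_gaps([
    {"visit_id": "V1", "ckd_documented": True,  "ckd_stage_documented": True},
    {"visit_id": "V2", "ckd_documented": True,  "ckd_stage_documented": False},
    {"visit_id": "V3", "ckd_documented": False, "ckd_stage_documented": False},
])
```

The debridement example follows the same shape: a clinical indicator present in the text (findings consistent with excisional debridement) paired with the absence of the documentation element that would support the higher-value code.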
Perhaps most importantly, dashboards are emerging that give real-time feedback to the clinician during dictation to drive improvements at the point of care. This non-interruptive, automated feedback helps the clinician record the visit with the right details needed to support billing, coding, regulatory reporting and research. It also helps the clinician follow commonly accepted standards of care that improve quality and patient outcomes, such as identifying when a radiologist has documented a critical finding (e.g., a collapsed lung) but has not documented that the treating physician was notified of the problem. Real-time feedback during dictation helps clinicians generate the best decisions and documentation at the point of care, which lets downstream activities proceed with greater efficiency and effectiveness. The goal is increased revenue, reduced costs and, most importantly, better patient outcomes – all with less effort on the part of already overworked clinicians.
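The collapsed-lung example above is a rule over the codes extracted so far from the dictation: a critical finding is present, the notification is not. Here is a minimal sketch; the concept labels and rule format are invented for illustration and are not any real system's vocabulary.

```python
def feedback_for_dictation(extracted_codes):
    """Non-interruptive feedback rules evaluated as codes are
    extracted in real time from the clinician's dictation.

    `extracted_codes` is a set of concept labels assumed to come
    from the NLP engine; labels here are hypothetical.
    """
    messages = []
    if ("critical_finding:pneumothorax" in extracted_codes
            and "communication:treating_physician_notified" not in extracted_codes):
        messages.append(
            "Critical finding documented (pneumothorax), but notification "
            "of the treating physician is not documented."
        )
    return messages

# Critical finding dictated, notification not yet documented -> one prompt.
msgs = feedback_for_dictation({"critical_finding:pneumothorax"})

# Once the notification is dictated, the prompt goes away.
msgs_ok = feedback_for_dictation({
    "critical_finding:pneumothorax",
    "communication:treating_physician_notified",
})
```

Because the rule only appends a message rather than blocking input, the feedback stays non-interruptive: the clinician can keep dictating and address the prompt when convenient.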
Achieving this ideal end-to-end transcription state requires the right supporting architecture. That includes capture and retention of all the relevant data in a fast, flexible data store (preferably cloud-hosted to reduce costs and enhance manageability); real-time translation of the free-text transcription into the computer-readable codes mandated by meaningful use, such as SNOMED-CT, RxNorm and LOINC; additional metadata tagging to describe the context in which those codes were found, such as the certainty (e.g., positive, negative, maybe), timing (past, present, future) and subject (patient or family member); and an infrastructure to meaningfully and flexibly query the data and display it in dashboards, worklists and at the point of care.
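The metadata tagging described above matters because a raw code alone is ambiguous: “myocardial infarction” might be the patient's active problem, a family history, or a diagnosis that was ruled out. A minimal sketch of a coded fact carrying that context, with a sample query; the `CodedFact` structure is an assumption for illustration, though the SNOMED CT codes are real concepts.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedFact:
    """A clinical code plus the context metadata described in the text."""
    system: str      # e.g., "SNOMED-CT", "RxNorm", "LOINC"
    code: str
    certainty: str   # "positive" | "negative" | "maybe"
    timing: str      # "past" | "present" | "future"
    subject: str     # "patient" | "family"

def active_patient_facts(facts):
    """Query: keep only facts asserted as positive, present and about
    the patient -- e.g., when building an active problem list."""
    return [
        f for f in facts
        if f.certainty == "positive"
        and f.timing == "present"
        and f.subject == "patient"
    ]

facts = [
    CodedFact("SNOMED-CT", "22298006", "positive", "present", "patient"),  # myocardial infarction
    CodedFact("SNOMED-CT", "22298006", "positive", "past", "family"),      # family history of MI
    CodedFact("SNOMED-CT", "73211009", "negative", "present", "patient"),  # diabetes ruled out
]
active = active_patient_facts(facts)
```

Without the certainty, timing and subject tags, all three facts above would look identical to a downstream query, which is exactly why the code translation alone is not enough.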
It’s been a long time in the making, but end-to-end transcription is finally here, and it’s starting to change everything we thought we knew about creating clinical documentation.
About the Author
Jon Handler, M.D., is chief medical information officer, M*Modal. For more on M*Modal: www.rsleads.com/305ht-204