Can Quality Reporting, Clinical Decision Support Go Hand in Hand?
In his reporting from the HIMSS conference in Orlando, my colleague Rajiv Leventhal described the mood of exhaustion providers feel about the array of regulatory changes and federal programs they have to report on, including meaningful use, ICD-10, PQRS, HIPAA, and ACO measures.
Rajiv noted that at the ONC Town Hall on Feb. 24 at HIMSS, a member of a Kentucky regional extension center asked whether any effort was under way to align all of the regulatory requirements on physicians so they don’t pile one on top of the next. “When can we simply practice medicine?” the attendee asked in frustration.
I mention this because I saw a presentation at HIMSS that addressed this problem head-on. Executives from the 700-bed Medical University of South Carolina (MUSC) described how they are working to develop a single, comprehensive organizational blueprint for meeting quality measure requirements.
One hint that MUSC has spent some time thinking about this topic is that it has an executive with the title “manager of regulatory analytics.” The woman who holds that title, Itara Barnes, described how, like most large healthcare organizations, MUSC faces an extensive list of measures required by certification and accreditation bodies, federal and private payers, public health reporting programs, and its own organizational quality initiatives. Previously, the organization’s responses to those reporting requirements operated in silos. A complicating factor, she said, is that the same measure concepts are used in multiple programs but applied with differing specifications, versions of those specifications, and submission mechanisms.
MUSC decided to step back and review measures across care settings and group them by family (a group of clinical quality measures related to a single care process or disease state, such as diabetes) to identify common structured data requirements and assess their impact on measurement.
“Where measures are used for multiple programs or have multiple versions of specifications, we are defining one comprehensive workflow that collects data used for all programs,” Barnes said.
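To make that grouping idea concrete, here is a minimal sketch of how measure families and their program-specific variants might be modeled. All of the names, fields, and example values below are hypothetical illustrations of the concept Barnes described, not MUSC’s actual data model or measure specifications.

```python
# Illustrative sketch only: hypothetical names and values, not MUSC's system.
# A "measure family" groups related clinical quality measures (e.g., diabetes)
# so one data-capture workflow can feed every program that uses a variant
# of the same measure concept.

from dataclasses import dataclass, field

@dataclass
class MeasureVariant:
    program: str        # e.g., "Meaningful Use", "PQRS", "ACO"
    spec_version: str   # each program may pin a different specification version
    submission: str     # submission mechanism, e.g., "QRDA file" or "registry"

@dataclass
class MeasureFamily:
    name: str                                         # clinical topic of the family
    data_elements: set = field(default_factory=set)   # structured data the EHR must capture
    variants: list = field(default_factory=list)      # all program-specific versions

    def required_data(self):
        # A single comprehensive workflow captures the union of data elements,
        # so one point of collection satisfies every program variant.
        return sorted(self.data_elements)

# Hypothetical example: one diabetes measure family feeding two programs.
diabetes = MeasureFamily(
    name="Diabetes: HbA1c control",
    data_elements={"diagnosis", "HbA1c result", "result date", "encounter"},
    variants=[
        MeasureVariant("Meaningful Use", "v3", "QRDA file"),
        MeasureVariant("PQRS", "v7", "registry"),
    ],
)

print(diabetes.required_data())
```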
MUSC considers data collected through all relevant activities and points in the care process, not just the single point targeting the measure’s clinical quality action. “This is a team sport,” Barnes said. “Everyone has a place in this process. When they understand where they touch the measure and how they can influence the outcome, we get more buy-in.”
And rather than keeping the focus on meeting reporting requirements, MUSC is building the reporting into the underlying evidence-based care guidelines in the EHR. “We are developing comprehensive evidence-based clinical decision support tools to drive cultural transition and compliance and to integrate data capture into workflow in a meaningful way,” said Elizabeth Crabtree, director of evidence-based practice and an assistant professor at MUSC.
The organization’s Clinical Decision Support Oversight Committee works to design and implement CDS tools that drive evidence-based practice with data capture for reporting built in, so that quality measurement is inextricably linked to care. “We have shifted our focus on measurement to look at the processes of care. That way, providers are more engaged in quality measures,” Crabtree said. “They are not just focused on reporting and regulations. It’s becoming meaningful, and more of a feedback loop.”
“I see embedding data capture for reporting in evidence-based order sets as icing on a cake,” she said. “You wouldn’t like the cake without it. They go hand in hand in a nice fashion.”