Policy Researchers Examine SDOH Factors’ Application to Risk Adjustment in Risk-Based Payment

April 6, 2021
A team of healthcare policy researchers, writing in the April issue of Health Affairs, has examined the complex tangle of issues around whether or not to include social determinants of health factors in risk-based payment models.

Even as it remains a relatively new area under consideration by policy leaders and others in healthcare, risk adjustment using social determinants of health (SDOH) factors could help advance health equity, according to a team of healthcare policy researchers.

Writing in the April issue of Health Affairs, David R. Nerenz, J. Matthew Austin, Daniel Deutscher, Karen E. Joynt Maddox, Eugene J. Nuccio, Christie Teigland, Eric Weinhandl, and Laurent G. Glance have written the article “Adjusting Quality Measures for Social Risk Factors Can Promote Equity in Health Care.”

As the researchers write in the abstract to their research article, “Risk adjustment of quality measures using clinical risk factors is widely accepted; risk adjustment using social risk factors remains controversial. We argue here that social risk adjustment is appropriate and necessary in defined circumstances and that social risk adjustment should be the default option when there are valid empirical arguments for and against adjustment for a given measure. Social risk adjustment is an important way to avoid exacerbating inequity in the health care system.”

The researchers note that, “As the Centers for Medicare and Medicaid Services (CMS) approaches the goal of tying 90 percent of traditional Medicare payments to the value of care delivered, risk adjustment—adjusting quality measures for patient health risk—takes on greater significance. Risk adjustment accounts for factors affecting a quality measure that do not reflect the quality of care. For example, if older patients have poorer outcomes for reasons not involving the quality of care, adjusting outcome measures for patient age produces fairer ‘apples to apples’ comparisons among the entities being assessed. There is widespread consensus that it is appropriate to adjust quality measures—particularly outcome measures—for clinical factors present before care, such as age, comorbidities, and disease severity. However, adjusting for social factors at the patient level (for example, poverty) or the area level (such as neighborhood poverty), even when such adjustments are found to affect health care outcomes, remains controversial.”
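
To make the ‘apples to apples’ idea concrete, the sketch below shows one common way such adjustment is implemented: a logistic model predicts each patient’s expected probability of a poor outcome from risk factors present before care, and a provider is judged on its observed event rate relative to the rate expected for its case mix. The example is not drawn from the article; the data, variable names, and model are all hypothetical.

```python
# Illustrative sketch (not the authors' model): risk adjustment via an
# observed-to-expected (O/E) ratio. A logistic model predicts each patient's
# expected probability of a poor outcome from pre-existing risk factors;
# a provider's adjusted performance is its observed event rate divided by
# the rate the model expects given its case mix.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
provider = rng.integers(0, 20, n)          # 20 hypothetical providers
age = rng.normal(70, 10, n)                # clinical risk factor
comorbidity = rng.binomial(1, 0.3, n)      # clinical risk factor
poverty = rng.binomial(1, 0.25, n)         # social risk factor (illustrative)

# Simulated outcome depends on patient risk factors, not on provider quality
logit = -4 + 0.04 * age + 0.8 * comorbidity + 0.5 * poverty
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit a risk model that includes the social risk factor
X = np.column_stack([age, comorbidity, poverty])
model = LogisticRegression().fit(X, outcome)
expected = model.predict_proba(X)[:, 1]

# Provider-level observed-to-expected ratios: values near 1.0 mean the
# provider's outcomes match what its case mix predicts
for p in range(3):                         # print a few providers
    mask = provider == p
    oe = outcome[mask].mean() / expected[mask].mean()
    print(f"Provider {p}: observed-to-expected ratio = {oe:.2f}")
```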

Significantly, “The current policy of the National Quality Forum (NQF) allows adjustment for social risk. However, the Office of the Assistant Secretary for Planning and Evaluation (ASPE) at the Department of Health and Human Services recently released a report to Congress that recommended against adjusting for social risk factors for the process and outcome measures used in quality reporting and value-based purchasing programs.”

There are a number of elements involved in this discussion, including the fact that there had already been a debate around the inclusion of SDOH elements in the CMS Hospital Readmissions Reduction Program after safety-net hospitals were found to be disproportionately penalized in that program. But now, the researchers note, the landscape has shifted. As they write, “Decisions about adjusting quality measures for social risk have become more important given the evolving context of quality measurement. The CMS Quality Payment Program now includes individual physicians and physician groups, advanced practice providers, and other clinicians in value-based payment models, raising concerns about potential biases against individual safety-net providers. Minority physicians, who are more likely than other physicians to serve socially vulnerable populations, may be particularly likely to receive payment penalties and to be identified in public reporting programs such as Physician Compare as poor-quality physicians. Similarly, postacute care providers, dialysis facilities, accountable care organizations (ACOs), and other plan or provider entities are subject to potential bias in the quality metrics used for value-based purchasing in those programs.”

The researchers outline three scenarios under which SDOH factors might figure into quality measures. In the first, adjusting for social risk supports the hospitals and clinicians caring for a disproportionate number of socially disadvantaged patients. In the second, adjusting outcome measures for social risk “would have the unintended consequence of masking poor-quality care” when socially disadvantaged patients receive “lower-quality care compared with other patients within the same provider.”

But what happens when “social risk factors have the same confounding effect on outcomes as clinical risk factors do, with the causal pathway to outcomes not involving quality of care”? The researchers write that, “For example, patients with heart failure who undergo surgery have worse outcomes than patients without heart failure. Hospitals performing surgery on more patients with heart failure have lower performance ratings than hospitals with fewer such patients even if they provide the same quality of care. This is why heart failure is included in risk-adjustment models for heart surgery.”

Ultimately, the article’s authors argue, “When risk-adjustment models include a large number of risk factors, the decision to include additional clinical risk factors does not usually depend on whether adding a risk factor improves a model’s statistical performance. Similarly, the decision to adjust for social risk should not be based primarily on whether adjustment makes a difference in model performance. Changes in the model discrimination (that is, the C-statistic) are not informative enough to enable decisions on whether to include an additional risk factor in a model.” And, they write, “We propose that the statistical evaluation of ‘does it matter’ be primarily based on the proportion of providers that change ranking with or without social risk adjustment, rather than on a model fit metric such as change in C-statistic or correlation between adjusted and unadjusted scores. More specifically, when one or more ‘use cases’ for a measure can be identified, analysis should focus on the impact of adjustment in those ‘use cases.’”
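
One way to read the authors’ proposed “does it matter” test is to fit the quality model with and without social risk factors and count how many providers change rank. The sketch below illustrates that comparison on simulated data; the data-generating assumptions, variable names, and the observed-to-expected ranking approach are illustrative, not taken from the article.

```python
# Illustrative sketch: compare provider rankings from a model WITHOUT a social
# risk factor to rankings from a model WITH it, and report the share of
# providers whose rank changes. All data and names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, n_providers = 8000, 25
provider = rng.integers(0, n_providers, n)
age = rng.normal(70, 10, n)
comorbidity = rng.binomial(1, 0.3, n)
# Social risk is unevenly distributed across providers (safety-net pattern)
poverty = rng.binomial(1, np.clip(0.05 + 0.02 * provider, 0.0, 0.6))
logit = -4 + 0.04 * age + 0.8 * comorbidity + 0.6 * poverty
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def provider_ranks(features):
    """Rank providers by observed-to-expected outcome ratio under one model."""
    model = LogisticRegression().fit(features, outcome)
    expected = model.predict_proba(features)[:, 1]
    oe = np.array([outcome[provider == p].mean() / expected[provider == p].mean()
                   for p in range(n_providers)])
    return oe.argsort().argsort()          # rank 0 = best (lowest O/E ratio)

ranks_clinical = provider_ranks(np.column_stack([age, comorbidity]))
ranks_social = provider_ranks(np.column_stack([age, comorbidity, poverty]))

changed = np.mean(ranks_clinical != ranks_social)
print(f"Share of providers whose rank changes with social risk adjustment: {changed:.0%}")
```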

As for their recommendations, the authors write that, “[F]or providers to accept the legitimacy of public reporting and value-based purchasing, they need to believe that risk-adjustment models adequately account for differences in case-mix. Providers need to know that they will not be penalized for caring for the sickest patients. Likewise, providers need to know that they will not be penalized for caring for patients with social risk factors.” Even so, they concede, “[W]e recognize that the case for social risk adjustment will not always be clear: There will be measures for which empirical arguments for and against adjusting for social risk are both valid.”

David R. Nerenz is the director emeritus of the Center for Health Policy and Health Services Research, Henry Ford Health System, in Detroit, Michigan.

J. Matthew Austin is an assistant professor at the Johns Hopkins Armstrong Institute for Patient Safety and Quality, Johns Hopkins University School of Medicine, in Baltimore, Maryland.

Daniel Deutscher is a senior research scientist at Net Health Systems, Inc., in Pittsburgh, Pennsylvania, and the director of patient-reported outcome measures at the MaccabiTech Institute for Research and Innovation, Maccabi Healthcare Services, in Tel Aviv, Israel.

Karen E. Joynt Maddox is an assistant professor of medicine in the Department of Internal Medicine, Washington University School of Medicine, in St. Louis, Missouri.

Eugene J. Nuccio is an assistant professor of medicine at the University of Colorado, Anschutz Medical Campus, in Denver, Colorado.

Christie Teigland is a principal in the health economics and advanced analytics practice at Avalere Health, in Washington, D.C.

Eric Weinhandl is a senior epidemiologist in the Chronic Disease Research Group at the Hennepin Healthcare Research Institute, in Minneapolis, Minnesota.

Laurent G. Glance is vice chair for research and a professor in the Department of Anesthesiology and Perioperative Medicine, University of Rochester School of Medicine, in Rochester, New York.
