Solid Clinical Recommendations

Oct. 23, 2015
Now that we have broad deployment of electronic health records, how do we make recommendation systems trustworthy?

In a recent article, Peter Bregman described a rapid escalation of anger between a father (Bregman himself) and his daughter over the messiness of a project that happened to involve playing with sand. His point was that parents, and managers in general, should adopt a simple and effective approach to communication by verbally:

1.    Identify the problem

2.    State what needs to happen

3.    Offer to help

He provides an example in a business context as follows:

Fred, this presentation made six points instead of one or two. I’m left confused. It needs to be shorter, more to the point, and more professional looking. Would it help if we talk about the point you’re trying to make?

Absent this kind of approach, defensive posturing and angry, heated arguments are the rule. The participants, full of rage, abandon the dialogue. There is an important lesson here for informatics.

The connection is that decision support is often a statement of what needs to happen. Descriptive, predictive, and prescriptive analytics can help to better predict diagnoses and therapies, and lead to better outcomes. All of this presupposes a minimal discipline that, when absent, deeply erodes trust. And with that erosion comes abandonment of the solution.

For example, earlier this year I was working with a team using natural language processing and Medicare rules for risk adjustment of Medicare Advantage enrollees' diagnosis lists. The goal was to surface relevant diagnoses that weren't otherwise explicitly and adequately documented. We identified many valuable opportunities. In our zeal not to miss anything, however, we also recommended diagnoses we really shouldn't have. Asking a doctor to sign off on a naïve recommendation erodes confidence, with the risk of solution abandonment.

If, on your first use of a GPS, you were wrongly led into a dead-end alley, you'd feel burned. Recommendations need to be right 99 percent of the time, adequately informed, and transparent, so that trust can be quickly established and maintained.

What follows is a list of the five minimal elements necessary to earn and maintain that trust:

1.    Expose the known information

2.    Raise associated considerations

3.    Assure important contextual considerations by summarizing #1 and #2

4.    Think it forward and backward

5.    Apply sanity checks
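As a sketch of how these five elements might be made transparent in software, consider a minimal record type whose fields mirror the list above. This is purely illustrative; the class name, field names, and sample values are my assumptions, not anything from the article or a real clinical system.

```python
from dataclasses import dataclass

# Illustrative sketch only: names and sample data are hypothetical,
# chosen to mirror the five elements of a trustworthy recommendation.
@dataclass
class Recommendation:
    known_information: list[str]          # 1. expose the known information
    associated_considerations: list[str]  # 2. raise associated considerations
    contextual_summary: str               # 3. assure context by summarizing #1 and #2
    forward_backward_rationale: str       # 4. think it forward and backward
    sanity_checks_passed: list[str]       # 5. sanity checks applied

rec = Recommendation(
    known_information=["HbA1c 9.1 percent", "metformin 500 mg twice daily"],
    associated_considerations=["renal function trend", "medication adherence"],
    contextual_summary="Poorly controlled diabetes despite first-line therapy.",
    forward_backward_rationale="Intensification fits the record; no contradicting labs.",
    sanity_checks_passed=["dose within range", "no drug-drug interaction flagged"],
)
print(rec.contextual_summary)
```

The point of the structure is that nothing is hidden: a clinician reviewing the recommendation can see the facts it rests on, the considerations it raised, and which checks it survived.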

In the first draft of this post, I explained each of the five elements above in sequence, and my editor told me it was too tedious. So I'll elaborate only the fifth element, sanity checks, with the example shown in the graphic, which begins with "a = b" and ends with "2 = 1."
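For readers who cannot see the graphic, one standard version of this well-known fallacy (reconstructed here from its stated start and end points, so the intermediate steps may differ from the original image) runs:

```latex
\begin{align*}
a &= b \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a+b &= b \qquad \text{(dividing both sides by } a-b\text{)} \\
2b &= b \\
2 &= 1
\end{align*}
```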

Obviously, the conclusion is wrong. The algebraic operations each appear legitimate. What's missing is the wisdom to see that one step, while symbolically legitimate, requires division by zero, which is practically illegitimate. Any set of decision support rules can produce results that are illegitimate in some real clinical context. That's why we must always screen for them.
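The same screening idea can be sketched in a few lines of code: a rule that is symbolically valid for any input, paired with an explicit sanity check that withholds outputs that are clinically absurd. The rule, thresholds, and numbers here are hypothetical, invented for illustration only.

```python
# Minimal sketch, with a hypothetical rule and made-up thresholds:
# a symbolically valid rule can still yield an illegitimate result,
# so every output passes through an explicit sanity-check screen.

def weight_based_dose_mg(weight_kg: float, mg_per_kg: float = 10.0) -> float:
    """A naive rule: dose scales linearly with body weight."""
    return weight_kg * mg_per_kg

def sanity_check(dose_mg: float, max_dose_mg: float = 1000.0) -> bool:
    """Screen out results that are numerically fine but clinically absurd."""
    return 0.0 < dose_mg <= max_dose_mg

for weight in (70.0, 0.0, 250.0):
    dose = weight_based_dose_mg(weight)
    if sanity_check(dose):
        print(f"{weight} kg -> recommend {dose} mg")
    else:
        print(f"{weight} kg -> {dose} mg fails sanity check; withhold recommendation")
```

A weight of zero or an extreme weight produces a dose the formula happily computes but the screen rejects, which is exactly the division-by-zero lesson applied to a clinical rule.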

In conclusion, healthcare can be complex, involving input from the patient and many caregivers over months, even years. Computer systems, often employing thousands of rules, must be appropriately constructed to serve population health management, quality improvement, and the delivery of the highest-quality care to people in need.

The five steps outlined here are essential to accurately synthesize and summarize the considerations needed to achieve our shared goal: ensuring that everyone can act at the level the best of us do. And we need to trust the systems that make that possible, because they are proven trustworthy in the hands of competent users. That translates to making the output of those five steps transparent.

Bregman observes that doing the right thing can feel inauthentic. Here, that means "stating the key facts, describing the necessary action, and offering to help the user take that action." This is not likely the designer's instinct or first impulse. But it needs to be, and the implications are clear.

Solid clinical recommendations require this simple framing. Otherwise, we’re simply automating ineffective behaviors that lead to anger, defensiveness, and abandonment of immature solutions. As Bregman shares in his closing, having the right dialogue leaves the recipient of the support feeling positive and grateful.

What do you think?    
