Solid Clinical Recommendations

Oct. 23, 2015
Now that we have broad deployment of electronic health records, how do we make recommendation systems trustworthy?


In a recent article, Peter Bregman described a rapid escalation of anger between a father (Bregman) and his daughter over the messiness of a project that happened to involve a child playing with sand. His point was that parents, and managers in general, should adopt a simple and effective verbal approach to communication:

1.    Identify the problem

2.    State what needs to happen

3.    Offer to help

He provides an example in a business context as follows:

Fred, this presentation made six points instead of one or two. I’m left confused. It needs to be shorter, more to the point, and more professional looking. Would it help if we talk about the point you’re trying to make?

Absent this kind of approach, defensive posturing and heated, angry arguments are the rule. The participants, in their rage, abandon the dialogue. There is an important lesson here for informatics.

The connection is that decision support is often a statement of what needs to happen. Descriptive, predictive, and prescriptive analytics can help identify diagnoses and therapies and lead to better outcomes. But all of this presupposes a minimal discipline that, when absent, deeply erodes trust. And with that erosion comes abandonment of the solution.

For example, earlier this year I was working with a team using natural language processing and Medicare rules for risk adjustment of Medicare Advantage enrollees’ diagnosis lists. The goal was to surface relevant diagnoses that weren’t otherwise explicitly and adequately documented. We identified many valuable opportunities. But in our zeal not to miss anything, we also recommended diagnoses we really shouldn’t have. Asking a doctor to sign off on a naïve recommendation erodes confidence, with the risk of solution abandonment.

If, on your first use of a GPS, you were wrongly led into a dead-end alley, you’d feel burned. Recommendations need to be right 99 percent of the time, adequately informed, and transparent, so that trust can be quickly established and maintained.

What follows is a list of the five minimal elements necessary to earn and maintain that trust:

1.    Expose the known information

2.    Raise associated considerations

3.    Assure important contextual considerations by summarizing #1 and #2

4.    Think it forward and backward

5.    Apply sanity checks
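As one hypothetical sketch (the names and fields here are my own, not a published schema), the five elements could be captured in a structured record so that every part of a recommendation's reasoning is exposed to the clinician rather than hidden inside the rules engine:

```python
from dataclasses import dataclass, field

@dataclass
class ClinicalRecommendation:
    """Hypothetical record mirroring the five trust elements."""
    known_facts: list[str]             # 1. expose the known information
    considerations: list[str]          # 2. raise associated considerations
    context_summary: str               # 3. summary assuring #1 and #2 were weighed
    forward_backward_check: str        # 4. reasoning traced forward and backward
    sanity_checks: list[str] = field(default_factory=list)  # 5. screens applied

    def is_transparent(self) -> bool:
        # Only present a recommendation when every element is populated.
        return bool(self.known_facts and self.considerations
                    and self.context_summary and self.forward_backward_check
                    and self.sanity_checks)
```

A recommendation built this way carries its own audit trail: a reviewer can see not just the suggestion but the facts and screens behind it.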

In the first draft of this post, I explained each of the five elements above, but my editor told me it was too tedious. So I’ll elaborate only the fifth element, sanity checks, using the example shown in the graphic, which begins with “a=b” and ends with “2=1.”
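For readers without the graphic, the well-known derivation it depicts runs as follows:

```latex
\begin{align*}
a &= b \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a + b &= b \qquad \text{(dividing both sides by } a - b = 0\text{)} \\
2b &= b \\
2 &= 1
\end{align*}
```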

Obviously, the conclusion is wrong. Each algebraic operation appears legitimate. What’s missing is the wisdom to see that one of the steps, while symbolically legitimate, requires division by zero, which is practically illegitimate. Any set of decision support rules can likewise produce results that are illegitimate in some real clinical context. That’s why we must always screen for them.
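By analogy, a minimal sketch of such a screen (the function and messages here are illustrative, not from any real system) might validate inputs before a weight-based dose calculation performs its own division, withholding the recommendation rather than emitting a symbolically valid but clinically illegitimate result:

```python
def dose_per_kg(total_dose_mg: float, weight_kg: float) -> float:
    """Compute mg/kg, applying a sanity check before the arithmetic runs.

    The division below is always symbolically legal for nonzero weight,
    but a zero or negative weight means the input itself is implausible,
    so the rule should refuse to answer rather than divide.
    """
    if weight_kg <= 0:
        raise ValueError("sanity check failed: implausible weight; "
                         "recommendation withheld for human review")
    return total_dose_mg / weight_kg
```

The point is not the arithmetic but the placement: the screen runs before the rule fires, so the clinician never sees the illegitimate output.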

In conclusion, healthcare can be complex, involving input from the patient and many caregivers over months, even years. Computer systems, often employing thousands of rules, need to be appropriately constructed to serve the needs of population health management, quality improvement, and delivering the highest-quality care to people in need.

The five steps outlined here are essential to accurately synthesize and summarize the considerations needed to achieve our shared goal: ensuring that everyone can act at the level that the best of us do. And we need to trust the systems that make that possible, because they are proven trustworthy in the hands of competent users. That translates to making the output of those five steps transparent.

Bregman notes that doing the right thing can feel inauthentic. Here, that means “stating the key facts, describing the necessary action, and offering to help the user take that action.” This is not likely the designer’s instinct or first impulse. However, it needs to be, and the implications are clear.

Solid clinical recommendations require this simple framing. Otherwise, we’re simply automating ineffective behaviors that lead to anger, defensiveness, and abandonment of immature solutions. As Bregman shares in his closing, having the right dialogue leaves the recipient of the support feeling positive and grateful.

What do you think?    
