A paper published Sept. 19 in the New England Journal of Medicine describes how researchers at NYU Langone Health in New York have developed a program for carrying out randomized quality improvement projects to evaluate the effectiveness of routine hospital processes.
Leora Horwitz, M.D., associate professor in the Departments of Population Health and Medicine and director of the Center for Healthcare Innovation and Delivery Science, and several colleagues co-authored the paper, which describes NYU Langone’s approach.
"This program is important because there are always better ways to do things, but unless we have some data to show us that what we're doing is not fully effective, we have no incentive or inclination to find a better way to do it," Horwitz said in a prepared statement from NYU. "Unless we study whether what we're doing is working, we cannot allocate the resources that we have most effectively. And that means that we're not necessarily providing the best possible care to our patients."
NYU gave examples of processes it has evaluated, including calling patients to encourage follow-up visits and providing doctors and nurses with electronic health record alerts that prompt vaccination reminders, adding that “most of the time, hospitals cannot tell whether these processes are working optimally.”
Horwitz and her team implemented randomized quality improvement projects throughout NYU Langone, spanning inpatient units, outpatient offices and the emergency department. The projects included efforts to improve care after hospitalization, increase receipt of recommended preventive screening, capture patient-reported outcomes and increase smoking cessation counseling rates, among other goals.
In one such project, which tested a new program to telephone patients after discharge from the hospital, patients with odd-numbered electronic records received calls, while those with even-numbered records did not. Another project, which compared two phone scripts used to remind patients of annual well visits, randomized patients to script A or script B by using each script in alternating weeks over several weeks.
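To make those allocation rules concrete, here is a minimal Python sketch of the two schemes the article describes: assignment by odd or even record number, and week-by-week alternation between two scripts. The function names and the use of ISO week numbers are illustrative assumptions, not details from the NYU program.

```python
from datetime import date


def assign_discharge_call(record_number: int) -> bool:
    """Odd-numbered electronic records get a post-discharge call;
    even-numbered records serve as the comparison group."""
    return record_number % 2 == 1


def assign_reminder_script(week_start: date) -> str:
    """Alternate between script A and script B from one week to the next,
    keyed to the ISO week number (an assumed convention for illustration)."""
    return "A" if week_start.isocalendar()[1] % 2 == 0 else "B"


# Example usage with made-up values
print(assign_discharge_call(104729))              # True -> call this patient
print(assign_reminder_script(date(2019, 9, 23)))  # "B" for ISO week 39
```

The point of such a scheme is that the assignment falls out of data the staff already have (a record number, the calendar week), so no separate randomization tool or tracking database is needed.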
Among the findings was that changing the text of a prompt to provide tobacco cessation counseling in the office produced a statistically significant increase in medication prescription rates. Changing a few sentences in telephone outreach scripts shortened phone calls and increased annual visit appointments.
The researchers also found that the post-discharge phone calls were largely ineffective: patients who received calls returned to the hospital at the same rate as those who didn't, and they reported the same satisfaction with the hospital.
With this knowledge, hospital staff can now decide where to focus their resources, Horwitz noted. Call scripts may need to be changed, or staff might call only high-risk patients instead of every patient.
"I believe we have an ethical responsibility to rigorously assess whether our operational interventions are effective, even when they may seem trivial, such as scripts for calls or mailings that we send to people to get them to get their colonoscopy," Horwitz added. "If we don't, we can't be sure we are doing the best by our patients."
The randomization procedures were designed to be easy to implement. Staff do not need to use special randomization tools or maintain separate databases, and people doing the work only need to make small changes to their workflow, such as switching the script they use from one week to the next, she said.
"This can be done quite easily. It doesn't have to be a $10 million, 10-year NIH trial," Horwitz said. "We can build these things into the routine way we do our work."