New AI Tool to Assist Clinicians in Prescribing Medication

April 2, 2024
DrugGPT, developed at Oxford University, aims to help doctors prescribe medication

James Tapper of The Guardian reported on March 31 that DrugGPT, a new artificial intelligence (AI) tool developed at Oxford University in the UK, acts as a safety net for doctors prescribing medications. The tool also gives doctors information they can use to help patients understand how to take their medication.

“Doctors and other healthcare professionals who prescribe medicines will be able to get an instant second opinion by entering a patient’s conditions into the chatbot. Prototype versions respond with a list of recommended drugs and flag up possible adverse effects and drug-drug interactions,” Tapper wrote.
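Neither report describes DrugGPT's interface in technical detail, but the workflow Tapper outlines (a clinician enters a patient's conditions; the tool returns recommended drugs with flagged adverse effects and drug-drug interactions) maps onto a simple request-and-review pattern. The Python sketch below illustrates only the shape of that exchange; the `PrescribingQuery` and `Recommendation` types, the `review_recommendations` helper, and all example data are hypothetical assumptions for illustration, not DrugGPT's actual API.

```python
# Hypothetical sketch of a prescribing "second opinion" exchange, modeled on
# the workflow described in the reporting above. DrugGPT's API is not public;
# every name here (types, fields, helper function, data) is an assumption.

from dataclasses import dataclass, field


@dataclass
class PrescribingQuery:
    """Patient context a clinician might enter into the chatbot."""
    conditions: list[str]            # the patient's diagnoses
    current_medications: list[str]   # needed for drug-drug interaction checks
    allergies: list[str] = field(default_factory=list)


@dataclass
class Recommendation:
    """One item of the kind of response the prototype reportedly returns."""
    drug: str
    rationale: str                   # guidance, research, and references
    adverse_effects: list[str]       # flagged possible adverse effects
    interactions: list[str]          # flagged drug-drug interactions


def review_recommendations(recs: list[Recommendation]) -> None:
    """Surface the tool's suggestions for the clinician to review.

    The prescribing decision stays with the human ("not take the human
    out of the loop"); this step only presents information.
    """
    for rec in recs:
        print(f"Suggested: {rec.drug}")
        print(f"  Why: {rec.rationale}")
        if rec.interactions:
            print(f"  Interaction warnings: {', '.join(rec.interactions)}")
        if rec.adverse_effects:
            print(f"  Possible adverse effects: {', '.join(rec.adverse_effects)}")


if __name__ == "__main__":
    query = PrescribingQuery(
        conditions=["condition X (hypothetical)"],
        current_medications=["Drug B"],
    )
    # In the real tool, the query would go to the chatbot; here we fake a
    # response with made-up data purely to show the review step.
    recs = [
        Recommendation(
            drug="Drug A",
            rationale="First-line per guideline (hypothetical reference)",
            adverse_effects=["nausea"],
            interactions=["interacts with Drug B in current medications"],
        )
    ]
    review_recommendations(recs)
```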

“It will show you the guidance—the research, flowcharts, and references—and why it recommends this particular drug,” Prof. David Clifton of Oxford’s AI for Healthcare Lab said in a statement. However, Clifton cautioned that the tool should be used only as a source of recommendations, not as a substitute for clinical judgment. “It’s important not to take the human out of the loop,” he said.

The British Medical Journal reported that more than 237 million medication errors are made every year in England. According to the report, “the harms caused by medication errors have been recognized as a global issue.” Patients themselves also make mistakes with their medications, Tapper wrote.

“Millions of medication-related medical mistakes occur each year in England alone, raising serious concerns about this issue. These mistakes can endanger lives and cause unneeded expenses. Patients who do not comply with recommended directions can contribute to medication-related problems,” Quincy Jon reported on March 31 for Tech Times.

Tapper noted that healthcare providers already use some mainstream AI tools, such as ChatGPT and Google’s Gemini, to check diagnoses and write notes. However, he reported, “International medical associations have previously advised clinicians not to use those tools, partly because of the risk that the chatbot will give false information, or what technologists refer to as hallucinations.”

“We are always open to introducing more sophisticated safety measures that will support us to minimize human error – we just need to ensure that any new tools and systems are robust and that their use is piloted before wider rollout to avoid any unforeseen and unintended consequences,” Dr. Michael Mulholland, vice-chair of the Royal College of GPs, said in a statement.
