New AI Tool to Assist Clinicians in Prescribing Medication

April 2, 2024
DrugGPT, developed at Oxford University, aims to help doctors prescribe medication

James Tapper of The Guardian reported on March 31 that DrugGPT, a new artificial intelligence (AI) tool developed at Oxford University in the UK, acts as a safety net for doctors prescribing medications. The tool also gives doctors information they can share with patients to help them understand how to take their medication.

“Doctors and other healthcare professionals who prescribe medicines will be able to get an instant second opinion by entering a patient’s conditions into the chatbot. Prototype versions respond with a list of recommended drugs and flag up possible adverse effects and drug-drug interactions,” Tapper wrote.

“It will show you the guidance—the research, flowcharts, and references—and why it recommends this particular drug,” Prof David Clifton, with Oxford’s AI for Healthcare Lab, said in a statement. However, Clifton advised using the new tool as a second opinion rather than a replacement for clinical judgment. “It’s important not to take the human out of the loop,” he said.

The British Medical Journal reported that more than 237 million medication errors are made every year in England. According to the report, “the harms caused by medication errors have been recognized as a global issue.” On top of that, patients make mistakes with medications, Tapper wrote.

“Millions of medication-related medical mistakes occur each year in England alone, raising serious concerns about this issue. These mistakes can endanger lives and cause unneeded expenses. Patients who do not comply with recommended directions can contribute to medication-related problems,” Quincy Jon reported on March 31 for Tech Times.

Tapper noted that healthcare providers already use some mainstream AI tools, such as ChatGPT and Google’s Gemini, to check diagnoses and write notes. However, he reported, “International medical associations have previously advised clinicians not to use those tools, partly because of the risk that the chatbot will give false information, or what technologists refer to as hallucinations.”

“We are always open to introducing more sophisticated safety measures that will support us to minimize human error – we just need to ensure that any new tools and systems are robust and that their use is piloted before wider rollout to avoid any unforeseen and unintended consequences,” Dr. Michael Mulholland, vice chair of the Royal College of GPs, said in a statement.
