New AI Tool to Assist Clinicians in Prescribing Medication

April 2, 2024
DrugGPT, developed at Oxford University, aims to help doctors prescribe medication

James Tapper of The Guardian reported on March 31 that DrugGPT, a new Artificial Intelligence (AI) tool developed at Oxford University in the UK, acts as a safety net for doctors prescribing medications. The tool also supplies doctors with information they can use to help patients understand how to take their medication.

“Doctors and other healthcare professionals who prescribe medicines will be able to get an instant second opinion by entering a patient’s conditions into the chatbot. Prototype versions respond with a list of recommended drugs and flag up possible adverse effects and drug-drug interactions,” Tapper wrote.

“It will show you the guidance—the research, flowcharts, and references—and why it recommends this particular drug,” Prof David Clifton, with Oxford’s AI for Healthcare Lab, said in a statement. However, Clifton advised treating the tool as a source of recommendations rather than a replacement for clinical judgment. “It’s important not to take the human out of the loop,” he said.

The British Medical Journal reported that more than 237 million medication errors are made every year in England. According to the report, “the harms caused by medication errors have been recognized as a global issue.” On top of that, patients make mistakes with medications, Tapper wrote.

“Millions of medication-related medical mistakes occur each year in England alone, raising serious concerns about this issue. These mistakes can endanger lives and cause unneeded expenses. Patients who do not comply with recommended directions can contribute to medication-related problems,” Quincy Jon reported on March 31 for Tech Times.

Tapper noted that healthcare providers already use some mainstream AI tools, such as ChatGPT and Google’s Gemini, to check diagnoses and write notes. However, he reported, “International medical associations have previously advised clinicians not to use those tools, partly because of the risk that the chatbot will give false information, or what technologists refer to as hallucinations.”

“We are always open to introducing more sophisticated safety measures that will support us to minimize human error – we just need to ensure that any new tools and systems are robust and that their use is piloted before wider rollout to avoid any unforeseen and unintended consequences,” Dr. Michael Mulholland, vice-chair of the Royal College of GPs, said in a statement.
