James Tapper of The Guardian reported on March 31 that the new Artificial Intelligence (AI) tool DrugGPT, developed at Oxford University in the UK, acts as a safety net for doctors prescribing medications. The tool also gives doctors information they can use to help patients understand how to take their medications.
“Doctors and other healthcare professionals who prescribe medicines will be able to get an instant second opinion by entering a patient’s conditions into the chatbot. Prototype versions respond with a list of recommended drugs and flag up possible adverse effects and drug-drug interactions,” Tapper wrote.
“It will show you the guidance—the research, flowcharts, and references—and why it recommends this particular drug,” Prof David Clifton, with Oxford’s AI for Healthcare Lab, said in a statement. However, Clifton cautioned that clinicians should treat the tool’s output as recommendations rather than decisions. “It’s important not to take the human out of the loop,” he said.
The British Medical Journal reported that more than 237 million medication errors are made every year in England. According to the report, “the harms caused by medication errors have been recognized as a global issue.” On top of that, patients make mistakes with medications, Tapper wrote.
“Millions of medication-related medical mistakes occur each year in England alone, raising serious concerns about this issue. These mistakes can endanger lives and cause unneeded expenses. Patients who do not comply with recommended directions can contribute to medication-related problems,” Quincy Jon reported on March 31 for Tech Times.
Tapper noted that healthcare providers already use some mainstream AI tools, such as ChatGPT and Google’s Gemini, to check diagnoses and write notes. However, he reported, “International medical associations have previously advised clinicians not to use those tools, partly because of the risk that the chatbot will give false information, or what technologists refer to as hallucinations.”
“We are always open to introducing more sophisticated safety measures that will support us to minimize human error – we just need to ensure that any new tools and systems are robust and that their use is piloted before wider rollout to avoid any unforeseen and unintended consequences,” Dr. Michael Mulholland, vice-chair of the Royal College of GPs, said in a statement.