As an industry reliant on patient records and beset by outdated technology, healthcare is widely thought to be a prime target for an artificial intelligence revolution.
Many believe the technology will provide a host of benefits to clinical practitioners, speeding up the overall experience and diagnosing illnesses early on to identify potential treatment.
Just two days ago, DeepMind, the artificial intelligence firm owned by Google, said it had lent its technology to London’s Moorfields Eye Hospital for groundbreaking research into detecting eye diseases. The system was used to scan for and identify more than 50 ophthalmological conditions, and DeepMind’s machine-learning technology made correct diagnoses 94% of the time, Moorfields said.
The development indicated that AI can analyze health problems as accurately as a doctor. But some doctors worry that people in the tech world believe AI can not only help clinicians but outperform them.
Take Babylon Health, for instance, which in June said its AI chatbot could diagnose medical conditions as accurately as a doctor. The firm’s chatbot scored higher than the average mark on the official exam set for physicians by the Royal College of General Practitioners (RCGP), an industry body representing GPs, the doctors who treat a wide range of common illnesses.
Babylon’s chatbot answered 82% of the exam’s questions correctly, compared with the average mark of 72% for human doctors.
But the RCGP quickly disputed the claim that AI could diagnose illnesses with the same effectiveness as a human medical practitioner.
Babylon at the time denied it had claimed an AI could do the job of a GP, saying that it supported a model where AI is complementary to medical practice.
Nevertheless, the spat highlighted a serious question that may one day need to be addressed by those in the health industry: How should health professionals respond to the rapid growth in new, data-driven technologies like AI?
“Over the next decade or two, AI will certainly play a big role in supporting doctors as they make decisions,” Dan Vahdat, chief executive of health tech start-up Medopad, told CNBC via email.
“The role of the doctor will have to adapt as they learn how to use AI to complement their clinical judgments. This will take time, but it’s inevitable.”
Medopad specializes in connecting healthcare providers, doctors, and patients to monitor a patient’s health data and see how their care can be improved.
In the U.K., the National Health Service, the country’s universal healthcare system, has come under strain in terms of both funding and resources. The promise that AI could reduce the financial burden on medical services by cutting out some redundant roles and functions could be music to the ears of government and health authorities.
Vahdat said that one area in which AI could be hugely beneficial, improving efficiency and cutting costs, was cardiological care.
But some experts fear the fast-paced nature of the still nascent AI industry could come at the risk of patient safety.
“We are concerned that in the rush to roll out AI and push the boundaries of technology, there is a risk that important checks and balances that have been established to keep patients safe might be seen as an afterthought, or be bypassed entirely,” said Helen Stokes-Lampard, a professor and chair of the RCGP.
Last month, a report by online health publication Stat said that IBM’s Watson supercomputer had made multiple “unsafe and incorrect” cancer treatment recommendations, citing internal company documents. According to Stat, the program had only been trained to deal with a small number of cases and hypothetical scenarios instead of actual patient data. IBM subsequently told CNBC that it has “learned and improved Watson Health based on continuous feedback from clients, new scientific evidence and new cancers and treatment alternatives.”
Stokes-Lampard said that regulators must keep pace with rapid advances in technology to avoid harm to patients.
She said: “In an ever-changing ‘tech space,’ it is imperative that regulation keeps up with all technological developments, and that it is appropriately enforced, so that patients are kept safe, however they choose to access care.”
But many tech companies, big and small, are averse to new regulation, arguing it could restrain innovation.