Suicide is the second-leading cause of death among young people worldwide, yet an estimated two-thirds of people experiencing mental health challenges go unsupported. Advances in artificial intelligence (AI) tools may offer opportunities to address these severe gaps in mental healthcare, although clinicians must work through questions of both efficacy and ethics.
At an Aug. 2 session of the NIH Collaboratory Grand Rounds, Murali Doraiswamy, a professor of psychiatry and behavioral sciences at the Duke University School of Medicine, argued that the use of AI and other digital tools in mental health requires a sustained focus on pragmatic clinical trials. “We should think about the evidence development in the same way we develop drugs,” he said.
Doraiswamy directs a clinical trials unit that develops novel technologies and treatments for enhancing brain health. He is also co-chair of the World Economic Forum’s Global Future Council on Neurotechnologies, which recently released a white paper on the potential of technology and AI in mental health. In his talk, he set the stage by noting some of the problems technology could help address, including persistent stigma, a lack of parity with physical health, and poor integration between mental and physical healthcare. There is a dire shortage of psychiatrists in the United States, he said, noting that 77 percent of U.S. counties lack an adequate number of them. Outside the United States, the situation is even worse. “We are practicing shallow medicine, with long wait times for appointments, short visits, very little face time, which feels like a ‘conveyor belt’ approach to psychiatrists, which in turn leads to burnout in psychiatric practice,” he said.
He also noted that the reliability of diagnoses across psychiatrists tends to be good for some conditions but not for others, such as major depressive disorder or anxiety disorders. “Everything downstream is impacted by accuracy of diagnoses,” he added.
Doraiswamy described some promising pilot studies, including machine learning used to identify high-risk patients, avatars used to elicit patient responses, and crisis counseling delivered via text messaging. He played a clip of SimSensei, a virtual therapist created at the University of Southern California Institute for Creative Technologies; the avatar uses human-like gestures and integrates multisensory information from the patient. He suggested that guidelines are needed to measure the efficacy of new deployments of AI and other interventions in mental health, including ensuring that designs embed responsible practices and meet transparency and ethical requirements.
“We need large pragmatic trials,” Doraiswamy said. “We need trials that are done in much the same way we do drug trials, and replicate them.” He pointed to one large public-private partnership, the Remote Assessment of Disease and Relapse – Central Nervous System (RADAR-CNS) program, a collaborative research effort that will explore the potential of wearable devices and smartphone technology to help prevent and treat depression, multiple sclerosis, and epilepsy.
He also mentioned a survey that Duke and Harvard had conducted three months earlier through Sermo, a social networking site for physicians. They asked 791 psychiatrists in 22 countries what impact they expected AI to have on psychiatry over the next 25 years. Of the respondents, 6.7 percent said no influence, 42 percent said minimal influence, and about half said moderate or extreme influence. Twenty-five percent said the benefits would not outweigh the harms, with that view more common among women and U.S. respondents than among men and respondents from other parts of the world, he noted. Asked whether AI and avatars would be able to offer empathetic care within the next 25 years, 83 percent said machines would not be able to do so. “This is important,” Doraiswamy said, “because they are going to be the ones who will decide whether this will be integrated into care.”
Noting that technology could play a key role in addressing unmet mental healthcare needs, the white paper Doraiswamy co-authored calls for eight actions:
1. Create a governance structure to support the broad and ethical use of new technology in mental healthcare (including the collection and use of big data).
2. Develop regulation that is grounded in human rights law and nimble enough to enable and encourage innovation while keeping pace with technological advances in ensuring safety and efficacy.
3. Embed responsible practice into new technology designs to ensure the technologies being developed for mental healthcare have people’s best interests at their core, with a primary focus on those with lived experience.
4. Adopt a “test and learn” approach in implementing technology-led mental healthcare services in ways that allow continual assessment and improvement and that flag unintended consequences quickly.
5. Exploit the advantages of scale by deploying innovations across larger communities, with their consent.
6. Design in measurement and agree on unified metrics to ensure efficacy and to inform the “test and learn” approach.
7. Build technology solutions that can be sustained (in terms of affordability and maintenance) over time.
8. Prioritize low-income communities and countries, as they are the most underserved today and the most likely to see tangible benefits at relatively low cost.