One of the more interesting panels at last week’s Health Datapalooza featured four speakers involved in the application of artificial intelligence to healthcare, including the creation of predictive models.
In areas involving massive amounts of information in the diagnostic and genomic space, machine learning is already in use today, and the FDA is starting to approve applications of deep learning. For instance, a company called Arterys recently won FDA approval for its Cardio DL application, which uses deep learning to automate time-consuming analyses and tasks that are performed manually by clinicians today.
Although each panelist came at the topic from a different angle shaped by his company’s focus, the Datapalooza speakers converged on several overarching themes about the application of algorithms in healthcare, including the importance of transparency in earning clinician engagement.
Getting buy-in from clinicians is a huge challenge, said Eric Just, a senior vice president for product development at Health Catalyst, which builds analytics and decision support tools for its health system customers. “Transparency is a big deal,” he said. Developing a predictive model requires input from clinicians on what they think is most important. “It is not enough to show them a risk score. You have to show risk factors so they can see the reasons why. The socialization part can’t be stressed enough,” he said. You have to show them how a model was built and the evidence behind it. Literature citations are important, he added. In fact, Just said, a key use of data scientists’ time is to explain and interpret the model for clinician users.
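To make Just’s point concrete, the sketch below shows what surfacing risk factors alongside a risk score can look like in practice: a toy readmission-risk model, built with scikit-learn, whose output pairs each patient’s score with the features driving it. The feature names, data and model here are hypothetical illustrations, not Health Catalyst’s tooling.

```python
# Illustrative sketch only: a toy readmission-risk model whose output pairs each
# risk score with the factors contributing to it. Feature names and data are
# hypothetical; this is not Health Catalyst's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["age", "prior_admissions", "hba1c", "missed_appointments"]
X = np.array([
    [72, 3, 9.1, 2],
    [54, 0, 6.2, 0],
    [61, 1, 7.8, 1],
    [80, 4, 8.5, 3],
])
y = np.array([1, 0, 0, 1])  # 1 = readmitted within 30 days (toy labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(patient):
    """Return the risk score plus each feature's contribution to the log-odds."""
    score = model.predict_proba([patient])[0, 1]
    contributions = sorted(
        zip(features, model.coef_[0] * np.array(patient)),
        key=lambda kv: -abs(kv[1]),
    )
    return score, contributions

score, factors = explain([68, 2, 8.9, 1])
print(f"30-day readmission risk: {score:.0%}")
for name, weight in factors:
    print(f"  {name}: {weight:+.2f} contribution to log-odds")
```

Showing the signed contributions, and citing the literature behind each chosen feature, is the kind of “reasons why” Just argues clinicians need before they will trust a score.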
Christopher Khoury, vice president of the environmental intelligence, strategic partnerships and innovation group at the American Medical Association, said that because the early gains in AI have been in pattern recognition, we are already seeing plenty of applications in radiology and pathology. “They can add a lot of value in terms of decision-making ability, as well as scale and throughput,” he said.
Clinicians want to embrace data, and this is another tool in the toolbox, he said. But the evidence base for clinical validity matters, and so does the rate of change: physicians want changes involving machine learning to occur at an appropriate pace, with clinicians in the loop. For that reason, adoption might be slower than in other industries such as finance, but appropriately so, Khoury said. “Physicians are early adopters of new technology if it improves their work and there is evidence of validity. The same principles apply to machine learning. The key is to bring them along and include them in the development, not just as end-users.”
The question of how these technologies get incorporated into healthcare becomes more complex when they are applied to diagnosis and treatment rather than to patient safety or operational efficiency, such as scheduling, Khoury noted.
The physician training aspect of the changes under way is huge. “We are still in an era where they struggle with EHRs and the digital environment,” he noted, adding that there will have to be discussion of the intangibles a clinician brings from years of study and practice beyond just confirming what an algorithm has recommended for a patient.
Chris Mansi, M.D., is a neurosurgeon and the cofounder and CEO of Viz, a deep learning medical imaging company based in Silicon Valley. The company uses deep learning to automatically interpret complex brain scans, facilitating faster treatment of stroke.
“We use the same technology that Facebook uses — deep learning — to pick out patterns and categorize them for differential diagnosis,” Mansi said. The system examines images such as CT scans, looks for patterns and determines which patients need to be seen quickly. Mansi said the deep learning itself is not the hardest part of launching the business. It is more challenging to clean up the data, work out how to integrate the technology into clinical workflow, and navigate legal and FDA requirements. “All those are quite hard to do unless you are focused on one narrow niche.”
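The pattern-recognition step Mansi describes can be sketched generically. The example below is a small convolutional network that scores an imaging slice for urgency; it is not Viz’s model, and the architecture, input shape and class labels are illustrative assumptions only.

```python
# Generic sketch of deep learning for image triage: a small convolutional network
# that scores a single CT slice as routine vs. urgent. Architecture, shapes and
# labels are assumptions for illustration, not Viz's system.
import torch
import torch.nn as nn

class TriageCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 64 * 64, 2)  # two classes: routine, urgent

    def forward(self, x):  # x: (batch, 1, 256, 256) preprocessed CT slice
        return self.classifier(self.features(x).flatten(1))

model = TriageCNN()
scan = torch.randn(1, 1, 256, 256)           # stand-in for a normalized CT slice
urgency = torch.softmax(model(scan), dim=1)  # probabilities for [routine, urgent]
print(f"urgent probability: {urgency[0, 1].item():.2f}")
```

As Mansi notes, a network like this is the easy part; the data cleaning, workflow integration and regulatory work around it are where most of the effort goes.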
Eric Williams is vice president of data science and analytics at Omada Health, which is using machine learning in its focus on behavior change in the face of chronic disease. Patients are given a wireless scale, a pedometer and communication apps.
Omada runs a behavior change program for people at risk of obesity-related chronic disease. It uses AI to leverage a health coach workforce. “The way we think about it is to build a system that finds the most effective balance between human and artificial intelligence,” Williams said. The system is set up to test when AI versus human intervention is most effective, and the company’s goal over the next three to five years is to find that balance.
Health Catalyst’s Just said the barriers to using machine learning in health systems are artificial. “The notion that you need an army of data scientists to do machine learning is misguided.” There are lots of free and open source resources available now. In fact, Health Catalyst has made its machine learning resources, a collection of R and Python modules, available as open source at healthcare.ai. “We felt the industry is not moving fast enough, and a rising tide lifts all boats,” Just said. “We think getting more people more familiar with the technology will be better for everyone.”
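For a sense of what those free resources make possible, here is a minimal example of the kind of workflow such modules target: training and cross-validating a simple risk classifier with general-purpose Python tooling. It uses scikit-learn for illustration rather than the healthcare.ai modules themselves, and the columns and data are hypothetical.

```python
# Minimal sketch of an open-source risk-model workflow: fit a classifier on a
# toy clinical table and check it with cross-validation. Uses scikit-learn for
# illustration; columns and data are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.DataFrame({
    "age":              [72, 54, 61, 80, 49, 66, 58, 77],
    "prior_admissions": [3, 0, 1, 4, 0, 2, 1, 3],
    "hba1c":            [9.1, 6.2, 7.8, 8.5, 5.9, 8.0, 6.8, 9.4],
    "readmitted_30d":   [1, 0, 0, 1, 0, 1, 0, 1],
})

X = df.drop(columns="readmitted_30d")
y = df["readmitted_30d"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=4, scoring="roc_auc")
print(f"mean AUC across folds: {scores.mean():.2f}")
```

Nothing here requires an army of data scientists; as Just goes on to note, the harder prerequisite is an organization that is already comfortable using its data.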
Health Catalyst has a core team of five data scientists and a total team of about 30 people being trained in machine learning. The most difficult aspect in the provider space is understanding the data, Just said. If you are going to make effective use of machine learning and AI, you have to be used to using data for improvement. “Machine learning is not something you can bring to an organization that is not used to using its data.”
One thing limiting the application of machine learning in healthcare is the quality of data and the lack of data sharing, Just said. “As we figure out data sharing and issues around genomic data, there is a bright future for deep learning in healthcare.”