Machine learning techniques generate clinical labels of medical scans

Feb. 1, 2018

Researchers used machine learning techniques, including natural language processing algorithms, to identify clinical concepts in radiologist reports for CT scans, according to a study conducted at the Icahn School of Medicine at Mount Sinai and published in the journal Radiology. The technology is an important first step in the development of artificial intelligence that could interpret scans and diagnose conditions.

From an ATM reading handwriting on a check to Facebook suggesting a photo tag for a friend, computer vision powered by artificial intelligence is increasingly common in daily life. Artificial intelligence could one day help radiologists interpret X-rays, computed tomography (CT) scans, and magnetic resonance imaging (MRI) studies. But for the technology to be effective in the medical arena, computer software must be “taught” the difference between a normal study and abnormal findings.

This study aimed to train this technology to understand text reports written by radiologists. Researchers created a series of algorithms to teach the computer clusters of related phrases. Examples of the terminology included words like phospholipid, heartburn, and colonoscopy.
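The study's actual algorithms are not detailed here, but the general idea of grouping related phrases can be sketched with a simple, hypothetical approach: measure word overlap between phrases (Jaccard similarity) and greedily merge similar ones into clusters. The phrases below are invented for illustration.

```python
def jaccard(a, b):
    """Word-level Jaccard similarity between two phrases."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def cluster_phrases(phrases, threshold=0.3):
    """Greedy single-pass clustering: each phrase joins the first
    cluster whose seed phrase is similar enough, else starts a new one."""
    clusters = []
    for p in phrases:
        for c in clusters:
            if jaccard(p, c[0]) >= threshold:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

phrases = [
    "acute intracranial hemorrhage",
    "no acute intracranial hemorrhage",
    "chronic small vessel ischemic disease",
    "small vessel ischemic changes",
]
print(cluster_phrases(phrases))
```

Real systems would use learned embeddings rather than raw word overlap, but the clustering intuition is the same.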

Researchers trained the computer software using 96,303 radiologist reports associated with head CT scans performed at The Mount Sinai Hospital and Mount Sinai Queens between 2010 and 2016. To characterize the "lexical complexity" of radiologist reports, researchers calculated metrics that reflected the variety of language used in these reports and compared these to other large collections of text: thousands of books, Reuters news stories, inpatient physician notes, and Amazon product reviews.
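The paper's exact metrics are not reproduced in this article, but one common measure of lexical variety is the type-token ratio (distinct words divided by total words). A minimal sketch, using an invented report snippet:

```python
import re

def type_token_ratio(text):
    """Distinct words / total words: a simple lexical-variety metric."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return len(set(tokens)) / len(tokens)

report = "No acute intracranial hemorrhage. No hemorrhage or mass effect."
print(round(type_token_ratio(report), 2))
```

Computing such a ratio over each corpus (reports, books, news, reviews) gives a rough, length-sensitive comparison of how varied the language is in each.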

Deep learning describes a subcategory of machine learning that uses multiple layers of neural networks (computer systems that learn progressively) to perform inference, requiring large amounts of training data to achieve high accuracy. Techniques used in this study led to an accuracy of 91%, demonstrating that it is possible to automatically identify concepts in text from the complex domain of radiology.
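The study used multi-layer neural networks; as a much simpler stand-in, the supervised setup (text in, clinical label out) can be sketched with bag-of-words features and a logistic classifier trained by gradient descent. All phrases and labels below are invented for illustration.

```python
import math

# Toy labeled data: 0 = normal finding, 1 = abnormal finding
train = [
    ("no acute abnormality", 0),
    ("unremarkable study", 0),
    ("acute intracranial hemorrhage", 1),
    ("large mass with edema", 1),
]

vocab = sorted({w for text, _ in train for w in text.split()})

def featurize(text):
    """Binary bag-of-words vector over the training vocabulary."""
    words = set(text.split())
    return [1.0 if w in words else 0.0 for w in vocab]

weights = [0.0] * len(vocab)
bias = 0.0
lr = 0.5

for _ in range(200):  # gradient-descent epochs over the toy set
    for text, label in train:
        x = featurize(text)
        z = sum(w * xi for w, xi in zip(weights, x)) + bias
        p = 1 / (1 + math.exp(-z))   # sigmoid probability of "abnormal"
        g = p - label                # gradient of the log-loss
        weights = [w - lr * g * xi for w, xi in zip(weights, x)]
        bias -= lr * g

def predict(text):
    x = featurize(text)
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if z > 0 else 0
```

A deep model replaces the single weight layer with stacked nonlinear layers, which is what demands the large training sets the article mentions.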

Researchers at Boston University and Verily Life Sciences collaborated on the study.

Mount Sinai has the full release.
