Johns Hopkins Researchers Develop Image-Based Surgical Guidance System

Sept. 22, 2014
Johns Hopkins researchers have devised a computerized process that could make minimally invasive surgery more accurate and streamlined using equipment already common in the operating room.

In a report published recently in the journal Physics in Medicine and Biology, the researchers say initial testing of the algorithm shows that their image-based guidance system is potentially superior to conventional tracking systems that have been the mainstay of surgical navigation over the last decade.

“Imaging in the operating room opens new possibilities for patient safety and high-precision surgical guidance,” Jeffrey Siewerdsen, Ph.D., a professor of biomedical engineering in the Johns Hopkins University School of Medicine, said in a statement that accompanied the report. “In this work, we devised an imaging method that could overcome traditional barriers in precision and workflow. Rather than adding complicated tracking systems and special markers to the already busy surgical scene, we realized a method in which the imaging system is the tracker and the patient is the marker.”

Siewerdsen explains that current state-of-the-art surgical navigation involves an often cumbersome process in which someone—usually a surgical technician, resident or fellow—manually matches points on the patient’s body to those in a preoperative CT image. This process, called registration, enables a computer to orient the image of the patient within the geometry of the operating room. “The registration process can be error-prone, require multiple manual attempts to achieve high accuracy and tends to degrade over the course of the operation,” Siewerdsen said.
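The manual point-matching Siewerdsen describes is typically solved as a rigid landmark registration: given a handful of paired fiducial points in CT coordinates and in patient (operating-room) coordinates, a computer finds the best-fit rotation and translation between them. As an illustrative sketch only (not the Hopkins system's code), the standard Kabsch algorithm does this in closed form; the landmark coordinates below are invented for the demo:

```python
import numpy as np

def rigid_register(src, dst):
    """Kabsch algorithm: least-squares rigid transform (R, t) mapping src points onto dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)            # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Hypothetical fiducial landmarks in preoperative CT coordinates (millimeters).
ct_pts = np.array([[0, 0, 0], [50, 0, 0], [0, 40, 0], [0, 0, 30.0]])

# The same landmarks touched on the patient: rotated 15 degrees and translated.
theta = np.deg2rad(15)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1.0]])
t_true = np.array([10.0, -5.0, 20.0])
patient_pts = ct_pts @ R_true.T + t_true

R, t = rigid_register(ct_pts, patient_pts)
fre = np.linalg.norm(ct_pts @ R.T + t - patient_pts, axis=1).mean()
print(round(fre, 6))  # mean registration residual, ~0 for noise-free points
```

With noisy or sparsely sampled landmarks the residual grows, which is one source of the accuracy degradation Siewerdsen mentions.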

To develop an alternative, Siewerdsen’s team used a mobile C-arm, an imaging device already common in many operating rooms. They suspected that a fast, accurate registration algorithm could match two-dimensional X-ray images to the three-dimensional preoperative CT scan automatically and keep the registration current throughout the operation.

“The breakthrough came when we discovered how much geometric information could be extracted from just one or two X-ray images of the patient,” said Ali Uneri, a graduate student in the department of computer science in the Johns Hopkins University Whiting School of Engineering. “From just a single frame, we achieved better than 3 millimeters of accuracy, and with two frames acquired with a small angular separation, we could provide surgical navigation more accurately than a conventional tracker.”
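The idea of using the imaging system itself as the tracker can be sketched in miniature: simulate a projection of the CT volume (a digitally reconstructed radiograph), then optimize its similarity to the observed X-ray to recover the patient's pose. Everything below is an illustrative assumption, not the published method: a toy volume, a parallel-beam projection, a simple in-plane translation, and an off-the-shelf optimizer.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

# Toy 3D "CT" volume: a bright cube in an empty field.
vol = np.zeros((32, 32, 32))
vol[10:20, 12:22, 14:24] = 1.0

def drr(volume, dx, dy):
    """Parallel-beam DRR: translate the volume in-plane, then sum along the beam axis."""
    moved = nd_shift(volume, (0.0, dx, dy), order=1)
    return moved.sum(axis=0)

# "Intraoperative" X-ray: the patient has shifted by an unknown offset.
true_offset = (3.0, -2.0)
xray = drr(vol, *true_offset)

def cost(params):
    # Negative normalized cross-correlation between the simulated and observed images.
    sim = drr(vol, *params)
    a, b = sim - sim.mean(), xray - xray.mean()
    return -(a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

res = minimize(cost, x0=[0.0, 0.0], method="Powell")
print(np.round(res.x, 1))  # recovered offset, close to (3.0, -2.0)
```

A real system estimates a full six-degree-of-freedom pose against cone-beam projection geometry and must run fast enough for intraoperative use; this sketch only shows the optimize-similarity-to-a-projection principle.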

The team is translating the method to a system suitable for clinical studies. While the system could potentially be used in a wide range of procedures, Siewerdsen expects it to be most useful in minimally invasive surgeries, such as spinal and intracranial neurosurgery.

Read the source article at hopkinsmedicine.org
