Presentation + Paper
Synchronization and analysis of multimodal bronchoscopic airway exams for early lung cancer detection
29 March 2024
Abstract
Because lung cancer is the leading cause of cancer-related deaths globally, early disease detection is vital. To this end, advances in bronchoscopy have produced three complementary noninvasive video modalities for imaging early-stage bronchial lesions along the airway walls: white-light bronchoscopy (WLB), autofluorescence bronchoscopy (AFB), and narrow-band imaging (NBI). Recent research indicates that performing a multimodal airway exam (i.e., using the three modalities together) potentially enables a more robust disease assessment than any single modality. Unfortunately, to perform a multimodal exam, the physician must manually examine each modality's video stream separately and then mentally correlate lesion observations. This process is not only extremely tedious and skill-dependent, but it also risks missed lesions, thereby reducing diagnostic confidence. What is needed is a methodology and set of tools for easily leveraging the complementary information offered by these modalities. To address this need, we propose a framework for video synchronization and fusion tailored to multimodal bronchoscopic airway examination. Our framework, built into an interactive graphical system, entails a three-step process. First, for each of the three airway exams performed with a given bronchoscopic modality, several key airway video-frame landmarks are noted with respect to the patient's computed-tomography (CT)-based 3D airway tree model, where the airway tree model serves as the reference space for the entire process. These landmarks create a set of connections between the videos and the airway tree that facilitate subsequent fine registration.
Second, the landmark set, along with additional video frames, which either contain lesions flagged by two deep-learning-based detection networks or lie between landmarks to help fill surface gaps, is finely registered to the airway tree using a CT-video-based global registration method. Lastly, the registered frames are mapped and fused, via texture mapping, onto the CT-based 3D airway tree's endoluminal surface. This enables sequential review of synchronized multimodal surface structure and lesion locations with interactive graphical tools along a path navigating the airway tree. Results with patient multimodal bronchoscopic airway exams show the promise of our methods.
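The first step above, synchronizing the three modality videos through shared airway landmarks, can be illustrated with a toy sketch. The idea, under our own simplifying assumptions, is that each modality's landmarks pair a position along an airway path (the CT reference space) with a video frame index; frames between landmarks can then be aligned across modalities by interpolation. All names, positions, and frame indices below are invented for illustration and are not the paper's actual method or data.

```python
from bisect import bisect_right

def frame_at_position(landmarks, query_pos):
    """Estimate the frame index viewing a given airway-path position.

    landmarks: list of (airway_position, frame_index) pairs, sorted by
    position, for one modality's video. Positions outside the landmark
    range clamp to the first or last landmark frame.
    """
    positions = [p for p, _ in landmarks]
    frames = [f for _, f in landmarks]
    if query_pos <= positions[0]:
        return frames[0]
    if query_pos >= positions[-1]:
        return frames[-1]
    # Locate the bracketing landmark pair and interpolate linearly.
    i = bisect_right(positions, query_pos)
    p0, p1 = positions[i - 1], positions[i]
    f0, f1 = frames[i - 1], frames[i]
    t = (query_pos - p0) / (p1 - p0)
    return round(f0 + t * (f1 - f0))

# Hypothetical landmark tables: airway-path position (mm) -> frame index.
wlb = [(0.0, 0), (20.0, 150), (45.0, 420)]
afb = [(0.0, 0), (20.0, 180), (45.0, 500)]
nbi = [(0.0, 0), (20.0, 120), (45.0, 360)]

# Frames from all three modalities that view the same airway location.
pos = 30.0
sync = {name: frame_at_position(lm, pos)
        for name, lm in [("WLB", wlb), ("AFB", afb), ("NBI", nbi)]}
# e.g., sync == {"WLB": 258, "AFB": 308, "NBI": 216}
```

This toy version interpolates along a single airway path; the paper's registration operates against the full 3D airway tree and refines the alignment with a CT-video-based global registration method.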
Conference Presentation
© (2024) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Qi Chang, Vahid Daneshpajooh, Patrick D. Byrnes, Danish Ahmad, Jennifer Toth, Rebecca Bascom, and William E. Higgins "Synchronization and analysis of multimodal bronchoscopic airway exams for early lung cancer detection", Proc. SPIE 12928, Medical Imaging 2024: Image-Guided Procedures, Robotic Interventions, and Modeling, 129281B (29 March 2024); https://doi.org/10.1117/12.3004212
KEYWORDS: Video, 3D modeling, Bronchoscopy, Video processing, Lung cancer, Cancer detection, Data modeling