Robust video-frame classification for bronchoscopy
8 March 2019
Abstract
During bronchoscopy, a physician uses the endobronchial video to help navigate and observe the inner airways of a patient's lungs for lung cancer assessment. After the procedure is completed, the video typically contains a significant number of uninformative frames. A video frame is uninformative when it is too dark, too blurry, or indistinguishable due to a build-up of mucus, blood, or water within the airways. We develop a robust and automatic system, consisting of two distinct approaches, to classify each frame in an endobronchial video sequence as informative or uninformative. Our first approach, referred to as the Classifier Approach, uses image-processing techniques and a support vector machine, while our second approach, the Deep-Learning Approach, draws upon a convolutional neural network for video-frame classification. Using the Classifier Approach, we achieved an accuracy of 78.8%, a sensitivity of 93.9%, and a specificity of 62.8%. The Deep-Learning Approach gave slightly improved performance, with an accuracy of 87.3%, a sensitivity of 87.1%, and a specificity of 87.6%.
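The Classifier Approach above rests on image-processing cues such as darkness and blur computed per frame. A minimal sketch of such frame-quality features, assuming grayscale uint8 frames and illustrative threshold values (the paper's actual feature set and SVM stage are not reproduced here):

```python
import numpy as np

def frame_features(frame):
    """Simple quality cues for a grayscale uint8 frame (H x W).

    Returns (mean brightness, variance of the Laplacian as a sharpness proxy).
    """
    f = frame.astype(float)
    brightness = f.mean()
    # 4-neighbour Laplacian on the interior; low variance suggests a blurry frame.
    lap = (-4.0 * f[1:-1, 1:-1]
           + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    sharpness = lap.var()
    return brightness, sharpness

def is_informative(frame, min_brightness=30.0, min_sharpness=50.0):
    """Threshold rule standing in for the SVM decision (thresholds are illustrative)."""
    b, s = frame_features(frame)
    return b >= min_brightness and s >= min_sharpness
```

In the paper these kinds of cues would feed a trained support vector machine rather than fixed thresholds; the Deep-Learning Approach instead trains a convolutional neural network directly on the frame pixels.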
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Matthew I. McTaggart and William E. Higgins "Robust video-frame classification for bronchoscopy", Proc. SPIE 10951, Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling, 109511Q (8 March 2019); https://doi.org/10.1117/12.2507290
CITATIONS
Cited by 2 scholarly publications.
KEYWORDS: Video, Bronchoscopy, Video processing, Convolutional neural networks, Specular reflections, Lung cancer, Lung
