Generative adversarial networks for specular highlight removal in endoscopic images
12 March 2018
Abstract
Providing the surgeon with the right assistance at the right time during minimally invasive surgery requires computer-assisted surgery systems to perceive and understand the current surgical scene. This can be achieved by analyzing the endoscopic image stream. However, endoscopic images often contain artifacts, such as specular highlights, which can hinder further processing steps, e.g., stereo reconstruction, image segmentation, and visual instrument tracking. Hence, correcting them is a necessary preprocessing step. In this paper, we propose a machine learning approach for automatic specular highlight removal from a single endoscopic image. We train a residual convolutional neural network (CNN) to localize and remove specular highlights in endoscopic images using weakly labeled data. The labels merely indicate whether an image does or does not contain a specular highlight. To train the CNN, we employ a generative adversarial network (GAN), which introduces an adversary to judge the performance of the CNN during training. We extend this approach by (1) adding a self-regularization loss to reduce image modification in non-specular areas and by (2) including a further network to automatically generate paired training data from which the CNN can learn. A comparative evaluation shows that our approach outperforms model-based methods for specular highlight removal in endoscopic images.
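The combined objective described in the abstract, an adversarial term plus a self-regularization term, can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the function names, the non-saturating form of the adversarial loss, and the weighting factor `lam` are assumptions for illustration.

```python
import numpy as np

def self_regularization_loss(inp, out):
    # L1 distance between input and output images; penalizing this
    # discourages the network from modifying non-specular areas.
    return np.mean(np.abs(out - inp))

def adversarial_loss(disc_scores):
    # Non-saturating generator loss -log D(G(x)), averaged over the
    # batch; D is assumed to output probabilities in (0, 1).
    eps = 1e-8  # numerical stability for log
    return -np.mean(np.log(disc_scores + eps))

def generator_loss(inp, out, disc_scores, lam=10.0):
    # Total objective: adversarial term plus weighted self-regularization
    # term (lam is a hypothetical trade-off weight).
    return adversarial_loss(disc_scores) + lam * self_regularization_loss(inp, out)

# Toy example: a 4x4 grayscale image where the network changed one pixel.
x = np.zeros((4, 4))
y = x.copy()
y[1, 1] = 0.5
scores = np.array([0.9])  # discriminator's score for the output image
loss = generator_loss(x, y, scores)
```

The self-regularization weight `lam` controls how strongly the network is held to the input outside specular regions; too large a value can prevent highlight removal, too small a value lets the adversarial term alter the whole image.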
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Isabel Funke, Sebastian Bodenstedt, Carina Riediger, Jürgen Weitz, Stefanie Speidel, "Generative adversarial networks for specular highlight removal in endoscopic images", Proc. SPIE 10576, Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling, 1057604 (12 March 2018); https://doi.org/10.1117/12.2293755
Proceedings: 9 pages + presentation
