Paper
Saliency detection based on non-local neural networks in low-contrast images
Published: 3 January 2020
Huimin Zou, Juanjuan He, Song Xiang, Ziqi Zhu
Proceedings Volume 11373, Eleventh International Conference on Graphics and Image Processing (ICGIP 2019); 113730G (2020) https://doi.org/10.1117/12.2557743
Event: Eleventh International Conference on Graphics and Image Processing, 2019, Hangzhou, China
Abstract
Saliency detection models are widely used in many fields of computer vision, yet most existing models are not applicable to foggy scenes. Because they depend heavily on high-level features extracted by deep learning or on handcrafted features, they cannot effectively highlight salient targets in foggy images. In this paper, we present a saliency detection model for foggy images. The model extracts non-local features that are jointly learned with local features under a unified deep learning framework. The key idea is to hierarchically combine non-local modules with local contrast processing blocks, so as to provide a robust representation of saliency information for foggy images, which have a low signal-to-noise ratio. Experiments have been conducted on three challenging datasets and on our own foggy image dataset of dynamic objects. Compared with state-of-the-art models, our model achieves better performance.
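
For readers unfamiliar with the non-local operation the abstract refers to, below is a minimal PyTorch sketch of the standard embedded-Gaussian non-local block (Wang et al., 2018) that such models typically build on. It illustrates the general technique only; the paper's actual block, its channel sizes, and how it is hierarchically coupled with the local contrast processing blocks are not given in the abstract, so all layer names and dimensions here are illustrative assumptions.

    # Sketch of a standard embedded-Gaussian non-local block (Wang et al., 2018).
    # Not the authors' implementation; channel sizes are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NonLocalBlock2D(nn.Module):
        def __init__(self, in_channels, inter_channels=None):
            super().__init__()
            self.inter_channels = inter_channels or max(in_channels // 2, 1)
            # 1x1 convolutions produce the query/key/value embeddings.
            self.theta = nn.Conv2d(in_channels, self.inter_channels, kernel_size=1)
            self.phi = nn.Conv2d(in_channels, self.inter_channels, kernel_size=1)
            self.g = nn.Conv2d(in_channels, self.inter_channels, kernel_size=1)
            # Project back to the input channel count for the residual connection.
            self.w = nn.Conv2d(self.inter_channels, in_channels, kernel_size=1)

        def forward(self, x):
            b, c, h, w = x.shape
            n = h * w
            theta = self.theta(x).view(b, self.inter_channels, n).permute(0, 2, 1)  # (b, n, c')
            phi = self.phi(x).view(b, self.inter_channels, n)                        # (b, c', n)
            g = self.g(x).view(b, self.inter_channels, n).permute(0, 2, 1)           # (b, n, c')
            # Affinity between every pair of spatial positions (global context).
            attn = F.softmax(torch.bmm(theta, phi), dim=-1)                          # (b, n, n)
            y = torch.bmm(attn, g).permute(0, 2, 1).view(b, self.inter_channels, h, w)
            # Residual connection keeps the local features intact.
            return x + self.w(y)

Because every spatial position attends to all others, such a block can aggregate long-range context, which is what makes it attractive for low-contrast foggy images where local cues alone are weak.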
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Huimin Zou, Juanjuan He, Song Xiang, and Ziqi Zhu "Saliency detection based on non-local neural networks in low-contrast images", Proc. SPIE 11373, Eleventh International Conference on Graphics and Image Processing (ICGIP 2019), 113730G (3 January 2020); https://doi.org/10.1117/12.2557743
Proceedings paper, 10 pages.

