Sparsity-guided saliency detection for remote sensing images
11 September 2015
Traditional saliency detection can effectively locate candidate objects through an attentional mechanism rather than exhaustive object detection, and is therefore widely used on natural scene images. However, it may fail to extract salient objects accurately from remote sensing images, which have characteristics of their own, such as large data volumes, multiple resolutions, illumination variation, and complex texture structure. We propose a sparsity-guided saliency detection model for remote sensing images that uses sparse representation to obtain high-level global and background cues for saliency map integration. Specifically, the model first uses pixel-level global cues and background prior information to construct two dictionaries that characterize the global and background properties of remote sensing images. It then employs sparse representation to compute the high-level cues. Finally, a Bayesian formula integrates the saliency maps generated from the two types of high-level cues. Experimental results on remote sensing image datasets containing various objects under complex conditions demonstrate the effectiveness and feasibility of the proposed method.
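The pipeline the abstract describes (build dictionaries from cue patches, code each image patch against them, and fuse the resulting maps with a Bayesian formula) can be sketched roughly as follows. This is a toy illustration under stated assumptions, not the authors' implementation: least-squares coding stands in for their sparse representation, synthetic vectors replace real remote sensing patches, and both cue maps here are derived from background subsamples rather than one global and one background dictionary.

```python
import numpy as np

def reconstruction_saliency(patches, dictionary):
    """Saliency of each patch as its normalized reconstruction residual
    against a dictionary of atoms (columns). Least-squares coding is used
    as a stand-in for true sparse coding (e.g., OMP or Lasso)."""
    coefs, *_ = np.linalg.lstsq(dictionary, patches.T, rcond=None)
    residual = np.linalg.norm(patches.T - dictionary @ coefs, axis=0)
    span = residual.max() - residual.min()
    return (residual - residual.min()) / span if span > 0 else np.zeros_like(residual)

def bayesian_fusion(s1, s2, eps=1e-9):
    """Integrate two saliency maps by treating each value as a posterior
    probability of 'salient' and combining them pointwise via Bayes' rule."""
    agree = s1 * s2
    return agree / (agree + (1.0 - s1) * (1.0 - s2) + eps)

rng = np.random.default_rng(0)

# Toy data: 100 low-intensity "background" patches plus 5 bright outliers,
# each flattened to a 16-dim vector; real inputs would be image patches.
background = rng.normal(0.0, 0.1, (100, 16))
salient = rng.normal(3.0, 0.1, (5, 16))
patches = np.vstack([background, salient])

# Two dictionaries built from (hypothetical) background-prior patches;
# the paper instead constructs one global-cue and one background-cue dictionary.
D1 = background[:10].T   # 16 x 10 atom matrix
D2 = background[10:20].T

s1 = reconstruction_saliency(patches, D1)
s2 = reconstruction_saliency(patches, D2)
fused = bayesian_fusion(s1, s2)

# Outlier patches reconstruct poorly from background atoms, so they score high.
print(fused[:100].mean(), fused[100:].mean())
```

The fusion step rewards agreement: two maps that both rate a pixel highly push the fused value above either input, while disagreement pulls it toward the middle.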
Danpei Zhao, Jiajia Wang, Jun Shi, Zhiguo Jiang, "Sparsity-guided saliency detection for remote sensing images," Journal of Applied Remote Sensing 9(1), 095055 (11 September 2015). http://dx.doi.org/10.1117/1.JRS.9.095055

Keywords: Remote sensing; Associative arrays; Visual process modeling; RGB color model; Data modeling