24 March 2018

Visual saliency detection based on in-depth analysis of sparse representation
Visual saliency detection has received great attention in recent years because it facilitates a wide range of applications in computer vision. A variety of saliency models have been proposed under different assumptions; among them, saliency detection via sparse representation is one of the more recently developed approaches. However, most existing sparse representation-based saliency detection methods exploit only partial characteristics of sparse representation and lack in-depth analysis, which may limit their detection performance. Motivated by this, this paper proposes an algorithm for detecting visual saliency based on an in-depth analysis of sparse representation. A number of discriminative dictionaries are first learned from randomly sampled image patches by means of inner product-based dictionary atom classification. Then, the input image is partitioned into many image patches, and these patches are classified into salient and nonsalient ones based on an in-depth analysis of their sparse coding coefficients. Afterward, sparse reconstruction errors are calculated for the salient and nonsalient patch sets. By investigating these sparse reconstruction errors, the most salient atoms, which tend to come from the most salient region, are identified and removed from the discriminative dictionaries. Finally, an effective method is employed to generate the saliency map with the reduced dictionaries. Comprehensive evaluations on publicly available datasets and comparisons with several state-of-the-art approaches demonstrate the effectiveness of the proposed algorithm.
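The pipeline above scores image patches by how poorly a learned dictionary reconstructs them under a sparsity constraint. The following is a minimal sketch of that core idea only, not the authors' full method: it assumes a dictionary `D` (unit-norm columns) is already learned, sparse-codes each patch with a simple greedy orthogonal matching pursuit, and uses the normalized reconstruction error as a per-patch saliency score. The function names and the fixed sparsity level `k` are illustrative assumptions.

```python
import numpy as np

def omp(D, x, k, tol=1e-10):
    """Greedy orthogonal matching pursuit: sparse-code signal x over
    dictionary D (columns are unit-norm atoms), using at most k atoms."""
    residual = x.copy()
    idx, coef = [], np.zeros(0)
    for _ in range(k):
        if np.linalg.norm(residual) < tol:
            break  # patch already well reconstructed
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        idx.append(j)
        # re-fit coefficients over all selected atoms (least squares)
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

def patch_saliency(D, patches, k=3):
    """Saliency score per patch = sparse reconstruction error,
    min-max normalized to [0, 1]. Patches the dictionary explains
    well score low; poorly explained (salient) patches score high."""
    errs = np.array([np.linalg.norm(p - D @ omp(D, p, k)) for p in patches])
    rng = errs.max() - errs.min()
    return (errs - errs.min()) / rng if rng > 0 else np.zeros_like(errs)
```

In this sketch a patch that is itself a dictionary atom reconstructs exactly and gets score 0, while a patch far from the dictionary's span scores near 1; the paper's method additionally learns the dictionaries discriminatively and prunes the most salient atoms before the final map is generated.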
© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE)
Xin Wang, Siqiu Shen, Chen Ning, "Visual saliency detection based on in-depth analysis of sparse representation," Optical Engineering 57(3), 033108 (24 March 2018). https://doi.org/10.1117/1.OE.57.3.033108
Received: 12 October 2017; Accepted: 2 March 2018; Published: 24 March 2018
