Video co-saliency detection
29 August 2016
Proceedings Volume 10033, Eighth International Conference on Digital Image Processing (ICDIP 2016); 100335G (2016) https://doi.org/10.1117/12.2245113
Event: Eighth International Conference on Digital Image Processing (ICDIP 2016), 2016, Chengdu, China
Abstract
In this paper, a novel co-saliency model is proposed to detect co-salient objects in multiple videos. On the basis of superpixel segmentation results, we fuse the temporal saliency and spatial saliency with a superpixel-level object prior to generate the intra saliency map for each video frame. Then the video-level global object/background histogram is calculated for each video based on the adaptive thresholding results of the intra saliency maps, and the seed saliency maps are generated using similarity measures between superpixels and the global object/background histograms. Finally, the co-saliency maps are generated by a recovery process that propagates the seed saliency measures to all regions in each video frame. Experimental results on a public video dataset show that the proposed video co-saliency model consistently outperforms state-of-the-art video saliency and image co-saliency models.
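The per-frame pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `features` stands for a single scalar feature per superpixel (a simplification; a real system would use per-superpixel color histograms), and the multiplicative fusion rule, the scaled-mean threshold, and the Gaussian affinity used in the recovery step are all assumptions filled in where the abstract leaves details unspecified.

```python
import numpy as np

def intra_saliency(temporal, spatial, object_prior):
    """Fuse temporal and spatial saliency with the superpixel-level object
    prior. Multiplicative fusion is an assumed choice; the abstract does
    not state the exact fusion rule."""
    return temporal * spatial * object_prior

def adaptive_threshold(saliency):
    """Adaptive threshold on the intra saliency values; a scaled mean is a
    common heuristic and an assumption here."""
    return 1.5 * saliency.mean()

def global_histograms(features, saliency, thresh, bins=8):
    """Video-level global object/background histograms, accumulated from
    superpixels labeled object or background by the adaptive threshold."""
    obj = features[saliency >= thresh]
    bg = features[saliency < thresh]
    h_obj, _ = np.histogram(obj, bins=bins, range=(0.0, 1.0), density=True)
    h_bg, _ = np.histogram(bg, bins=bins, range=(0.0, 1.0), density=True)
    return h_obj, h_bg

def seed_saliency(features, h_obj, h_bg, bins=8):
    """Seed saliency: how much more similar each superpixel is to the
    global object histogram than to the background histogram."""
    idx = np.clip((features * bins).astype(int), 0, bins - 1)
    p_obj, p_bg = h_obj[idx], h_bg[idx]
    return p_obj / (p_obj + p_bg + 1e-8)

def recover(seed, features, sigma=0.1):
    """Recovery step: propagate seed saliency to all superpixels in the
    frame, weighted by feature similarity (Gaussian affinity assumed)."""
    d2 = (features[:, None] - features[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ seed) / w.sum(axis=1)
```

In this sketch the output of `recover` is a normalized weighted average of the seeds, so the final co-saliency values stay in [0, 1] whenever the seeds do.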
© (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yufeng Xie, Linwei Ye, Zhi Liu, Xuemei Zou, "Video co-saliency detection", Proc. SPIE 10033, Eighth International Conference on Digital Image Processing (ICDIP 2016), 100335G (29 August 2016); https://doi.org/10.1117/12.2245113
Proceedings paper, 6 pages