Video saliency detection based on low-level saliency fusion and saliency-aware geodesic (21 January 2019)
We present a spatiotemporal saliency detection method for videos. In contrast to previous methods that focus on a single underlying saliency cue or ignore motion information, the proposed method exploits both appearance information, based on spatial edges and spatial color saliency, and motion information, based on temporal motion boundaries, as indicators of foreground object locations. Spatial color saliency is obtained by fusing three color features: color edge connectivity, color rarity, and color compactness. The color saliency is then smoothed to suppress background noise and further boost detection accuracy. Next, we propose a low-level saliency fusion strategy that complementarily combines the smoothed color saliency, spatial edge, and temporal motion boundary cues to produce high-accuracy low-level saliency. From this low-level saliency, we generate framewise spatiotemporal saliency maps using a geodesic distance, and high-quality results are obtained through the geodesic distance to the background area in subsequent frames. Extensive quantitative and qualitative experiments on three public video datasets demonstrate the superiority of the proposed method over state-of-the-art algorithms.
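The geodesic propagation step can be illustrated with a minimal sketch. This is an illustration of the general saliency-aware geodesic idea, not the paper's exact formulation: each pixel's saliency is taken as its geodesic (shortest-path) distance to a set of background seed pixels on a 4-connected grid, where traversing an edge costs the mean low-level saliency of its two endpoints. Background pixels then score zero, while pixels reachable only by crossing high-saliency (likely foreground) regions accumulate large distances. The function name, edge-cost choice, and seeding scheme below are assumptions for demonstration.

```python
import heapq

def geodesic_saliency(low_level, bg_seeds):
    """Multi-source Dijkstra on a 4-connected pixel grid.

    low_level : 2-D list of floats, the fused low-level saliency map.
    bg_seeds  : iterable of (y, x) background seed coordinates.
    Returns a 2-D list where each entry is the geodesic distance of that
    pixel to the nearest background seed (an assumed illustrative edge
    cost: the mean low-level saliency of the two endpoint pixels).
    """
    h, w = len(low_level), len(low_level[0])
    inf = float("inf")
    dist = [[inf] * w for _ in range(h)]
    heap = []
    for y, x in bg_seeds:            # all background seeds start at distance 0
        dist[y][x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y][x]:           # stale heap entry, skip
            continue
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + 0.5 * (low_level[y][x] + low_level[ny][nx])
                if nd < dist[ny][nx]:
                    dist[ny][nx] = nd
                    heapq.heappush(heap, (nd, ny, nx))
    return dist
```

In a typical boundary-prior setup the seeds would be the image border pixels, so a uniform background region stays near zero while a compact high-saliency interior blob is pushed toward large geodesic distances.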
© 2019 SPIE and IS&T, 1017-9909/2019/$25.00
Weisheng Li, Siqin Feng, Hua-Ping Guan, Ziwei Zhan, and Cheng Gong "Video saliency detection based on low-level saliency fusion and saliency-aware geodesic," Journal of Electronic Imaging 28(1), 013009 (21 January 2019).
Received: 20 July 2018; Accepted: 19 December 2018; Published: 21 January 2019

