6 August 2019 Superpixel-based video saliency detection via the fusion of spatiotemporal saliency and temporal coherency
Abstract

We propose a model to effectively detect salient objects in various videos; the proposed framework, spatiotemporal saliency and coherency (STSC), consists of two modules that capture spatiotemporal saliency and temporal coherency information in the superpixel domain, respectively. We first extract straightforward gradient contrasts (such as the color gradient and motion gradient) as low-level features, from which high-level spatiotemporal gradient features are computed; the spatiotemporal saliency is then obtained by computing the average weighted geodesic distance among these features. The temporal coherency, measured by the motion entropy, is used to eliminate false foreground superpixels that result from inaccurate optical flow and confusable appearance. Finally, the two discriminative video saliency indicators are combined to identify the salient regions. Extensive quantitative and qualitative experiments on four public datasets (FBMS, DAVIS, SegTrackV2, and ViSal) demonstrate the superiority of the proposed method over current state-of-the-art methods.
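As an illustration of the temporal-coherency idea, the sketch below computes a motion entropy for one superpixel from the Shannon entropy of its optical-flow direction histogram: coherent motion (all flow vectors pointing the same way) yields low entropy, while scattered directions, as produced by inaccurate optical flow, yield high entropy. The bin count, normalization, and function name are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def motion_entropy(flow_vectors, n_bins=8):
    """Shannon entropy of the motion-direction histogram of one superpixel.

    flow_vectors : (N, 2) array of per-pixel optical-flow samples (dx, dy).
    Low entropy -> coherent motion (likely a true foreground region);
    high entropy -> scattered directions (likely optical-flow noise).
    The 8-bin orientation histogram is an illustrative choice.
    """
    # Flow direction of each sample, in [-pi, pi].
    angles = np.arctan2(flow_vectors[:, 1], flow_vectors[:, 0])
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()          # normalize to a probability distribution
    p = p[p > 0]                   # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

A superpixel whose entropy exceeds a chosen threshold would then be rejected as false foreground before the fusion step.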

© 2019 Society of Photo-Optical Instrumentation Engineers (SPIE) 0091-3286/2019/$28.00 © 2019 SPIE
Yandi Li, Xiping Xu, Ning Zhang, and Enyu Du "Superpixel-based video saliency detection via the fusion of spatiotemporal saliency and temporal coherency," Optical Engineering 58(8), 083101 (6 August 2019). https://doi.org/10.1117/1.OE.58.8.083101
Received: 11 March 2019; Accepted: 17 July 2019; Published: 6 August 2019
JOURNAL ARTICLE
11 PAGES

