Improving video foreground segmentation and propagation through multifeature fusion
14 December 2015
Xiaoliu Cheng, Yan Wang, Xiaobing Yuan, Baoqing Li, Yuanyuan Ding, Zebin Zhang
Abstract
Video foreground segmentation lays the foundation for many high-level visual applications. However, extracting effective features for foreground propagation and intelligently fusing the different sources of information remain challenging problems. We address both problems: given an initially labeled frame, the goal is to propagate the object accurately across the remaining frames. Our contributions are twofold: (1) we describe the object with superpixel-based appearance and motion cues from both global and local viewpoints, and further introduce objective confidences for the appearance and motion features to balance the different cues; (2) all the features and their confidences are fused by an improved Dempster–Shafer evidence theory rather than the empirical parameter tuning used in many algorithms. Experimental results on the well-known SegTrack and SegTrack v2 datasets demonstrate that our algorithm yields high-quality segmentations.
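The fusion step rests on Dempster–Shafer evidence theory. As a hedged illustration only (not the paper's exact formulation), the sketch below applies Dempster's rule of combination to two hypothetical mass functions for a single superpixel, one from appearance and one from motion, over the frame of discernment {foreground, background}; all mass values here are invented for the example.

```python
# Sketch of Dempster's rule of combination for two evidence sources.
# Mass-function keys are frozensets of hypotheses; values sum to 1.

def combine_dempster(m1, m2):
    """Fuse two basic belief assignments with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Normalize by the non-conflicting mass (1 - K).
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

FG, BG = frozenset({"fg"}), frozenset({"bg"})
EITHER = FG | BG  # ignorance: mass committed to neither hypothesis

# Hypothetical appearance and motion evidence for one superpixel.
appearance = {FG: 0.6, BG: 0.1, EITHER: 0.3}
motion = {FG: 0.5, BG: 0.2, EITHER: 0.3}

fused = combine_dempster(appearance, motion)
```

Because both sources lean toward foreground, the fused mass on {fg} exceeds either input's, while the residual ignorance mass shrinks; this is the qualitative behavior that lets the fusion replace hand-tuned weighting parameters.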
© 2015 SPIE and IS&T 1017-9909/2015/$25.00
Xiaoliu Cheng, Yan Wang, Xiaobing Yuan, Baoqing Li, Yuanyuan Ding, and Zebin Zhang "Improving video foreground segmentation and propagation through multifeature fusion," Journal of Electronic Imaging 24(6), 063017 (14 December 2015). https://doi.org/10.1117/1.JEI.24.6.063017
Published: 14 December 2015
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS: Video, Image segmentation, Optical flow, Motion models, Binary data, Fusion energy, Algorithms