Improving video foreground segmentation and propagation through multifeature fusion
14 December 2015
Abstract
Video foreground segmentation lays the foundation for many high-level visual applications. However, how to extract effective features for foreground propagation and how to intelligently fuse different sources of information remain challenging problems. We address these problems, with the goal of accurately propagating the object across the remaining frames given an initially labeled frame. Our contributions are summarized as follows: (1) We describe the object with superpixel-based appearance and motion cues from both global and local viewpoints, and introduce objective confidences for the appearance and motion features to balance the different cues. (2) All features and their confidences are fused by an improved Dempster–Shafer evidence theory rather than the empirical parameter tuning used in many algorithms. Experimental results on the well-known SegTrack and SegTrack v2 datasets demonstrate that our algorithm yields high-quality segmentations.
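The fusion step builds on Dempster–Shafer evidence theory. As a rough illustration of the underlying combination rule only (not the paper's improved variant or its confidence weighting, and with hypothetical mass values), the sketch below fuses a superpixel's appearance and motion evidence over the frame of discernment {foreground, background} using Dempster's rule of combination.

```python
# Minimal sketch of Dempster's rule of combination over the frame {F, B}
# (foreground, background). The mass values and function names here are
# hypothetical and only illustrate the basic rule, not the paper's method.

def dempster_combine(m1, m2):
    """Combine two mass functions defined on subsets of {F, B}.

    Each mass function is a dict keyed by frozensets:
    {F}, {B}, and {F, B} (total ignorance).
    """
    frame = [frozenset({'F'}), frozenset({'B'}), frozenset({'F', 'B'})]
    combined = {s: 0.0 for s in frame}
    conflict = 0.0
    for a in frame:
        for b in frame:
            inter = a & b
            product = m1.get(a, 0.0) * m2.get(b, 0.0)
            if inter:
                combined[inter] += product
            else:
                conflict += product  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    # Normalize by (1 - K) to redistribute the conflicting mass.
    return {s: v / (1.0 - conflict) for s, v in combined.items()}


if __name__ == "__main__":
    # Hypothetical per-superpixel evidence: appearance strongly favors
    # foreground, motion is less certain.
    m_appearance = {frozenset({'F'}): 0.7, frozenset({'B'}): 0.1,
                    frozenset({'F', 'B'}): 0.2}
    m_motion = {frozenset({'F'}): 0.5, frozenset({'B'}): 0.3,
                frozenset({'F', 'B'}): 0.2}
    fused = dempster_combine(m_appearance, m_motion)
    print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```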
© 2015 SPIE and IS&T
Xiaoliu Cheng, Yan Wang, Xiaobing Yuan, Baoqing Li, Yuanyuan Ding, and Zebin Zhang, "Improving video foreground segmentation and propagation through multifeature fusion," Journal of Electronic Imaging 24(6), 063017 (14 December 2015). https://doi.org/10.1117/1.JEI.24.6.063017
Journal article, 12 pages

