Open Access
Improving video foreground segmentation with an object-like pool
Xiaoliu Cheng, Wei Lv, Huawei Liu, Xing You, Baoqing Li, Xiaobing Yuan
Abstract
Foreground segmentation in video frames is valuable for object and activity recognition, but existing approaches often demand training data or initial annotation, which is expensive and inconvenient to obtain. We propose an automatic, unsupervised method for foreground segmentation in a short, unlabeled video. Pixel-level optical flow and binary mask features are aggregated into probabilistic superpixels, which are then used to build a superpixel-level conditional random field (CRF) that labels each superpixel as foreground or background. Exploiting the fact that the appearance and motion features of a moving object are, in general, temporally and spatially coherent, we construct an object-like pool and a background-like pool from previously segmented frames. These continuously updated pools serve as "prior" knowledge for the current frame and provide a reliable way to learn the object's features. Experimental results demonstrate that our approach outperforms current methods both qualitatively and quantitatively.
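The pool-based labeling idea in the abstract can be sketched as follows. This is a minimal, hypothetical simplification: it keeps only a unary (pool-distance) term and omits the CRF's pairwise smoothness term, and all function and parameter names (`unary_costs`, `label_and_update`, `margin`) are invented for illustration, not taken from the paper.

```python
import numpy as np

def unary_costs(features, obj_pool, bg_pool):
    """Per-superpixel cost of the foreground vs. background label,
    taken as the Euclidean distance to the nearest pool exemplar
    (a hypothetical stand-in for the paper's CRF unary term)."""
    def nn_dist(pool):
        # pairwise distances from each superpixel feature to every
        # pool exemplar, then the minimum per superpixel
        d = np.linalg.norm(features[:, None, :] - pool[None, :, :], axis=2)
        return d.min(axis=1)
    return nn_dist(obj_pool), nn_dist(bg_pool)

def label_and_update(features, obj_pool, bg_pool, margin=0.5):
    """Label each superpixel foreground (1) or background (0), then
    append confidently labeled features to the matching pool so the
    pools act as continuously updated priors for the next frame."""
    c_obj, c_bg = unary_costs(features, obj_pool, bg_pool)
    labels = (c_obj < c_bg).astype(int)
    # only superpixels with a clear cost margin update the pools
    confident = np.abs(c_obj - c_bg) > margin
    new_obj = features[(labels == 1) & confident]
    new_bg = features[(labels == 0) & confident]
    if len(new_obj):
        obj_pool = np.vstack([obj_pool, new_obj])
    if len(new_bg):
        bg_pool = np.vstack([bg_pool, new_bg])
    return labels, obj_pool, bg_pool
```

In a full pipeline, `features` would combine per-superpixel optical-flow and appearance statistics, and the labeling would be refined by a pairwise CRF term encouraging neighboring superpixels to share labels.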
CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Xiaoliu Cheng, Wei Lv, Huawei Liu, Xing You, Baoqing Li, and Xiaobing Yuan "Improving video foreground segmentation with an object-like pool," Journal of Electronic Imaging 24(2), 023034 (23 April 2015). https://doi.org/10.1117/1.JEI.24.2.023034
Published: 23 April 2015
CITATIONS
Cited by 3 scholarly publications.
KEYWORDS
Binary data, Optical flow, Video, Image segmentation, Video surveillance, Visualization, Spatial coherence