Robust video object tracking via Bayesian model averaging-based feature fusion
Yi Dai, Bin Liu
Abstract
We are concerned with tracking an object of interest in a video stream. We propose an algorithm that is robust against occlusion, the presence of confusing colors, abrupt changes in the object's features, and changes in scale. The algorithm is developed within a Bayesian modeling framework. A state-space model captures the temporal correlation in the sequence of frame images by modeling the underlying dynamics of the tracking system, and a Bayesian model averaging (BMA) strategy is proposed for fusing multiclue information in the observations. Any number of object features can be involved in the proposed framework; every feature represents one source of information to be fused and is associated with its own observation model. State inference is performed with particle filter methods. In comparison with related approaches, the BMA-based tracker is shown to have robustness, expressivity, and comprehensibility.
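The abstract describes a particle filter whose observation likelihood fuses several per-feature observation models via Bayesian model averaging. The following is a minimal illustrative sketch of that idea, not the authors' implementation: function and variable names are hypothetical, and the specific choice of computing posterior model probabilities from the particle-weighted evidence of each feature model is one plausible reading of the BMA strategy.

```python
import numpy as np


def bma_weight_update(weights, feature_likelihoods, model_priors):
    """One particle-weight update using a BMA-fused observation likelihood.

    This is an illustrative sketch, not the paper's exact algorithm.

    weights:             (N,) normalized particle weights after propagation
    feature_likelihoods: list of K arrays, each (N,), giving p(z_k | x_i)
                         under the observation model of feature k
    model_priors:        (K,) prior probabilities of the K feature models
    Returns the updated particle weights and the posterior model probabilities.
    """
    L = np.stack(feature_likelihoods)        # (K, N)
    # Evidence of each feature model, approximated over the particle set:
    # p(z_k) ~= sum_i w_i * p(z_k | x_i)
    evidence = L @ weights                   # (K,)
    # Posterior model probabilities (the BMA fusion weights)
    post = model_priors * evidence
    post = post / post.sum()
    # Model-averaged likelihood for each particle
    fused_likelihood = post @ L              # (N,)
    new_weights = weights * fused_likelihood
    new_weights = new_weights / new_weights.sum()
    return new_weights, post
```

In this sketch, a feature model whose likelihoods agree with the bulk of the particle cloud earns a larger posterior probability and therefore more influence on the fused weight update, which is the mechanism that makes the fusion robust when one cue (for example, color under occlusion) becomes unreliable.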
© 2016 Society of Photo-Optical Instrumentation Engineers (SPIE) 0091-3286/2016/$25.00 © 2016 SPIE
Yi Dai and Bin Liu "Robust video object tracking via Bayesian model averaging-based feature fusion," Optical Engineering 55(8), 083102 (5 August 2016). https://doi.org/10.1117/1.OE.55.8.083102
Published: 5 August 2016
CITATIONS: Cited by 27 scholarly publications.
KEYWORDS: Optical tracking, Detection and tracking algorithms, Video, Visual process modeling, Video surveillance, Optical engineering, RGB color model
