Robust background modeling for enhancing object tracking in video
Richard J. Wood, David Reed, Janet Lepanto, and John M. Irvine
Abstract
Automated event recognition in video data has numerous practical applications. The ability to recognize events in practice depends on accurate tracking of objects in the video data, and scene complexity has a large effect on tracker performance. Background models can address this problem by providing a good estimate of the image region surrounding the object of interest. However, the utility of a background model depends on how accurately it represents current imaging conditions. Changing imaging conditions, such as lighting and weather, render the background model inaccurate and degrade tracker performance. As a preprocessing step, developing a set of robust background models can substantially improve system performance. We present an approach to robustly modeling the background as a function of the data acquisition conditions. We describe the formulation of these models and discuss model selection in the context of real-time processing. Using results from a recent experiment, we demonstrate empirically the performance benefits of robust background modeling.
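The abstract does not give the authors' formulation. As a minimal sketch of the general idea, assuming a library of per-pixel running-average background models indexed by acquisition condition (e.g., a lighting label) and a simple best-match selection rule applied before background subtraction, something like the following could be used. All class, method, and parameter names here are illustrative, not the paper's.

```python
import numpy as np


class ConditionedBackgroundLibrary:
    """Hypothetical sketch: one running-average background model per acquisition condition."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha   # learning rate for the running average (assumed value)
        self.models = {}     # condition label -> float32 background image

    def update(self, condition, frame):
        """Blend a new frame into the background model for the given condition."""
        frame = frame.astype(np.float32)
        if condition not in self.models:
            self.models[condition] = frame.copy()
        else:
            bg = self.models[condition]
            self.models[condition] = (1.0 - self.alpha) * bg + self.alpha * frame

    def select(self, frame):
        """Pick the stored model with the smallest mean absolute difference
        from the current frame (a crude stand-in for condition matching)."""
        frame = frame.astype(np.float32)
        best_label, best_err = None, np.inf
        for label, bg in self.models.items():
            err = np.mean(np.abs(frame - bg))
            if err < best_err:
                best_label, best_err = label, err
        return best_label

    def foreground_mask(self, frame, threshold=25.0):
        """Background-subtract against the best-matching model to highlight
        moving objects for the tracker."""
        label = self.select(frame)
        if label is None:
            return np.zeros(frame.shape[:2], dtype=bool)
        diff = np.abs(frame.astype(np.float32) - self.models[label])
        if diff.ndim == 3:          # collapse color channels if present
            diff = diff.mean(axis=2)
        return diff > threshold


if __name__ == "__main__":
    lib = ConditionedBackgroundLibrary()
    rng = np.random.default_rng(0)
    day = rng.integers(100, 140, size=(120, 160), dtype=np.uint8)
    night = rng.integers(10, 40, size=(120, 160), dtype=np.uint8)
    lib.update("day", day)
    lib.update("night", night)

    # A daytime frame with a bright "object" patch: the day model is selected
    # and only the patch stands out in the foreground mask.
    frame = day.copy()
    frame[40:60, 70:90] = 255
    mask = lib.foreground_mask(frame)
    print(lib.select(frame), int(mask.sum()))
```

In this sketch, keeping several condition-specific models and selecting among them at runtime is what allows the background estimate to stay valid as lighting or weather changes; the selection step is deliberately cheap so it could run as a real-time preprocessing stage ahead of the tracker.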
© 2014 Society of Photo-Optical Instrumentation Engineers (SPIE)
Richard J. Wood, David Reed, Janet Lepanto, and John M. Irvine, "Robust background modeling for enhancing object tracking in video," Proc. SPIE 9089, Geospatial InfoFusion and Video Analytics IV; and Motion Imagery for ISR and Situational Awareness II, 908902 (19 June 2014); https://doi.org/10.1117/12.2047258