In this paper we present a new method for object tracking initialization using background subtraction. We
propose an effective scheme for updating a background model adaptively in dynamic scenes. Unlike the traditional
methods that use the same "learning rate" for the entire frame or sequence, our method assigns a learning rate
for each pixel according to two parameters. The first parameter depends on the difference between the pixel
intensities of the background model and the current frame. The second parameter depends on the duration
of the pixel being classified as a background pixel. We also introduce a method to detect sudden illumination
changes and segment moving objects during these changes. Experimental results show significant improvements
in moving object detection in dynamic scenes, such as waving tree leaves and sudden illumination changes, at a much lower computational cost than the Gaussian mixture model.
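The abstract above says only that the per-pixel learning rate depends on two parameters: the model-vs-frame intensity difference and the time a pixel has remained background. The exact functional forms are not given, so the following is a minimal sketch under assumed forms (exponential weighting with an assumed scale `tau`, and assumed bounds `alpha_min`/`alpha_max`), not the paper's actual update rule:

```python
import numpy as np

def update_background(background, frame, mask, bg_duration,
                      alpha_min=0.01, alpha_max=0.25, tau=50.0):
    """Per-pixel adaptive background update (illustrative sketch).

    background, frame : float arrays of pixel intensities
    mask              : boolean array, True where the pixel was
                        classified as background in this frame
    bg_duration       : frames each pixel has stayed background
    """
    # Parameter 1: intensity difference between model and current frame.
    # Larger differences get smaller learning rates, so foreground
    # pixels do not corrupt the model too quickly.
    diff = np.abs(frame - background)
    alpha_diff = np.exp(-diff / tau)

    # Parameter 2: duration of background classification.
    # Long-stable background pixels adapt faster.
    bg_duration = np.where(mask, bg_duration + 1, 0)
    alpha_dur = 1.0 - np.exp(-bg_duration / tau)

    # Combine both factors into a bounded per-pixel learning rate.
    alpha = np.clip(alpha_min + alpha_diff * alpha_dur * alpha_max,
                    alpha_min, alpha_max)
    background = (1.0 - alpha) * background + alpha * frame
    return background, bg_duration
```

Because every pixel carries its own rate, a waving leaf (large, frequent differences) is absorbed slowly, while a long-stable region tracks gradual illumination drift quickly.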
In this paper we present new methods for object tracking initialization using automated moving object detection
based on background subtraction. The new methods are integrated into the real-time object tracking system
we previously proposed. Our proposed new background model updating method and adaptive thresholding are
used to produce a foreground object mask for object tracking initialization.
The traditional background subtraction method detects moving objects by subtracting the background model
from the current image. Compared to other common moving object detection algorithms, background subtraction
segments foreground objects more accurately and detects them even when they are motionless. However,
one drawback of traditional background subtraction is that it is susceptible to environmental changes, for
example, gradual or sudden illumination changes. The reason for this drawback is that it assumes a static background,
and hence a background model update is required for dynamic backgrounds. The major challenges then
are how to update the background model, and how to determine the threshold for classification of foreground and
background pixels. We proposed a method to determine the threshold automatically and dynamically depending
on the intensities of the pixels in the current frame and a method to update the background model with learning
rate depending on the differences of the pixels in the background model and the previous frame.
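The classification step itself can be sketched as follows. The abstract does not specify the thresholding rule, so the mean-plus-k-standard-deviations rule and the parameter `k` below are assumptions used purely to illustrate a threshold that adapts to the current frame's intensities:

```python
import numpy as np

def foreground_mask(frame, background, k=2.5):
    """Classify pixels by subtracting the background model (sketch).

    The threshold adapts per frame to the statistics of the
    difference image (hypothetical rule, not the paper's exact one).
    """
    diff = np.abs(frame.astype(float) - background.astype(float))
    # Adaptive threshold: mean plus k standard deviations of the
    # current frame's difference image.
    threshold = diff.mean() + k * diff.std()
    return diff > threshold
```

A fixed global threshold fails when illumination shifts the whole difference image; tying the threshold to the frame's own statistics keeps the decision boundary relative to current conditions.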
This paper presents new methods for efficient object tracking in video sequences using multiple features and
particle filtering. A histogram-based framework is used to describe the features. Histograms are useful because
they tolerate changes in object appearance: the appearance can vary while the histogram remains the same.
Particle filtering is used because it is very robust for non-linear and non-Gaussian dynamic state estimation
problems and performs well when clutter and occlusions are present. Color histogram based particle filtering is
the most common method used for object tracking. However, a single feature tracker loses track easily and can
track the wrong object. One popular remedy for this problem is to use multiple features. It has been shown that
using multiple features for tracking provides more accurate results, at the cost of increased computational complexity.
In this paper we address these problems by describing an efficient method for histogram computation. For better
tracking performance we also introduce a new observation likelihood model with dynamic parameter setting.
Experiments show that our proposed method is more accurate and more efficient than traditional color histogram
based particle filtering.
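The core of color-histogram particle filtering is weighting each particle by the similarity between its candidate histogram and a reference histogram, commonly via the Bhattacharyya coefficient. The sketch below shows that standard formulation; the bin count and the likelihood scale `sigma` are assumptions, and it does not reproduce the paper's dynamic parameter setting or efficient histogram computation:

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Quantized RGB histogram of an image patch (H x W x 3, uint8)."""
    # Cast to int before quantizing so the bin index arithmetic
    # cannot overflow the uint8 range.
    idx = (patch.astype(int) // (256 // bins)).reshape(-1, 3)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def likelihood(hist, ref_hist, sigma=0.2):
    """Observation likelihood from the Bhattacharyya distance."""
    bc = np.sum(np.sqrt(hist * ref_hist))   # Bhattacharyya coefficient
    d2 = 1.0 - bc                           # squared distance
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

In a full tracker, each particle's state proposes an image region, `likelihood` scores it against the reference model, and the particles are resampled in proportion to those weights.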
In this paper, we investigate spatial and temporal models for texture analysis and synthesis. The goal is to use
these models to increase the coding efficiency for video sequences containing textures. The models are used to
segment texture regions in a frame at the encoder and synthesize the textures at the decoder. These methods
can be incorporated into a conventional video coder (e.g., H.264): the regions to be modeled by the textures
are not coded in the usual manner; instead, the texture model parameters are sent to the decoder as side
information. We show that this approach can reduce the data rate by as much as 15%.