We propose a stable scale-adaptive tracking method that uses the centroids of the target colors for both target localization and scale adaptation. Because the centroids inherently carry spatial information, a direct relationship can be established between them and the scale of the target region. After per-bin zooming factors are computed, unreliable factors are filtered out, and the remaining factors yield a reliable zooming factor that determines the new scale of the target.
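The idea above can be sketched as follows. This is a minimal illustration, not the paper's exact method: color binning is simplified to the red channel, and the "reliability" filter simply discards bins whose centroids lie too close to the target center before taking the median of the surviving per-bin factors. The function names and parameters are my own.

```python
import numpy as np

def bin_centroids(img, mask, n_bins=8):
    """Centroid (mean x, mean y) of the pixels in each color bin.
    Simplification: bins are formed on the red channel only."""
    bins = (img[..., 0].astype(int) * n_bins) // 256
    ys, xs = np.nonzero(mask)
    cents = {}
    for b in np.unique(bins[ys, xs]):
        sel = bins[ys, xs] == b
        cents[b] = (xs[sel].mean(), ys[sel].mean())
    return cents

def zoom_factor(prev_cents, cur_cents, prev_center, cur_center, min_dist=2.0):
    """Per-bin zooming factors = ratio of the centroid's distance from the
    target center in consecutive frames; factors from centroids too close
    to the center are treated as unreliable and filtered out, and the
    median of the rest gives the reliable zooming factor."""
    factors = []
    for b in prev_cents:
        if b not in cur_cents:
            continue
        d_prev = np.hypot(prev_cents[b][0] - prev_center[0],
                          prev_cents[b][1] - prev_center[1])
        d_cur = np.hypot(cur_cents[b][0] - cur_center[0],
                         cur_cents[b][1] - cur_center[1])
        if d_prev > min_dist and d_cur > min_dist:  # reliability filter
            factors.append(d_cur / d_prev)
    return float(np.median(factors)) if factors else 1.0
```

For example, if every surviving centroid has moved twice as far from the target center, the zooming factor is 2 and the target box would be doubled in size.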
In this paper, we propose a stable, illumination-change-adaptive tracking algorithm in which the color model update and the target localization mutually constrain each other. The mutual constraint arises from sharing the same five-dimensional feature space, where the feature vector consists of the three color components and the x, y coordinates of the pixels inside the target region. The five-dimensional feature vector introduces spatiality into the color model update; that is, the re-clustering of the feature space is constrained by the spatial locations of the pixels corresponding to the target colors. The color model update is further constrained spatially by a window containing the pixels used in the update, whose location is determined by the current target location. The color model update is performed by a five-dimensional mean shift-based clustering algorithm. Updating the color components of the five-dimensional feature vector handles illumination changes of the colors in the target region, while updating the x, y coordinates constrains the target localization. The target localization uses the x, y coordinates of the mean vectors of the five-dimensional vectors corresponding to each color bin; the target location is computed as the minimizer of a proposed energy functional, designed so that the relative spatial locations of the mean vectors are taken into account. The mutual constraint between the color model update and the target localization makes the tracking stable in various situations such as global illumination change, partial occlusion, and cluttered backgrounds, and the tracker remains stable even when major background colors coincide with major colors of the target object.
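A mode seek in the joint five-dimensional (R, G, B, x, y) space can be sketched as below. This is only an illustration of the mean-shift mechanics under assumed bandwidths and a flat kernel; the paper's clustering, window placement, and energy functional are not reproduced here.

```python
import numpy as np

def mean_shift_5d(features, seed, h_color=16.0, h_space=10.0, n_iter=20, tol=0.1):
    """Single mean-shift mode seek in the joint (R, G, B, x, y) space.

    features: (N, 5) array of per-pixel feature vectors.
    seed:     (5,) starting mean (e.g., a previous color-bin mean vector).
    h_color / h_space are illustrative bandwidths, not the paper's values.
    Moving the mean in the color dimensions adapts to illumination change,
    while the x, y components of the converged mean feed the localization."""
    scale = np.array([h_color] * 3 + [h_space] * 2)
    mean = seed.astype(float).copy()
    for _ in range(n_iter):
        d = (features - mean) / scale
        w = np.all(np.abs(d) <= 1.0, axis=1)  # flat (uniform) kernel window
        if not w.any():
            break
        new_mean = features[w].mean(axis=0)
        if np.linalg.norm(new_mean - mean) < tol:
            mean = new_mean
            break
        mean = new_mean
    return mean
```

Because each converged mean carries both an updated color and an updated position, the two halves of the vector naturally constrain each other, which is the point of the shared feature space.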
We propose a chromatic aberration (CA) reduction technique that removes artifacts caused by lateral and longitudinal CA simultaneously. In general, the most visible CA artifacts appear locally in the neighborhoods of strong edges. Because these artifacts have local characteristics, they cannot be removed well by conventional global warping methods. We therefore design a nonlinear partial differential equation (PDE) that takes the local characteristics of the CA into account. The proposed algorithm estimates the regions with apparent CA artifacts and the ratios of the magnitudes between the color channels. Using this information, the proposed PDE matches the gradients of the edges in the red and blue channels to the gradient in the green channel, aligning the edge positions while simultaneously deblurring the edges. Experimental results show that the proposed method can effectively remove even significant CA artifacts, such as purple fringing caused by the image sensor, and that it achieves better performance than existing algorithms.
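The gradient-matching idea can be illustrated with one explicit time step of a toy PDE. This discretization and coupling are my own simplification, not the paper's equation: inside an estimated CA region, the red channel is driven by the divergence of the mismatch between its gradient and a scaled green gradient.

```python
import numpy as np

def ca_pde_step(red, green, mask, ratio=1.0, dt=0.1):
    """One explicit step of a toy gradient-matching PDE (illustrative only).

    red, green: 2-D float arrays (color channels).
    mask:       1 inside the estimated CA region, 0 elsewhere.
    ratio:      assumed per-region gradient-magnitude ratio between channels.
    The red channel is updated by the divergence of the gradient mismatch,
    pulling its edge gradients toward (ratio * green gradient)."""
    gy_r, gx_r = np.gradient(red)     # gradients along axis 0 (y) and 1 (x)
    gy_g, gx_g = np.gradient(green)
    fy = gy_r - ratio * gy_g          # gradient mismatch field
    fx = gx_r - ratio * gx_g
    div = np.gradient(fy, axis=0) + np.gradient(fx, axis=1)
    return red - dt * mask * div
```

Iterating such steps (with the same update for the blue channel) moves the red/blue edge profiles toward the green edge profile, which both aligns the edge positions and sharpens the fringed transitions.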
In this paper, we propose a level set based object detection method for video surveillance that provides robust, real-time object detection under various global illumination conditions. The proposed scheme needs no manual parameter settings for different illumination conditions, which makes the algorithm applicable to automatic surveillance systems. Two special filters are designed to eliminate the spurious object regions caused by CCD noise, keeping the scheme stable even under very low illumination. We demonstrate the effectiveness of the proposed algorithm experimentally under different illumination conditions, contrast changes, and noise levels.
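One common way such spurious, noise-induced regions are suppressed is by discarding tiny connected components of the detection mask. The filter below is a generic sketch in that spirit, not the paper's two filters; the function name and the area threshold are my own.

```python
import numpy as np
from collections import deque

def remove_small_regions(mask, min_area):
    """Hypothetical spurious-region filter: keep only connected foreground
    components (4-connectivity) whose pixel count is at least min_area.
    Isolated noise specks from the CCD are removed this way."""
    mask = np.asarray(mask, dtype=bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:  # breadth-first flood fill of one component
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_area:  # keep only sufficiently large blobs
                    for y, x in comp:
                        out[y, x] = True
    return out
```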
In this paper, we propose a new representation of the target location, aimed at object tracking with non-stationary cameras and non-rigid motion: the area-weighted mean of the centroids corresponding to each color bin of the target. With this representation, the target can be localized in the next frame by a direct one-step computation. Tracking based on this representation has several advantages: it can track in low-frame-rate environments, it tolerates partial occlusion, and it is fast owing to the one-step computation. We also propose a background feature elimination algorithm, based on level set based bimodal segmentation, which is incorporated into the tracking scheme to increase its robustness.
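The representation itself reduces to a short computation. The sketch below shows the area-weighted mean of per-bin centroids; how the bins, centroids, and areas are obtained from a frame (and how background features are eliminated) follows the paper, which is not reproduced here.

```python
import numpy as np

def target_location(centroids, areas):
    """Proposed representation: the target location is the area-weighted
    mean of the centroids of each color bin.

    centroids: (K, 2) array, one (x, y) centroid per color bin.
    areas:     (K,) array, pixel count of each bin inside the target."""
    c = np.asarray(centroids, dtype=float)
    w = np.asarray(areas, dtype=float)
    return (w[:, None] * c).sum(axis=0) / w.sum()
```

Because this is a single weighted average over the bins found in the next frame's search region, localization is a one-step computation, and a bin hidden by partial occlusion simply drops out of the sum with its weight.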