Recognizing a target object across non-overlapping distributed cameras is known in the computer vision community as the problem of person re-identification. In this paper, a multi-patch matching method for person re-identification is presented. It starts from the assumption that the appearance (clothes) of a person does not change while passing through the fields of view of different cameras, which means that regions with the same color in the target image remain identical across cameras. First, distinctive features are extracted in the training procedure: each target image is divided into small patches, and SIFT features and LAB color histograms are computed for each patch. Then, the KNN approach is used to detect groups of patches with high similarity in the target image, and a bi-directional weighted group matching mechanism is applied for re-identification. Experiments on the challenging VIPeR dataset show that the proposed method outperforms several baselines and state-of-the-art approaches.
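The patch pipeline above can be sketched as follows. This is a minimal illustration, not the authors' implementation: per-patch RGB histograms stand in for the SIFT + LAB descriptors, and the patch size, bin count, and L1 distance are assumptions.

```python
import numpy as np

def patch_histograms(image, patch_size=8, bins=8):
    """Split an H x W x 3 image into non-overlapping patches and compute
    a normalized per-channel color histogram for each patch.
    (Illustrative stand-in for the paper's SIFT + LAB descriptors.)"""
    h, w, _ = image.shape
    feats, coords = [], []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]
            hist = np.concatenate([
                np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
                for c in range(3)
            ]).astype(float)
            feats.append(hist / hist.sum())
            coords.append((y, x))
    return np.array(feats), coords

def knn_groups(feats, k=3):
    """For each patch, return the indices of its k most similar patches
    (L1 histogram distance) -- the 'groups of similar patches'."""
    d = np.abs(feats[:, None, :] - feats[None, :, :]).sum(axis=-1)
    np.fill_diagonal(d, np.inf)  # a patch is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]
```

The grouped patches would then feed the bi-directional weighted matching stage, which compares groups between a probe and a gallery image in both directions.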
This paper analyses the behavior of some existing background subtraction algorithms for possible use in automated video surveillance applications. The performance of the analyzed algorithms has been demonstrated by their authors on selected video sequences to show the merits of their approaches. Nevertheless, choosing an adequate approach for a given application is not an easy task. In this study, using background subtraction evaluation metrics combined with visual inspection, we assess in depth the performance of four algorithms under a variety of video surveillance challenges. This experimental analysis highlights the advantages and the limitations of each approach and helps in choosing the suitable method for a given video surveillance scenario.
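The pixel-level metrics commonly used in such evaluations can be computed as below. The paper does not list its exact metrics here, so recall, precision, and F-measure are assumed as the standard choices.

```python
import numpy as np

def bgs_metrics(pred_mask, gt_mask):
    """Pixel-level recall, precision, and F-measure for a binary
    foreground mask against a ground-truth mask -- standard measures
    for comparing background subtraction algorithms."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    tp = np.logical_and(pred, gt).sum()     # foreground correctly detected
    fp = np.logical_and(pred, ~gt).sum()    # background flagged as foreground
    fn = np.logical_and(~pred, gt).sum()    # foreground missed
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, precision, f1
```

These quantitative scores are what the study combines with visual inspection, since a high F-measure alone can hide artifacts such as ghosting or hole-filled silhouettes.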
Object tracking is an important part of surveillance systems. One of the algorithms used for this task is the mean-shift algorithm, owing to its robustness, computational efficiency, and ease of implementation. However, the traditional mean-shift algorithm cannot effectively track a moving object when its scale changes, because of the fixed size of the tracking window, and it can lose the target during an occlusion. In this study, a method based on the trajectory direction of the moving object is presented to deal with the problem of scale change. Furthermore, a histogram similarity metric is used to detect when target occlusion occurs, and a multi-kernel method is proposed to estimate which part of the target is not occluded; this part is then used to extrapolate the motion of the object and estimate its position. Experimental results show that the improved methods adapt well to scale changes and occlusion of the target.
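The occlusion test can be illustrated with the Bhattacharyya coefficient, the similarity measure conventionally used in mean-shift tracking; the paper's exact metric and the detection threshold are assumptions here.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms: 1.0 means
    identical distributions, values near 0 mean dissimilar ones.
    (Standard similarity measure in mean-shift tracking.)"""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sqrt(p * q).sum())

def occlusion_detected(model_hist, candidate_hist, threshold=0.7):
    """Flag an occlusion when similarity between the target model and
    the current candidate drops below a (hypothetical) threshold."""
    return bhattacharyya(model_hist, candidate_hist) < threshold
```

Once the flag fires, per-kernel similarity scores over sub-regions of the target would indicate which part remains visible and can drive the motion extrapolation.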
Background subtraction can be viewed as a classification process applied to the incoming frames of a video stream, taking into account temporal information in some cases, spatial consistency in others, and, in recent years, both. The classification in most cases relies on a fixed threshold value. In this paper, a framework for background subtraction and moving object detection based on an adaptive threshold measure and a short/long frame differencing procedure is proposed. The presented framework explores the case of an adaptive threshold using mean squared differences over a sampled background model. In addition, an intuitive update policy that is neither conservative nor blind is presented. The algorithm succeeds in extracting the moving foreground and isolating an accurate background.
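A minimal sketch of the two ingredients reads as follows. The threshold rule (a multiple of the mean squared difference) and the blending factor are assumptions, not the authors' exact formulation, and the short/long differencing and background sampling steps are omitted.

```python
import numpy as np

def detect_foreground(frame, background, scale=2.0):
    """Adaptive-threshold classification sketch: a pixel is foreground
    when its squared difference to the sampled background exceeds
    scale * the frame's mean squared difference (assumed rule)."""
    diff = (frame.astype(float) - background.astype(float)) ** 2
    threshold = scale * diff.mean()  # adapts to the current frame
    return diff > threshold

def update_background(background, frame, fg_mask, alpha=0.05):
    """Neither conservative nor blind: blend new frame values into the
    model only at pixels classified as background, so foreground
    objects do not corrupt the model yet the scene can still evolve."""
    bg = background.astype(float).copy()
    bg[~fg_mask] = ((1 - alpha) * bg[~fg_mask]
                    + alpha * frame[~fg_mask].astype(float))
    return bg
```

A fully conservative policy would never absorb stationary objects into the background, while a blind one would absorb true foreground; masking the update by the classification is the usual compromise.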