A system that can automatically annotate surveillance video in a manner useful for locating a person with a given description of clothing is presented. Each human is annotated based on two appearance features: primary colors of clothes and the presence of text/logos on clothes. The annotation occurs after a robust foreground extraction stage employing a modified Gaussian mixture model-based approach. The proposed pipeline consists of a preprocessing stage where color appearance of an image is improved using a color constancy algorithm. In order to annotate color information for human clothes, we use the color histogram feature in HSV space and find local maxima to extract dominant colors for different parts of a segmented human object. To detect text/logos on clothes, we begin with the extraction of connected components of enhanced horizontal, vertical, and diagonal edges in the frames. These candidate regions are classified as text or non-text on the basis of their local energy-based shape histogram features. Further, to detect humans, a novel technique has been proposed that uses contourlet transform-based local binary pattern (CLBP) features. In the proposed method, we extract the uniform direction-invariant LBP feature descriptor for contourlet-transformed high-pass subimages from vertical and diagonal directional bands. In the final stage, extracted CLBP descriptors are classified by a trained support vector machine. Experimental results illustrate the superiority of our method on large-scale surveillance video data.
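The dominant-color step above (finding local maxima of a hue histogram in HSV space) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the bin count, the dominance threshold, and the function name `dominant_hues` are assumptions:

```python
import colorsys

import numpy as np

def dominant_hues(rgb_pixels, bins=36, min_fraction=0.1):
    """Return the dominant hue-bin centres (in degrees) of a region.

    rgb_pixels: iterable of (r, g, b) tuples with components in [0, 1].
    A bin is dominant if it is a local maximum of the hue histogram and
    holds at least `min_fraction` of the region's pixels.
    """
    hues = np.array([colorsys.rgb_to_hsv(*p)[0] for p in rgb_pixels])
    hist, edges = np.histogram(hues, bins=bins, range=(0.0, 1.0))
    dominant = []
    for i, count in enumerate(hist):
        left, right = hist[(i - 1) % bins], hist[(i + 1) % bins]  # hue wraps around
        if count >= left and count >= right and count >= min_fraction * len(hues):
            dominant.append(360.0 * (edges[i] + edges[i + 1]) / 2.0)
    return dominant
```

A region of mostly red pixels with a smaller blue patch would yield two peaks, one near 0 degrees and one near 225 degrees; in the pipeline each detected peak would then be mapped to a color name for the annotation.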
In this paper we present a system focused on automatic annotation of humans passing through a surveillance camera, in a way useful for describing a person of interest from a crime-scene witness account. Each human is annotated based on two appearance features: (1) the primary colors of clothes in the head, body, and legs regions, and (2) the presence of text/logos on the clothes. Annotation occurs after robust foreground extraction, based on a modified Gaussian mixture model approach, and detection of humans in the segmented foreground images. The proposed pipeline begins with a preprocessing stage in which we improve the color quality of images using a basic color constancy algorithm, and further refine the results with a proposed post-processing method; the results show a significant improvement in the illumination of the video frames. To annotate the color of human clothes, we apply 3D histogram analysis (over hue, saturation, and value) to HSV-converted image regions of human body parts, followed by extrema detection and thresholding to decide the dominant color of each region. To detect text/logos on clothes as a second annotation feature, we begin by extracting connected components of enhanced horizontal, vertical, and diagonal edges in the frames. These candidate regions are classified as text or non-text on the basis of their Local Energy based Shape Histogram (LESH) features, using KL divergence as the classification criterion. To detect humans, a novel technique is proposed that combines Histogram of Oriented Gradients (HOG) and contourlet transform-based Local Binary Patterns (LBP), with AdaBoost as the classifier. Initial screening of foreground objects is performed using HOG features. To eliminate false positives caused by background noise and improve the results, we then apply contourlet-LBP feature extraction to the images.
In the proposed method, we extract the LBP feature descriptor for the contourlet-transformed high-pass sub-images from the vertical and diagonal directional bands. In the final stage, the extracted contourlet-LBP descriptors are passed to AdaBoost for classification. The proposed framework showed good performance when tested on a CCTV test dataset.
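The LBP descriptor used on the contourlet sub-bands can be illustrated with a minimal single-scale sketch that computes the standard 59-bin uniform LBP histogram of a plain grayscale patch. The contourlet decomposition itself is omitted here; in the pipeline the input would be a high-pass sub-image rather than a raw patch:

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP codes for the interior pixels of a 2-D grayscale array."""
    h, w = img.shape
    c = img[1:-1, 1:-1]
    # clockwise neighbour offsets, starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (nb >= c).astype(int) << bit   # set bit where neighbour >= centre
    return codes

def is_uniform(code):
    """A code is 'uniform' if its circular bit pattern has at most 2 transitions."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2

def uniform_lbp_histogram(img):
    """59-bin descriptor: one bin per uniform code plus one shared bin for the rest."""
    uniform_codes = [c for c in range(256) if is_uniform(c)]  # 58 such codes
    index = {c: i for i, c in enumerate(uniform_codes)}
    hist = np.zeros(59)
    for code in lbp_codes(img).ravel():
        hist[index.get(int(code), 58)] += 1
    return hist / hist.sum()
```

Such histograms, computed per sub-band and concatenated, form the kind of texture descriptor that the classifier stage consumes.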
The advances in automated production processes have resulted in the need for detecting, reading and decoding 2D
datamatrix barcodes at very high speeds. This requires the correct combination of high-speed optical devices that are
capable of capturing high quality images and computer vision algorithms that can read and decode the barcodes
accurately. Such barcode readers should also be capable of resolving fundamental imaging challenges arising from
blurred barcode edges, reflections from possible polyethylene wrapping, poor and/or non-uniform illumination,
fluctuations of focus, rotation and scale changes. Addressing the above challenges, in this paper we propose the design
and implementation of a high-speed multi-barcode reader and provide test results from an industrial trial. To the authors'
knowledge, such a comprehensive system has not been proposed and fully investigated in the existing literature. To reduce
reflections in the images caused by the polyethylene wrapping used in typical packaging, polarising filters have been
used. The images captured using the optical system above will still include imperfections and variations due to scale,
rotation, illumination etc. We use a number of novel image enhancement algorithms optimised for 2D datamatrix
barcodes: image de-blurring, contrast correction and self-shadow removal using an affine transform-based approach, and
non-uniform illumination correction. The enhanced images are subsequently used for barcode detection and recognition.
We provide experimental results from a factory trial of the multi-barcode reader and evaluate the performance of
each optical unit and computer vision algorithm used. The results indicate an overall accuracy of 99.6% in barcode
recognition at typical speeds of industrial conveyor systems.
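Non-uniform illumination correction of the kind listed above is commonly performed by dividing out a smooth estimate of the illumination field (flat-field correction). The block-averaging sketch below illustrates the idea only; the paper's actual correction algorithm is not reproduced here, and the `block` parameter is an assumption:

```python
import numpy as np

def flatfield_correct(img, block=16):
    """Divide out a coarse illumination estimate obtained by block averaging.

    img: 2-D float array with values in (0, 1]; `block` must divide both sides.
    """
    h, w = img.shape
    field = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    field = np.kron(field, np.ones((block, block)))  # nearest-neighbour upsample
    field = np.clip(field, 1e-3, None)               # guard against division by zero
    out = img / field
    return out / out.max()                           # renormalise into [0, 1]
```

On a barcode image, flattening the illumination this way makes a single global binarisation threshold viable across the whole label.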
Automatic speaker identification in a videoconferencing environment allows conference attendees to focus their
attention on the conference rather than manually identifying which channel is active and who the speaker within
that channel may be. In this work we present a real-time, audio-coupled, video-based approach to address this
problem, focusing mainly on the video analysis side. The system is driven by the need for detecting a talking
human via the use of computer vision algorithms. The initial stage consists of a face detector which is subsequently
followed by a lip-localization algorithm that segments the lip region. A novel approach for lip movement detection is
proposed, based on image registration using the Coherent Point Drift (CPD) algorithm, a technique for rigid and
non-rigid registration of point sets. We provide experimental results analysing the performance of the algorithm when
used to monitor real-life videoconferencing data.
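CPD jointly estimates point correspondences and a (rigid or non-rigid) transformation via EM, which is too involved to reproduce here. As a simplified illustration of the registration-residual idea behind lip-movement detection, the sketch below assumes known correspondences and uses a closed-form rigid (Kabsch) alignment instead:

```python
import numpy as np

def rigid_align(X, Y):
    """Least-squares rigid alignment (rotation + translation) of point set X
    onto Y, both N x 2 arrays with KNOWN correspondences (Kabsch algorithm).
    CPD, by contrast, also estimates the correspondences."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    U, _, Vt = np.linalg.svd((X - mx).T @ (Y - my))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:   # forbid reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, my - R @ mx

def movement_score(X, Y):
    """Mean residual lip deformation left after removing rigid motion."""
    R, t = rigid_align(X, Y)
    return np.sqrt(((X @ R.T + t - Y) ** 2).sum(axis=1)).mean()
```

A near-zero score means the two lip point sets differ only by rigid head motion; a large residual indicates actual lip deformation between frames, i.e. talking.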
Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle
recognition systems that are solely based on Automatic Number Plate Recognition (ANPR). Several vehicle MMR
systems have been proposed in the literature. In parallel, the usefulness of multi-resolution feature analysis techniques
leading to efficient object classification algorithms has received close attention from the research community. To this
effect, the Contourlet transform, which provides an efficient directional multi-resolution image representation, has
recently been introduced. An attempt has already been made in the literature to use Curvelet/Contourlet transforms in
vehicle MMR. In this paper we propose a novel localized feature detection method in the Contourlet transform domain
that increases classification rates by up to 4% compared to the previously proposed Contourlet-based vehicle MMR
approach, in which the features are non-localized and thus result in sub-optimal classification. Further, we show that the
proposed algorithm achieves the increased classification accuracy of 96% at significantly lower computational
complexity, due to the use of Two-Dimensional Linear Discriminant Analysis (2DLDA) for dimensionality reduction,
which preserves the features with high between-class variance and low within-class variance.
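2DLDA in its common right-projection form can be sketched as follows; the paper's exact variant and parameters are not specified here, so the regularisation term and the eigen-solution route are assumptions. The idea is to compute between-class and within-class column-scatter matrices directly on the image matrices and keep the leading eigenvectors of Sw^{-1} Sb:

```python
import numpy as np

def twod_lda(images, labels, d, reg=1e-6):
    """Right-projection 2DLDA: learn a (width x d) matrix W maximising the
    ratio of between-class to within-class column scatter; each image A is
    then reduced as A @ W."""
    images = np.asarray(images, dtype=float)   # shape (n, height, width)
    labels = np.asarray(labels)
    w = images.shape[2]
    M = images.mean(axis=0)                    # global mean image
    Sb, Sw = np.zeros((w, w)), np.zeros((w, w))
    for c in np.unique(labels):
        cls = images[labels == c]
        Mc = cls.mean(axis=0)                  # class mean image
        Sb += len(cls) * (Mc - M).T @ (Mc - M) # between-class scatter
        for A in cls:
            Sw += (A - Mc).T @ (A - Mc)        # within-class scatter
    # eigenvectors of Sw^{-1} Sb, sorted by decreasing eigenvalue
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + reg * np.eye(w), Sb))
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order[:d]].real
```

Working on image matrices directly keeps the scatter matrices only width x width, which is the source of the complexity advantage over vectorised LDA on full-length feature vectors.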