Human skin detection is a computer vision problem that has been widely researched in color images. In this article we deal with this task as an interactive segmentation problem in hyperspectral outdoor images. We focus on skin identification using hyperspectral cameras, which allow a fine sampling of the light spectrum, so that the information gathered at each pixel is a high-dimensional vector. The problem is treated as a classification problem, where we use active learning strategies to provide an interactive, robust solution that reaches high accuracy in a short training/testing cycle.
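The interactive training/testing cycle can be illustrated with a minimal sketch of pool-based active learning via uncertainty sampling. The toy 1-D threshold classifier, the `oracle` callback (standing in for the interactive user), and all names here are illustrative assumptions, not the classifier used in the article:

```python
def train_threshold(labeled):
    """Toy classifier: threshold at the midpoint between class means."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

def most_uncertain(pool, threshold):
    """Uncertainty sampling: pick the sample closest to the boundary."""
    return min(pool, key=lambda x: abs(x - threshold))

def active_learning(labeled, pool, oracle, rounds):
    """Query the oracle (interactively, the user) only on the most
    uncertain samples, retraining after each answer."""
    for _ in range(rounds):
        t = train_threshold(labeled)
        x = most_uncertain(pool, t)
        pool.remove(x)
        labeled.append((x, oracle(x)))
    return train_threshold(labeled)
```

Few queries on well-chosen samples replace exhaustive labeling, which is what keeps the training/testing cycle short.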
Content-based image retrieval (CBIR) systems are database management systems that employ features extracted from the images as the indices used to search the database. Images are retrieved on the basis of their similarity to the query image. Indexing hyperspectral images is a special case of CBIR, with the added complexity of the high dimensionality of the pixels. We propose the use of endmembers as the hyperspectral image characterization, and we define a similarity measure between hyperspectral images based on these image endmembers. The endmembers must be induced from the image data in order to automate the process. Endmembers can be assumed to be morphologically independent, a notion originally introduced to study the noise robustness of Morphological Networks. For this induction we use Associative Morphological Memories (AMM) as detectors of morphological independence conditions.
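One simple way to compare two images through their endmember sets is a symmetric average nearest-neighbor distance. This is a sketch of the general idea, assuming Euclidean spectral distance; the measure actually defined in the paper may differ:

```python
def spectral_dist(a, b):
    """Euclidean distance between two spectra."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def endmember_dissimilarity(E1, E2):
    """Symmetric average nearest-neighbor distance between two endmember
    sets; zero when the sets coincide, growing as they diverge."""
    d12 = sum(min(spectral_dist(e, f) for f in E2) for e in E1) / len(E1)
    d21 = sum(min(spectral_dist(f, e) for e in E1) for f in E2) / len(E2)
    return 0.5 * (d12 + d21)
```

Because only the (small) endmember sets are compared, the measure is independent of image size, which is what makes it practical as a CBIR index.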
We test a procedure for endmember extraction on a synthetic hyperspectral image. The procedure uses Autoassociative Morphological Memories (AMM) as detectors of morphological independence conditions. To validate it, we apply Convex Cone Analysis (CCA) to the same data. To generate the validation data, we synthesize the ground-truth abundance images by simulating Gaussian random fields, and we use as ground-truth endmembers some reflectance spectra obtained from the USGS repository.
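The synthesis step rests on the linear mixing model: each pixel is a convex combination of the ground-truth endmembers. A minimal sketch, with the simplifying assumption of i.i.d. abundances instead of the spatially smooth Gaussian random fields used for validation:

```python
import random

def synth_image(endmembers, n_pixels, seed=0):
    """Linear mixing model: pixel = sum_i a_i * e_i with a_i >= 0 and
    sum_i a_i = 1. The paper draws abundances from Gaussian random
    fields (spatially smooth); here they are i.i.d. for brevity."""
    rng = random.Random(seed)
    k = len(endmembers)
    bands = len(endmembers[0])
    image = []
    for _ in range(n_pixels):
        raw = [rng.random() for _ in range(k)]
        s = sum(raw)
        a = [r / s for r in raw]  # abundance (convex) coefficients
        image.append([sum(a[i] * endmembers[i][b] for i in range(k))
                      for b in range(bands)])
    return image
```

Since the true abundances and endmembers are known by construction, the output of any extraction procedure can be scored against them directly.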
Morphological Neural Networks (MNN) have been proposed as an alternative neural computation paradigm. In this paper we explore the potential of Heteroassociative MNN (HMNN) for a practical vision-based task, that of self-localization in a vision-based navigation framework for mobile robots. HMNN have great potential for real-time application because their recall process is very fast. We present experimental results that illustrate the proposed approach.
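The speed of recall comes from the lattice algebra underlying these memories: storage is a min of morphological outer products and recall is a single max-plus matrix-vector product, with no iteration. A minimal Ritter-style sketch (function names are illustrative):

```python
def build_memory(X, Y):
    """Erosive heteroassociative morphological memory:
    w[i][j] = min over stored pairs k of (Y[k][i] - X[k][j])."""
    n, m = len(Y[0]), len(X[0])
    return [[min(Y[k][i] - X[k][j] for k in range(len(X)))
             for j in range(m)] for i in range(n)]

def recall(W, x):
    """Max-plus product y_i = max_j (w_ij + x_j): one pass over the
    input, which is what makes recall fast enough for real time."""
    return [max(w + v for w, v in zip(row, x)) for row in W]
```

Recall replaces multiply-accumulate with add-compare, so it maps to very cheap integer or fixed-point hardware, a useful property for on-board robot processing.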
We study the application of Competitive Neural Networks (CNN) to the unsupervised analysis of remote sensing hyperspectral images. CNN are applied as clustering algorithms at the pixel level. We propose their use for the extraction of endmembers and evaluate them through the error that CNN-based compression/decompression induces in the supervised classification of the images. We show results with the Self Organizing Map and Neural Gas applied to a well-known case study.
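The core of all these algorithms is winner-take-all competitive learning at the pixel level. A minimal sketch of that shared core; the Self Organizing Map adds a topological neighborhood and Neural Gas a rank-based cooperation on top of this update. The deterministic initialization from the first k pixels is a simplification for illustration:

```python
def competitive_train(pixels, k, lr=0.1, epochs=20):
    """Online winner-take-all competitive learning over pixel spectra.
    Codebook is seeded with the first k pixels for determinism; real
    runs would use random initialization."""
    codebook = [list(p) for p in pixels[:k]]
    for _ in range(epochs):
        for p in pixels:
            # winner: codebook vector closest to the pixel spectrum
            w = min(range(k),
                    key=lambda i: sum((c - x) ** 2
                                      for c, x in zip(codebook[i], p)))
            # move only the winner toward the pixel
            codebook[w] = [c + lr * (x - c) for c, x in zip(codebook[w], p)]
    return codebook
```

The trained codebook vectors act as candidate endmembers, and quantizing every pixel to its winner gives the compression whose classification-error impact is being evaluated.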
We propose the computation of the color palette of each image in isolation, using Vector Quantization methods. The image features are then the color palette and the histogram of the color quantization of the image with this palette. We propose as a similarity measure the weighted sum of the differences between the color palettes and the corresponding histograms. This approach allows the database to grow without recomputing the image features and without substantial loss of discriminative power.
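The feature pair and the weighted comparison can be sketched as follows. The entry-wise palette matching and the weight `w` are illustrative assumptions, not the paper's exact definition:

```python
def histogram(indices, k):
    """Normalized histogram of palette indices for one image."""
    h = [0.0] * k
    for i in indices:
        h[i] += 1.0 / len(indices)
    return h

def image_dissimilarity(pal_a, hist_a, pal_b, hist_b, w=0.5):
    """Weighted sum of mean palette-color distance (L2 per entry) and
    L1 histogram distance; zero for identical features."""
    dp = sum(sum((a - b) ** 2 for a, b in zip(ca, cb)) ** 0.5
             for ca, cb in zip(pal_a, pal_b)) / len(pal_a)
    dh = sum(abs(x - y) for x, y in zip(hist_a, hist_b))
    return w * dp + (1.0 - w) * dh
```

Because each image's palette and histogram are computed in isolation, inserting a new image never triggers recomputation of the features already stored.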
In this paper we propose the application of the codebook computed by the Self Organizing Map as a smoothing filter, the VQ Bayesian Filter, for the preprocessing of image sequences. The optical flow is then robustly and efficiently computed over the filtered images by applying a correlation approach at the pixel level.
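The smoothing step amounts to mapping every pixel to its nearest codebook vector. A minimal sketch of that quantization pass, assuming the codebook has already been learned (e.g. by a SOM over the sequence); function names are illustrative:

```python
def vq_filter(frame, codebook):
    """Replace each pixel by its nearest codebook vector. Quantization
    flattens small intensity fluctuations, so the subsequent
    correlation-based optical flow sees less noise."""
    def nearest(p):
        return min(codebook,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(c, p)))
    return [list(nearest(p)) for p in frame]
```

Since all frames of a sequence are quantized against the same codebook, identical surfaces get identical values across frames, which stabilizes the pixel-level correlation matching.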
The purpose of our work is to provide fast and reliable face localization techniques in real time and in real-life scenes. Person localization is included in this problem. The end application sought is the ability of mobile robots to navigate in human-populated environments and to start visual interaction with people. Known methods are computationally intensive, far from real-time implementation even with the near-future processing power of off-the-shelf processors. Our technique is based on motion segmentation, signature analysis and color processing. Signature analysis provides fast hints of the person and face localization. Color processing is used to confirm the face hypothesis, and it is based on our work on adaptive color quantization of image sequences. The technique can be implemented in real time and combined with other approaches to enhance the recognition results.
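Signature analysis reduces a binary motion mask to two 1-D projections, which is why it is so cheap. A minimal sketch of these projections (the interpretation in the comments is a plausible reading, not the paper's exact procedure):

```python
def signatures(mask):
    """Row and column signatures of a binary motion mask: counts of
    moving pixels per row and per column. Peaks in the column signature
    hint at a person's horizontal position; the first non-empty rows
    hint at the head region, where the face hypothesis is then checked
    by color processing."""
    rows = [sum(r) for r in mask]
    cols = [sum(r[j] for r in mask) for j in range(len(mask[0]))]
    return rows, cols
```

Both projections are linear in the number of pixels and need no per-pixel classification, so they fit comfortably in a real-time pipeline.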