In many machine vision applications, it is important that recorded colors remain constant in the real-world scene, even under changes of illuminant and camera. Unlike the human visual system, a machine vision system adapts poorly to variations in lighting conditions, and the automatic white balance control available in commercial cameras is not sufficient to provide reproducible color classification. We address this color constancy problem on a large image database acquired with varying digital cameras and lighting conditions. A device-independent color representation can be obtained by applying a chromatic adaptation transform derived from a calibrated color checker pattern included in the field of view. Instead of using the standard Macbeth color checker, we suggest selecting judicious colors from contextual information to design a customized pattern. A comparative study demonstrates that this approach ensures a stronger constancy of the colors of interest before vision control, thus enabling a wide variety of applications.
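A chromatic adaptation transform of this kind can be sketched as a linear map estimated by least squares from the checker patches; the patch RGB values below are hypothetical placeholders, not the colors of any actual pattern:

```python
import numpy as np

# Hypothetical measured vs. reference RGB values for checker patches
# (one row per patch; in practice these come from the calibrated pattern
# detected in the camera's field of view).
measured = np.array([[0.41, 0.30, 0.25],
                     [0.70, 0.55, 0.44],
                     [0.22, 0.31, 0.55],
                     [0.90, 0.88, 0.85]])
reference = np.array([[0.45, 0.32, 0.26],
                      [0.76, 0.59, 0.48],
                      [0.25, 0.35, 0.60],
                      [0.95, 0.95, 0.95]])

# Least-squares 3x3 adaptation matrix M such that measured @ M ~ reference.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

def correct(rgb):
    """Map camera RGB values toward the device-independent reference space."""
    return np.asarray(rgb) @ M
```

A 3x3 linear transform is the simplest variant; richer models (offset term, polynomial features) follow the same least-squares pattern with an augmented design matrix.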
In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple hand-held digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labelings, and segmentation-driven classification based on support vector machines. The tool thus developed remains stable under changes of lighting conditions, viewpoint, and camera, achieving accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3%, against 69.1% for a single expert, after mapping onto the medical reference built from the image labeling by a college of experts.
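The overlap score quoted above can be computed from a predicted tissue mask and the merged expert reference; since the abstract does not state the exact definition, this sketch assumes a Dice-style overlap, expressed as a percentage:

```python
import numpy as np

def overlap_score(pred, truth):
    """Dice-style overlap (in %) between a predicted and a reference mask
    for one tissue class. The exact formula is an assumption: the article
    reports overlap scores without spelling out its definition.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Both masks empty counts as perfect agreement.
    return 100.0 * 2.0 * inter / total if total else 100.0
```

For example, a prediction agreeing with the reference on 2 of 3 marked pixels each yields 2*2/(3+3), i.e. about 66.7%.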
This work is part of the ESCALE project dedicated to the design of a complete 3D and color wound assessment tool
using a simple hand-held digital camera. The first part was concerned with the computation of a 3D model for wound
measurements using uncalibrated vision techniques. This article presents the second part, which deals with color
classification of wound tissues, a prior step before combining shape and color analysis in a single tool for real tissue
surface measurements. We have adopted an original approach based on unsupervised segmentation prior to
classification, to improve the robustness of the labelling stage. A database of different tissue types is first built; a simple
but efficient color correction method is applied to reduce color shifts due to uncontrolled lighting conditions. A ground
truth is provided by the fusion of several clinicians' manual labellings. Then, color and texture tissue descriptors are
extracted from tissue regions of the image database for the learning stage of an SVM region classifier, trained with the
aid of this ground truth. The output of this classifier provides a prediction model, later used to label the segmented
regions of the database. Finally, we apply unsupervised color region segmentation on wound images and classify the
tissue regions. Compared to the ground truth, automatic segmentation-driven classification achieves an overlap score of
tissue regions (66% to 88%) higher than that obtained by clinicians.
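The learning and labelling stages above can be sketched as follows. The region descriptor here (per-channel mean plus standard deviation as a crude texture proxy) and the synthetic training regions are illustrative stand-ins; the article's actual color and texture descriptors are richer:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def region_descriptor(pixels):
    """Descriptor for one segmented region: mean RGB plus per-channel
    standard deviation (a simple texture proxy)."""
    pixels = np.asarray(pixels, dtype=float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

def fake_region(base):
    """Synthetic region: 50 pixels scattered around a base color."""
    return np.asarray(base, dtype=float) + rng.normal(scale=8.0, size=(50, 3))

# Hypothetical training set: reddish vs. yellowish tissue regions.
X = np.array([region_descriptor(fake_region([180, 60, 60])) for _ in range(20)]
             + [region_descriptor(fake_region([200, 190, 90])) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)  # 0 = reddish class, 1 = yellowish class

# Learning stage: fit the SVM region classifier on the labelled descriptors.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Labelling stage: the prediction model labels a newly segmented region.
new_desc = region_descriptor(fake_region([185, 65, 58]))
label = clf.predict([new_desc])[0]
```

In the full pipeline the descriptors would be extracted from regions produced by the unsupervised color segmentation, and the labels would come from the fused expert ground truth rather than synthetic classes.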