Color-texture image segmentation remains a challenging problem because of the extensive variability of color and texture. Limited prior knowledge, expressed as pairwise constraints, can therefore be exploited to guide the segmentation process. We propose a new semisupervised method that combines constrained feature selection and spectral clustering (SC) to perform color-texture image segmentation. The pairwise constraints are used by the constrained feature selection to choose the most relevant features from an available set of color and texture features. For this purpose, an innovative constraint score is developed that evaluates a subset of features as a whole. A constrained SC algorithm involving the pairwise constraints is then applied to group the pixels into clusters. Experimental results on four benchmark datasets show that the proposed constraint score outperforms the main state-of-the-art constraint scores and that our semisupervised segmentation method is competitive with supervised, semisupervised, and unsupervised state-of-the-art segmentation methods.
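To make the idea of scoring a feature subset with pairwise constraints concrete, here is a minimal illustrative score (not the paper's exact criterion): the ratio of the mean squared distance between must-link pairs to that between cannot-link pairs in the selected subspace, where lower is better. All data and constraint pairs below are toy values.

```python
import numpy as np

def constraint_score(X, subset, must_link, cannot_link):
    """Illustrative constraint score for a feature subset (a stand-in
    for the paper's criterion): mean squared distance between must-link
    pairs divided by that between cannot-link pairs; lower is better."""
    Xs = X[:, subset]
    ml = np.mean([np.sum((Xs[i] - Xs[j]) ** 2) for i, j in must_link])
    cl = np.mean([np.sum((Xs[i] - Xs[j]) ** 2) for i, j in cannot_link])
    return ml / cl

# Toy data: feature 0 separates the constrained pairs, feature 1 is noise.
X = np.array([[0.0, 5.0], [0.1, -3.0], [4.0, 4.9], [4.2, 0.2]])
must_link = [(0, 1)]     # same cluster
cannot_link = [(0, 2)]   # different clusters
good = constraint_score(X, [0], must_link, cannot_link)  # informative feature
bad = constraint_score(X, [1], must_link, cannot_link)   # noisy feature
```

Because the score is computed on the subset as a whole, it naturally extends to evaluating combinations of features rather than ranking features one by one.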
Single-sensor color cameras primarily form images through a color filter array laid over the sensor. The acquired raw data represent a single color component per pixel and usually undergo demosaicking to form fully defined color images. This, however, produces artifacts that may affect the performance of low-level processing tasks applied to such estimated images. We instead propose to directly use raw data to estimate the image partial derivatives for edge detection. Considering luminance- and color-based approaches built on Deriche filters, we show that schemes using raw data can provide edge detection results as accurate as classical demosaicking-based ones at a much reduced computational cost.
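A very simplified sketch of estimating a derivative directly on CFA raw data: in a Bayer row, the pixels at columns j-1 and j+1 belong to the same color channel, so a central difference over that span is a same-channel derivative without demosaicking. This uses plain differences rather than the Deriche filters of the abstract, purely for illustration.

```python
import numpy as np

def raw_gradient_x(raw):
    """Horizontal derivative estimated directly on Bayer CFA raw data:
    central differences between same-channel neighbors two columns
    apart (a basic stand-in for the Deriche-based schemes)."""
    g = np.zeros_like(raw, dtype=float)
    g[:, 1:-1] = (raw[:, 2:].astype(float) - raw[:, :-2]) / 2.0
    return g

# Synthetic raw image with a vertical step edge between columns 3 and 4.
raw = np.zeros((2, 8))
raw[:, 4:] = 100.0
g = raw_gradient_x(raw)
```

The edge produces a strong response at the columns adjacent to the step, while flat regions give zero, all without ever estimating the missing color components.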
Texture description is a challenging problem with color images. Despite some attempts to include colors in local binary patterns (LBPs), no proposal has emerged as a color counterpart of grayscale LBPs. This is because colors are defined by vectors that are not naturally ordered, and several ways exist to compare them. We propose an LBP extension that takes the vector nature of color into account by means of a color order. As several color orders are available and selecting the most suitable one is difficult, we combine two of them in a texture descriptor called “mixed color order LBPs.” This small-size feature provides good performance on several benchmark databases for two classification problems compared with larger LBP-based features of color textures.
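The dependence of LBP codes on the chosen color order can be sketched as follows: each neighbor of a 3x3 patch is thresholded against the center using a total order on colors. The two orders below (lexicographic RGB and luminance) are hypothetical choices for illustration; they give different codes on the same patch, which motivates mixing two orders.

```python
import numpy as np

def color_lbp_code(patch, order_key):
    """LBP code for a 3x3 color patch: each neighbor is compared with
    the center under a total color order given by order_key."""
    center = order_key(patch[1, 1])
    neighbors = [(0, 0), (0, 1), (0, 2), (1, 2),
                 (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(neighbors):
        if order_key(patch[r, c]) >= center:
            code |= 1 << bit
    return code

lex = lambda p: tuple(p)                                # lexicographic RGB
lum = lambda p: 0.299*p[0] + 0.587*p[1] + 0.114*p[2]    # luminance

patch = np.array([[[10, 10, 10], [200, 0, 0], [10, 10, 10]],
                  [[10, 10, 10], [100, 50, 50], [10, 10, 10]],
                  [[200, 0, 0], [10, 10, 10], [200, 0, 0]]])
code_lex = color_lbp_code(patch, lex)
code_lum = color_lbp_code(patch, lum)
```

Under the lexicographic order the saturated red neighbors rank above the center; under the luminance order they rank below it, so the two binary codes differ.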
We address the problem of intrinsic camera calibration under the Scheimpflug condition for an industrial application. We aim to calibrate the Scheimpflug camera from a roughly hand-positioned calibration pattern using a bundle adjustment technique. The assumptions made by classical calibration methodologies are no longer valid for cameras operating under the Scheimpflug condition. Therefore, we slightly modify the pinhole model to estimate the Scheimpflug angles. The results are tested on real datasets captured by cameras subject to various industrial constraints and in the presence of large distortions.
We propose a color-image segmentation algorithm based on unsupervised classification of pixels. The originality of the proposed approach consists in iteratively identifying pixel classes by taking into account both the pixel color distributions in several color spaces and their spatial arrangement in the image. In order to overcome the difficult problem of choosing a color space, the algorithm selects, at each iteration step, the color space that is best suited to constructing the class. The selection criterion is based on connectedness and color homogeneity measures of pixel subsets. In order to tune the sensitivity of segmentation, we introduce a hierarchical criterion that allows us to segment images with different numbers of regions, as human observers do. Experiments carried out on the well-known Berkeley segmentation dataset show that this multi-color-space approach succeeds in constructing classes that effectively correspond to regions in the image.
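Simple versions of the two selection measures can be sketched as follows; both definitions here are illustrative stand-ins (the paper's exact measures may differ): connectedness as the fraction of selected pixels with at least one selected 4-neighbor, and homogeneity as an inverse function of the per-channel color variance.

```python
import numpy as np

def connectedness(mask):
    """Fraction of selected pixels having at least one 4-connected
    neighbor also selected (a simple connectedness measure)."""
    padded = np.pad(mask, 1)
    nbr = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
           padded[1:-1, :-2] | padded[1:-1, 2:])
    sel = mask.astype(bool)
    return np.mean(nbr[sel]) if sel.any() else 0.0

def homogeneity(colors):
    """Color homogeneity as 1 / (1 + mean per-channel variance);
    equals 1 for a perfectly uniform pixel subset."""
    return 1.0 / (1.0 + np.mean(np.var(colors, axis=0)))

mask = np.array([[1, 1, 0],
                 [1, 0, 0],
                 [0, 0, 1]], dtype=bool)
conn = connectedness(mask)          # isolated pixel lowers the score
homog = homogeneity(np.array([[10.0, 10.0, 10.0],
                              [10.0, 10.0, 10.0]]))
```

A candidate class evaluated in several color spaces would then keep the space maximizing a combination of these two measures.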
We present a line-scan color vision system that detects aspect flaws occurring on the color surfaces of drinking glasses decorated by an industrial silk-screen process. For this purpose, we have designed a specific image acquisition device based on a color line-scan camera. As the pattern printed on the glasses varies slightly between two successively produced glasses, flaw detection by color image analysis is a challenging problem. In order to overcome this problem, the aspect flaw detection is based on an original color image segmentation scheme that iteratively constructs pixel classes. Since the results of the segmentation scheme are known to depend on the choice of the color space, the main originality of our approach is to automatically determine the most discriminant color space for each class to be constructed. Experimental results show that the selection of well-suited color spaces contributes to improving the segmentation accuracy and the aspect flaw detection.
This work presents a method that detects aspect flaws occurring on the color surfaces of drinking glasses decorated by an industrial silk-screen process. As the pattern printed on the glasses varies slightly between two successively produced glasses, a simple comparison between a reference image, which represents a glass without any flaw, and the current image, which contains the glass to be inspected, provides poor flaw detection results. We therefore propose an original color image segmentation scheme in order to compare the segmentation of the reference image with that of the current image to be inspected. This procedure iteratively constructs the pixel classes by histogram multi-thresholding. For this purpose, the most discriminating color spaces are automatically selected during an off-line supervised learning scheme, so that the color image segmentation is achieved by pixel classification.
Proc. SPIE. 6492, Human Vision and Electronic Imaging XII
KEYWORDS: Principal component analysis, Detection and tracking algorithms, Sensors, Databases, Image segmentation, Object recognition, Zoom lenses, Human vision and color perception, Electronic imaging, Prototyping
Color has been shown to be an important clue for object recognition and image indexing. We present a new algorithm for color-based recognition of objects in cluttered scenes that also determines the 2D pose of each object. As with many other color-based object recognition algorithms, color histograms are fundamental to our approach; however, we use histograms obtained from overlapping subwindows rather than from the entire image. An object from a database of prototypes is identified and located in an input image whenever there are many good matches between the subwindow histograms of the input image and those of the image prototype from the database. In essence, local color histograms are the features to be matched. Once an object's position in the image has been determined, its 2D pose is obtained by approximating the geometrical transformation that most consistently maps the locations of the prototype's subwindows to their matching locations in the input image.
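The local-histogram matching step can be sketched as follows. Window size, stride, quantization, and the histogram intersection similarity are all illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def subwindow_histograms(img, win, step, bins=4):
    """Color histograms of overlapping subwindows, using a coarse RGB
    quantization into bins**3 cells; each histogram sums to 1."""
    h, w, _ = img.shape
    q = (img // (256 // bins)).astype(int)            # quantize channels
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hists = {}
    for r in range(0, h - win + 1, step):
        for c in range(0, w - win + 1, step):
            cells = idx[r:r + win, c:c + win].ravel()
            hist = np.bincount(cells, minlength=bins ** 3).astype(float)
            hists[(r, c)] = hist / hist.sum()
    return hists

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]; 1 means identical."""
    return np.minimum(h1, h2).sum()

# Synthetic image: left half black, right half red.
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[:, 4:] = [255, 0, 0]
hists = subwindow_histograms(img, win=4, step=2)
same = intersection(hists[(0, 0)], hists[(0, 2)])   # overlapping windows
diff = intersection(hists[(0, 0)], hists[(0, 4)])   # black vs. red window
```

Recognition would then count how many subwindow pairs between the input image and a prototype exceed a similarity threshold, and pose estimation would fit a transformation to the matched window locations.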
Soccer is a very popular sport all over the world. Coaches and sport commentators need accurate information about soccer games, especially about the behavior of the players. This information can be gathered by inspectors who watch the soccer match and manually report the actions of the players involved in the principal phases of the game. Generally, these inspectors focus their attention on the few players standing near the ball and do not report the motion of all the other players, so it seems desirable to design a system that automatically tracks all the players in real time. We therefore propose to automatically track each player through the successive color images of sequences acquired by a fixed color camera. Each player present in the image is modeled by an active contour model, or snake. When, during the match, one player is hidden by another, the snakes tracking these two players merge, and it becomes impossible to track the players unless the snakes are interactively re-initialized. Fortunately, in most cases, the two players do not belong to the same team. We therefore present an algorithm that recognizes the teams of the players. Pixels representing the soccer ground must first be withdrawn before considering the players themselves; to eliminate these pixels, the color characteristics of the ground are determined interactively. In a second step, dealing with windows containing only one player of one team, the color features that yield the best discrimination between the two teams are selected. Thanks to these color features, the pixels associated with the players of the two teams form two separate clusters in a color space. Since many color representation systems exist, it is interesting to evaluate which features provide the best separation between the two classes of pixels according to the players' soccer suits.
Finally, the classification process for image segmentation is based on the three most discriminating color features, which define the coordinates of each pixel in a “hybrid color space.” Thanks to this hybrid color representation, each pixel can be assigned to one of the two classes by minimum distance classification.
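The minimum distance classification step amounts to assigning each pixel, represented by its three hybrid-space coordinates, to the nearest class center. The centers and pixel values below are hypothetical placeholders for features learned from single-player windows.

```python
import numpy as np

def classify_min_distance(pixels, centers):
    """Assign each pixel (coordinates in the hybrid color space) to the
    class whose center is nearest in Euclidean distance."""
    d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    return np.argmin(d, axis=1)

# Hypothetical class centers for the two teams, expressed with three
# discriminating color features.
centers = np.array([[200.0, 40.0, 30.0],    # team A
                    [30.0, 60.0, 180.0]])   # team B
pixels = np.array([[190.0, 50.0, 40.0],
                   [20.0, 55.0, 170.0]])
labels = classify_min_distance(pixels, centers)
```

Since the three features were selected precisely to separate the two clusters, the nearest-center rule is a cheap yet effective classifier here.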
In this paper, we propose an algorithm that detects the boundaries of objects in a color image. The result is a binary image in which only the boundary points are represented. Many edge detection methods for color images have been developed; one of the most efficient is based on vectorial computations of the tristimuli R, G, B. However, in the case of complex color images, it is difficult to automatically determine a global threshold for finding the boundary pixels. For this reason, we suggest a local thresholding algorithm that uses a cooperating relaxation process to enhance edge probabilities. The relaxation labeling algorithm processes probabilities that result from applying a gradient operator to the different features of the color image, so it is able to detect edges from a slope of intensity, saturation, or hue. The relaxation algorithm considers four classes of pixels. Three classes correspond to the gradient filter outputs of the three color features R, G, B: at each pixel, the higher the gradient response of a feature, the higher the probability of the pixel belonging to the class corresponding to that feature. The last label represents the no-edge pixels, whose probability is computed from the probabilities of the three other classes, so that at each pixel the four probabilities sum to 1. The process is iterated until the probability of each pixel belonging to the no-edge class is close to 0 or 1; a pixel with a low probability of belonging to the no-edge class is considered an edge pixel. The efficacy of the relaxation algorithm depends on the choice of the compatibility coefficients. We propose to compute these coefficients from the initial probabilities of the pixels belonging to the four classes, by evaluating the mutual information of the classes at neighboring points. The method has been successfully tested on complex color images and compared with classic edge detection methods; we show that this local segmentation method outperforms a global one.
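One relaxation update over the four labels can be sketched with the classical Rosenfeld–Hummel–Zucker form, used here as a generic stand-in for the paper's scheme. The compatibility matrix below is hand-chosen for illustration (the paper derives it from mutual information of neighboring classes), and for brevity neighbor support is averaged over all other pixels rather than a true spatial neighborhood.

```python
import numpy as np

def relaxation_step(P, C):
    """One relaxation labeling update.
    P: (n_pixels, 4) probabilities over labels R-edge, G-edge, B-edge,
       no-edge. C: (4, 4) compatibility coefficients in [-1, 1]."""
    n = P.shape[0]
    support = np.zeros_like(P)
    for i in range(n):
        others = np.delete(P, i, axis=0)       # crude "neighborhood"
        support[i] = others.mean(axis=0) @ C.T
    Q = P * (1.0 + support)                    # reinforce / inhibit
    return Q / Q.sum(axis=1, keepdims=True)    # renormalize to sum to 1

# Edge labels reinforce each other; no-edge reinforces no-edge.
C = np.array([[ 0.5,  0.2,  0.2, -0.5],
              [ 0.2,  0.5,  0.2, -0.5],
              [ 0.2,  0.2,  0.5, -0.5],
              [-0.5, -0.5, -0.5,  0.5]])
P = np.array([[0.4, 0.2, 0.2, 0.2],    # likely R-edge
              [0.5, 0.2, 0.2, 0.1]])   # strong R-edge neighbor
P1 = relaxation_step(P, C)
```

With a strongly edge-like neighbor, the first pixel's R-edge probability grows and its no-edge probability shrinks; iterating drives the no-edge probability toward 0 or 1, which is the stopping criterion described above.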
This paper describes a vision system designed for real-time automatic inspection of galvanized metallic strips. First, we present the image acquisition system, whose two main components are a linear camera and a specific lighting device. Second, an original procedure is proposed to segment the line images at the line-image acquisition rate. It consists of a fast adaptive thresholding scheme that determines a global threshold for each line image. In order to achieve exhaustive control of the whole production, these tasks run in real time on a specific hardware architecture. The prototype described in the last section of this paper has been integrated on a production line to evaluate its efficacy.
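A per-line adaptive threshold of the kind described can be sketched as a mean-plus-k-standard-deviations rule; the rule and the constant k below are illustrative assumptions standing in for the paper's scheme, and the line values are synthetic.

```python
import numpy as np

def line_threshold(line, k=2.0):
    """Global threshold for one line image: mean + k * std of the gray
    levels; pixels above it are flagged as defect candidates."""
    t = line.mean() + k * line.std()
    return line > t

# Synthetic line image: uniform strip surface with one bright defect.
line = np.array([100, 102, 99, 101, 100, 250, 101, 100], dtype=float)
defects = line_threshold(line)
```

Because the threshold is recomputed for every line image, slow drifts in illumination or strip reflectance do not require retuning a fixed global threshold.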