This paper presents two automatic methods for visual grading, one deterministic and one probabilistic, designed to solve the industrial problem of evaluating seed lots from the characterization of a representative sample. The sample is thrown in bulk onto a tray placed in a chamber that acquires color images in a controlled and reproducible manner. Two image-processing methods have been developed to separate and then characterize each seed present in the image. Shape learning is performed on isolated seeds, and the collected information is used for the segmentation. The first approach adopted for the segmentation step is based on simple criteria such as regions, edges, and normals to the boundary. Marked point processes are used in the second approach, allowing the problem to be tackled by energy minimization. In both approaches, an active contour with a shape prior is applied to improve the results. A classification is then performed on shape or color descriptors to evaluate the quality of the sample.
Universities, governmental administrations, photography agencies, and many other companies or individuals need a framework to manage their multimedia documents and the copyright or authenticity attached to their images. We propose a web-based interface able to perform several operations: storage, image navigation, copyright insertion, and authenticity verification. When the owner of a photograph wants to store the document and publish it on the Internet, he uses the interface to add his images and set the Internet sharing rules. The user can choose, for example, the watermarking method or the viewing resolution, and sets the parameters visually so as to reach the best trade-off between quality and protection. We also propose an authenticity module that allows on-line verification of documents: any Internet user who knows the encoding key will be able to verify whether a watermarked image has been altered. Finally, we give some practical examples of our system. In this study, we merge the latest technologies in image protection and navigation to offer a complete scheme for managing published images, so that a single system supplies both the security and the publication of the images.
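The abstract does not specify which watermarking scheme the interface uses, so the following is only a minimal sketch of the tamper-verification idea it describes: a keyed fragile watermark whose check fails as soon as the image is altered. The LSB-checksum construction, the SHA-256 choice, and both function names are illustrative assumptions, not the authors' method.

```python
import hashlib

import numpy as np


def embed_fragile_mark(img, key):
    """Embed a keyed checksum of the 7 high bit-planes into the LSB plane.

    Illustrative fragile watermark: any later modification of the pixel
    values breaks the keyed checksum, so alteration can be detected.
    """
    high = img & 0xFE                                  # clear the LSB plane
    digest = hashlib.sha256(key + high.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    bits = np.resize(bits, img.size).reshape(img.shape)  # tile over pixels
    return (high | bits).astype(np.uint8)


def verify_fragile_mark(img, key):
    """Return True iff the LSB plane still matches the keyed checksum."""
    high = img & 0xFE
    digest = hashlib.sha256(key + high.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    bits = np.resize(bits, img.size).reshape(img.shape)
    return bool(np.array_equal(img & 1, bits))
```

Only a user who knows `key` can recompute the checksum, which matches the abstract's requirement that verification is open to "any user knowing the key".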
During the last few years, content-based image retrieval has been the aim of many studies, and many systems have been introduced to achieve image indexing. One of the most common methods is to compute a segmentation and to extract different parameters from the regions. However, this segmentation step is based on low-level knowledge, without taking into account simple perceptual aspects of images such as blur. When a photographer decides to focus only on some objects in a scene, he certainly considers these objects very differently from the rest of the scene: they do not carry the same amount of information. Blurry regions may generally be considered as context rather than as information containers by image-retrieval tools. Our idea is therefore to restrict the comparison between images to the non-blurry regions, using this blur information as metadata. Our aim is to introduce different features and a machine-learning approach in order to achieve blur identification in scene images.
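The abstract does not list its blur features, but a classical sharpness measure of the kind such approaches build on is the variance of the Laplacian response over a region: low variance indicates blur. A minimal sketch (the threshold value is an assumption):

```python
import numpy as np


def laplacian_variance(patch):
    """Variance of the 4-neighbour discrete Laplacian; low values mean blur."""
    p = patch.astype(float)
    lap = (-4 * p[1:-1, 1:-1]
           + p[:-2, 1:-1] + p[2:, 1:-1]
           + p[1:-1, :-2] + p[1:-1, 2:])
    return lap.var()


def is_blurry(patch, threshold=100.0):
    """Label a region blurry when its Laplacian variance falls below a threshold.

    The threshold is illustrative; in practice it would be learned from data,
    as the machine-learning approach mentioned above suggests.
    """
    return laplacian_variance(patch) < threshold
```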
The latest research projects in the LIGIV laboratory concern the capture, processing, archiving, and display of color images, taking into account the trichromatic nature of the Human Visual System (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Region-of-Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the display conditions even if the display device or display medium changes. This first requires the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and on the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera color primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.
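The per-device bi-directional transforms described above route every device through a common reference space. As one concrete, hedged example (the paper does not say which reference space it adopts), the standard sRGB-to-CIE-XYZ conversion first linearizes the transfer function and then applies a fixed matrix:

```python
import numpy as np

# Standard IEC sRGB (D65) to CIE XYZ matrix -- one possible choice of
# reference color space for device-independent exchange.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])


def srgb_to_xyz(rgb):
    """Convert an sRGB triple in [0, 1] to CIE XYZ: linearize, then matrix."""
    rgb = np.asarray(rgb, dtype=float)
    # inverse of the sRGB transfer function (gamma expansion)
    linear = np.where(rgb <= 0.04045,
                      rgb / 12.92,
                      ((rgb + 0.055) / 1.055) ** 2.4)
    return SRGB_TO_XYZ @ linear
```

The inverse matrix gives the display-side transform; a cinema pipeline would additionally need the mesopic adaptation the abstract mentions, which is beyond this sketch.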
Query by example is a common model for content-based image retrieval. The purpose of such a tool is to extract from a large database the images most similar to a query image. In practice, the meaningful characteristics of each image are first extracted. Then each region is described with a vector composed of classical statistical features or spatial relationships. Finally, the system proposes to the user the images that minimize a similarity distance computed on these vectors.
Nevertheless, query by example depends on a criterion determined by the user. Objectively, this last step of any content-based retrieval system therefore has great difficulty expressing the user's real expectation, and the results are always constrained by the definition of the similarity distance. In fact, it is not sufficient to compute good descriptors; a robust and adequate distance to compare them is also necessary.
Our purpose is, more precisely, to evaluate different "blob-to-blob" similarity distances. Each image is first described locally using a coarse segmentation, and the meaningful regions are extracted using a selection process based on color homogeneity. From these descriptors, different distances are discussed using different approaches: spatial, shape, color, and texture similarities.
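A blob-to-blob comparison of the kind evaluated above can be sketched as a weighted sum of per-family distances. The descriptor keys (`color`, `shape`, `position`) and the equal default weights are assumptions for illustration; the abstract compares such families but does not fix the exact descriptors.

```python
import numpy as np


def blob_distance(a, b, weights=(1.0, 1.0, 1.0)):
    """Weighted blob-to-blob distance over color, shape and spatial descriptors.

    `a` and `b` are dicts with hypothetical keys 'color' (e.g. mean Lab),
    'shape' (e.g. normalized moments) and 'position' (normalized centroid).
    Each family contributes a Euclidean term, combined by the given weights.
    """
    wc, ws, wp = weights
    d_color = np.linalg.norm(np.asarray(a['color']) - np.asarray(b['color']))
    d_shape = np.linalg.norm(np.asarray(a['shape']) - np.asarray(b['shape']))
    d_pos = np.linalg.norm(np.asarray(a['position']) - np.asarray(b['position']))
    return wc * d_color + ws * d_shape + wp * d_pos
```

Varying the weights (including setting some to zero) is exactly what makes it possible to discuss spatial, shape, color, and texture similarities separately.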
The purpose of our visual information retrieval tool is to extract from a database the images that are similar to a query image. Color features are generally used to define a measure of similarity between images, as they are usually very robust to noise, image degradation, and changes in size, resolution, or orientation. Nevertheless, these features most often suffer from a lack of spatial color knowledge. Our purpose is therefore to merge two classical methods: the color pyramid and interest-point detection, well known in grey-level image analysis. The pertinence of this new method is demonstrated by an evaluation and a comparison with other keypoint detectors. We show its interest for image indexing with concrete tests on our large image database, using the icobra system.
Technological advances now provide the opportunity to automate pavement distress assessment. This paper deals with an approach to an automatic vision system for road-surface classification. Road surfaces are composed of aggregates, which have a particular grain-size distribution, and a mortar matrix. From various physical properties and visual aspects, four road families are defined. We present a tool using a pyramidal process, under the assumption that regions or objects in an image stand out because of their uniform texture. Note that the aim is not to compute yet another statistical parameter but to include usual criteria in our method. The road-surface classification uses a multiresolution co-occurrence matrix and a hierarchical process through an original intensity pyramid, where a father pixel takes the minimum gray-level value of its directly linked child pixels. More precisely, only the matrix diagonal is taken into account and analyzed along the pyramidal structure, which allows the classification to be made.
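The min-intensity pyramid described above is simple to state concretely: each father pixel takes the minimum of its 2x2 block of children. A minimal sketch (non-overlapping 2x2 blocks are assumed; the paper's exact father-child linking is not detailed):

```python
import numpy as np


def min_pyramid(img, levels=3):
    """Intensity pyramid where each father pixel takes the minimum gray
    level of its 2x2 children, as in the abstract's construction."""
    pyramid = [img]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape
        child = pyramid[-1][:h - h % 2, :w - w % 2]  # trim odd borders
        blocks = child.reshape(h // 2, 2, w // 2, 2)
        pyramid.append(blocks.min(axis=(1, 3)))      # per-block minimum
    return pyramid
```

A co-occurrence matrix would then be computed at each level, with only its diagonal (pairs of equal gray levels) tracked across the structure.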
The measurement of perceptual similarities between textures is a difficult problem in applications such as image classification and image retrieval in large databases. Among the various texture analysis methods and models developed over the years, those based on a multi-scale, multi-orientation paradigm seem to give the most reliable results with respect to human visual judgement. This paper describes new texture features extracted from an overcomplete wavelet transform called a 'steerable pyramid', which models human early vision. The textured image is decomposed into a 3-level pyramid using a 4-orientation band filter set, and the texture features are computed from the distributions associated with each filter as follows: we construct the cumulative distribution function (cdf) of gray levels from the 12 band-pass images and fit them with Bezier curves in order to characterize the texture. The clusters of Bezier control points from the 12 cdfs allow us to discriminate the textures. We apply these new texture features to a search through an image database to find the textures most 'similar' to a selected one.
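The two ingredients of the per-band signature above can be sketched briefly: the cdf of (absolute) filter responses, and a Bezier curve evaluated from its control points by de Casteljau's algorithm. The bin count is an assumption, and the least-squares fit of control points to the cdf is omitted:

```python
import numpy as np


def graylevel_cdf(band, bins=64):
    """Cumulative distribution of absolute filter responses for one
    band-pass image -- the curve the Bezier control points are fitted to."""
    hist, _ = np.histogram(np.abs(band).ravel(), bins=bins)
    cdf = np.cumsum(hist).astype(float)
    return cdf / cdf[-1]                 # normalize so the cdf ends at 1


def bezier(points, t):
    """Evaluate a Bezier curve at t in [0, 1] via de Casteljau's algorithm.

    Repeated linear interpolation of the control polygon; fitting the
    control points to a cdf (done in the paper's scheme) is not shown here.
    """
    pts = np.asarray(points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]
```

With 12 bands, the concatenated control points form the compact texture descriptor that is clustered and compared.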
Proc. SPIE. 3101, New Image Processing Techniques and Applications: Algorithms, Methods, and Components II
KEYWORDS: Visual process modeling, Databases, Image segmentation, Matrices, Image processing, Feature extraction, Human vision and color perception, Image retrieval, Image classification, Thin film coatings
This paper presents a hierarchical method for segmenting multi-textured images. Using a pyramidal construction, two texture features are extracted at each level: one based on the presence of contours and the other on a special co-occurrence matrix. This matrix makes an inventory of the son-father occurrences between two consecutive levels of the pyramid. The model is furthermore adapted for texture classification, using the intrinsic directions of the overlapping pyramidal structure. With this multiresolution scheme, human vision is simulated in its attention-focusing processes, via an individual and a contextual analysis of each textured region.
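The son-father co-occurrence matrix described above counts, for consecutive pyramid levels, how often a son gray level appears under a father gray level. A small sketch, assuming a non-overlapping 2x2 father-child link (the paper's pyramid is overlapping, which this simplification does not reproduce):

```python
import numpy as np


def son_father_cooccurrence(child, father, levels=256):
    """Inventory of (son gray level, father gray level) pairs between two
    consecutive pyramid levels; each father covers a 2x2 block of sons."""
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = father.shape
    for i in range(h):
        for j in range(w):
            block = child[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            for s in block.ravel():
                m[s, father[i, j]] += 1  # one count per son-father pair
    return m
```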
There is a lack of general models in image processing, particularly for image segmentation. Each treatment must combine several classical features, such as gray-level values, neighborhood, and spatial distribution. An image can be studied both from a geometric and from a combinatorial point of view. Here, to each digital image we associate a neighborhood hypergraph. This general model is clearly adapted to include gray-level and neighborhood information, particularly for image segmentation. Moreover, the pyramid constitutes an efficient tool in image analysis, simulating human vision in its attention focusing, through an individual and a contextual analysis of each region. This multiresolution scheme allows both relevant-region detection and detailed delineation. Then, combining the two approaches, a hypergraph segmentation is associated with each level of the pyramid. Finally, we use the evolution of this pyramid of hypergraphs for image segmentation and, more generally, for image modeling.
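One plausible reading of the neighborhood hypergraph above: each pixel generates a hyperedge containing the spatially close pixels whose gray levels are similar to its own. The radius and tolerance thresholds below are illustrative assumptions; the paper's exact neighborhood relation may differ.

```python
def neighborhood_hypergraph(img, radius=1, tol=10):
    """Associate to each pixel a hyperedge: the neighbors within `radius`
    whose gray level lies within `tol` of the pixel's own gray level.

    `img` is a 2D list (or array) of gray levels; the result maps each
    pixel coordinate to the set of coordinates in its hyperedge.
    """
    h, w = len(img), len(img[0])
    edges = {}
    for y in range(h):
        for x in range(w):
            edge = set()
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and abs(img[ny][nx] - img[y][x]) <= tol):
                        edge.add((ny, nx))
            edges[(y, x)] = edge
    return edges
```

A segmentation would then group pixels whose hyperedges strongly overlap, combining gray-level and neighborhood information in one structure.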
This paper presents a general method for achieving an automatic vision system for the quantification of visual aspects in the textile field. The process begins with the decomposition of the image into a structure image and a texture image. This operation is achieved by filtering in the Fourier domain, following an additive decomposition model with disconnected masks. New quantifiers are then computed on the texture images, and a segmentation is performed only if necessary. A new method using localized pyramids, called local pyramids, centered on each relevant part is also introduced; it is no longer penalized by particularly elongated objects. The results are efficient on more than 200 images, where the automatic rating agrees with the expert evaluations.
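The additive Fourier-domain decomposition described above can be sketched with two complementary ("disconnected") frequency masks: low frequencies give the structure image, the rest gives the texture image, and the two sum back to the original. The circular cutoff below is an assumption; the paper's mask shapes are not specified.

```python
import numpy as np


def fourier_decompose(img, cutoff=4):
    """Additive structure/texture split via complementary frequency masks.

    Low frequencies (inside a circle of radius `cutoff` around the DC
    term) -> structure image; the complement -> texture image.
    """
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    low = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= cutoff ** 2
    structure = np.fft.ifft2(np.fft.ifftshift(f * low)).real
    texture = np.fft.ifft2(np.fft.ifftshift(f * ~low)).real
    return structure, texture
```

Because the masks are complementary, the decomposition is exactly additive: structure + texture reconstructs the input up to floating-point error.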