Light is diffracted when passing through a perforated object, resulting in a spatial distribution of light intensity that depends on the size and form of the aperture and on the wavelength of the light. Analysis of this diffraction can be used to estimate the inclination of the surface. However, because of distortions in the image acquisition process, these estimates are not very precise when standard CCD cameras are used. To increase accuracy and resolve ambiguities, polarization analysis is added as a further source of information. Polarized light is used as the source of illumination. Owing to interaction with the matter of the aperture, the light is partly depolarized, so we obtain a spatial distribution of the degree of polarization. Measurements of the degree of polarization are not based on absolute light intensity and are thus much more robust. Methods for shape estimation are presented for irregularly shaped objects with a large number of apertures. Reasonable assumptions about the surface structure are introduced to reduce the number of degrees of freedom. Possible applications of the proposed method are discussed, and relations between optical and mechanical quantities such as traction are shown.
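The robustness claim rests on the degree of polarization being a ratio of intensities rather than an absolute intensity. A minimal Python sketch (illustrative only; the function name and sampling scheme are assumptions, not from the paper) shows the degree of linear polarization computed from intensity samples taken through a rotating linear polarizer:

```python
import numpy as np

def degree_of_linear_polarization(intensities):
    """Estimate the degree of linear polarization from intensity samples
    taken through a linear polarizer at evenly spaced angles over 180 deg.
    Being a ratio of intensities, the result is independent of the
    absolute intensity scale."""
    i = np.asarray(intensities, dtype=float)
    i_max, i_min = i.max(), i.min()
    return (i_max - i_min) / (i_max + i_min)

# Fully polarized light follows Malus' law: I(theta) ~ cos^2(theta)
angles = np.linspace(0.0, np.pi, 180, endpoint=False)
fully = np.cos(angles) ** 2
# Partially depolarized light: add an unpolarized pedestal to the signal
partial = 0.5 * fully + 0.25

print(degree_of_linear_polarization(fully))    # close to 1.0
print(degree_of_linear_polarization(partial))  # 0.5
```

Scaling either intensity array by any positive constant leaves the result unchanged, which is the robustness property exploited in the paper.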
In computer graphics, a complete knowledge of the interactions between light and a material is essential to obtain photorealistic pictures. Physical measurements allow us to obtain data on the material response, but are limited to industrial surfaces and depend on measurement conditions. Analytic models do exist, but they are often inadequate for common use: the empirical ones are too simple to be realistic, and the physically based ones are often too complex or too specialized to be generally useful. Therefore, we have developed a multiresolution virtual material model that describes not only the surface of a material but also its internal structure, by means of distribution functions of microelements arranged in layers. Each microelement possesses its own response to incident light, from an elementary reflection to a complex response produced by its inner structure, taking into account the geometry, energy, polarization, etc., of each light ray. This model is virtually illuminated in order to compute its response to an incident radiance. The directional response is stored in a compressed data structure using spherical wavelets and is intended for use in a rendering model such as directional radiosity.
This paper presents a method for deducing both the 3D orientation of a flat, rough surface and the 3D position of the light source by analyzing the specular reflection produced by the light source on the surface. This is achieved by polarization analysis of the reflected light from a single point of view. The new approach is applicable to all materials and to all isotropic surface structures except mirror-like and ideally Lambertian surfaces; it can therefore be applied in most cases of practical interest. First, the paper shows that important 3D information about the position of the light source can be inferred by polarization analysis. Second, we present a new method for calculating the surface orientation, which can be deduced from the intensity image of a reflected point light source. Furthermore, the paper shows the benefit derived from combining the two former results: in the resulting algorithm, the complete 3D information of the light source and the reflecting surface can be inferred. The applicability under different lighting conditions is demonstrated. In contrast to previous results, our method does not need any calibration of the experimental setup.
In automated visual surface inspection based on statistical pattern recognition, collecting training material for setting up the classifier may prove difficult. Obtaining a representative set of labeled training samples requires the training personnel to scan through large amounts of image material, which is an error-prone and laborious task. Further problems are caused by variations in the inspected materials and imaging conditions, especially with color imaging. Approaches based on adaptive defect detection and robust features may prove inapplicable because some faint or large-area defects are lost. Adjusting the classifier to adapt to a changed situation may be difficult because of the inflexibility of the classifiers' implementations. This can lead to impractical, frequently repeated cycles of training material collection and classifier retraining. In this paper we propose a non-segmenting defect detection technique combined with a self-organizing map (SOM) based classifier and user interface. The purpose is to avoid the problems of adaptive detection techniques and to provide an intuitive user interface for classification, helping with training material collection and labeling, and offering the possibility of easily adjusting the class boundaries. The approach is illustrated with examples from wood surface inspection.
The results of a color evaluation of some monocolor, bicolor and black-white liquid crystal displays utilizing the guest-host effect are presented. The polarized absorption spectra of the dichroic dyes dissolved in the liquid crystalline mixture E18 (BDH Ltd.) have been recorded, and the order parameter has been calculated. Moreover, the tristimulus values of the display colors in the OFF and ON states have been calculated, and the changes of the color parameters under an applied electric field have been determined using the 1976 CIELAB Color System. The results are compared with those obtained for a twisted nematic liquid crystal display.
The living softwood tree forms compression wood to compensate for external loads during growth, creating wood fibers with higher longitudinal shrinking and swelling at moisture content changes than normal wood. This is often the cause of undesirable warping of sawn wood products after drying. Automatic detection of severe compression wood is thus useful for rejecting unwanted pieces. Detection in the green condition is often preferred in a sawmill, while detection in the dry condition is needed in other applications. Three different non-destructive scanning methods were evaluated on both green and dry wood surfaces: RGB (red, green, blue) color scanning, tracheid-effect scanning and x-ray scanning. The color and x-ray methods were evaluated on Southern yellow pine lumber, while tracheid-effect scanning was tested on Norway spruce. For scanning in the green condition, detection of compression wood was good with the tracheid-effect and color scanning; x-ray scanning was not useful because of the uneven moisture distribution in green lumber. After drying the results change: tracheid-effect and x-ray scanning have good detection ability, while RGB color does not provide sufficient information for reliable detection.
The purpose of this paper is to investigate whether estimation techniques can be used to reduce the difference between predicted and actual RGB values. Images and spectral reflectances of two classes of objects were used: matte, 2D objects (Munsell chips and the Macbeth chart) and natural, 3D objects (faces). In the prediction phase, a simple RGB model was evaluated that takes into account only the spectral power distributions of the current and calibration illuminants, the spectral reflectances of the objects, and the spectral response of the RGB camera, thereby avoiding the complexity of modeling other factors possibly affecting image formation. The results show that estimation can bring the predictions closer to the actual values.
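A simple RGB model of the kind described predicts each channel as the discretized integral of illuminant power times surface reflectance times channel sensitivity. The Python sketch below illustrates this under stated assumptions: the wavelength grid, Gaussian channel sensitivities, and the flat illuminant are hypothetical stand-ins, not the paper's measured data.

```python
import numpy as np

# Wavelength grid (nm); all spectral quantities are sampled on it.
wl = np.arange(400, 701, 10)

def predict_rgb(illuminant, reflectance, sensitivities):
    """Predict camera RGB as the discretized integral
        c_k = sum_lambda E(lambda) * S(lambda) * q_k(lambda),
    i.e. illuminant power times surface reflectance times channel
    sensitivity, summed over wavelength."""
    radiance = illuminant * reflectance      # light reaching the camera
    return sensitivities @ radiance          # (3, N) @ (N,) -> (3,)

# Hypothetical Gaussian channel sensitivities (stand-ins for a real camera)
def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

q = np.stack([gauss(600, 30), gauss(540, 30), gauss(450, 25)])  # R, G, B
E = np.ones_like(wl, dtype=float)       # flat (equal-energy) illuminant
S = np.linspace(0.2, 0.8, wl.size)      # reddish ramp reflectance

rgb = predict_rgb(E, S, q)
print(rgb)
```

For the reddish ramp reflectance, the predicted R response exceeds B, as expected from the model.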
The paper reviews the state-of-the-art color measuring systems used for the control of newspaper printing. The printing process requirements are specified, and different off-line and on-line color quality control systems, commercially available and under development, are evaluated. Recent market trends in newspaper printing are discussed based on the survey. The study drew on information from conference proceedings (TAGA, IARIGAI, SPIE and IS&T), journals (American Printer, Applied Optics), discussions with experts (GMI, QTI, HONEYWELL, TOBIAS, GretagMacbeth), IFRA Expo'98/Quality Measuring Technologies, commercial brochures, and the Internet. Against the background of this review, three different measuring principles currently under investigation at VTT Information Technology are described, and their applicability to newspaper printing is evaluated.
Monitoring color in the production line requires remotely observing moving, unaligned objects with generally complex surface features: multicolored, textured, non-flat, showing highlights and shadows. We discuss the use of color cameras and associated color image processing technologies for what we call 'imaging colorimetry.' This is a two-step procedure that first uses color for segmentation and for finding regions of interest on the moving objects, and then uses cluster-based color image processing to compute color deviations relative to previously trained references. This colorimetry is much more a measurement of the aesthetic consistency of a product's visual appearance than the traditional measurement of a more physically defined mean color vector difference. We show how traditional non-imaging colorimetry loses most of this aesthetic information by computing a mean color vector or mean color vector difference, averaging over the sensor's field of view. A large number of industrial applications are presented in which complex inspection tasks have been solved with this approach. The expansion to higher feature-space dimensions based on the 'multisensorial camera' concept gives an outlook on future developments.
Object counting in a scene can be used in visual inspection processes. Highlights are the characteristic bright spots occurring on the surfaces of individual objects. In this work the importance of highlights in image processing is described and some reflection models are briefly reviewed; first of all, the possibilities of the Dichromatic Reflection Model (DRM) are presented. The paper presents a new idea for object counting based on counting the highlights on object surfaces. The method is composed of the following stages: extraction of highlights in the color image by thresholding selected IHS components, morphological consolidation of the extracted highlight regions, and region counting (labeling) in the binary image. It takes into consideration the number of light sources used, because with more than one light source multiple highlights per object are observed; this is handled without using a reflection model. The proposed method was tested on a number of different real-world images. Input images were acquired directly from a 1-CCD color camera without preprocessing. The best results were achieved for optically inhomogeneous (e.g. plastic) chromatic objects against a dark background. A typical lighting system based on two fluorescent tubes (5400 K) was used. The method seems promising for practical applications.
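The three stages (threshold an intensity component, consolidate morphologically, label regions) can be sketched in a few lines of Python. This is a minimal illustration of the pipeline shape, not the authors' implementation; the threshold value and structuring element are assumptions.

```python
import numpy as np
from scipy import ndimage

def count_objects_by_highlights(image_rgb, intensity_thresh=0.9,
                                n_light_sources=1):
    """Count objects by counting highlight blobs: threshold an intensity
    component, consolidate the highlight regions morphologically, label
    them, and divide by the number of light sources (one highlight per
    source per object)."""
    img = np.asarray(image_rgb, dtype=float)
    intensity = img.mean(axis=2)                 # I component of IHS
    mask = intensity > intensity_thresh          # highlight extraction
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    _, n_regions = ndimage.label(mask)           # region counting
    return n_regions // n_light_sources

# Synthetic scene: dark background with two bright highlight spots
scene = np.zeros((40, 40, 3))
scene[5:8, 5:8] = 1.0
scene[20:24, 25:29] = 1.0
print(count_objects_by_highlights(scene))
```

With a second light source producing two highlights per object, passing `n_light_sources=2` halves the region count, mirroring the correction described in the abstract.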
A novel sensor concept for detecting the fundamental components of visible light has been developed. The multi-channel sensors (3-, 4- and 6-channel detectors), based on three and four stacked amorphous thin-film detectors, are free of color moire and color aliasing thanks to their vertical integration. Color separation is performed in the depth of the structure without optical filters. The 3- and 4-channel detectors can be read out in one shot, whereas the color information of the 6-channel detector is read out in two shots. The sensors are colorimetrically characterized in order to derive further optimization criteria for improving sensor performance. The presented characterization model facilitates the quantification of color errors with regard to human perception. Furthermore, the color errors of the amorphous thin-film sensors are compared with those of a commercial color CCD camera and a BiCMOS color sensor.
Visual quality control is an important application area of machine vision. In the ceramics industry it is essential that, within each set of ceramic tiles, every tile looks similar with respect to, e.g., color and texture. Our goal is to design a machine vision system that can estimate sufficient similarity, or sameness of appearance, to the human eye; currently this estimation is usually done by human vision. Our main approach is to use an accurate spectral representation of color and to compare spectral features with RGB color features. The authors have recently proposed preliminary methods and results for the classification of color features. In this paper the approach is developed further to cope with illumination effects and to take more advantage of spectral features. Experiments with five classes of brown tiles are discussed. Besides the k-NN classifier, a neural network, the Self-Organizing Map (SOM), is used for understanding spectral features. Every spectrum in each tile is used as input to a 2-D SOM with 30 x 30 nodes, or neurons. The SOM is analyzed in order to understand how the spectra are clustered; as a result, the nodes are labeled according to the classes. Another interest is whether we can find an ordering of spectral colors. In our approach, all spectra are clustered by the 32 nodes of a 1-D SOM, and each pixel (spectrum) is presented in pseudocolors according to the trained nodes. Thus each node corresponds to one pseudocolor, and every spectrum is mapped to one of these nodes. Finally, the results are compared with experiments with human vision.
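The 1-D SOM pseudocoloring step can be illustrated with a compact Python sketch. This is a generic 1-D SOM on synthetic spectra, with hyperparameters (iteration count, learning-rate and neighborhood schedules) chosen for the example rather than taken from the paper:

```python
import numpy as np

def train_som_1d(spectra, n_nodes=32, n_iter=2000, seed=0):
    """Train a 1-D self-organizing map on spectra.  Each node holds a
    prototype spectrum; after training, every pixel spectrum maps to its
    best-matching node, whose index can serve as a pseudocolor."""
    rng = np.random.default_rng(seed)
    dim = spectra.shape[1]
    nodes = rng.uniform(spectra.min(), spectra.max(), size=(n_nodes, dim))
    for t in range(n_iter):
        x = spectra[rng.integers(len(spectra))]
        winner = np.argmin(((nodes - x) ** 2).sum(axis=1))
        lr = 0.5 * (1.0 - t / n_iter)                # decaying learning rate
        sigma = max(1.0, n_nodes / 2 * (1.0 - t / n_iter))
        dist = np.abs(np.arange(n_nodes) - winner)   # 1-D grid distance
        h = np.exp(-(dist ** 2) / (2 * sigma ** 2))  # neighborhood kernel
        nodes += lr * h[:, None] * (x - nodes)
    return nodes

def map_to_nodes(spectra, nodes):
    """Assign each spectrum to its best-matching node (pseudocolor index)."""
    d = ((spectra[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Two synthetic spectral clusters on a 31-band grid
rng = np.random.default_rng(1)
a = rng.normal(0.2, 0.02, size=(100, 31))
b = rng.normal(0.8, 0.02, size=(100, 31))
spectra = np.vstack([a, b])
nodes = train_som_1d(spectra)
labels = map_to_nodes(spectra, nodes)
```

After training, the two clusters map to different node indices, so rendering each pixel by its node index (the pseudocolor) separates the spectral classes visually, as described above.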
Flotation is the most common industrial method by which valuable minerals are separated from waste rock after crushing and grinding the ore. For process control, flotation plants and devices are equipped with conventional and specialized sensors. However, certain variables are left to the visual observation of the operator, such as the color of the froth and the size of the bubbles in it. The ChaCo project (EU Project 24931) was launched in November 1997. In this project a measuring station was built at the Pyhasalmi flotation plant. The system includes an RGB camera and a spectral color measuring instrument for the color inspection of the flotation. The visible spectral range is also measured so that the operators' comments on the color of the froth can be related to the sphalerite concentration and the process balance. Different dried mineral (sphalerite) ratios mixed with iron pyrite were studied to find the minerals' typical spectral features. The correlation between sphalerite spectral reflectance and sphalerite concentration over various wavelengths is used to select a suitable camera system with filters, and to compare the results with the color information from the RGB camera. Various candidate machine vision techniques are discussed for this application, and the preprocessed information on the dried mineral colors is adapted to the online measuring station. Moving froth bubbles produce total reflections that disturb the color information; polarization filters are used and the results are reported. Reflectance outside the visible range is also studied and reported.
An autonomous approach to learning the colors of specific objects with known body spectral reflectances is developed for daylight illumination conditions. The main issue is to find these objects autonomously in a set of training images captured under a wide variety of daylight illumination conditions, and to extract their colors in order to determine color space regions representative of the objects' colors and their variations. The work begins by modeling color formation under daylight using the color formation equations and the semi-empirical model of Judd, MacAdam and Wyszecki (the CIE daylight model) for representing typical spectral distributions of daylight. This yields color space regions that serve as prior information in the initial phase of learning, which consists of detecting small, reliable clusters of pixels having the appropriate colors. These clusters are then expanded by a region-growing technique using broader color space regions than those predicted by the model, in order to detect objects while accounting for color variations that the model, owing to its limitations, cannot capture. Validation is performed on the detected objects to filter out those that are not of interest and to eliminate unreliable pixel color values from the remaining ones. Detection results using the color space regions determined from color values obtained by this procedure are discussed.
Color processing methods can be divided into methods based on human color vision and spectrally based methods. Human-vision-based methods usually describe color with three parameters that are easy to interpret, since they model familiar color perception processes; however, they share the limitations of human color vision, such as metamerism. Spectrally based methods describe colors by their underlying spectra and thus do not involve human color perception; they are often used in industrial inspection and remote sensing. Most spectral methods employ a low-dimensional (three to ten) representation of the spectra obtained from an orthogonal (usually eigenvector) expansion. While the spectral methods have a solid theoretical foundation, their results are often difficult to interpret. In this paper we show that for a large family of spectra, the space of eigenvector coefficients has a natural cone structure. We can therefore define a natural hyperbolic coordinate system whose coordinates are closely related to intensity, saturation and hue. The relation between the hyperbolic coordinate system and the perceptually uniform Lab color space is also shown. Defining a Fourier transform in the hyperbolic space may have applications in pattern recognition problems.
'Saturation' here refers to electronic saturation of the camera sensors, which produces clipped colors, not to the purity of color as in the hue-saturation-value scale. Saturated images are routinely discarded in image analysis, yet there are situations in which they cannot be avoided. This paper proposes two strategies for recovering color information in facial images taken under non-ideal conditions, making them useful for further processing. The first assumes that the skin is matte and that there are parts of the image that are not clipped; ratios between the R, G and B values of unclipped pixels belonging to the same parts of the image can then be used to compute the lost channel values. The second approach uses color eigenfaces computed from our physics-based face database, obtained under different illuminants and camera calibration conditions; skin color is recovered by transforming the first few eigenface coefficients towards their ideal-condition values. Excellent color recovery for clipped images is achieved when these two techniques are combined and applied to face images captured under a daylight illuminant with a camera white-balanced for incandescent light.
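The ratio-based first strategy can be sketched directly: for a matte region, the R/G ratio estimated from unclipped pixels is used to reconstruct a clipped R channel. The function name, the 255 clip level, and the synthetic patch below are illustrative assumptions, not the paper's data.

```python
import numpy as np

CLIP = 255

def recover_clipped_red(pixels):
    """Recover clipped R values of matte pixels using the R/G ratio of
    unclipped pixels from the same image region.
    `pixels` is an (N, 3) array of R, G, B values."""
    px = np.asarray(pixels, dtype=float)
    unclipped = px[(px < CLIP).all(axis=1)]
    ratio = np.median(unclipped[:, 0] / unclipped[:, 1])  # typical R/G
    out = px.copy()
    clipped_r = px[:, 0] >= CLIP
    out[clipped_r, 0] = ratio * px[clipped_r, 1]          # estimate R from G
    return out

# Synthetic matte patch: true R = 1.5 * G, some R values clipped at 255
g = np.array([100.0, 120.0, 140.0, 180.0, 200.0])
r_true = 1.5 * g                      # 150, 180, 210, 270, 300
r_obs = np.minimum(r_true, CLIP)      # last two values clip to 255
pixels = np.stack([r_obs, g, np.full_like(g, 80.0)], axis=1)
recovered = recover_clipped_red(pixels)
```

On this patch the two clipped values are restored exactly because the R/G ratio is constant; on real skin the ratio is only approximately constant, which is why the paper combines this strategy with the eigenface approach.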
In this paper we present the results of a preliminary computer vision system for classifying the production of a ceramic tile factory. We focus on the classification of a specific type of tile whose production can be affected by external factors such as humidity, temperature, and the origin of clays and pigments. Variations in these uncontrolled factors provoke small differences in the color and texture of the tiles that force the classification of the entire production. A consistent and non-subjective classification would avoid returns from customers and unnecessary stock fragmentation. The aim of this work is to simulate human behavior in this classification task by extracting a set of features from tile images. These features are induced by definitions from experts; computing them requires mixing color and texture information and defining global and local measures. In this work we do not seek a general texture-color representation; we deal only with textures formed by randomly distributed, non-oriented colored blobs. New samples are classified using discriminant analysis functions derived from tile samples of known class. The last part of the paper is devoted to explaining the correction of the acquired images to compensate for temporal and geometric illumination changes.
An image segmentation method based on the dichromatic reflection model is introduced. To adapt to changing illumination conditions, the image formation process is modeled by the camera characteristics, the reflectance of the object of interest, and the CIE daylight standard. A priori loci of the body and surface reflection for the object of interest are modeled according to illumination changes under the CIE daylight standard. These two loci are approximated by two lines, and the plane they define is used initially for segmentation. In the case of two objects, the image is segmented by this plane, rotated about the surface locus to minimize Wilks' lambda. The method is used to segment four images ranging in correlated color temperature from 5200 K to 11500 K. To assess its performance, the four images were manually segmented into three classes: vegetation, background, and an uncertain class. The method adapted to the changing light conditions with total errors ranging from 3% to 12%, the higher error rates occurring in the images with the largest uncertain group. The method was also compared with the Bayes minimax criterion for finding the 'best' rotation, from which it deviated by only 0.8% on average.
Wood Technology research and education at Lulea University of Technology is located in Skelleftea, 800 km north of Stockholm. At the campus about 25 persons are involved in education and research in Wood Technology, educating M.Sc. and postgraduate students. The research at the campus covers the following main fields: Wood Machining, Woodmetrics, Wood Drying, and Wood Composites/Wood Material Science. Our research strategy is the individual treatment of every tree, board and piece of wood in order to obtain the highest possible value from the forest products; this is to be accomplished with the aid of advanced scanning and computer technology. Woodmetrics means measuring different wood parameters in order to optimize the utilization of the raw material. Our current projects in this field are: automatic wood inspection; color changes and moisture flow in drying processes; inner quality of logs and lumber; a stem quality database; computer tomography; aesthetic properties of wood; and market/industry/forest relations. In the Woodmetrics field we use computer tomography, CCD cameras and other sensors to find and measure defects in trees and on boards. The signals are analyzed and classified with modern image analysis techniques and advanced statistical methods.
This paper describes an approach to recognizing naturally textured objects using color images. Natural objects, such as finished wood, yield images that are inherently difficult to analyze because large variations in visual appearance are common. In the application of interest here, traditional texture- and color-based techniques yielded poor results in our early experiments. However, we found that classification accuracy improved dramatically when a nonuniform quantization of the color space was chosen adaptively using a set of training images. Ultimately, we developed a novel method for selecting a nonuniform partition of the color space such that differences between object classes are accentuated. The resulting partition serves as the domain for the histograms of models and of observed images, and an information-theoretic similarity measure is used to perform recognition. The motivation for this system is to achieve high recognition accuracy in an industrial setting. Laboratory tests have demonstrated a high level of accuracy for this technique, even though the objects of interest exhibit large variations of texture and color.
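The two ingredients, an adaptive nonuniform quantization and an information-theoretic histogram similarity, can be illustrated in Python. As a simple stand-in for the paper's class-discriminative partition selection, the sketch uses equal-mass (quantile) bins learned from training values, and the symmetric Jeffrey divergence as the similarity measure; both choices are assumptions for illustration.

```python
import numpy as np

def quantile_bins(train_values, n_bins=8):
    """Nonuniform bin edges adapted to training data: equal-mass
    (quantile) bins rather than equal-width ones."""
    qs = np.linspace(0, 1, n_bins + 1)
    return np.quantile(train_values, qs)

def histogram(values, edges):
    h, _ = np.histogram(values, bins=edges)
    return h / h.sum()

def jeffrey_divergence(p, q, eps=1e-12):
    """Symmetric, information-theoretic histogram dissimilarity."""
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    return 0.5 * (np.sum(p * np.log(p / m)) + np.sum(q * np.log(q / m)))

rng = np.random.default_rng(0)
train = rng.beta(2, 5, size=5000)          # skewed "color" distribution
edges = quantile_bins(train, 8)            # bins adapted to the data

ref = histogram(rng.beta(2, 5, size=2000), edges)
same = histogram(rng.beta(2, 5, size=2000), edges)
diff = histogram(rng.beta(5, 2, size=2000), edges)

# Same-class histograms should be closer than different-class ones
print(jeffrey_divergence(ref, same), jeffrey_divergence(ref, diff))
```

The adaptive bins concentrate resolution where the training data lie, which is the basic idea behind accentuating between-class differences in the partition.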
A real-time color quality control metric for planar surfaces has been developed. It is a differencing method that compares the color histogram of a test object with that of a standard obtained off-line. To reduce the computational effort, three 1D histograms are formed by projecting the reference color histogram onto its principal axes. A metric value for each axis is then obtained by taking the RMS value of the differences between the corresponding entries in the histograms of the sample and reference objects. A model of the metric's behavior has been developed and compared with the practical case in which one of the color channels is attenuated. The major axis is generally observed to form the least sensitive color metric component for quality control purposes. It is argued that the projection onto the minor axis is theoretically expected to produce the most sensitive component, because this is a null channel with respect to the reference image; in practice this is often observed to be the case. The method has been evaluated using production samples of ceramic tiles, and results are presented showing clustering of the experimental data corresponding to tiles of different grades.
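The metric pipeline (principal axes of the reference colors, 1D histograms of the projections, per-axis RMS difference) can be sketched as follows. This is an illustrative Python reconstruction under stated assumptions (bin count, synthetic Gaussian color clouds), not the published implementation:

```python
import numpy as np

def principal_axes(colors):
    """Principal axes of a reference color distribution: eigenvectors of
    its covariance, sorted from major to minor axis."""
    c = colors - colors.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(c.T))
    return vecs[:, np.argsort(vals)[::-1]]         # columns: major..minor

def metric_components(reference, sample, n_bins=32):
    """RMS difference between 1D histograms of reference and sample
    colors projected onto the reference principal axes."""
    axes = principal_axes(reference)
    ref_p = (reference - reference.mean(axis=0)) @ axes
    smp_p = (sample - reference.mean(axis=0)) @ axes
    out = []
    for k in range(3):
        lo = min(ref_p[:, k].min(), smp_p[:, k].min())
        hi = max(ref_p[:, k].max(), smp_p[:, k].max())
        hr, _ = np.histogram(ref_p[:, k], bins=n_bins, range=(lo, hi),
                             density=True)
        hs, _ = np.histogram(smp_p[:, k], bins=n_bins, range=(lo, hi),
                             density=True)
        out.append(np.sqrt(np.mean((hr - hs) ** 2)))
    return np.array(out)                           # [major, middle, minor]

rng = np.random.default_rng(0)
ref = rng.normal([120, 100, 90], [20, 10, 5], size=(4000, 3))
attenuated = ref * np.array([0.9, 1.0, 1.0])       # attenuate one channel
print(metric_components(ref, ref.copy()))          # all zero: identical
print(metric_components(ref, attenuated))          # nonzero components
```

An identical sample yields a zero metric on every axis, while attenuating one channel perturbs the projected histograms and produces nonzero components, the situation analyzed in the paper.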
A hybrid adaptive system incorporating linear regression and a neural network has been developed for the correction of color measurement errors. The linear regression model corrects systematic errors, while the neural network corrects the residual errors that linear regression is unable to remove. We use standard color materials from the National Physical Laboratory (NPL) as training samples and test the method on a variety of colors outside the training set. Experimental results are presented that show a promising future for neural networks in the color measurement industry.
Colors recorded in an image depend on the color of the capture illuminant. Image colors are therefore not stable features for object recognition, although we wish they were, since perceived colors (the colors we see) are illuminant-independent and do correlate with object identity. Color constancy algorithms attempt to infer and remove the illuminant color through image analysis, and various models for color constancy have been developed over the last two decades. Unfortunately, color constancy algorithms are still not good enough to support object recognition. In this paper, we evaluate optimal color constancy procedures against color normalization. Two perfect color constancy algorithms are described. The first, perfect color constancy by the scene, arrives at an estimate of the illuminant not through algorithmic inference but through measurement: the light source is measured with a spectroradiometer, assuming the reflectances of the object surfaces are known. The second, perfect color constancy by the illuminant, likewise estimates the illuminant through measurement, but assumes the reflectances of the object surfaces are unknown. Color normalization, by contrast, normalizes color images with respect to their context in order to remove the illumination dependence. To remove this dependence, images in a calibrated dataset are preprocessed using either color constancy or color-invariant normalization. Two experiments are reported. In the first, the optimal algorithms of perfect color constancy based on measurement were tested on a calibrated image dataset. In the second, the performance of the optimal color constancy algorithms is compared with color-invariant normalization. Unfortunately, measurement-driven color constancy by the illuminant does not support perfect recognition. However, color constancy preprocessing based on a scene-dependent 'effective illuminant' facilitates near-perfect recognition.
In comparison, color-invariant normalization also delivers near-perfect recognition. The failure of color constancy by the illuminant is understandable, because the measured illuminant does not correspond to the actual effective illuminant; rather, we found the effective illumination to depend both on the light source and on characteristics of the scene.
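As a concrete example of the color-normalization family discussed here, a grey-world-style diagonal normalization scales each channel by its mean, so that any diagonal (per-channel) illuminant change cancels exactly. This is one common member of the family, offered as an illustration; it is not necessarily the exact normalization evaluated in the paper.

```python
import numpy as np

def channel_mean_normalize(image):
    """Scale each channel by its mean so the normalized image has unit
    channel means.  Under a diagonal illuminant model (each channel
    scaled by a constant), the illuminant factor then cancels exactly."""
    img = np.asarray(image, dtype=float)
    means = img.reshape(-1, 3).mean(axis=0)
    return img / means

rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.8, size=(16, 16, 3))
reddish = scene * np.array([1.4, 1.0, 0.7])   # same scene, different light

a = channel_mean_normalize(scene)
b = channel_mean_normalize(reddish)
# After normalization the two renderings of the scene agree
print(np.abs(a - b).max())
```

Because both renderings normalize to the same image, features computed after this preprocessing are invariant to the diagonal illuminant change, which is why such normalizations compete with explicit color constancy in recognition experiments.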
The problem of color constancy, i.e., discounting the illumination color to obtain the apparent color of an object, has been the topic of much research in computer vision. Assuming neutral interface reflection and dichromatic reflection with highlights (i.e., highlights have the same color as the illuminant), various methods have been proposed to recover the illuminant color from color highlight analysis. In general, these methods approximate color by three color stimuli. In this contribution, we estimate the spectral distribution of the surface reflection using spectral information obtained by an imaging spectrograph, which provides at each pixel a spectrum covering the visible wavelength range. Our method differs from existing methods in using a robust clustering technique to obtain the body and surface components in a multi-spectral space; these components determine the direction of the illumination spectral color. We then recover the illumination spectral power distribution by applying principal component analysis over all wavelengths. To obtain the most reliable estimate, all possible combinations of wavelengths are used to generate an optimal averaged estimate of the spectral power distribution of the scene illuminant. The method is restricted to images containing a substantial amount of body reflection and highlights.
We present a polarization measuring system for industrial applications. The system consists of a polarization state generator and a polarization-sensitive detector comprising a rotating waveplate, a polarizer, and a detector. A fast Fourier algorithm provides the four Stokes parameters of the light, all calculated independently. By tuning the polarization of the illuminating light we calculate the Jones matrix and the birefringence of the sample. High accuracy is achieved by accounting for the fact that the transmission of the rotating waveplate in the detector depends on the polarization of the light under study. The system delivers the birefringence, the azimuth of the fast axis, the Jones matrix, and the eigenpolarizations of the sample under test. Under industrial conditions we found an accuracy of 0.1 nm for the stress birefringence and 0.1 degree for the azimuth of the fast axis.
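The textbook version of the Fourier step can be sketched in Python: with a rotating quarter-wave plate in front of a fixed polarizer, the detected intensity contains DC, second-harmonic, and fourth-harmonic terms whose Fourier coefficients yield the four Stokes parameters independently. Sign conventions vary between references, and this generic sketch omits the waveplate-transmission correction that the paper's system applies.

```python
import numpy as np

def intensity_rotating_qwp(stokes, theta):
    """Detector intensity for a rotating quarter-wave plate followed by a
    fixed horizontal polarizer (one common sign convention):
        I = 1/2 [A + B sin 2t + C cos 4t + D sin 4t]
    with A = S0 + S1/2, B = S3, C = S1/2, D = S2/2."""
    s0, s1, s2, s3 = stokes
    return 0.5 * (s0 + s1 / 2
                  + s3 * np.sin(2 * theta)
                  + (s1 / 2) * np.cos(4 * theta)
                  + (s2 / 2) * np.sin(4 * theta))

def stokes_from_samples(theta, intensity):
    """Recover the four Stokes parameters independently from the Fourier
    coefficients of the measured intensity over a full rotation."""
    n = len(theta)
    a = 2.0 / n * np.sum(intensity)                      # DC term = A
    b = 4.0 / n * np.sum(intensity * np.sin(2 * theta))  # = B
    c = 4.0 / n * np.sum(intensity * np.cos(4 * theta))  # = C
    d = 4.0 / n * np.sum(intensity * np.sin(4 * theta))  # = D
    return np.array([a - c, 2 * c, 2 * d, b])            # S0, S1, S2, S3

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
true = np.array([1.0, 0.3, -0.2, 0.5])   # partially polarized test state
meas = intensity_rotating_qwp(true, theta)
print(stokes_from_samples(theta, meas))
```

On noise-free samples the recovery is exact, since the 2θ and 4θ harmonics are orthogonal over a uniformly sampled full rotation; each Stokes parameter is obtained from its own coefficient, matching the abstract's statement that all four are calculated independently.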
We present an improved method for material classification by measurement of the full Stokes vector of a reflectance image. The approach is based on the observation that linearly polarized light specularly reflected from a dielectric remains linearly polarized but becomes highly unpolarized in off-specular directions. For a metal, the same incident light becomes elliptically polarized on reflection in both the macroscopic specular and non-specular directions, except when the orientation of polarization of the incident light is perpendicular or parallel to the plane of incidence. The novelty and value of this work, in comparison with previous approaches, is that it removes a number of constraints on system and scene geometry.
The angle between the optical axis of a nematic liquid crystal and the inner surface of the cell strongly influences the electro-optical performance of liquid crystal display devices. We report an accurate method for quickly determining the tilt bias angle of an anti-parallel liquid crystal cell using our spatial photometer EZContrast. The measurement system provides instantaneous observation at infinity, without mechanical scanning, of the interference figure obtained with the sample over a plus or minus 80 degree measurement angle. The tilt bias angle and the cell gap are deduced by fitting the intensity extrema of the figure. Thanks to the large set of data, it is also possible to determine the error due to misalignment between the cell normal and the EZContrast axis; consequently, the method is not influenced by operator skill.
This paper describes the characterization of the polarization modulation produced by a commercially manufactured liquid crystal television from which the polarizers have been removed. Experimental results are compared with a Jones matrix model of the display developed by researchers at UCL. Experimental analysis shows that the behavior of the device agrees qualitatively with the prediction but deviates quantitatively for certain input polarizations. An algorithm has been developed to determine an unknown input polarization from intensity measurements taken through a fixed analyzer, with the display at several different applied voltages. The commercially produced display, apart from its potentially lower cost due to mass production, has the advantage of a large number of pixels, which allows selective control or measurement of the polarization across the two-dimensional input field of view.
Among the properties computer vision attempts to extract from images are local shape, intrinsic reflection parameters, and roughness, because all are crucial to the success of image interpretation in general and of realistic surface modeling in particular. We seek light reflection phenomena and reflectance models that better capture the relation between the parameters of the reflected light and the local properties of the observed surface. In addition to the intensity-shape relation, long studied in computer vision, we study the relation of other light characteristics to local shape, starting with the polarization state of light. We found that the generalization of Lambert's reflectance model based on Fresnel coefficients, as proposed by Wolff, correctly predicts the polarization state of light reflected from smooth dielectrics. Both the light incidence and viewer plane orientations may be measured to estimate the surface normal if the light position relative to the observer is known. It is demonstrated that inter-reflections between surfaces play an important role, especially near the shadow boundary, where the body reflection component is very weak.
Results of studies on polarimetric fiber optic sensing based on polarization effects in fiber optic smart structures are presented. The smart structures consisted of highly birefringent (Hi-Bi) bow-tie and side-hole fibers embedded in a specially prepared epoxy cylinder. The applied procedure was designed to investigate and compare the influence of the external structure on the optical properties of different types of Hi-Bi fibers. The Hi-Bi fiber-based structure was subjected to selected deformation effects, mostly induced by hydrostatic pressure (up to 300 MPa) and temperature, while the polarization properties of the transmitted optical signal were investigated.