The observed color of an object is influenced by the spectral power distribution of the illuminant impinging upon it. Here we explored a method to obtain optimal illumination spectra for local contrast enhancement based on human vision. First, multispectral imaging was used to measure the spectral reflectance of the sample, and color segmentation was used to extract its color features. We then obtained the target-specific optimal illumination by maximizing the mutual color differences among the colors of our sample tissue. To verify the effectiveness of this method, simulated images under the optimized illumination were compared with images under the standard illuminant D65 and a cool white light-emitting diode (5500 K). The results showed that the sample under the optimized illumination had better perceptual color contrast.
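The illuminant-selection step can be sketched as follows, assuming the segmented sample colours have already been converted to CIELAB under each candidate spectrum. The max-min objective and all names here are illustrative stand-ins for the paper's optimization, not its exact formulation:

```python
import itertools
import math

def delta_e_ab(lab1, lab2):
    """CIELAB Delta E*ab between two (L*, a*, b*) triples."""
    return math.dist(lab1, lab2)

def best_illuminant(labs_per_illuminant):
    """Pick the candidate illuminant whose rendering maximizes the smallest
    pairwise colour difference among the sample's segmented colours.

    labs_per_illuminant: dict mapping illuminant name -> list of Lab triples
    (the sample colours as they would appear under that illuminant).
    """
    def min_pairwise(labs):
        return min(delta_e_ab(p, q)
                   for p, q in itertools.combinations(labs, 2))
    return max(labs_per_illuminant,
               key=lambda name: min_pairwise(labs_per_illuminant[name]))
```

An illuminant that spreads the tissue colours apart in Lab space wins over one that renders them nearly identical.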
Colour-difference formulas are tools employed in the colour industries for objective pass/fail decisions on manufactured products. These objective decisions are based on instrumental colour measurements, which must reliably predict the subjective colour-difference evaluations performed by panels of observers. In a previous paper we tested the performance of different colour-difference formulas using the datasets employed in the development of the most recent CIE-recommended colour-difference formula, CIEDE2000, and we found that the AUDI2000 colour-difference formula performed reasonably well for solid (homogeneous) colours, even though the colour pairs in these datasets were not similar to those typically employed in the automotive industry (CIE Publication x038:2013, 465-469). Here we have tested AUDI2000 again, together with 11 advanced colour-difference formulas (CIELUV, CIELAB, CMC, BFD, CIE94, CIEDE2000, CAM02-UCS, CAM02-SCD, DIN99d, DIN99b, OSA-GP-Euclidean), on three visual datasets that we consider particularly useful to the automotive industry, for different reasons: 1) 828 metallic colour pairs used to develop the highly reliable RIT-DuPont dataset (Color Res. Appl. 35, 274-283, 2010); 2) printed samples forming 893 colour pairs with threshold colour differences (J. Opt. Soc. Am. A 29, 883-891, 2012); 3) 150 colour pairs in a tolerance dataset proposed by AUDI. To measure the relative merits of the tested colour-difference formulas, we employed the STRESS index (J. Opt. Soc. Am. A 24, 1823-1829, 2007), assuming a 95% confidence level. For datasets 1) and 2), AUDI2000 was in the group of the best colour-difference formulas, with no significant differences with respect to the CIE94, CIEDE2000, CAM02-UCS, DIN99b and DIN99d formulas. For dataset 3), AUDI2000 provided the best results, being statistically significantly better than all the other tested colour-difference formulas.
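The STRESS index used to rank the formulas can be computed as below; this is a minimal sketch of the standard formulation (García et al., 2007), where 0 indicates perfect agreement between computed and visual differences and values near 100 indicate no correlation. The scaling factor `f1` follows the common form of that paper:

```python
import math

def stress(de, dv):
    """STRESS index between computed colour differences `de` and visual
    colour differences `dv` for the same sample pairs.
    0 = perfect proportional agreement; ~100 = no correlation."""
    # optimal scaling factor bringing dv onto the scale of de
    f1 = sum(e * e for e in de) / sum(e * v for e, v in zip(de, dv))
    num = sum((e - f1 * v) ** 2 for e, v in zip(de, dv))
    den = sum((f1 * v) ** 2 for v in dv)
    return 100.0 * math.sqrt(num / den)
```

Because STRESS is invariant to an overall scale factor, two formulas can be compared fairly even when their ΔE units differ.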
The colors of skin, green plants, and blue sky in digital photographic images were studied for the modeling and detection of these three important memory color regions. The color modeling of the three regions in CIELAB and CAM02-UCS was presented, and the properties of the three color groups were investigated.
The image quality of two active matrix organic light emitting diode (AMOLED) smart-phone displays and two in-plane switching (IPS) ones was visually assessed at two levels of ambient lighting, corresponding to indoor and outdoor applications. Naturalness, colorfulness, brightness, contrast, sharpness, and overall image quality were evaluated in a psychophysical experiment using the categorical judgment method, with test images selected from different application categories. The experimental results show that the AMOLED displays perform better on colorfulness because of their wide color gamut, while the high pixel resolution and high peak luminance of the IPS panels help the perception of brightness, contrast, and sharpness. Further statistical analysis using ANOVA indicates that the ambient lighting level has a significant influence on the attributes of brightness and contrast.
Skin tone is the most important color category in memory colors. Reproducing it pleasingly is an important factor in
photographic color reproduction. Moving skin colors toward their preferred skin color center improves skin color
preference in photographic color reproduction. Two key factors to successfully enhance skin colors are: a method to
detect original skin colors effectively even if they are shifted far away from the regular skin color region, and a method
to morph skin colors toward a preferred skin color region properly without introducing artifacts. A method for skin
color enhancement presented by the authors in the same conference last year applies a static skin color model for skin
color detection, which may fail to detect skin colors that lie far from the regular skin tone region. In this paper, a new
method combining face detection and statistical skin color modeling is proposed to detect skin pixels and enhance
skin colors more effectively.
A series of printed samples on a semi-gloss paper substrate, with colour differences of threshold magnitude, was
prepared for scaling the visual colour difference and evaluating the performance of different methods. The probabilities
of perceptibility were normalized to Z-scores, and the different colour differences were scaled against the Z-scores. The
resulting visual colour differences were checked using the STRESS factor. The results indicate that only the overall
scale changed, while the relative scales between the pairs in the data were preserved.
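The probit step described above, converting proportions of "perceptible" judgements into Z-scores, can be sketched with the inverse standard-normal CDF. The clipping of extreme probabilities is an assumption here, a common practical workaround so the probit stays finite, not part of the original method:

```python
from statistics import NormalDist

def perceptibility_to_z(probabilities):
    """Map probabilities of perceptibility (0..1) onto Z-scores via the
    inverse standard-normal CDF (the classical probit transform)."""
    nd = NormalDist()
    eps = 1e-6  # avoid inv_cdf(0) / inv_cdf(1), which are infinite
    return [nd.inv_cdf(min(max(p, eps), 1 - eps)) for p in probabilities]
```

A pair judged perceptible by half the observers maps to Z = 0; larger proportions map to positive Z-scores on the same normalized scale.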
Skin tones are the most important colors in the memory color category. Reproducing skin colors pleasingly is an
important factor in photographic color reproduction. Moving skin colors toward their preferred skin color center
improves the color preference of skin color reproduction. Several methods to morph skin colors to a smaller preferred
skin color region have been reported in the past. In this paper, a new approach is proposed to further improve the results of
skin color enhancement. An ellipsoid skin color model is applied to compute skin color probabilities for skin color
detection and to determine a weight for skin color adjustment. Preferred skin color centers determined through
psychophysical experiments were applied for color adjustment. Preferred skin color centers for dark, medium, and light
skin colors are applied to adjust skin colors differently. Skin colors are morphed toward their preferred color centers. A
special processing is applied to avoid contrast loss in highlights. A 3-D interpolation method is applied to fix a potential
contouring problem and to improve color processing efficiency. A psychophysical experiment validates that the
method of preferred skin color enhancement effectively identifies skin colors, improves skin color preference, and
does not objectionably affect already-preferred skin colors in original images.
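The two core operations, an ellipsoid probability weight and a weighted morph toward a preferred center, can be sketched as follows. The Gaussian falloff, the function names, and the `strength` parameter are illustrative assumptions, not the authors' exact model:

```python
import math

def skin_weight(lab, center, inv_cov):
    """Skin-colour probability weight from an ellipsoid model: the squared
    Mahalanobis distance of the Lab value to the model centre is mapped
    through a Gaussian falloff (1 at the centre, ~0 far outside)."""
    d = [x - c for x, c in zip(lab, center)]
    d2 = sum(d[i] * inv_cov[i][j] * d[j]
             for i in range(3) for j in range(3))
    return math.exp(-0.5 * d2)

def enhance(lab, center, preferred, inv_cov, strength=0.5):
    """Morph a colour toward the preferred skin-colour centre, weighted by
    its skin probability so non-skin colours stay essentially untouched."""
    w = skin_weight(lab, center, inv_cov)
    return tuple(x + strength * w * (p - x)
                 for x, p in zip(lab, preferred))
```

Because the weight decays smoothly, the adjustment fades out at the edge of the skin region, which is what prevents the contouring artifacts the paper addresses.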
Colour preference adjustment is an essential step in colour image enhancement and perceptual gamut mapping. In
colour reproduction for pictorial images, properly shifting colours away from their colorimetric originals may produce
a more preferred reproduction. Memory colours, as a subset of the colour regions subject to preference adjustment,
are especially important for preferred colour reproduction. Identifying memory colours, or modelling the memory
colour regions, is a basic step in studying preferred memory colour enhancement. In this study, we first created a
gamut for each memory colour region, represented as a convex hull, and then used the convex hull to guide the
mathematical modelling that formulates the colour region for colour enhancement.
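The convex-hull step can be illustrated in two dimensions, e.g. for memory-colour samples projected onto the (a*, b*) plane. This is a generic monotone-chain sketch with hypothetical names, not the authors' implementation:

```python
def cross(o, a, b):
    """2-D cross product of vectors OA and OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def in_hull(hull, p):
    """True if p lies inside or on a convex hull given in CCW order."""
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], p) >= 0 for i in range(n))
```

Once the hull of observed memory-colour samples is built, `in_hull` gives the membership test that guides the parametric model of the region.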
A series of psychophysical experiments using categorical judgment method was carried out to develop a colour
naturalness metric (CNM) for evaluating image quality of mobile displays. These experiments included colour
naturalness judgment and image-quality difference judgment. The former was used to train the CNMs, and the latter
experiment tested the metrics. Two types of CNM were newly proposed: a nonlinearly decaying CNM and a linearly
decaying CNM. In both CNMs, it was assumed that one familiar object in an image plays a critical role in judging the
colour naturalness of the whole image. Through a performance comparison between object models, one critical object
in a scene was selected, and the whole scene's colour naturalness was predicted using that critical object's model.
This paper describes an investigation into the effect of a wide range of surround conditions on the colour appearance of
test colours on a 42" plasma display panel. Experiments were conducted using surrounds including dark, indoor and
outdoor conditions. Additionally the stimulus size was changed by controlling the viewing distance. The viewing
conditions studied were two bright, two average, two dim and two dark surrounds. Each of the test colours was assessed
by 10 observers using a magnitude estimation method. These surrounds were divided into two categories. In the first
category, the surround had no effect on the displayed colours, but observers could still sense the different brightness
levels of the surround. In the second category, the surround also introduced flare onto the displayed colours. For the first
category, little visual lightness difference was found between the bright and dark, or the dim and dark, surrounds,
contrary to the expectation that perceived lightness contrast would increase as the surround becomes brighter. The lightness
dependency of colourfulness, however, was found to change. For the second category, the visual colour appearance of
the surround conditions was plotted against measured data, CIELAB L*, C* values, to try to understand the surround
effect. As the surround became brighter, the perceived dynamic range of visual lightness decreased, and the perceived
colourfulness increased, more obviously in high chroma colours. In the investigation of the change of stimulus size
under different surround conditions, visual colour appearance was not affected by stimulus sizes of 2° and 0.6° in the
dark surround. However, a difference was found for the very dark colours with a dim surround. Finally, all of the visual
colour appearance data were used to test the performance of the colour appearance model CIECAM02. A minor
modification was made to improve the colourfulness predictor, especially for the black background.
A series of psychophysical experiments using the paired comparison method was performed to investigate the various
visual attributes affecting the image quality of a mobile display. An image quality difference model was developed that
shows high correlation with the visual results. The results showed that naturalness and clearness are the most significant
attributes among the perceptions. A colour quality difference model based on image statistics was also constructed, and
it was found that colour difference and colour naturalness are important attributes for predicting image colour quality
difference.
In industrial practice, it is often required that weighting tables be prepared in advance so that tristimulus values can then be computed directly as the summation of the products of the weights and the measured reflectance values. The CIE has never provided a precise procedure for calculating the weighting tables, and various discrepant methods have been used. Hence it is possible to obtain significantly different tristimulus values from the same set of spectral data. To overcome this problem, the American Society for Testing and Materials (ASTM Intl.) has published two sets of weighting tables, known as Table 5 and Table 6 respectively. Each set includes 36 weighting tables covering 9 illuminants and two standard colorimetric observers at two wavelength intervals (10 nm and 20 nm). The weighting tables of Table 5 must be used with reflectances corrected using the Stearns and Stearns (SS) method, and the weighting tables of Table 6 must be used with the measured reflectance values without the SS correction. In practice, the illuminant used may differ from the CIE standard illuminants, and users have to prepare their own weighting tables for the illuminant actually used. ASTM Intl. E2022-99 provides a standard calculation method to generate Table 5 weighting tables for a non-standard illuminant. No standard procedure is given for calculating Table 6 weighting tables, since they consist of Venable and Stearns correction weights, and the Venable optimum weights are computed by an iterative procedure.
In this article, we report some recent progress in generating weighting tables; compare the performance of weighting tables including the ASTM Intl. Table 5 and Table 6 tables, optimum weighting tables, least-squares weighting tables, and direct selection tables; quantify the possible colorimetric errors for each of the tables; and finally recommend a method for generating weighting tables for standardization.
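The weighted-summation scheme that all of these tables serve can be sketched as follows; the table values themselves must come from the ASTM tables (or a user-generated equivalent) and are not reproduced here:

```python
def tristimulus(weights, reflectance):
    """Tristimulus values as a weighted sum of measured reflectances:
    X = sum_i Wx_i * R_i (and likewise for Y and Z), the computation the
    ASTM weighting tables are built for.

    weights:     list of (Wx, Wy, Wz) per measurement wavelength
    reflectance: matching list of reflectance factors (0..1)
    """
    X = sum(w[0] * r for w, r in zip(weights, reflectance))
    Y = sum(w[1] * r for w, r in zip(weights, reflectance))
    Z = sum(w[2] * r for w, r in zip(weights, reflectance))
    return X, Y, Z
```

For a properly normalized table, the Y weights sum to 100, so the perfect reflecting diffuser (R = 1 everywhere) yields Y = 100.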
The ability of gamut mapping algorithms to handle a wide range of relative gamut volumes was evaluated. Five gamut mapping algorithms were tested on reproduction media ranging from glossy, coated paper to newsprint. The original media were photographic transparency, photographic print, and CRT display.
The psychophysical results indicate that the performance of gamut mapping algorithms does not depend greatly on the gamut volume of either the original or the reproduction medium. Algorithms that apply a linear scaling of lightness between original and reproduction are more consistent in their performance across different image types and reproduction media. The methods that performed best tended to be those that give more emphasis to preserving lightness over chroma.
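The linear lightness scaling shared by the more consistent algorithms amounts to the following mapping; this is a generic sketch with hypothetical range arguments rather than any one tested algorithm:

```python
def scale_lightness(L, src_range, dst_range):
    """Linearly map a lightness value from the original medium's L* range
    onto the reproduction medium's L* range."""
    s0, s1 = src_range
    d0, d1 = dst_range
    return d0 + (L - s0) * (d1 - d0) / (s1 - s0)
```

For example, mapping transparency lightness (0..100) onto a print whose black and white points sit at L* = 20 and 90 sends the source black to 20, the source white to 90, and mid-grey to 55.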
Experiments were conducted to investigate colour appearance under mesopic vision. Lightness, colourfulness and hue observations of 40 test colours were accumulated for eight phases with four different luminance levels covering 0.1 to 90 cd/m<sup>2</sup> and two different stimulus sizes corresponding to viewing angles of 2° and 10° using the magnitude estimation method. The psychophysical effects of luminance level and patch size on colour appearance were investigated and the role of the rods under mesopic vision was explored.
With strong demand from the imaging industry, CIE Division 8, Image Technology, was established in November 1997. This young and dynamic division aims to study procedures and prepare guides and standards for the optical, visual and metrological aspects of the communication, processing, and reproduction of images, using all types of analogue and digital devices, storage media and imaging media. It serves the imaging industry in achieving successful colour practice using the knowledge of colour science and colour engineering. There are six CIE Division 8 Technical Committees (TCs): TC 8-01 Colour Appearance Modeling for Colour Management Applications, TC 8-02 Colour Difference Evaluation in Images, TC 8-03 Gamut Mapping, TC 8-04 Adaptation under Mixed Illumination Conditions, TC 8-05 Communication of Colour Information, and TC 8-06 Image Technology Vocabulary. This paper introduces the aims and activities of each TC.
This paper describes work investigating a suitable color quality control method for metallic coatings. A set of psychophysical experiments was carried out based upon 50 pairs of samples. The results were used to test the performance of various color difference formulae. Different techniques were developed by optimising the weights and/or the lightness parametric factors of the color differences calculated from the four measuring angles. The results show that the new techniques give a significant improvement over conventional techniques.
The CIE Technical Committee TC 1-47 Hue and Lightness Dependent Correction to Industrial Colour Difference Evaluation was established in October 1998 and its aim was to improve the performance of the CIE94 color-difference formula. As a result of close collaboration between the TC members, the CIE 2000 color difference formula, CIEDE2000, was developed within two years. This paper describes the development of this formula.
Psychophysical experiments were carried out to describe color appearance under 12 different sets of viewing conditions, including variations of neutral background, sample size, texture, sample type and color attribute. The results on saturation, based upon 132 colored cube samples, are discussed here. It was found that observers can be trained to scale saturation with great accuracy. There is little difference between the results for the white, gray and black backgrounds studied. Saturation depends upon both lightness and colorfulness: an increase in saturation increases colorfulness but reduces lightness. It was also found that the saturation scale of CIECAM97s did not fit the visual results well. An improved scale was developed.
Three chromatic adaptation transforms (CMCCAT97, CMCCAT2000 and CIECAT94) and the S-LMS mixed adaptation model were tested for cross-media colour reproduction under mixed illuminants. For each model, the adaptation ratio was varied to represent the state of adaptation. Printed complex images were used as originals. A series of softcopy reproduction pairs was displayed on a CRT with a D93 white point. Observers compared the hardcopy with a given pair of softcopies simultaneously and decided which reproduction was the closer colour match. The experiments were divided into three phases according to the ambient light in the experimental room, which varied among D50 simulators, Illuminant A and cool-white fluorescent lamps at a luminance level similar to that of the CRT. The results clearly showed that in most cases an adaptation ratio of 0.4 was best for all models. It was also found that the adaptation ratio did not depend on the colour temperature of the ambient light or on image content. CMCCAT2000 performed relatively well in all cases.
An experiment was carried out using CRT colors. The stimuli were selected along 24 vectors in CIELAB color space. The data were used to test various color difference formulae and uniform color spaces. The results show that there are some discrepancies between CIELAB space and the experimental data. The results also suggest that there are three types of color models, according to the data used to develop them: small color-difference, large color-difference and Munsell data.
This paper describes a new uniform color space derived by modifying CIECAM97s to fit the available large color difference datasets, including CII-Zhu, OSA, Guan, BFDB-Textile, BFDB-Paint and Munsell. Testing results show that the new color space fits these experimental datasets better than the current best CIELAB and IPT spaces.
Two sets of color appearance data were accumulated to investigate the difference between LCD projector colors and LCD self-luminous colors. Psychophysical experiments were conducted using magnitude estimation methods. The colors were viewed against different neutral backgrounds. These data sets were used to test the performance of five color appearance models (CIECAM97s, Hunt94, LLAB, RLAB and CIELAB), together with the two most recently proposed revisions of CIECAM97s (Fairchild and FC).
An experiment was carried out to evaluate daylight simulators. The color differences of seventy wool metamers were assessed 20 times by a panel of observers under six D65 simulators. The results were used to test various color difference formulae, to evaluate the quality of these simulators and to compare the results between the present method and the CIE method which calculates a metamerism index using virtual metamers.
Faithful color reproduction of digital images requires a reliable measure for comparing images in order to evaluate reproduction performance. Conventional methods apply CIE colorimetry-based color difference equations, such as CIELAB, CMC, CIE94 and CIEDE2000, to complex images on a pixel-by-pixel basis, and calculate the overall color difference as the average difference over all pixels in the image. This approach is simple and straightforward but often does not represent the color difference perceived by the human visual system. This paper proposes a new algorithm for calculating the overall color difference between complex images. The results show that the new metric corresponds more closely to the color difference perceived by the human visual system.
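The conventional pixel-by-pixel baseline that the paper argues against is simply the mean per-pixel difference; Euclidean ΔE*ab stands in here for any of the listed formulas, and the image representation is an assumption for the sketch:

```python
import math

def mean_delta_e(img1, img2):
    """Conventional image-difference baseline: per-pixel CIELAB Delta E*ab,
    averaged over the image. Images are equal-length lists of
    (L*, a*, b*) pixel triples."""
    des = [math.dist(p, q) for p, q in zip(img1, img2)]
    return sum(des) / len(des)
```

Averaging discards spatial structure entirely, which is precisely why such a metric can disagree with what observers see in complex images.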
CRT displays and prints are the predominant media in color reproduction. This study investigates color difference thresholds under the viewing conditions of cross-media color reproduction, in which pictorial prints presented in a viewing cabinet were compared with their reproductions displayed on a CRT monitor. The results agree reasonably well with those found in earlier studies: observers are more tolerant of lightness changes and more sensitive to hue changes. The acceptability thresholds are image-dependent and are also affected by the transformation functions used.
An experiment was carried out to investigate the crispening effect on lightness differences. Thirty-nine neutral lightness-difference pairs were generated using CRT colors. The experiment was divided into sixteen phases according to different physical arrangements of each sample pair, such as different sample separations, sizes, and background colors. The results show that the crispening effect does exist but is highly dependent upon the particular viewing parameters.
Gamut mapping algorithms were tested in a transparency-to-newsprint workflow, using features derived from studies of empirical mappings in high-quality color reproductions. The experiment compared different methods of determining the achromatic convergence point in simulations of lightness-chroma compression, and compared linear compression against a non-linear distance-weighted compression. Algorithms whose convergence points depended on the lightness and chroma of the cusp and of the color being mapped performed better than those with fixed convergence points. The models using non-linear compression were strongly preferred over the one using linear compression.
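The non-linear, distance-weighted compression family can be sketched as below: colours are pulled toward a convergence point on the lightness axis, with a "knee" that leaves near-in colours untouched and squeezes the remainder into the gamut. This is a hypothetical illustration of the mapping family under comparison (the convergence lightness could, for instance, be set from the cusp), not the authors' exact algorithm:

```python
import math

def compress_toward(L, C, conv_L, d_gamut, knee=0.8):
    """Distance-weighted compression toward the point (conv_L, 0) in the
    lightness-chroma plane. Colours closer to the convergence point than
    knee * d_gamut are left alone; the rest are squeezed into the
    remaining (1 - knee) band just inside the gamut distance d_gamut."""
    d = math.hypot(L - conv_L, C)          # distance from convergence point
    if d <= knee * d_gamut:
        return L, C                        # inside the knee: unchanged
    # map the tail [knee*d_gamut, inf) smoothly into [knee*d_gamut, d_gamut)
    t = d - knee * d_gamut
    d_new = knee * d_gamut + (1 - knee) * d_gamut * (t / (t + (1 - knee) * d_gamut))
    f = d_new / d
    return conv_L + (L - conv_L) * f, C * f
```

The mapping is continuous at the knee and asymptotically approaches the gamut distance, so wildly out-of-gamut colours still land inside while in-gamut colours keep their original values.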
A hybrid adaptive system incorporating linear regression and a neural network has been developed for the correction of color measurement errors. The linear regression model corrects systematic errors, while the neural network corrects the residual errors that the linear regression method is unable to remove. We use standard color materials from the National Physical Laboratory (NPL) as training samples and test the method using a variety of colors outside the training set. Experimental results are presented which show the promise of neural networks in the color measurement industry.
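The first, linear-regression stage can be sketched with a simplified per-channel gain-and-offset fit (the full method may use a richer linear model, and the neural-network stage for the residuals is omitted here):

```python
def fit_channel_correction(measured, reference):
    """Least-squares gain and offset (y = a*x + b) per channel, a simplified
    stand-in for the linear-regression stage that removes systematic error.
    The residuals of this fit would then be handed to the neural network."""
    corrections = []
    for ch in range(3):
        xs = [m[ch] for m in measured]
        ys = [r[ch] for r in reference]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        b = my - a * mx
        corrections.append((a, b))
    return corrections

def apply_correction(corrections, xyz):
    """Apply the fitted per-channel correction to one measurement."""
    return tuple(a * v + b for (a, b), v in zip(corrections, xyz))
```

On purely systematic (linear) distortions this stage alone recovers the reference values exactly; whatever it cannot explain is, by construction, the residual left for the network.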
A novel color image coding technique based on visual patterns is presented. Visual patterns, a concept first introduced by Chen and Bovik, are image blocks representing visually meaningful information. A method has been developed to extend the concept of visual patterns (originally developed for grayscale images) to color image coding. A mapping criterion has been developed to map small image blocks to a set of predefined, universal visual patterns in a uniform color space. Source coding and color quantization are applied to achieve efficient coding. Compression ratios between 40:1 and 60:1 (0.6 - 0.4 bpp) have been achieved; subjective as well as objective measures show that the new method is comparable to state-of-the-art techniques such as JPEG.
This paper describes a neural network based method to improve inter-instrument agreement. For each instrument, a three-layer feed-forward neural network was trained using standard reference materials with known reflectance values. The BCRA-NPL tiles were measured by each instrument, and the neural network models were derived to bring the measured data into agreement with those measured by CERAM (the standard). Twelve BCRA-NPL tiles were used for training, and 32 glossy paint samples selected from the OSA Uniform Color Scales were used to test the method. Experimental results for two different spectrophotometers are presented which show good improvement in inter-instrument agreement for both the training and testing samples.
An ideal system of colorimetry should provide measures agreeing with what we see in three respects: color specification, color difference and color appearance. A successful method to quantify these measures depends upon the reliability of psychophysical experimental data. Such data sets have been accumulated and were used to derive the LLAB model. The model includes two parts: a chromatic adaptation transform and a uniform color space. Tristimulus values under a particular set of illuminant/observer conditions are transformed to those under D65/2° conditions via the chromatic adaptation transform. A modified version of CIELAB is then used to calculate six perceived attributes: lightness (L<SUB>L</SUB>), redness-greenness (A<SUB>L</SUB>), yellowness-blueness (B<SUB>L</SUB>), colorfulness (C<SUB>L</SUB>), hue angle (h<SUB>L</SUB>) and hue composition (H<SUB>L</SUB>). The model gives a similar degree of prediction to the other state-of-the-art models on the accumulated data sets. The LLAB model demonstrates that it is possible to achieve a single system providing precise measures to quantify color match, color difference and color appearance.