Can images from professional digital SLR cameras be made equivalent in color using simple colorimetric
characterization? Two cameras were characterized; the characterizations were applied to a variety of images, and
the results were evaluated both colorimetrically and psychophysically. A Nikon D2x and a Canon 5D were used. The
colorimetric analyses indicated that accurate reproductions were obtained. The median CIELAB color differences
between the measured ColorChecker SG and the reproduced image were 4.0 and 6.1 for the Canon (chart and spectral
characterizations, respectively) and 5.9 and 6.9 for the Nikon. The median differences between the cameras were 2.8 and 3.4 for the chart and
spectral characterizations, near the expected threshold for reliable image difference perception. Eight scenes were
evaluated psychophysically in three forced-choice experiments in which a reference image from one of the cameras was
shown to observers in comparison with a pair of images, one from each camera. The three experiments were (1) a
comparison of the two cameras with the chart-based characterizations, (2) a comparison with the spectral
characterizations, and (3) a comparison of chart vs. spectral characterization within and across cameras. The results for
the three experiments were 64%, 64%, and 55% correct, respectively. Careful and simple colorimetric characterization of
digital SLR cameras can result in visually equivalent color reproduction.
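The chart-based characterization and CIELAB error analysis summarized above can be sketched in code. The following is a minimal illustration under assumed inputs (linearized camera RGB and measured XYZ for the chart patches, and a known reference white); it is not the authors' actual pipeline:

```python
import numpy as np

def fit_characterization(camera_rgb, measured_xyz):
    """Least-squares 3x3 matrix mapping linear camera RGB to CIE XYZ.

    camera_rgb, measured_xyz: (N, 3) arrays of chart patch values.
    Returns M such that xyz = M @ rgb for a single color vector.
    """
    X, *_ = np.linalg.lstsq(camera_rgb, measured_xyz, rcond=None)
    return X.T

def xyz_to_lab(xyz, white):
    """CIE 1976 L*a*b* from XYZ, given the reference white."""
    t = np.asarray(xyz, dtype=float) / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e_ab(lab1, lab2):
    """CIELAB delta-E*ab between corresponding colors."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)
```

A simple linear matrix is only one characterization choice; polynomial or lookup-table mappings are common refinements when a 3x3 transform leaves too much residual error.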
As color imaging has evolved through the years, our toolset for understanding it has similarly evolved. Research in color
difference equations and uniform color spaces spawned tools such as CIELAB, which has had tremendous success over
the years. Research on chromatic adaptation and other appearance phenomena then extended CIELAB to form the basis
of color appearance models, such as CIECAM02. Color difference equations such as CIEDE2000 evolved to reconcile
weaknesses in areas of the CIELAB space. Similarly, models such as S-CIELAB were developed to predict more
spatially complex color difference calculations between images. Research in all of these fields is still going strong and
there seems to be a trend towards unification of some of the tools, such as calculating color differences in a color
appearance space. Along such lines, image appearance models have been developed that attempt to combine all of the
above models and metrics into one common framework. The goal is to allow color imaging researchers to pick and
choose the appropriate modeling toolset for their needs.
Along these lines, the iCAM image appearance model framework was developed to study a variety of color imaging
problems. These include image difference and image quality evaluations as well as gamut mapping and high-dynamic
range (HDR) rendering. It is important to stress that iCAM was not designed to be a complete color imaging solution,
but rather a starting point for unifying models of color appearance, color difference, and spatial vision. As such, the
choice of model components is highly dependent on the problem being addressed. For example, with CIELAB it is clearly
evident that the associated color difference equations are not necessary for the space to have great success as a
device-independent color space. Likewise, it may not be necessary to use the spatial filtering components of an image
appearance model when performing image rendering.
This paper attempts to shed light on some of the confusion involved in selecting the appropriate components for
color imaging research. The use of image appearance-type models for calculating image differences, such as S-CIELAB
and those recommended by CIE TC8-02, will be discussed. Similarly, the use of image appearance models for HDR applications,
as studied by CIE TC8-08, will also be examined. As with any large project, the easiest way to success is in
understanding and selecting the right tool for the job.
A psychophysical experiment was performed examining the effect of luminance and chromatic noise on perceived image quality. The noise was generated in a recently developed isoluminant opponent space. Five spatial-frequency octave bands, centered at 2, 4, 8, 16, and 32 cycles per degree (cpd) of visual angle, were generated for each of the luminance, red-green, and blue-yellow channels. Two levels of contrast were examined at each band. Overall, there were 30 noise-added images and one "original" image. Four different image scenes were used in a paired-comparison experiment, in which observers were asked to select the image that appeared to be of higher quality.
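Octave-band noise of the kind described above can be approximated with FFT-domain band-pass filtering. The sketch below applies to a single channel and assumes a display resolution in pixels per degree of visual angle; the paper's specific isoluminant opponent transform is not reproduced here:

```python
import numpy as np

def octave_band_noise(shape, center_cpd, pix_per_deg, contrast, rng=None):
    """Gaussian white noise band-pass filtered to a one-octave band.

    center_cpd : band center in cycles per degree (e.g. 2, 4, 8, 16, 32)
    pix_per_deg: assumed display resolution in pixels per degree
    contrast   : RMS contrast of the returned noise field
    """
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.standard_normal(shape)
    # Spatial frequencies of each FFT bin, converted to cycles per degree.
    fy = np.fft.fftfreq(shape[0]) * pix_per_deg
    fx = np.fft.fftfreq(shape[1]) * pix_per_deg
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    # One-octave band: [center/sqrt(2), center*sqrt(2)).
    band = (radius >= center_cpd / np.sqrt(2)) & (radius < center_cpd * np.sqrt(2))
    filtered = np.fft.ifft2(np.fft.fft2(noise) * band).real
    filtered *= contrast / filtered.std()  # normalize to requested RMS contrast
    return filtered
```

Because the band excludes the DC term, the resulting noise field is zero-mean and can be added directly to a channel of the opponent-space image.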
The paired comparison data were used to generate interval scales of image quality using Thurstone's Law of Comparative Judgments. These interval scales provide insight into the effect of noise on perceived image quality. Averaged across the scenes, the original noise-free image was determined to be of highest quality. While this result is not surprising on its own, examining several of the individual scenes shows that adding low-contrast blue-yellow isoluminant noise does not statistically decrease image quality and can result in a slight increase in quality.
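Thurstone's Law of Comparative Judgments (Case V), used above to build the interval scales, can be sketched as follows; the win-matrix layout and the clipping of unanimous proportions (which would otherwise map to infinite z-scores) are illustrative conventions:

```python
import numpy as np
from statistics import NormalDist

def thurstone_case_v(wins):
    """Interval scale from a paired-comparison win matrix (Thurstone Case V).

    wins[i, j] = number of times stimulus i was preferred over stimulus j.
    Returns z-score scale values, shifted so the minimum is zero.
    """
    wins = np.asarray(wins, dtype=float)
    trials = wins + wins.T
    with np.errstate(invalid="ignore"):
        p = wins / trials          # proportion of times i beat j
    p = np.clip(p, 0.01, 0.99)     # guard against 0 and 1 proportions
    np.fill_diagonal(p, 0.5)       # a stimulus ties with itself
    z = np.vectorize(NormalDist().inv_cdf)(p)
    scale = z.mean(axis=1)         # row means give the Case V solution
    return scale - scale.min()
```

The resulting values are on an interval scale, so only differences between stimuli are meaningful, not ratios.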
The International Commission on Illumination (CIE) is dedicated to providing discussion, information, and guidance in the science and art of light and lighting. The terms of reference of Division 8 of the CIE are “to study procedures and prepare guides and standards for the optical, visual and metrological aspects of the communication, processing, and reproduction of images, using all types of analogue and digital imaging devices, storage media and imaging media.”
Along those lines, Technical Committee (TC) 8-08 is tasked with developing guidelines and testing methods for using spatial or image appearance models, specifically for use with High Dynamic Range (HDR) images. The goal of TC8-08 is not to create a CIE-recommended image appearance model, but rather to design and conduct experiments for evaluating such models.
Two psychophysical experiments were performed scaling overall image quality of black-and-white electrophotographic (EP) images. Six different printers were used to generate the images. There were six different scenes included in the experiment, representing photographs, business graphics, and test-targets. The two experiments were split into a paired-comparison experiment examining overall image quality, and a triad experiment judging overall similarity and dissimilarity of the printed images. The paired-comparison experiment was analyzed using Thurstone's Law, to generate an interval scale of quality, and with dual scaling, to determine the independent dimensions used for categorical scaling. The triad experiment was analyzed using multidimensional scaling to generate a psychological stimulus space. The psychophysical results indicated that the image quality was judged mainly along one dimension and that the relationships among the images can be described with a single dimension in most cases. Regression of various physical measurements of the images to the paired comparison results showed that a small number of physical attributes of the images could be correlated with the psychophysical scale of image quality. However, global image difference metrics did not correlate well with image quality.
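The multidimensional-scaling step of the triad analysis can be illustrated with classical (Torgerson) MDS; this is a generic sketch, not the authors' specific analysis. The eigenvalues it returns are one way to judge how many dimensions the stimulus space needs, which connects to the finding that a single dimension sufficed in most cases:

```python
import numpy as np

def classical_mds(D, k=1):
    """Classical (Torgerson) multidimensional scaling.

    D : (n, n) symmetric matrix of pairwise dissimilarities.
    k : number of output dimensions.
    Returns an (n, k) configuration whose inter-point distances
    approximate D, plus all eigenvalues (to judge dimensionality).
    """
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]           # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    X = vecs[:, :k] * np.sqrt(np.maximum(vals[:k], 0.0))
    return X, vals
```

When one eigenvalue dominates and the rest are near zero, the dissimilarities are well described by a one-dimensional configuration.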
Traditional color appearance modeling has recently matured to the point that available, internationally recommended models such as CIECAM02 are capable of making a wide range of predictions to within the observer variability in color matching and color scaling of stimuli in somewhat simplified viewing conditions. It is proposed that the next significant advances in the field of color appearance modeling will not come from evolutionary revisions of these models. Instead, a more revolutionary approach will be required to make appearance predictions for more complex stimuli in a wider array of viewing conditions. Such an approach can be considered image appearance modeling, since it extends the concepts of color appearance modeling to stimuli and viewing environments that are spatially and temporally at the level of complexity of real natural and man-made scenes. This paper reviews the concepts of image appearance modeling, presents iCAM as one example of such a model, and provides a number of examples of the use of iCAM in still and moving image reproduction.
One goal of image quality modeling is to predict human judgments of quality between image pairs, without needing knowledge of the image origins. This concept can be thought of as device-independent image quality modeling. The first step towards this goal is the creation of a model capable of predicting perceived magnitude differences between image pairs. A modular color image difference framework has recently been introduced with this goal in mind. This framework extends traditional CIE color difference formulae to include modules of spatial vision and adaptation, sharpness detection, contrast detection, and spatial localization. The output of the image difference framework is an error map, which corresponds to spatially localized color differences. This paper reviews the modular framework, and introduces several new techniques for reducing the multi-dimensional error map into a single metric. In addition to predicting overall image differences, the strength of the modular framework is its ability to predict the distinct mechanisms that cause the differences. These mechanisms can be thought of as attributes of image appearance. We examine the individual mechanisms of image appearance, such as local contrast, and compare them with overall perceived differences. Through this process, it is possible to determine the perceptual weights of multi-dimensional image differences. This represents the first stage in the development of an image appearance model designed for image difference and image quality modeling.
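Reducing a spatially localized error map to a single metric, as discussed above, can take several forms. The reductions below (mean, median, upper percentile, and Minkowski pooling) are common illustrative choices, not necessarily the techniques introduced in the paper:

```python
import numpy as np

def reduce_error_map(error_map, method="mean"):
    """Collapse a per-pixel color-difference (error) map to one number.

    Illustrative reductions only; higher Minkowski exponents and upper
    percentiles weight the worst image regions more heavily than the mean.
    """
    e = np.asarray(error_map, dtype=float).ravel()
    if method == "mean":
        return e.mean()
    if method == "median":
        return float(np.median(e))
    if method == "p95":
        return float(np.percentile(e, 95))    # emphasize the worst regions
    if method == "minkowski":
        return float(np.mean(e ** 4) ** 0.25)  # Minkowski pooling, exponent 4
    raise ValueError(f"unknown method: {method}")
```

Which reduction best predicts perceived difference is an empirical question, since localized artifacts can dominate a judgment even when the mean error is small.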