Two weighted compression schemes, Weighted Least Squares (wLS) and Weighted Principal Component Analysis
(wPCA), are compared by considering their performance in minimizing both spectral and colorimetric errors of
reconstructed reflectance spectra. A comparison is also made among seven different weighting functions incorporated into ordinary PCA/LS to selectively give more importance to the wavelengths that correspond to higher sensitivity in the
human visual system. Weighted compression is performed on reflectance spectra of 3219 colored samples (including
Munsell and NCS data) and spectral and colorimetric errors are calculated in terms of CIEDE2000 and root mean square
errors. The results obtained indicate that wLS outperforms wPCA in weighted compression with more than three basis
vectors. Weighting functions based on the diagonal of Cohen’s R matrix lead to the best reproduction of color
information under both A and D65 illuminants, particularly when using a low number of basis vectors.
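A weighted least-squares fit of this kind can be sketched in a few lines. The basis, the weighting function, and the synthetic "reflectances" below are stand-ins for illustration, not the paper's 3219-sample data or its seven weighting functions:

```python
import numpy as np

def weighted_ls_coeffs(R, B, w):
    """Coefficients minimizing the weighted squared reconstruction error
    for each spectrum. R: (n, n_wl) spectra, B: (n_wl, k) basis, w: (n_wl,)."""
    Wh = np.sqrt(w)                          # apply sqrt-weights to both sides
    C, *_ = np.linalg.lstsq(B * Wh[:, None], (R * Wh).T, rcond=None)
    return C.T                               # (n, k) coefficient vectors

# Synthetic smooth "reflectances" and an ordinary PCA basis
rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 31)
peaks = rng.uniform(400, 700, (4, 1))
R = np.clip(rng.random((100, 4)) @ np.exp(-(wl - peaks) ** 2 / 5000.0), 0.0, 1.0)
mean = R.mean(axis=0)
_, _, Vt = np.linalg.svd(R - mean, full_matrices=False)
B = Vt[:3].T                                 # 3 basis vectors

# Hypothetical weighting: emphasize mid-spectrum wavelengths
w = np.where((wl > 500) & (wl < 600), 4.0, 1.0)
C = weighted_ls_coeffs(R - mean, B, w)
recon = C @ B.T + mean                       # weighted reconstructions
```

By construction, the weighted coefficients can only lower the weighted residual relative to ordinary (unweighted) least squares, which is the sense in which the weighting trades spectral accuracy for colorimetric accuracy.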
An important component of camera calibration is to derive a mapping of a camera’s output RGB to a device-independent color space such as CIE XYZ or sRGB [6]. Commonly, the calibration process is performed by photographing a color chart in a scene under controlled lighting and finding a linear transformation M that maps the chart’s colors from linear camera RGB to XYZ. When the XYZ values corresponding to the color chart’s patches are measured under a reference illumination, it is often assumed that the illumination across the chart is uniform when it is photographed. This simplifying assumption, however, is often violated even in such relatively controlled environments as a light booth, and it can lead to inaccuracies in the calibration. The problem of color calibration under non-uniform lighting was investigated by Funt and Bastani [2,3]. Their method, however, uses a numerical optimizer, which can be complex to implement on some devices and has a relatively high computational cost. Here, we present an irradiance-independent camera color calibration scheme based on least-squares regression on the unit sphere that can be implemented easily, computed quickly, and performs comparably to the previously suggested technique.
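The core idea can be sketched as follows: normalizing each RGB and XYZ vector to unit length discards the per-patch irradiance scale, after which an ordinary least-squares fit gives the 3x3 matrix. This is only a loose sketch of the approach, with a synthetic chart and shading field in place of real measurements:

```python
import numpy as np

def calibrate_on_sphere(rgb, xyz):
    """Fit a 3x3 matrix M with M @ rgb_vec ≈ xyz_vec after projecting every
    color vector onto the unit sphere to discard per-patch irradiance."""
    r = rgb / np.linalg.norm(rgb, axis=1, keepdims=True)
    x = xyz / np.linalg.norm(xyz, axis=1, keepdims=True)
    M, *_ = np.linalg.lstsq(r, x, rcond=None)   # solves r @ M ≈ x
    return M.T

# Synthetic chart: a known transform plus random per-patch shading
rng = np.random.default_rng(1)
M_true = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.0, 0.1, 0.9]])
rgb = rng.random((24, 3)) + 0.1                 # "true" patch colors
shade = rng.uniform(0.2, 1.0, (24, 1))          # non-uniform lighting
xyz_ref = rgb @ M_true.T                        # measured under uniform reference light
rgb_cam = shade * rgb                           # chart photographed under shading
M = calibrate_on_sphere(rgb_cam, xyz_ref)
```

Because the normalization removes the shading factor before the regression, the recovered matrix maps each shaded camera color to (very nearly) the right XYZ direction even though the chart was unevenly lit.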
The problem of illumination estimation for color constancy and automatic white balancing of digital color imagery can be viewed as the separation of the image into illumination and reflectance components. We propose using nonnegative matrix factorization with sparseness constraints to separate these components. Since illumination and reflectance are combined multiplicatively, the first step is to move to the logarithm domain so that the components are additive. The image data is then organized as a matrix to be factored into nonnegative components. Sparseness constraints imposed on the resulting factors help distinguish illumination from reflectance. The proposed approach provides a pixel-wise estimate of the illumination chromaticity throughout the entire image. This approach and its variations can also be used to provide an estimate of the overall scene illumination chromaticity.
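A minimal sketch of the factorization step, using plain multiplicative-update NMF on log-domain data. The sparseness constraints of the actual method are omitted here for brevity, and the tiny pixels-by-channels "image" is a synthetic stand-in:

```python
import numpy as np

def nmf(V, k, iters=200, seed=None):
    """Minimal multiplicative-update NMF: V ≈ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Multiplicative image model: image = illumination * reflectance.
# Move to the log domain (with an offset to keep the matrix nonnegative)
# so the components become additive, then factor.
rng = np.random.default_rng(2)
illum = np.outer(np.linspace(1.0, 2.0, 32), np.ones(3))   # smooth "illumination"
refl = rng.uniform(0.1, 1.0, (32, 3))                     # random "reflectance"
img = illum * refl                                        # 32 pixels x 3 channels
V = np.log(img) - np.log(img).min() + 1e-6                # shift into >= 0
W, H = nmf(V, k=2, seed=3)
```

With sparseness constraints added to one factor, the smooth additive component can be attributed to the illumination and the sparse one to reflectance edges; the sketch above shows only the log-domain factorization itself.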
The performance of the MaxRGB illumination-estimation method for color constancy and
automatic white balancing has been reported in the literature as being mediocre at best;
however, MaxRGB has usually been tested on images of only 8-bits per channel. The question
arises as to whether the method itself is inadequate, or rather whether it has simply been
tested on data of inadequate dynamic range. To address this question, a database of sets of
exposure-bracketed images was created. The image sets include exposures ranging from very
underexposed to slightly overexposed. The color of the scene illumination was determined by
taking an extra image of the scene containing four GretagMacbeth Mini ColorCheckers placed at
an angle to one another. MaxRGB was then run on the images of increasing exposure. The
results clearly show that its performance drops dramatically when the 14-bit exposure range of
the Nikon D700 camera is exceeded, thereby resulting in clipping of high values. For those
images exposed such that no clipping occurs, the median error in MaxRGB's estimate of the
color of the scene illumination is found to be relatively small.
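MaxRGB itself is simple to state: the illuminant estimate is the per-channel maximum over the image. The sketch below also shows how clipping the bright values, as in a low-dynamic-range capture, corrupts the estimate; the scene and illuminant are synthetic:

```python
import numpy as np

def max_rgb(img):
    """MaxRGB estimate: per-channel maxima, normalized to rgb chromaticity."""
    e = img.reshape(-1, 3).max(axis=0).astype(float)
    return e / e.sum()

# Synthetic scene: random reflectances under a yellowish light, with one
# perfect white patch so the channel maxima reveal the illuminant exactly.
rng = np.random.default_rng(4)
illum = np.array([0.9, 1.0, 0.6])
refl = rng.uniform(0.0, 1.0, (64, 64, 3))
refl[0, 0] = 1.0
img = refl * illum

est = max_rgb(img)                              # full dynamic range: exact
est_clipped = max_rgb(np.clip(img, 0.0, 0.8))   # clipped highlights: biased
```

On this toy scene the unclipped estimate matches the illuminant chromaticity exactly, while clipping the top of the range pulls the estimate toward neutral, mirroring the dynamic-range effect the abstract describes.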
Proc. SPIE. 6492, Human Vision and Electronic Imaging XII
KEYWORDS: Prototyping, Databases, Object recognition, Detection and tracking algorithms, Principal component analysis, Electronic imaging, Sensors, Image segmentation, Zoom lenses, Human vision and color perception
Color has been shown to be an important cue for object recognition and image indexing. We present a new
algorithm for color-based recognition of objects in cluttered scenes that also determines the 2D pose of each
object. As with so many other color-based object recognition algorithms, color histograms are also fundamental
to our new approach; however, we use histograms obtained from overlapping subwindows rather than the entire
image. An object from a database of prototypes is identified and located in an input image whenever there
are many good histogram matches between the respective subwindow histograms of the input image and the
image prototype from the database. In essence, local color histograms are the features to be matched. Once an object's position in the image has been found, its 2D pose is determined by estimating the geometric transformation that most consistently maps the locations of the prototype's subwindows to their matching locations in the input image.
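The histogram-matching step can be sketched roughly as follows; the window size, step, quantization, and L1 match threshold are all hypothetical choices, not the paper's:

```python
import numpy as np

def subwindow_histograms(img, win=16, step=8, bins=4):
    """Normalized color histograms of overlapping subwindows.
    img: float RGB in [0, 1]. Returns (features, top-left locations)."""
    q = np.minimum((img * bins).astype(int), bins - 1)       # quantize each channel
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]  # joint color bin index
    h, w = img.shape[:2]
    feats, locs = [], []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            hist = np.bincount(idx[y:y + win, x:x + win].ravel(), minlength=bins ** 3)
            feats.append(hist / hist.sum())
            locs.append((y, x))
    return np.array(feats), np.array(locs)

def match_count(f_proto, f_input, thresh=0.2):
    """Number of prototype subwindows with a close input histogram (L1 distance)."""
    d = np.abs(f_proto[:, None, :] - f_input[None, :, :]).sum(axis=-1)
    return int((d.min(axis=1) < thresh).sum())

# Toy usage: a prototype matches itself everywhere, but not a pure-blue image
rng = np.random.default_rng(5)
proto = rng.random((32, 32, 3))
blue = np.zeros((32, 32, 3)); blue[..., 2] = 1.0
f_proto, locs_proto = subwindow_histograms(proto)
f_blue, _ = subwindow_histograms(blue)
```

Recording which input window each prototype window matched (via the arg-min of `d`) is what feeds the subsequent pose estimation.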
Experiments using real images are conducted on a variety of color constancy algorithms (Chromagenic, Greyworld,
Max RGB, and a Maloney-Wandell extension called Subspace Testing) in order to determine whether or not extending
the number of channels from 3 to 6 to 9 would enhance the accuracy with which they estimate the scene illuminant
color. To create the 6 and 9 channel images, filters where placed over a standard 3-channel color camera. Although
some improvement is found with 6 channels, the results indicate that essentially the extra channels do not help as much
as might be expected.
Why do the human cones have the spectral sensitivities they do? We hypothesize that they may have evolved to their present form because their sensitivities are optimal in terms of their ability to recover the spectrum of incident light. As evidence in favor of this hypothesis, we compare the accuracy with which the incoming spectrum can be approximated by a three-dimensional linear model based on the cone responses to that of the optimal approximations defined by models based on principal components analysis, independent component analysis, non-negative matrix factorization, and non-negative independent component analysis. We introduce a new method of reconstructing spectra from the cone responses and show that the cones are almost as good as these optimal methods in estimating the incoming spectrum.
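The reconstruct-from-responses idea can be sketched as follows: pick basis coefficients whose sensor responses reproduce the observed responses exactly. Everything below, including the stand-in spectra and the random "cone" sensitivities, is synthetic, and this is not the paper's specific reconstruction method:

```python
import numpy as np

# Build a 3-D PCA basis for a set of stand-in spectra, then reconstruct a
# spectrum from its 3 sensor responses by matching responses in the basis.
rng = np.random.default_rng(9)
n_wl = 31
spectra = np.cumsum(rng.random((200, n_wl)), axis=1)   # smooth-ish stand-ins
spectra /= spectra.max(axis=1, keepdims=True)
mean_spec = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean_spec, full_matrices=False)
B = Vt[:3].T                                           # optimal 3-D linear model
S = rng.random((n_wl, 3))                              # hypothetical "cone" sensitivities

def reconstruct(spec):
    """Reconstruct a spectrum from its 3 responses p = S.T @ spec by
    solving for basis coefficients whose responses equal p."""
    p = S.T @ (spec - mean_spec)          # mean-subtracted responses
    c = np.linalg.solve(S.T @ B, p)       # 3 equations in 3 coefficients
    return mean_spec + B @ c

rec = reconstruct(spectra[0])
```

By construction the reconstruction is metameric to the input: it produces exactly the same three sensor responses, while its spectral shape is constrained to the 3-D model.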
The technique of support vector regression (SVR) is applied to the color display calibration problem. Given a set of training data, SVR estimates a continuous-valued function encoding the fundamental interrelation between a given input and its corresponding output. This mapping can then be used to find an output value for a given input value not in the training data set. Here, SVR is applied directly to the display's non-linearized RGB digital input values to predict output CIELAB values. There are several different linear methods for calibrating different display technologies (GOG, Masking and Wyble). An advantage of using SVR for color calibration is that the end-user does not need to apply a different calibration model for each different display technology. We show that the same model can be used to calibrate CRT, LCD and DLP displays accurately. We also show that the accuracy of the model is comparable to that of the optimal linear transformation introduced by Funt et al.
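Using scikit-learn's `SVR` as a stand-in, the calibration reduces to fitting one regressor per CIELAB channel on measured (RGB, CIELAB) pairs. The gamma-style display model below is a synthetic substitute for real display measurements, and the kernel settings are illustrative only:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic "measurements": non-linearized RGB digital counts -> CIELAB,
# using a crude gamma-style display model in place of a real display.
rng = np.random.default_rng(6)
rgb = rng.random((300, 3))
lin = rgb ** 2.2
lab = np.column_stack([
    100.0 * lin.mean(axis=1),          # stand-in for L*
    60.0 * (lin[:, 0] - lin[:, 1]),    # stand-in for a*
    60.0 * (lin[:, 1] - lin[:, 2]),    # stand-in for b*
])

# One SVR per output channel; the RBF kernel absorbs the display non-linearity,
# so no display-specific model (GOG, Masking, ...) has to be chosen.
models = [SVR(kernel="rbf", C=100.0, gamma=2.0, epsilon=0.5).fit(rgb, lab[:, i])
          for i in range(3)]
pred = np.column_stack([m.predict(rgb) for m in models])
mae = np.abs(pred - lab).mean()
```

The same fitting code would be reused unchanged for CRT, LCD, or DLP measurement sets, which is the device-independence argument the abstract makes.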
Chromatic adaptation transforms generally rely on a variant of the von Kries transformation to account for changes in the LMS cone signals that occur when changing from one illuminant to another. Von Kries adaptation, also often referred to as the coefficient rule method or the diagonal transformation method, adjusts the 3 color channels by independent scale factors. Since there generally are only 3 known quantities available, namely the ratios of the cone signals of the two adapting illuminants, a crucial aspect of the von Kries method is that it requires only 3 parameters to be specified. A 9-parameter, 3x3 matrix transformation would be more accurate, but it is generally not possible to determine the extra parameters. This paper presents a novel method of predicting the effect a change of illumination has on the cone signals, while still relying on only 3 parameters. To begin, we create a large set of 3x3 matrices representing illuminant changes based on a sizable database of typical illuminant spectra and surface spectral reflectances. Representing these 3x3 matrices as points in a 9-dimensional space, we then apply principal components analysis to find a 3-dimensional basis which best approximates the original matrix space. To model an illumination change, a 3x3 matrix is constructed using a weighted combination of the 3 basis matrices. The relative weights can be calculated based on the 3 standard cone ratios obtained from the illuminant pair. Tests show that the new method yields better results than von Kries adaptation with or without sensor sharpening.
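A rough sketch of the construction: flatten the training illuminant-change matrices to 9-vectors, take the top 3 principal components, and choose the 3 weights so the modeled matrix maps a reference white by the given cone ratios. The training matrices here are synthetic near-diagonal maps, not ones derived from real illuminant and reflectance spectra:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic training set of 3x3 illuminant-change matrices: von-Kries-like
# diagonals plus small off-diagonal interactions (stand-ins for matrices
# derived from real illuminant/reflectance databases)
mats = np.stack([np.diag(rng.uniform(0.5, 1.5, 3)) + 0.05 * rng.standard_normal((3, 3))
                 for _ in range(500)])
X = mats.reshape(500, 9)
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
basis = Vt[:3]                                  # 3 basis matrices, flattened

def model_change(cone_ratios):
    """3x3 change-of-illumination matrix from 3 cone ratios: solve for the
    3 basis weights so the matrix maps the reference white by those ratios."""
    white = np.ones(3)
    A = np.stack([b.reshape(3, 3) @ white for b in basis], axis=1)  # 3x3 system
    target = cone_ratios - mean.reshape(3, 3) @ white
    wts = np.linalg.solve(A, target)
    return (mean + wts @ basis).reshape(3, 3)

M = model_change(np.array([1.2, 0.9, 0.7]))
```

Like von Kries adaptation, the model is driven by only 3 numbers, but the matrix it produces has learned off-diagonal structure from the training set.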
The recently published Matlab implementation of the retinex algorithm has free parameters for the user to specify. The parameters include the number of iterations to perform at each spatial scale, the viewing angle, image resolution, and the lookup table function (post-lut) to be applied upon completion of the main retinex computation. These parameters were left unspecified because the previous descriptions of retinex upon which the new Matlab implementation was based do not define them. In this paper we determine values for these parameters based on a best fit to the experimental data provided by McCann et al.
Proc. SPIE. 4662, Human Vision and Electronic Imaging VII
KEYWORDS: Data conversion, MATLAB, Image processing, Human vision and color perception, Visual process modeling, Detection and tracking algorithms, Data modeling, Image resolution, Image compression, Color vision
Our goal is to specify the retinex model as precisely as possible. The core retinex computation is clearly specified in our recent MATLAB implementation; however, there remain several free parameters which introduce significant variability into the model's predictions. In this paper, we extend previous work on specifying these parameters. In particular, instead of looking for fixed values for the parameters, we establish methods which automatically determine values for them based on the input image. These methods are tested on the McCann-McKee-Taylor asymmetric matching data along with some previously unpublished data that include simultaneous contrast targets.
Bootstrapping provides a novel approach to training a neural network to estimate the chromaticity of the illuminant in a scene given image data alone. For initial training, the network requires feedback about the accuracy of its current results. In the case of a network for color constancy, this feedback is the chromaticity of the incident scene illumination. In the past, perfect feedback has been used, but in the bootstrapping method feedback with a considerable degree of random error can be used to train the network instead. In particular, the grayworld algorithm, which provides only modest color constancy performance, is used to train a neural network that in the end performs better than the grayworld algorithm used to train it.
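The bootstrapping idea can be illustrated with a deliberately simplified stand-in: a linear regressor (in place of the neural network) trained on noisy grayworld labels ends up more accurate than grayworld itself, because the label noise averages out during fitting. The synthetic scenes, the MaxRGB-style input feature, and the white patch in every scene are all hypothetical choices made for this sketch:

```python
import numpy as np

rng = np.random.default_rng(8)

def make_scene(illum, rng):
    refl = rng.uniform(0.0, 1.0, (500, 3))
    refl[0] = 1.0                        # a white patch in every scene
    return refl * illum

feats, labels, truth = [], [], []
for _ in range(300):
    L = rng.uniform(0.3, 1.0, 3)
    img = make_scene(L, rng)
    mx = img.max(axis=0)
    mn = img.mean(axis=0)
    feats.append(mx / mx.sum())          # MaxRGB-style input feature
    labels.append(mn / mn.sum())         # noisy grayworld training label
    truth.append(L / L.sum())            # ground truth (unseen by the learner)
X, Y, T = map(np.array, (feats, labels, truth))

# "Network": a linear map fit to the noisy grayworld labels
A = np.column_stack([X, np.ones(len(X))])
W, *_ = np.linalg.lstsq(A, Y, rcond=None)
pred = A @ W
```

The trained predictor's error against the ground truth is lower than the error of the grayworld labels it was trained on, which is the bootstrapping effect in miniature.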
In this paper we introduce a new method for determining the relationship between signal spectra and camera RGB which is required for many applications in color. We work with the standard camera model, which assumes that the response is linear. We also provide an example of how the fitting procedure can be augmented to include fitting for a previously estimated non-linearity. The basic idea of our method is to minimize squared error subject to linear constraints, which enforce positivity and range of the result. It is also possible to constrain the smoothness, but we have found that it is better to add a regularization expression to the objective function to promote smoothness. With this method, smoothness and error can be traded against each other without being restricted by arbitrary bounds. The method is easily implemented as it is an example of a quadratic programming problem, for which there are many software solutions available. In this paper we provide the results using this method and others to calibrate a Sony DXC-930 CCD color video camera. We find that the method gives low error, while delivering sensors which are smooth and physically realizable. Thus we find the method superior to methods which ignore any of these considerations.
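As a sketch of the constrained fit, using SciPy's bounded least squares in place of a general quadratic-programming solver: positivity and range become bounds on the unknowns, and smoothness enters as a second-difference regularization term appended to the objective. The camera, spectra, and regularization weight below are synthetic:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Recover a sensor response curve s from responses y = S @ s to known
# stimulus spectra S, subject to 0 <= s <= 1, with smoothness promoted by
# a second-difference penalty lam**2 * ||D s||**2 added to the objective.
rng = np.random.default_rng(9)
n_wl = 31
wl = np.linspace(400, 700, n_wl)
true_s = np.exp(-((wl - 550.0) ** 2) / (2 * 40.0 ** 2))  # smooth "sensor"
S = rng.random((100, n_wl))                              # stimulus spectra
y = S @ true_s + 0.01 * rng.standard_normal(100)         # noisy responses

lam = 1.0                                                # smoothness/error trade-off
D = np.diff(np.eye(n_wl), n=2, axis=0)                   # 2nd-difference operator
A = np.vstack([S, lam * D])                              # augmented system encodes
b = np.concatenate([y, np.zeros(n_wl - 2)])              # ||Sx-y||^2 + lam^2 ||Dx||^2
res = lsq_linear(A, b, bounds=(0.0, 1.0))                # bounded solve
est = res.x
```

Raising `lam` trades reconstruction error for smoothness continuously, rather than forcing the smoothness into hard constraints with arbitrary bounds.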
Von Kries adaptation has long been considered a reasonable vehicle for color constancy. Since the color constancy performance attainable via the von Kries rule strongly depends on the spectral response characteristics of the human cones, we consider the possibility of enhancing von Kries performance by constructing new `sensors' as linear combinations of the fixed cone sensitivity functions. We show that if surface reflectances are well-modeled by 3 basis functions and illuminants by 2 basis functions then there exists a set of new sensors for which von Kries adaptation can yield perfect color constancy. These new sensors can (like the cones) be described as long-, medium-, and short-wave sensitive; however, both the new long- and medium-wave sensors have sharpened sensitivities -- their support is more concentrated. The new short-wave sensor remains relatively unchanged. A similar sharpening of cone sensitivities has previously been observed in test and field spectral sensitivities measured for the human eye. We present simulation results demonstrating improved von Kries performance using the new sensors even when the restrictions on the illumination and reflectance are relaxed.