The availability of multispectral scene data makes it possible to simulate a complete imaging pipeline for digital
cameras, beginning with a physically accurate radiometric description of the original scene followed by optical
transformations to irradiance signals, models for sensor transduction, and image processing for display. Certain scenes
with animate subjects, such as humans and pets, are of particular interest to consumer camera manufacturers because of
their ubiquity in common images and the importance of maintaining colorimetric fidelity for skin. Typical multispectral
acquisition methods capture a scene multiple times, each time with a different optical filter or illuminant. Such schemes
require long acquisition times and are best suited for static scenes. When animate subjects are present, movement leads
to registration problems, so methods with shorter acquisition times are
needed. To address the need for shorter image acquisition times, we developed a multispectral imaging system that
captures multiple acquisitions during a rapid sequence of differently colored LED lights. In this paper, we describe the
design of the LED-based lighting system and report results of our experiments capturing scenes with human subjects.
We introduce a new metric, the visible signal-to-noise ratio (vSNR), to analyze how pixel-binning and resizing methods
influence noise visibility in uniform areas of an image. The vSNR is the inverse of the standard deviation of the S-CIELAB
representation of a uniform field; its units are 1/ΔE. The vSNR metric can be used in simulations to predict
how imaging system components affect noise visibility. We use simulations to evaluate two image rendering methods:
pixel binning and digital resizing. We show that vSNR increases with scene luminance, pixel size and viewing distance
and decreases with read noise. Under low illumination conditions and for pixels with relatively high read noise, images
generated with the binning method have less noise (high vSNR) than resized images. The binning method has
noticeably lower spatial resolution. The binning method reduces demands on the ADC rate and channel throughput.
When comparing binning and resizing, there is an image quality tradeoff between noise and blur. Depending on the
application users may prefer one error over another.
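The metric itself is straightforward to sketch. In this illustration (not the authors' implementation), the S-CIELAB spatial-pooling stage is approximated by a simple 3x3 mean filter, and the noisy L* field and its noise level are invented:

```python
import numpy as np

def vsnr(delta_e_map):
    """vSNR is the inverse of the standard deviation of the S-CIELAB
    representation of a uniform field; its units are 1/dE."""
    return 1.0 / np.std(delta_e_map)

# A uniform patch corrupted by sensor noise (L* values, invented noise level)
rng = np.random.default_rng(0)
field = 50.0 + rng.normal(scale=2.0, size=(64, 64))

# 3x3 mean filter standing in for the S-CIELAB spatial-pooling stage
pad = np.pad(field, 1, mode='edge')
blurred = sum(pad[i:i + 64, j:j + 64] for i in range(3) for j in range(3)) / 9.0

# Spatial pooling reduces the visible noise, so vSNR goes up; the same
# effect is what makes viewing distance raise vSNR in the simulations.
vsnr_raw, vsnr_pooled = vsnr(field), vsnr(blurred)
```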
The surface reflectance function of many common materials varies slowly over the visible wavelength range. For
this reason, linear models with a small number of basis functions (5-8) are frequently used to represent and estimate
these functions. In other signal representation and recovery applications, it has recently been demonstrated
that dictionary-based sparse representations can outperform linear model approaches. In this paper, we describe
methods for building dictionaries for sparse estimation of reflectance functions. In particular, we describe a
method for building dictionaries that account for the measurement system; in estimation applications, these
dictionaries outperform ones designed for sparse representation without regard to the measurement system. Sparse recovery
methods typically outperform traditional linear methods by 20-40% (in terms of RMSE).
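A schematic of dictionary-based sparse estimation, with an invented Gaussian-bump dictionary standing in for the learned dictionaries described above, and plain orthogonal matching pursuit as the sparse solver:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms,
    refitting the coefficients by least squares after each selection."""
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy dictionary of smooth Gaussian "bump" atoms on a 400-700 nm grid
wl = np.linspace(400.0, 700.0, 31)
centers = np.linspace(400.0, 700.0, 40)
D = np.exp(-0.5 * ((wl[:, None] - centers[None, :]) / 30.0) ** 2)
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms

# A smooth "reflectance" built from three atoms, estimated back sparsely
y = D[:, [5, 18, 33]] @ np.array([0.8, 0.5, 0.9])
x_hat = omp(D, y, k=3)
rmse = np.sqrt(np.mean((D @ x_hat - y) ** 2))
```

A linear-model baseline would instead project y onto a fixed 5-8 dimensional basis; the sparse solver is free to pick whichever atoms best explain each individual spectrum.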
Under low illumination conditions, such as moonlight, there simply are not enough photons to create a high-quality color image with integration times that avoid camera shake. Consequently, conventional imagers are designed for daylight conditions and modeled on human cone vision. Here, we propose a novel sensor design that parallels the human retina and extends sensor performance to span daylight and moonlight conditions. Specifically, we describe an interleaved imaging architecture comprising two collections of pixels: one set of pixels is monochromatic and high sensitivity; a second, interleaved set is trichromatic and lower sensitivity. The sensor implementation requires new image processing techniques that allow for graceful transitions between different operating conditions. We describe these techniques and simulate the performance of this sensor under a range of conditions. We show that the proposed system is capable of producing high-quality images spanning photopic, mesopic, and near-scotopic conditions.
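One plausible form of such a graceful transition is a soft blend of the two pixel populations driven by estimated illumination; this is only an illustrative sketch, and the ramp endpoints are invented:

```python
import numpy as np

def render(mono, rgb, lux):
    """Blend the high-sensitivity monochrome plane with the low-sensitivity
    RGB image. In bright light the color pixels dominate; in dim light the
    output falls back to the cleaner monochrome signal (rendered as gray).
    The ramp endpoints (0.1 lux to 100 lux) are invented for illustration."""
    w = np.clip((np.log10(lux) + 1.0) / 3.0, 0.0, 1.0)
    gray = np.repeat(mono[..., None], 3, axis=2)
    return w * rgb + (1.0 - w) * gray
```

Because the weight varies smoothly with light level, the rendered image moves continuously between full-color photopic output and monochrome near-scotopic output instead of switching modes abruptly.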
We describe a method for simulating the output of an image sensor in response to a broad array of test targets. The method uses a modest set of sensor calibration measurements to define the sensor parameters; these parameters are used by an integrated suite of Matlab software routines that simulate the sensor and create output images. We compare simulations of specific targets to measured data for several imaging sensors with very different imaging properties. The simulations capture the essential features of the images created by these different sensors. Finally, we show that by specifying the sensor properties, the simulations can predict sensor performance for natural scenes that are difficult to measure with laboratory apparatus, such as scenes with high dynamic range or low light levels.
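A minimal sketch of the kind of pixel-level model such a simulator evaluates, with illustrative (not calibrated) parameter values:

```python
import numpy as np

def simulate_pixel(photon_rate, exposure_s, qe=0.5, read_noise_e=2.0,
                   gain=0.25, bits=10, rng=None):
    """Toy sensor model: Poisson photon arrivals, quantum efficiency,
    Gaussian read noise, conversion gain, and ADC quantization. All
    parameter values are illustrative, not calibrated to a real sensor."""
    rng = rng or np.random.default_rng()
    electrons = rng.poisson(photon_rate * exposure_s * qe).astype(float)
    electrons += rng.normal(scale=read_noise_e, size=np.shape(electrons))
    return np.clip(np.round(electrons * gain), 0, 2 ** bits - 1)

# Simulate a 64x64 uniform patch at 1e4 photons/pixel/s, 10 ms exposure
frame = simulate_pixel(np.full((64, 64), 1.0e4), 0.01,
                       rng=np.random.default_rng(0))
```

A full simulator adds optics, color filters, pixel vignetting, and fixed-pattern noise on top of this chain; the point here is only the statistical structure of the transduction step.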
Simulation of the imaging pipeline is an important tool for the design and evaluation of imaging systems. One of
the most important requirements for an accurate simulation tool is the availability of high quality source scenes.
The dynamic range of images depends on multiple elements in the imaging pipeline including the sensor, digital
signal processor, display device, etc. High dynamic range (HDR) scene spectral information is critical for an
accurate analysis of the effect of these elements on the dynamic range of the displayed image. Also, typical digital
imaging sensors are sensitive well beyond the visible range of wavelengths. Spectral information with support
across the sensitivity range of the imaging sensor is required for the analysis and design of imaging pipeline
elements that are affected by IR energy. Although HDR scene data with visible and infrared content
is available from remote sensing resources, such imagery is scarce for more conventional
everyday scenes. In this paper, we address both of these issues and present a method to generate a database of
HDR images that represent radiance fields in the visible and near-IR range of the spectrum. The proposed
method only uses conventional consumer-grade equipment and is very cost-effective.
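A common way to build HDR radiance maps with consumer-grade equipment is to merge bracketed exposures; the sketch below assumes linear (radiometrically corrected) sensor values in [0, 1] and a simple hat weighting, and is not the paper's specific procedure:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge bracketed exposures (linear sensor values in [0, 1]) into a
    relative radiance map: a weighted average of value/exposure_time, with
    a hat weighting that discounts under- and over-exposed samples."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peak weight at mid-scale
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-12)
```

The same merge applies unchanged to near-IR captures, since it operates on whatever linear channel the sensor records.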
Inherent to most multi-color printing systems is the inability to achieve perfect registration between the primary
separations. For this reason, dot-on-dot or dot-off-dot halftone screen sets are generally avoided: they exhibit a
significant color shift in the presence of even the slightest misregistration. Much previous work has
focused on characterizing these effects, and it is well known that dot-off-dot printed patterns result in a higher
chroma (C*) relative to dot-on-dot. Rotated dot sets are used instead for these systems, as they exhibit a much
greater robustness against misregistration. In this paper, we make the crucial observation that while previous
work has used color shifts caused by misregistration to design robust screens, we can in fact exploit these color
shifts to obtain estimates of misregistration. In particular, we demonstrate that even low-resolution macroscopic
color measurements of a carefully designed test patch can yield misregistration estimates that are accurate to the
sub-pixel level. The contributions of our work are as follows: 1.) a simple methodology to
construct test patches that may be measured to obtain misregistration estimates, 2.) derivation of a reflectance
printer model for the test patch so that color deviations in the spectral or reflectance space can be mapped to
misregistration estimates, and 3.) a practical method to estimate misregistration via scanner RGB measurements.
Experimental results show that our method achieves accuracy comparable to that of the state-of-the-art, but
expensive, geometric methods currently used by high-end color printing devices to estimate misregistration.
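The core idea, that a macroscopic color measurement can be inverted for a shift estimate, can be illustrated with a toy one-dimensional Neugebauer-style model; all coverages and reflectance values here are invented, and the paper's actual model is spectral:

```python
import numpy as np

# Two line screens (cyan and magenta), each with fractional area coverage A,
# shifted by s periods relative to each other (0 <= s <= A). The overlap area
# shrinks linearly with s, changing the mix of Neugebauer primaries (paper,
# cyan-only, magenta-only, cyan-on-magenta) and hence the measured reflectance.
A = 0.4
R_PAPER, R_C, R_M, R_CM = 1.0, 0.4, 0.5, 0.15  # invented reflectances

def reflectance(s):
    o = A - s  # cyan/magenta overlap area for 0 <= s <= A
    return (1.0 - 2.0 * A + o) * R_PAPER + (A - o) * (R_C + R_M) + o * R_CM

def estimate_shift(r_measured):
    """Invert the monotone reflectance model with a lookup table."""
    s_grid = np.linspace(A, 0.0, 401)  # reflectance increases as s decreases
    return float(np.interp(r_measured, reflectance(s_grid), s_grid))
```

Because reflectance varies monotonically with the shift, a single macroscopic measurement pins down the misregistration without resolving individual dots.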
Digital still cameras typically use a single optical sensor overlaid with RGB color filters to acquire a scene. Only one of the three primary colors is observed at each pixel and the full color image must be reconstructed (demosaicked) from available data. We consider the problem of demosaicking for images sampled in the commonly used Bayer pattern.
The full color image is obtained from the sampled data as a MAP estimate. To exploit the higher sampling rate of the green channel when locating edges in the red and blue channels, the prior is defined by a Gaussian MRF model that accounts for edges in all three color channels. Pixel values and edge estimates are computed iteratively using an algorithm based on Besag's iterated conditional modes (ICM). The reconstruction alternates between edge detection and spatial smoothing. The proposed algorithm is applied to a variety of test images, and its performance is quantified using the CIELAB ΔE measure.
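A stripped-down sketch of the ICM smoothing step, using a quadratic MRF prior without the edge process of the full model, and assuming observed samples on a checkerboard (Bayer-like) grid:

```python
import numpy as np

def icm_fill(obs, mask, iters=50):
    """Fill missing samples by iterated conditional modes under a quadratic
    (Gaussian) MRF prior: the conditional mode at a missing site is the mean
    of its four neighbours; observed sites stay fixed. This sketches only
    the prior-driven smoothing step, without the edge process."""
    g = obs.astype(float).copy()
    g[~mask] = g[mask].mean()  # initialize missing sites
    for _ in range(iters):
        p = np.pad(g, 1, mode='edge')
        nbr = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        g[~mask] = nbr[~mask]  # update only the missing sites
    return g
```

The full algorithm interleaves these smoothing sweeps with updates of binary edge variables, so that averaging is suppressed across detected edges.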
To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on the sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from the available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion and obtain the sampling arrangement. Two types of array patterns are
designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of
the error criterion.
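Wiener reconstruction of a sparsely sampled signal can be sketched in a one-dimensional setting; the covariance model and sampling pattern below are invented for illustration, not the CFA geometry of the paper:

```python
import numpy as np

def wiener_reconstruct(A, Cx, Cn, y):
    """Linear MMSE (Wiener) estimate of x from y = A x + n, given the
    signal covariance Cx and noise covariance Cn (zero-mean signals)."""
    G = Cx @ A.T @ np.linalg.inv(A @ Cx @ A.T + Cn)
    return G @ y

rng = np.random.default_rng(0)
n_full, n_obs = 12, 6
idx = np.arange(n_full)
Cx = np.exp(-0.3 * np.abs(idx[:, None] - idx[None, :]))  # smooth-signal prior
A = np.eye(n_full)[rng.choice(n_full, size=n_obs, replace=False)]  # sampling
x = np.linalg.cholesky(Cx) @ rng.normal(size=n_full)     # draw x ~ N(0, Cx)
y = A @ x + 0.05 * rng.normal(size=n_obs)
x_hat = wiener_reconstruct(A, Cx, 0.0025 * np.eye(n_obs), y)
```

In the design loop, the expected reconstruction error of this estimator (evaluated in a perceptually uniform space) scores each candidate sampling arrangement, and backward selection removes the least useful sample positions one at a time.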