We have previously proposed a framework containing a typical security camera use case and have discussed how
well this is handled by linear image sensors with various characteristics. The findings were visualized graphically,
using a simple camera simulator generating images under well-defined conditions. In order to successfully render
low-contrast objects together with large intra-scene variations in illuminance, the sensor requirements must
include a high dynamic range combined with a comparably high signal-to-noise ratio. In this paper we reuse the
framework and extend the discussion to also cover sensors with non-linear pixel responses.
The obvious benefit of a non-linear pixel is that it can generally cope with a higher scene dynamic range and
that, in most cases, the exposure control can be relaxed. A known drawback is that the noise level
can be fairly high. More specifically, the spatial noise levels are high due to variable pixel-to-pixel characteristics
and the lack of on-chip corrections such as correlated double sampling.
In this paper we ignore the spatial noise, since some of the related issues have been addressed recently.
Instead we focus on the temporal noise and dynamic resolution issues involved in non-linear imaging on a system
level. Since the requirements are defined by our selected use case, and since we have defined a visual framework
for analysis, it is straightforward to compare our findings with the results for linear image sensors. As in the
previous paper, the image simulations are based on sensor data obtained from our own measurements.
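As a rough illustration of why a non-linear (here logarithmic) pixel copes with a higher scene dynamic range than a clipped linear one, consider the toy simulation below. The full-well capacity, read noise, and the logarithmic compression law are all assumptions for illustration, not the measured sensor data the simulations in the paper are based on.

```python
import numpy as np

rng = np.random.default_rng(0)

FULL_WELL = 10_000   # electrons; assumed full-well capacity
READ_NOISE = 5.0     # electrons RMS; assumed temporal read noise

def linear_response(electrons):
    """Linear pixel: proportional output, clipped at full well."""
    return np.minimum(electrons, FULL_WELL)

def log_response(electrons):
    """Logarithmic pixel: compresses a wide scene dynamic range
    (an assumed compression law, not a measured sensor curve)."""
    return FULL_WELL * np.log1p(electrons) / np.log1p(1e7)

for mean_e in (50, 5_000, 1_000_000):
    # photon shot noise (Poisson) plus Gaussian read noise
    samples = rng.poisson(mean_e, 100_000) + rng.normal(0.0, READ_NOISE, 100_000)
    samples = np.clip(samples, 0.0, None)
    for name, f in (("linear", linear_response), ("log", log_response)):
        out = f(samples)
        # a fully saturated linear pixel has zero variation and no information
        snr = out.mean() / out.std() if out.std() > 0 else float("inf")
        print(f"{name:6s} illum={mean_e:>9d}e-  SNR={snr:8.1f}")
```

At the highest illuminance the linear pixel is fully saturated, while the logarithmic pixel still returns a usable, if compressed, signal; conversely, the log compression lowers the SNR at low signal levels.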
The I3A Camera Phone Image Quality (CPIQ) initiative aims to provide a consumer-oriented
overall image quality metric for mobile phone cameras. In order to achieve this
goal, a set of subjectively correlated image quality metrics has been developed. This paper
describes the development of a specific group within this set of metrics, the spatial metrics.
Contained in this group are the edge acutance, visual noise and texture acutance metrics.
A common feature is that they are all dependent on the spatial content of the specific
scene being analyzed. Therefore, the measurement results of the metrics are weighted by
a contrast sensitivity function (CSF), which in turn means that the conditions under which a particular
image is viewed must be specified. This leads to the establishment of a common framework
consisting of three components shared by all spatial metrics. First, the RGB image is transformed
to a color opponent space, separating the luminance channel from two chrominance
channels. Second, associated with this color space are three contrast sensitivity functions,
one for each individual opponent channel. Finally, the specific viewing conditions, covering
both digital displays and printouts, are supported through two distinct MTFs.
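The CSF weighting that all spatial metrics share can be sketched as below. The Mannos–Sakrison luminance CSF is used here as a generic stand-in for the actual CPIQ opponent-channel CSFs, and the flat noise spectrum and frequency range are hypothetical.

```python
import numpy as np

def mannos_sakrison_csf(f):
    """Luminance contrast sensitivity (Mannos & Sakrison, 1974),
    f in cycles/degree; a generic stand-in for the CPIQ CSFs."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_weighted_mean(values, freqs):
    """Weight a frequency-dependent quantity (e.g. a noise power
    spectrum) by the CSF and normalize by the total CSF weight."""
    w = mannos_sakrison_csf(freqs)
    return np.sum(values * w) / np.sum(w)

# hypothetical flat noise spectrum between 0.5 and 30 cycles/degree;
# the viewing conditions enter through the pixels-per-degree mapping
freqs = np.linspace(0.5, 30, 60)
flat = np.ones_like(freqs)
print(csf_weighted_mean(flat, freqs))  # → 1.0 for a flat spectrum
```

The dependence on viewing conditions is visible here: the same image content maps to different cycles/degree (and hence different CSF weights) depending on display size, print size, and viewing distance.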
Image sensor crosstalk can be divided into spectral crosstalk and pixel crosstalk. This paper focuses on pixel
crosstalk and its effect on the signal-to-noise ratio (SNR). Pixel crosstalk occurs in the spatial domain and is caused by
signal leakage between adjacent pixels, either through imperfect optical isolation or through diffusion of electrons. This
degrades image quality mainly in two ways: spatial blurring and a decreased SNR due to the more
aggressive color correction required. A method for modeling the spectral broadening due to the pixel crosstalk
is used where a matrix is calculated from crosstalk kernels representing the spatial leakage between neighboring
pixels. In order to quantify the amount of crosstalk we present a method in which ratios of integrals of the same
color channel but within different wavelength intervals are calculated. This provides a metric that is more robust
with respect to color channel scaling. To study the impact on SNR due to pixel crosstalk, a number of SNR
metrics are compared to results from a limited psychophysical study. The studied SNR metrics are the metric
used for calculating the SNR10 value in mobile imaging, the ISO 12232 noise metric and a metric where the
signal is transformed into orthogonal color opponent channels, thereby enabling the analysis of the luminance
noise separately from the chrominance noise. The results indicate that the ISO total noise and SNR10 metrics
yield very similar results and that the green channel has the largest individual impact on the crosstalk.
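The ratio-of-integrals metric can be sketched as follows. The Gaussian channel response and the wavelength intervals below are hypothetical stand-ins for measured data, chosen only to show the scale-invariance property claimed in the text.

```python
import numpy as np

STEP = 5.0  # nm, uniform wavelength grid spacing (assumed)

def band_ratio(wavelengths, response, in_band, out_band):
    """Ratio of the channel response integrated over an out-of-band
    wavelength interval to that over its in-band interval. Being a
    ratio within one channel, it is invariant to channel scaling."""
    def integral(lo, hi):
        mask = (wavelengths >= lo) & (wavelengths <= hi)
        return response[mask].sum() * STEP  # rectangle-rule integral
    return integral(*out_band) / integral(*in_band)

# hypothetical green-channel response: a Gaussian peak at 530 nm plus
# a small red-side shoulder standing in for crosstalk leakage
wl = np.arange(400.0, 701.0, STEP)
green = np.exp(-((wl - 530) / 40) ** 2) + 0.05 * np.exp(-((wl - 620) / 30) ** 2)

r = band_ratio(wl, green, in_band=(480, 580), out_band=(590, 680))
r_scaled = band_ratio(wl, 3.7 * green, in_band=(480, 580), out_band=(590, 680))
assert abs(r - r_scaled) < 1e-9  # unchanged by an overall channel gain
print(f"out-of-band / in-band ratio: {r:.3f}")
```

Because any gain applied to the channel cancels in the ratio, the metric is robust to color channel scaling, which an absolute leakage integral would not be.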
As white balance algorithms employed in mobile phone cameras become increasingly sophisticated by using, e.g.,
elaborate white-point estimation methods, a proper color calibration is necessary. Without such a calibration,
the estimation of the light source for a given situation may go wrong, giving rise to large color errors. At the
same time, the demands for efficiency in the production environment require the calibration to be as simple
as possible. Thus it is important to find the correct balance between image quality and production efficiency.
The purpose of this work is to investigate camera color variations using a simple model where the sensor and
IR filter are specified in detail. As input to the model, spectral data of the 24-patch Macbeth ColorChecker were
used. These data were combined with the spectral irradiance of primarily three different light sources: CIE A, D65
and F11. The sensor variations were determined from a very large population, from which six corner samples were
picked out for further analysis. Furthermore, a set of 100 IR filters was picked out and measured. The resulting
images generated by the model were then analyzed in the CIELAB space and color errors were calculated using
the ΔE94 metric. The results of the analysis show that the maximum deviations from the typical values are
small enough to suggest that a white balance calibration is sufficient. Furthermore, it is also demonstrated that
the color temperature dependence is small enough to justify the use of only one light source in a production environment.
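The ΔE94 color errors referred to above follow the standard CIE94 color-difference formula, sketched here with graphic-arts weights (the sample CIELAB values are arbitrary):

```python
import math

def delta_e94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """CIE94 color difference (graphic-arts weights), lab1 as reference."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    # ΔH² can go marginally negative from rounding, so clamp at zero
    dH2 = max(da * da + db * db - dC * dC, 0.0)
    SL, SC, SH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1
    return math.sqrt((dL / (kL * SL)) ** 2
                     + (dC / (kC * SC)) ** 2
                     + dH2 / (kH * SH) ** 2)

# identical colors give zero error; a pure lightness shift of 1 gives ΔE94 = 1
print(delta_e94((50, 10, 10), (50, 10, 10)))  # → 0.0
print(delta_e94((50, 0, 0), (49, 0, 0)))      # → 1.0
```

The chroma-dependent weights SC and SH make ΔE94 more tolerant of differences in highly saturated colors than the plain Euclidean ΔE*ab, which matches perception better.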
In many situations it is desirable to obtain an image that visually describes measured lens MTF data. Since the
sharpness of a camera lens changes continuously across the field of view, the characteristics of the lens need to
be determined at many positions within the image. In short, the proposed simulation method consists of two
parts. First, the point-spread function (PSF) at a limited number of field positions is constructed using Zernike
polynomials. The polynomial coefficients at a specified field position are determined by fitting the calculated
MTF for these PSFs to the measured MTF data. The second part interpolates the Zernike coefficients for all other
relevant positions within the image. In this way it is possible to find a sufficiently accurate PSF at any arbitrary
field point. By utilizing a generalized, non-translation-invariant summation of PSFs, the sharpness at any field
point in the image can be simulated. This system also has the advantage that the sharpness at different focusing
positions can be determined quite easily. It is also a fairly simple matter to include effects such as distortion and
vignetting. In the present paper, examples of simulations are shown and advantages as well as drawbacks of the
method are discussed.
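The first part of the method, constructing a PSF from Zernike coefficients and deriving its MTF, can be sketched as below. For brevity only the single defocus term Z4 is used, whereas the actual method fits many coefficients to measured MTF data; the grid size is an arbitrary assumption.

```python
import numpy as np

N = 256  # pupil grid size (arbitrary assumption)
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
rho = np.hypot(X, Y)
pupil_mask = (rho <= 1.0).astype(float)  # unit-radius circular aperture

def psf_from_defocus(z4):
    """PSF for a pupil carrying only the Zernike defocus term
    Z4 = sqrt(3) * (2*rho**2 - 1), coefficient z4 in waves."""
    phase = z4 * np.sqrt(3.0) * (2.0 * rho**2 - 1.0)
    pupil = pupil_mask * np.exp(2j * np.pi * phase)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

def mtf(psf):
    """MTF is the modulus of the (normalized) Fourier transform of the PSF."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    return np.abs(otf) / np.abs(otf[0, 0])

sharp = mtf(psf_from_defocus(0.0))    # diffraction-limited reference
blurred = mtf(psf_from_defocus(0.5))  # with half a wave of Z4 defocus
print(f"low-frequency MTF: sharp={sharp[0, 10]:.3f}, defocused={blurred[0, 10]:.3f}")
```

Fitting would run this forward model at each measured field position and adjust the Zernike coefficients until the computed MTF matches the measurement; the fitted coefficients are then interpolated across the field.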
SC1233: Camera Image Quality Benchmarking
The purpose of this short course is to show that it is possible to compare the image quality of consumer imaging systems in a perceptually relevant manner. Because image quality is multi-faceted, generating a concise and relevant evaluative summary of photographic systems can be challenging. Indeed, benchmarking the image quality of still and video imaging systems requires that the assessor understands not only the capture device itself, but also the imaging applications for the system.

This course explains how objective metrics and subjective methodologies are used to benchmark image quality of photographic still image and video capture devices. The course will review key image quality attributes and the flaws that degrade those attributes, including causes and consequences of the flaws on perceived quality. Content will touch on various subjective evaluation methodologies as well as objective measurement methodologies relying on existing standards from ISO, IEEE/CPIQ, ITU and beyond. The course focus is on consumer imaging systems, so the emphasis will be on the value of using objective metrics which are perceptually correlated and generating benchmark data from the combination of objective and subjective metrics.
SC1049: Benchmarking Image Quality of Still and Video Imaging Systems
Because image quality is multi-faceted, generating a concise and relevant evaluative summary of photographic systems can be challenging. Indeed, benchmarking the image quality of still and video imaging systems requires that the assessor understands not only the capture device itself, but also the imaging applications for the system.
This course explains how objective metrics and subjective methodologies are used to benchmark image quality of photographic still image and video capture devices. The course will go through key image quality attributes and the flaws that degrade those attributes, including causes and consequences of the flaws on perceived quality. Content will describe various subjective evaluation methodologies as well as objective measurement methodologies relying on existing standards from ISO, IEEE/CPIQ, ITU and beyond. Because imaging systems are intended for visual purposes, emphasis will be on the value of using objective metrics which are perceptually correlated and generating benchmark data from the combination of objective and subjective metrics.
The course "SC1157 Camera Characterization and Camera Models," describing camera models and objective measurements, complements the treatment of perceptual models and subjective measurements provided here.