Camera objective characterization methodologies are widely used in the digital camera industry. Most objective
characterization systems rely on a chart with specific patterns: a software algorithm measures a degradation or difference
between the captured image and the chart itself.
The Spatial Frequency Response (SFR) method, which is part of the ISO 12233<sup>1</sup> standard, is now very commonly used
in the imaging industry, as it is a very convenient way to measure a camera Modulation Transfer Function (MTF). The SFR
algorithm can measure frequencies beyond the Nyquist frequency thanks to super-resolution, so it provides useful
information on aliasing and can provide modulation for frequencies between half Nyquist and Nyquist on all color
channels of a color sensor with a Bayer pattern. The measurement process relies on a chart that is simple to manufacture:
a straight transition from a bright reflectance to a dark one (black and white, for instance), whereas a sine chart requires
precise handling of shades of gray, which can also create all sorts of issues with printers that rely on half-toning. However,
no technology can create a perfect edge, so it is important to assess the quality of the chart and understand how it affects
the accuracy of the measurement.
In this article, I describe a protocol to characterize the MTF of a slanted-edge chart, using a high-resolution flatbed
scanner. The main idea is to use the RAW output of the scanner as a high-resolution micro-densitometer: since the signal
is linear, it is suitable for measuring the chart MTF with the SFR algorithm. The scanner needs to be calibrated in
sharpness: the scanner MTF is measured with a calibrated sine chart and inverted to compensate for the modulation loss
from the scanner. Then the true chart MTF is computed. This article compares measured MTFs from commercial charts
and charts printed on printers, compares how the contrast of the edge (using different shades of gray) can
affect the chart MTF, and concludes on the distance range and camera resolution for which the chart can reliably
measure the camera MTF.
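The compensation step described above can be sketched as follows, modeling the chart and scanner as a cascade whose MTFs multiply in the frequency domain. The MTF shapes and values are illustrative assumptions, not measured data:

```python
import numpy as np

def chart_mtf(measured_mtf, scanner_mtf, eps=1e-6):
    """Recover the chart MTF from a raw scan.

    The scanner acts as a low-pass filter in cascade with the chart,
    so the modulations multiply in the frequency domain:
        measured = chart * scanner  =>  chart = measured / scanner
    eps guards against division by a vanishing scanner MTF.
    """
    return measured_mtf / np.maximum(scanner_mtf, eps)

# Toy illustration with Gaussian-shaped MTFs (hypothetical values)
f = np.linspace(0.0, 0.5, 6)             # cycles/pixel
scanner = np.exp(-(f / 0.6) ** 2)        # assumed scanner MTF
true_chart = np.exp(-(f / 0.4) ** 2)     # assumed chart MTF
measured = true_chart * scanner
print(np.allclose(chart_mtf(measured, scanner), true_chart))  # True
```

In practice the division is only trustworthy where the scanner MTF stays well above the noise floor, which is why the scanner must out-resolve the chart under test.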
Digital sensors have obviously invaded the photography mass market. However, some photographers with very high
expectations still use silver halide film. Are they merely nostalgic and reluctant to adopt new technology, or is there more
than meets the eye? The answer is not so simple if we note that, at the end of the golden age, films were actually scanned
before development. Nowadays film users have adopted digital technology and scan their film to take advantage of digital
processing afterwards. Therefore, it is legitimate to evaluate silver halide film "with a digital eye", under the assumption
that processing can be applied as for a digital camera. The article describes in detail the operations needed to treat
the film as a RAW digital sensor. In particular, we have to account for the film characteristic curve, the
autocorrelation of the noise (related to film grain), and the sampling of the digital sensor (related to the Bayer filter array).
We also describe the protocol that was set up, from shooting to scanning. We then present and interpret the results on sensor
response, signal-to-noise ratio and dynamic range.
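A minimal sketch of the characteristic-curve step: inverting density back to exposure makes the scanned film behave like a linear RAW signal. The straight-line curve and the constants GAMMA, D_MIN and E0 below are hypothetical placeholders, not the paper's measured curve:

```python
import numpy as np

# Hypothetical straight-line portion of a film characteristic curve:
#   density D = D_MIN + GAMMA * log10(E / E0)
GAMMA, D_MIN, E0 = 0.6, 0.2, 1.0   # illustrative constants, not measured

def exposure_to_density(exposure):
    """Characteristic curve (forward direction)."""
    return D_MIN + GAMMA * np.log10(exposure / E0)

def density_to_exposure(density):
    """Invert the characteristic curve so scanned densities can be
    treated as a linear RAW signal for later measurements."""
    return E0 * 10.0 ** ((density - D_MIN) / GAMMA)

# Round trip over a few exposure stops
e = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
d = exposure_to_density(e)
print(np.allclose(density_to_exposure(d), e))  # True
```

A real curve also has a toe and a shoulder, so the inversion would be applied via an interpolated lookup table rather than a closed-form straight line.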
The aim of the paper is to define an objective measurement for evaluating the performance of a digital camera. The
challenge is to combine different flaws involving geometry (such as distortion or lateral chromatic aberration), light (such as
luminance and color shading), and statistical phenomena (such as noise). We introduce the concept of information capacity,
which accounts for all the main defects that can be observed in digital images, whether due to the optics or to the
sensor. The information capacity describes the potential of the camera to produce good images. In particular, digital
processing can correct some flaws (like distortion). Our definition of information takes possible corrections into account,
as well as the fact that processing can neither retrieve lost information nor create it. This paper extends some of our
previous work, where information capacity was only defined for RAW sensors. The concept is extended to cameras
with optical defects such as distortion, lateral and longitudinal chromatic aberration, or lens shading.
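One way to picture the idea is a simplified Shannon-style estimate, integrating log2(1 + SNR) over spatial frequency with the blur attenuating the signal. The function and the constants below are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def information_capacity_bits(mtf, signal, noise_std, df):
    """Simplified capacity estimate per pixel: integrate
    log2(1 + SNR(f)) over spatial frequency, where the blur (MTF)
    attenuates the usable signal at each frequency."""
    snr_f = (signal * mtf / noise_std) ** 2
    return np.sum(np.log2(1.0 + snr_f)) * df

f = np.arange(0.0, 0.5, 0.01)          # cycles/pixel
mtf = np.exp(-(f / 0.3) ** 2)          # assumed combined lens + sensor blur
capacity = information_capacity_bits(mtf, signal=100.0, noise_std=2.0, df=0.01)
print(round(capacity, 2))
```

The sketch makes the key trade-off visible: a sharper MTF or a lower noise floor both raise the integrand, while any defect that destroys information (blur, noise) lowers it and no processing can bring it back.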
We describe the procedure to evaluate the image quality of a camera in terms of texture preservation. We use a
stochastic model coming from stochastic geometry, known as the dead leaves model. It intrinsically reproduces
occlusion phenomena, producing edges at any scale and any orientation, possibly with a low level of contrast. An
advantage of this synthetic model is that it provides a ground truth in terms of image statistics. In particular, its power
spectrum follows a power law, as do many natural textures. Therefore, we can define a texture MTF as the ratio of the Fourier
transform of the camera picture to the Fourier transform of the original target, and we fully describe the procedure to
compute it. We compare the results with the traditional MTF (computed on a slanted edge as defined in the ISO
12233 standard) and show that the texture MTF is indeed more appropriate for describing fine-detail rendering.
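The Fourier-ratio computation can be sketched as below, with a radial average turning the 2D ratio into a 1D texture MTF curve. The image size, binning and sanity check are illustrative choices, not the procedure's exact parameters:

```python
import numpy as np

def texture_mtf(camera_img, target_img, nbins=25, eps=1e-8):
    """Texture MTF: radially averaged ratio of the Fourier amplitude
    of the camera picture to that of the original target."""
    ratio = np.abs(np.fft.fft2(camera_img)) / (np.abs(np.fft.fft2(target_img)) + eps)
    h, w = camera_img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2).ravel()          # frequency magnitude
    bins = np.linspace(0.0, 0.5, nbins + 1)
    idx = np.digitize(r, bins)
    mtf = np.array([ratio.ravel()[idx == i].mean() for i in range(1, nbins + 1)])
    return 0.5 * (bins[:-1] + bins[1:]), mtf

# Sanity check: a perfect camera (output == target) has MTF ~ 1 everywhere.
rng = np.random.default_rng(0)
target = rng.random((64, 64))
freqs, mtf = texture_mtf(target, target)
print(np.allclose(mtf, 1.0, atol=1e-3))  # True
```

Unlike the slanted edge, this measurement penalizes denoising that smears low-contrast texture, which is what makes it a better probe of fine-detail rendering.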
In this paper, we numerically quantify the information capacity of a sensor by examining the different factors that can
limit this capacity, namely the sensor spectral response, noise, and sensor blur (due to fill factor, crosstalk and diffraction,
for a given aperture). In particular, we compare the effectiveness of the raw color space for different kinds of sensors. We also
define an intrinsic notion of color sensitivity that generalizes some of our previous work, and we discuss
how metamerism can be represented for a sensor.
A method for evaluating texture quality as captured by a camera is presented. It is shown that the usual sharpness
measurements are not completely satisfactory for this task. A new target based on random geometry, the so-called
dead leaves model, is proposed. It contains objects of any size at any orientation and shares some common statistics with
natural images. Experiments show that the correlation between objective measurements derived from this target and
subjective measurements conducted in the Camera Phone Image Quality initiative is excellent.
This article explains the cause of the color fringing phenomenon that can be noticed in photographs, particularly on the edges of backlit objects. The nature of color fringing is optical, related in particular to the difference in blur spots at different wavelengths. Therefore, color fringing can be observed in both digital and silver halide photography. The hypothesis that lateral chromatic aberration is the only cause of color fringing is discarded. The factors that can influence the intensity of color fringing are carefully studied, some of them being specific to digital photography. A protocol to measure color fringing with very good repeatability is described, as well as a means of predicting color fringing from optical designs.
This article proposes new measurements for evaluating the image quality of a camera, particularly the reproduction of colors. The concept of gamut is usually a topic of interest, but it is much better suited to output devices than to capture devices (sensors). Moreover, it does not take other important characteristics of the camera into account, such as noise. By contrast, color sensitivity is a global measurement relating the raw noise to the spectral sensitivities of the sensor, and it provides an easy ranking of cameras. For an in-depth analysis of noise vs. color rendering, the concept of Gamut SNR is introduced, describing the set of colors achievable at a given SNR (Signal-to-Noise Ratio). This representation provides a convenient visualization of which part of the gamut is most affected by noise and can be useful for camera tuning as well.
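A toy illustration of the Gamut SNR idea: for each test color, compute the raw SNR and keep only the colors that clear a threshold. The shot-noise-plus-read-noise model, the patch values and the threshold below are all assumptions for illustration, not the article's data:

```python
import numpy as np

def gamut_fraction_at_snr(signal_e, snr_db_min, read_noise_e=3.0):
    """Fraction of test colors whose raw SNR exceeds a threshold,
    under a simple shot-noise + read-noise model (assumed here).

    signal_e: (N,) mean raw signal per color patch, in electrons.
    """
    noise = np.sqrt(signal_e + read_noise_e ** 2)   # shot + read noise
    snr_db = 20 * np.log10(signal_e / noise)
    return np.mean(snr_db >= snr_db_min)

# Darker, more saturated patches collect fewer electrons, so they
# drop out of the "achievable" set first as the SNR bar is raised.
patches = np.array([10000.0, 2500.0, 600.0, 150.0, 40.0])
print(gamut_fraction_at_snr(patches, 30.0))  # → 0.4
```

Sweeping the threshold and plotting the surviving colors in a chromaticity plane would yield the kind of gamut-vs-SNR visualization the article describes.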
For a given noise at the photosite level and a given output color space, the spectral sensitivities of a sensor
constrain the color processing and therefore impact the level of noise in the output. In particular, this noise may
be very different from the usually documented photosite noise. A key phenomenon is the appearance of strong
correlations between channels, which makes individual channel measurements (including the classical signal-to-noise
ratio, SNR) misleading. We evaluate existing chains and isolated sensors with several indicators, including the
previously developed color sensitivity. We finally apply this approach to the understanding of good spectral
sensitivities by considering hypothetical spectral sensitivities and simulating their performance.
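The channel-correlation phenomenon can be illustrated with a linear model: for processing y = Mx, a diagonal RAW covariance Σ becomes MΣMᵀ at the output, with non-zero off-diagonal terms. The matrix values below are illustrative, not taken from any real camera:

```python
import numpy as np

# White-balance + color-correction matrix (illustrative values only)
M = np.array([[ 1.8, -0.6, -0.2],
              [-0.4,  1.7, -0.3],
              [-0.1, -0.7,  1.8]])

# Independent photosite noise: diagonal covariance in RAW space
sigma_raw = np.diag([1.0, 1.0, 1.0])

# Linear processing y = M x propagates the covariance as M Sigma M^T
sigma_out = M @ sigma_raw @ M.T

# The off-diagonal terms are now non-zero: the output channels are
# correlated, so per-channel SNR figures alone are misleading.
std = np.sqrt(np.diag(sigma_out))
corr = sigma_out / np.outer(std, std)
print(np.round(corr, 2))
```

The negative off-diagonal entries of a typical color-correction matrix are what create the strong anti-correlations between output channels, and they also amplify the per-channel variances relative to the RAW noise.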
Image Quality depends not only on the camera components, but also on lighting, photographer skills, picture content, viewing conditions and to some extent on the viewer. While measuring or predicting a camera's image quality as perceived by users can be an overwhelming task, many camera attributes can be accurately characterized with objective measurement methodologies. This course provides insight into camera models, examining the mathematical models of the three main components of a camera (optics, sensor and ISP) and their interactions as a system (camera) or subsystem (camera at the raw level).
The course describes methodologies to characterize the camera as a system or subsystem (modeled from the individual component mathematical models), including lab equipment, lighting systems, measurement devices, charts, protocols and software algorithms. Attributes to be discussed include exposure, color response, sharpness, shading, chromatic aberrations, noise, dynamic range, exposure time, rolling shutter, focusing system, and image stabilization. The course will also address aspects that specifically affect video capture, such as video stabilization, video codec, and temporal noise.
The course "SC1049 Benchmarking Image Quality of Still and Video Imaging Systems," describing perceptual models and subjective measurements, complements the treatment of camera models and objective measurements provided here.
SC1049: Benchmarking Image Quality of Still and Video Imaging Systems
Because image quality is multi-faceted, generating a concise and relevant evaluative summary of photographic systems can be challenging. Indeed, benchmarking the image quality of still and video imaging systems requires that the assessor understands not only the capture device itself, but also the imaging applications for the system.
This course explains how objective metrics and subjective methodologies are used to benchmark the image quality of photographic still image and video capture devices. The course goes through the key image quality attributes and the flaws that degrade them, including the causes and consequences of those flaws on perceived quality. Content describes various subjective evaluation methodologies as well as objective measurement methodologies relying on existing standards from ISO, IEEE/CPIQ, ITU and beyond. Because imaging systems are intended for visual purposes, emphasis is placed on the value of perceptually correlated objective metrics and on generating benchmark data from the combination of objective and subjective metrics.
The course "SC1157 Camera Characterization and Camera Models," describing camera models and objective measurements, complements the treatment of perceptual models and subjective measurements provided here.