Measuring the low-light performance of today's cameras has become a challenge. The increasing quality of noise
reduction algorithms and other steps of the image pipeline makes it necessary to investigate the balance of image quality
aspects. The first step to define a measurement procedure is to capture images under low light conditions using a huge
variety of cameras and review the images as well as the metadata of these images. Image quality parameters that are
known to be affected by low light levels are noise, resolution, texture reproduction, color fidelity, and exposure. For each
of these parameters, thresholds below which images become unacceptable need to be defined. Although a full
psychophysical study may later be required to increase the precision of these thresholds, the current project tries to find out whether
each parameter can be treated as independent or whether multiple parameters need to be grouped to differentiate
acceptable images from unacceptable ones. Another important aspect is the definition of camera settings, for example
the longest acceptable exposure time and how it is affected by image stabilization. Cameras on a tripod may produce
excellent images with multi-second exposures. Beyond this ongoing analysis, the question remains how the light level should be
reported. All these aspects are currently being collected and will be incorporated into the upcoming ISO 19093 standard, which
defines the measurement procedure for the low-light performance of cameras.
The major difference between a DSLR camera, a consumer camera, and a camera in a mobile device is the sensor size. The sensor size is also related to the overall system size, including the lens. As sensors get smaller, the individual light-sensitive areas also get smaller, so less light falls onto each pixel. This requires higher signal amplification, which leads to higher noise levels or to problems caused by denoising algorithms. These problems become more visible under low light conditions because of the lower signal levels. The decreasing sensitivity of cameras makes customers ask for a standardized way to measure the low-light performance of cameras. The CEA (Consumer Electronics Association) together with ANSI has addressed this for camcorders in the CEA-639 standard. The ISO technical committee 42 (photography) is currently also considering a potential standard on this topic for still picture cameras. This paper is part of the preparation work for this standardization activity and addresses the differences compared to camcorders as well as potential additional problems with noise reduction that have emerged over the past few years. The result of this paper is a proposed test procedure with a few open questions that have to be answered in future work.
The pixel race in the digital camera industry and in mobile phone imaging modules has made noise reduction
a significant part of the signal processing. Depending on the algorithms used and the underlying amount of noise
that has to be removed, noise reduction leads to a loss of low-contrast fine details, also known as texture loss.
The description of these effects has become an important part of objective image quality evaluation in recent
years, as the established methods for noise and resolution measurement fail to capture them. Different methods have
been developed and presented, but they could not fully satisfy the required stability and correlation with subjective
tests. In our paper, we present our experience with the current approaches for texture loss measurement. We
have found a critical issue within these methods: the targets used are neutral in color. We could show that the
test-lab results do not match the real-life experience with the cameras under test. We present an approach using
a colored target and our experience with this method.
Many times when digital cameras are tested, time is short, be it because the test is performed on a production line
where productivity is one of the most important aspects, or in a lab where a schedule needs to be met. One important
way to reduce testing time is to reduce the number of images that need to be taken and analyzed. This can be
achieved by combining different features into a single test target.
Another reason to use combined targets is to eliminate shot-to-shot variations in exposure, focus setting, or image
processing. Last but not least, a single target may be cheaper than buying several targets for the different image quality
aspects. But when different features are incorporated into a single chart, a couple of questions arise. Can the different aspects be
determined independently? Is it better to use a transparent or a reflective target? Can, for example, a reflective chart with its
limited contrast be used to measure the dynamic range of a camera? This paper discusses the pros and cons of combined
test charts. It describes the related possibilities and problems for the main aspects of image quality like OECF,
resolution, color reproduction, shading, and distortion. Application specific possibilities are studied and summarized.
Noise reduction in the image processing pipeline of digital cameras has a huge impact on image quality. It may
result in a loss of low-contrast fine details, also referred to as texture blur. Previous papers have shown that the
objective measurement of the statistical parameter kurtosis in a reproduction of white Gaussian noise with the
camera under test correlates well with the subjective perception of these ramifications. To get a more detailed
description of the influence of noise reduction on the image, we compare the results of different approaches
to measure the spatial frequency response (SFR). Each of these methods uses a different test target; therefore,
we get different results in the presence of adaptive filtering. We present a study on the possibility of deriving a
detailed description of the influence of noise reduction on the different spatial frequency sub-bands based on the
differences between the SFRs measured with the various approaches. Variations in the underlying methods have a direct
influence on the derived measurements; therefore, we additionally checked for the differences between all the methods used.
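As a rough illustration of what any of these SFR methods ultimately computes, the following sketch (our own toy illustration, not one of the standardized tool chains) derives an SFR from a one-dimensional edge profile: differentiate the edge spread function (ESF) to get the line spread function (LSF), window it, and normalise the Fourier magnitude at DC. A more strongly blurred edge yields a lower response at any given frequency.

```python
import numpy as np
from math import erf

def sfr_from_esf(esf):
    """SFR estimate from a 1-D edge spread function (toy version)."""
    lsf = np.diff(esf)                    # ESF -> LSF
    lsf = lsf * np.hanning(lsf.size)      # window against truncation error
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                   # normalise at DC

# Two synthetic edges blurred by Gaussians of different width: the wider
# blur must show a lower response at any non-zero spatial frequency.
x = np.arange(-64, 64)
sharp = np.array([0.5 * (1 + erf(v / (2**0.5 * 1.0))) for v in x])
soft  = np.array([0.5 * (1 + erf(v / (2**0.5 * 3.0))) for v in x])
print(sfr_from_esf(soft)[10] < sfr_from_esf(sharp)[10])
```

On linear, shift-invariant data this is essentially what the slanted-edge protocol measures; the differences discussed above arise precisely because adaptive processing breaks those assumptions.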
Measuring the spectral response of digital cameras is usually a time-consuming and expensive task. One method to obtain
the spectral response data is the use of reflectance charts and estimation algorithms. To improve the quality of the
measurement, narrow-band light is necessary. Usually an expensive and complicated monochromator is used to generate
the narrow-band light.
This paper proposes the use of a set of narrow-band interference filters as an alternative to a monochromator. It describes
the measurement setup and data processing. A detailed quality assessment of the measurement data shows that the
quality is comparable to a measurement with a monochromator. The interference filter equipment is more affordable,
easier to use, and faster. The characterization of one device takes less than 10 minutes. The pros and cons compared to
other methods are also discussed.
The setup consists of a set of 39 narrow-band interference filters, which are photographed one after another. A modified
slide projector is used for illumination. Software was developed to read the camera's response to each filter and process the data.
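The per-band computation can be sketched as follows, with all spectra and filter data invented for illustration (the real setup uses measured filter transmittances and the projector's spectrum). With sufficiently narrow filters, each capture samples the sensor at essentially one wavelength, so the response estimate reduces to a per-band division rather than a full estimation problem:

```python
import numpy as np

# Hypothetical data for a 39-filter setup: centre wavelengths, the energy
# passed by each filter (transmittance times bandwidth), and the relative
# output of the projector lamp at each centre wavelength.
centres = np.linspace(380, 760, 39)               # nm
rng = np.random.default_rng(1)
band_energy = 0.8 + 0.1 * rng.random(39)
illuminant = np.exp(-((centres - 600) / 150.0) ** 2)

# The (normally unknown) sensor response, used here only to simulate the
# camera's linear raw signal for each filter shot.
true_response = np.exp(-((centres - 530) / 60.0) ** 2)
raw_signal = true_response * illuminant * band_energy

# Narrow-band assumption: divide out illuminant and filter energy per band,
# then normalise the recovered curve to a peak of 1.
estimate = raw_signal / (illuminant * band_energy)
estimate /= estimate.max()
print(np.allclose(estimate, true_response / true_response.max()))
```

In practice the raw signal must be linearized first and noise limits the accuracy at wavelengths where the lamp output is low, which is part of the quality assessment discussed above.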
We present a method to improve the validity of noise and resolution measurements on digital cameras. If
non-linear adaptive noise reduction is part of the signal processing in the camera, the measurement results for
image noise and spatial resolution can be good, while the image quality is low due to the loss of fine details
and a watercolor like appearance of the image. To improve the correlation between objective measurement and
subjective image quality we propose to supplement the standard test methods with an additional measurement
of the texture preserving capabilities of the camera. The proposed method uses a test target showing white
Gaussian noise. The camera under test reproduces this target and the image is analyzed. We propose to use
the kurtosis of the derivative of the image as a metric for the texture preservation of the camera. Kurtosis
is a statistical measure of how closely a distribution matches the Gaussian distribution. It can be
shown that the distribution of digital values in the derivative of the image showing the chart becomes increasingly
leptokurtic (increased kurtosis) the stronger the impact of the noise reduction on the image.
In modern digital still cameras, noise reduction is an increasingly important part of the signal processing, as customers demand higher pixel counts and increased light sensitivity. In recent years, with pixel counts of ten or more megapixels in a compact camera, images increasingly lack fine details and appear degraded. The standard test methods for spatial resolution fail to describe this phenomenon because, due to extensive adaptive image enhancements, the camera cannot be treated as a linear position-invariant system. In this paper we compare established resolution test methods and present new approaches to describe the influence of noise reduction on images.
A new chart is introduced which consists of nine Siemens stars, a multi-modulation set of slanted edges, and Gaussian white noise as a camera target. Using this set, the standard methods known as SFR-Siemens and SFR-Edge are calculated together with additional information like edge width and edge noise. Based on the Gaussian white noise, several parameters are presented as an alternative to describe the spatial resolution of low-contrast content.
The analysis of images has always been an important aspect of the quality enhancement of photographs and photographic equipment. Due to the lack of metadata, it was mostly limited to images taken by experts under predefined conditions, and the analysis was also done by experts or required psychophysical tests. With digital photography and the EXIF1 metadata stored in the images, a lot of information can be gained from a semiautomatic or automatic image analysis if one has access to a large number of images. Although home printing is becoming more and more popular, the European market still has a few photofinishing companies who have access to a large number of images. All printed images are stored for a certain period of time, adding up to several million images on servers every day. We have utilized these images to answer numerous questions and think that the answers are useful for increasing image quality by optimizing image processing algorithms. Test methods can be modified to fit typical user conditions, and future developments can be pointed in ideal directions.
Edition 2 of ISO 12233, Resolution and Spatial Frequency Response (SFR) for Electronic Still Picture Imaging, is likely
to offer a choice of techniques for determining spatial resolution for digital cameras different from the initial standard.
These choices include 1) the existing slanted-edge gradient SFR protocols but with low-contrast features, 2) a polar-coordinate sine wave SFR technique using a Siemens star element, and 3) visual resolution threshold criteria using a continuous linear spatial frequency bar pattern feature. A comparison of these methods will be provided. To establish the level of consistency between the results of these methods, theoretical and laboratory experiments were performed by members of the ISO TC42/WG18 committee. Test captures were performed on several consumer and SLR digital cameras using the on-board image processing pipelines. All captures were done in a single session using the same lighting conditions and camera operator. Generally, there was good conformance between methods, albeit with some notable differences. Speculation on the reasons for these differences and how they can be diagnostic in digital camera evaluation will be offered.
Image stabilization in digital imaging is continuously gaining in importance, which explains the increasing interest in the benefits of stabilizing systems. The existing standards provide neither binding procedures nor recommendations for their evaluation. This paper describes the development and implementation of a test setup and a test procedure for the qualitative analysis of image stabilizing systems under reproducible, realistic conditions. The basis for these conditions is provided by studies of the physiological properties of human hand shake and the functionality of modern stabilizing systems.
Many luminance measuring tasks require a luminance distribution of the total viewing field. The approach of image-resolving
luminance measurement, which could benefit from the continual development of position-resolving radiation
detectors, represents a simplification of such measuring tasks.
Luminance measurement cameras already exist which are specially manufactured for measuring tasks with very high
requirements. Due to their high-precision design these cameras are very expensive and are not commercially viable for
many image-resolving measuring tasks. It is therefore desirable to measure luminance with digital still cameras, which
are freely available at reasonable prices.
This paper presents a method for the usage of digital still cameras as luminance meters independent of the exposure
settings. A calibration of the camera is performed with the help of an OECF (opto-electronic conversion function)
measurement, and the luminance is calculated from the camera's digital RGB output values. The test method and the
computation of the luminance value irrespective of exposure variations are described. The error sources that influence
the result of the luminance measurement are also discussed.
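A minimal sketch of the idea, with all numbers invented (the paper's calibration uses a measured OECF table, not the simple gamma model below): invert the OECF to recover the relative exposure, then rescale by the exposure settings, so the same scene luminance is reported regardless of aperture, time, and ISO.

```python
import numpy as np

def oecf_inverse(digital_value, gamma=2.2, dv_max=255.0):
    """Map a digital value back to relative exposure (toy gamma OECF)."""
    return (digital_value / dv_max) ** gamma

def luminance_cd_m2(digital_value, f_number, exposure_time_s, iso,
                    k_calib=12.5):
    """Luminance estimate independent of the exposure settings.

    k_calib is a camera-specific constant fixed once during calibration
    (the value here is a placeholder).
    """
    h_rel = oecf_inverse(digital_value)
    return k_calib * f_number ** 2 / (exposure_time_s * iso) * h_rel

# The same scene luminance should be reported for two different exposures:
# doubling the exposure time doubles the relative exposure, so the digital
# value for shot B is whatever the gamma curve maps 2*h_a to.
dv_a = 128
h_a = oecf_inverse(dv_a)
dv_b = 255.0 * (2 * h_a) ** (1 / 2.2)
la = luminance_cd_m2(dv_a, 4.0, 1 / 60, 100)
lb = luminance_cd_m2(dv_b, 4.0, 1 / 30, 100)
print(abs(la - lb) < 1e-9)
```

The error sources mentioned above (flare, shading, white balance, compression) all enter through `h_rel`, which is why the OECF must be measured rather than assumed.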
The resolution of a digital camera is defined as its ability to reproduce fine detail in an image. To test this ability,
methods like the slanted-edge SFR measurement developed by Burns and Williams1 and standardized in ISO 122332
are used. Since this method is, in terms of resolution measurements, only applicable to unsharpened and
uncompressed data, an additional method, described in this paper, had to be developed.
This method is based on a sinusoidal Siemens star which is evaluated on a radius-by-radius, i.e. frequency-by-frequency,
basis. For the evaluation, a freely available runtime program developed in MATLAB is used which computes the MTF of a
camera system as contrast over frequency.
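The per-frequency evaluation can be sketched as follows (a toy illustration with simulated data, not the MATLAB tool itself): along a circle of constant radius, the sinusoidal star has a known number of periods, and a least-squares sinusoid fit yields the modulation at that radius, i.e. one point of the MTF curve. The spatial frequency grows as the radius shrinks.

```python
import numpy as np

def modulation(samples, periods):
    """Least-squares Michelson contrast of a sinusoid with known period count."""
    n = samples.size
    t = np.arange(n) * 2 * np.pi * periods / n
    basis = np.column_stack([np.ones(n), np.sin(t), np.cos(t)])
    coef, *_ = np.linalg.lstsq(basis, samples, rcond=None)
    mean, a, b = coef
    return np.hypot(a, b) / mean          # amplitude relative to mean level

# Simulated readout along one radius: 144 samples, 36 star periods, the
# chart's full contrast attenuated to 0.4 by the camera, plus sensor noise.
rng = np.random.default_rng(2)
t = np.arange(144) * 2 * np.pi * 36 / 144
samples = 100 * (1 + 0.4 * np.sin(t)) + rng.normal(0, 1, 144)
print(round(modulation(samples, 36), 2))
```

Repeating this fit over many radii and dividing by the chart contrast yields the MTF as contrast over frequency, as described above.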
Since the signal-to-noise measuring method standardized in the normative part of ISO 15739:2002(E)1 does not
quantify noise in a way that matches the perception of the human eye, two alternative methods have been investigated
which may be appropriate to quantify noise perception in a physiological manner:
- the visual noise measurement model proposed by Hung et al.2 (as described in the informative annex of ISO 15739:20021), which tries to simulate the process of human vision by using the opponent space and contrast sensitivity functions and uses the CIE L*u*v* 1976 colour space to determine a so-called visual noise value;
- the S-CIELab model and CIEDE2000 colour difference proposed by Fairchild et al.3, which simulates human vision in approximately the same way as Hung et al.2 but afterwards applies an image comparison based on CIEDE2000.
With a psychophysical experiment based on just noticeable difference (JND), threshold images could be defined, with
which the two approaches mentioned above were tested. The assumption is that if the method is valid, the different
threshold images should get the same 'noise value'.
The visual noise measurement model results in similar visual noise values for all the threshold images. The method is
reliable for quantifying at least the JND for noise in uniform areas of digital images. While the visual noise measurement
model can only evaluate uniform colour patches in images, the S-CIELab model can be used on images with spatial
content as well. The S-CIELab model also results in similar colour difference values for the set of threshold images, but
with some limitations: for images which contain spatial structures besides the noise, the colour difference varies
depending on the contrast of the spatial content.
The quality of digital cameras has undergone a remarkable development during the last 10 years, and so have the methods to evaluate the quality of these cameras. When the first consumer digital cameras were released in 1996, the first ISO standards on test procedures were already on their way. At that time quality was mainly evaluated through a visual analysis of images taken of test charts as well as natural scenes. The ISO standards led the way to a couple of more objective and reproducible methods to measure characteristics such as dynamic range, speed, resolution, and noise. This paper presents an overview of the camera characteristics, the existing evaluation methods, and their development during the last years. It summarizes the basic requirements for reliable test methods, and answers the question of whether it is possible to test cameras without taking pictures of natural scenes under specific lighting conditions. In addition to the evaluation methods, this paper mentions the problems of digital cameras in the past concerning power consumption, shutter lag, etc. It also states existing deficits which need to be solved in the future, such as optimized exposure and gamma control, increasing sensitivity without increasing noise, and the further reduction of shutter lag.
Manufacturers of mobile phones are seeking a default procedure to test the quality of mobile phone cameras. This paper presents such a default procedure, based as far as possible on ISO standards and adding further useful information based on easy-to-handle methods. In addition to this paper, which summarizes the measured values with a brief description of the methods used to determine them, a white paper describing the complete procedure will be available.
SC1058: Image Quality and Evaluation of Cameras In Mobile Devices
Digital and mobile imaging camera system performance is determined by a combination of sensor characteristics, lens characteristics, and image-processing algorithms. As pixel size decreases, sensitivity decreases and noise increases, requiring a more sophisticated noise-reduction algorithm to obtain good image quality. Furthermore, small pixels require high-resolution optics with low chromatic aberration and very small blur circles. Ultimately, there is a tradeoff between noise, resolution, sharpness, and the quality of an image.
This short course provides an overview of "light in to byte out" issues associated with digital and mobile imaging cameras. The course covers optics, sensors, image processing, sources of noise in these cameras, algorithms to reduce it, and different methods of characterization. Although noise is typically measured as a standard deviation in a patch with uniform color, this does not always accurately represent human perception. Based on the "visual noise" algorithm described in ISO 15739, an improved approach for measuring noise as an image quality aspect will be demonstrated. The course shows a way to optimize image quality by balancing the tradeoff between noise and resolution. All methods discussed will use images as examples.
SC871: Noise, Image Processing, and their Influence on Resolution
Digital imaging system resolution is determined by a combination of sensor characteristics, lens characteristics, and image-processing algorithms. As pixel size decreases, sensitivity decreases and noise increases, requiring a more sophisticated noise-reduction algorithm to obtain good image quality. Furthermore, small pixels require high-resolution optics with low chromatic aberration and very small blur circles. Ultimately, there is a tradeoff between noise, resolution, sharpness, and the quality of an image.
This short course summarizes the sources of noise, algorithms to reduce it, and different methods of characterization. Although noise is typically measured as a standard deviation in a patch with uniform color, it does not always accurately represent human perception. Based on the "visual noise" algorithm described in ISO 15739, an improved approach for measuring noise as an image quality aspect will be demonstrated. The course shows a way to optimize image quality by balancing the tradeoff between noise and resolution. All methods discussed will use images as examples.
SC753: The Image Pipeline and How It Influences Quality Measurements Based on Existing ISO Standards
When a digital image is captured using a digital still camera (DSC), it needs to be processed. For consumer cameras this processing is done within the camera and covers various steps like dark current subtraction, flare compensation, shading and color compensation, demosaicing, white balancing, tonal and color correction, sharpening, and compression. All of these steps have a significant influence on image quality, so it is important to know how image quality can be measured and what standardized methods exist.
The course provides the basic methods for each step of the imaging pipeline. While we run several images through a sample pipeline, we will alter the algorithms to discover the visual differences and the differences in the measured values using the various test methods. This helps to understand the process and provides a lot of information on how to increase the overall image quality. The course topics include a basic review of the image processing pipeline; explanation of the different steps and their basic algorithms; practical image processing using sample images and software; introduction to image quality analysis; discussion of test scenes and visual image analysis; measurement of different image quality aspects like OECF, dynamic range, noise, resolution, and color reproduction; explanation of the available free and commercial software; and demonstration of illuminator, test chart, and software-based measurements.
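A toy version of a few of the pipeline steps listed above (all constants are invented; a real camera pipeline is considerably more involved): dark current subtraction, white balance, and a gamma-type tonal correction applied to a simulated grey patch.

```python
import numpy as np

def process(raw_rgb, dark_level=64, wb_gains=(2.0, 1.0, 1.5),
            white_level=1023, gamma=2.2):
    """Toy pipeline: dark subtraction, white balance, tonal correction."""
    img = raw_rgb.astype(np.float64) - dark_level          # dark current
    img = np.clip(img, 0, None) * np.asarray(wb_gains)     # white balance
    img = np.clip(img / white_level, 0, 1) ** (1 / gamma)  # tonal curve
    return (img * 255).round().astype(np.uint8)

# A neutral grey patch as seen by a sensor whose channel sensitivities
# differ: after white balance the output should come out neutral again.
raw = np.full((8, 8, 3), 64.0) + np.array([100.0, 200.0, 400.0 / 3.0])
out = process(raw)
r, g, b = out[0, 0]
print(int(r), int(g), int(b))
```

Altering any one of these steps, as done in the course with sample images, visibly shifts the measured OECF, color reproduction, and noise values.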
SC870: Color Processing and its Characterisation for Digital Photography
When an image is captured using a digital imaging device, it needs to be rendered. For consumer cameras this processing is done within the camera, and covers various steps like dark current subtraction, flare compensation, shading and color compensation, demosaicing, white balancing, tonal and color correction, sharpening, and compression. All of these steps have a significant influence on image quality, so to design and tune these algorithms it is important to know how image quality can be measured and what standardized methods exist as well as their pros and cons.
The course provides the basic methods for all steps of the imaging pipeline which involve color. Participants will get to examine the basic algorithms that exist and evaluate images processed through a sample pipeline. We will see how image data influences color transforms and white balance. This helps to understand the process and provides substantial information on how to increase the overall image quality. Finally, we will look at how non-ideal hardware affects the quality of the output image. Examples include non-ideal spectral filters, sensor crosstalk, spectral responsivity mismatch, etc.