We present in this paper a novel capacitive device that stimulates the touchscreen interface of a smartphone (or of any
imaging device equipped with a capacitive touchscreen) and synchronizes triggering with the DxO LED Universal Timer
to measure shooting time lag and shutter lag according to ISO 15781:2013. The device and protocol extend the time lag
measurement beyond the standard by including negative shutter lag, a phenomenon increasingly common in recent devices.
The device is computer-controlled, and this feature, combined with measurement algorithms, makes it possible to
automate large series of captures so as to provide more refined statistical analyses when, for example, the shutter lag
of “zero shutter lag” devices is limited by the frame time, as our measurements confirm.
This paper presents a novel device and algorithms for measuring the different timings of digital cameras shooting both still images and videos. These timings include exposure (or shutter) time, electronic rolling shutter (ERS), frame rate, vertical blanking, time lags, missing frames, and duplicated frames. The device, the DxO LED Universal Timer (or “timer”), is designed to allow remotely-controlled automated timing measurements using five synchronized lines of one hundred LEDs each to provide accurate results; each line can be independently controlled if needed. The device meets the requirements of ISO 15781. Camera timings are measured by automatically counting the number of lit LEDs on each line in still and video images of the device and finding the positions of the LEDs within a single frame or between different frames. Measurement algorithms are completely automated: positional markers on the device facilitate automatic detection of the timer as well as the positions of lit LEDs in the images. No manual computation or positioning is required. We used this system to measure the timings of several smartphones under different lighting and setting parameters.
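As a toy illustration of the LED-counting principle described above (a hypothetical sketch; the function names, LED period, and counts are illustrative assumptions, not taken from the actual DxO software): if each LED in a line advances after a fixed period, the number of LEDs seen lit in a frame is proportional to the exposure time, and the shift of the first lit LED between consecutive frames gives the frame period.

```python
# Hypothetical sketch of the LED-counting principle; all names and
# parameters are illustrative, not the actual DxO measurement code.

def exposure_time_ms(n_lit, led_period_ms):
    """Exposure time estimated from the number of lit LEDs in one line.

    Each LED stays lit for led_period_ms before the next one turns on,
    so a capture integrating over t_exp sees roughly
    t_exp / led_period_ms LEDs lit.
    """
    return n_lit * led_period_ms

def frame_period_ms(first_lit_a, first_lit_b, led_period_ms, n_leds=100):
    """Frame period from the shift of the first lit LED between two
    consecutive frames (modulo the line length of n_leds)."""
    shift = (first_lit_b - first_lit_a) % n_leds
    return shift * led_period_ms
```

For example, with a 0.1 ms LED period, 83 lit LEDs would correspond to an 8.3 ms exposure, and a 33-LED shift between frames to a 3.3 ms frame period (roughly 300 fps).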
This article presents a system and a protocol to characterize image stabilization systems both for still images and videos.
It uses a six-axis platform, three axes being used for camera rotation and three for camera positioning. The platform is
programmable and can reproduce complex motions that have been typically recorded by a gyroscope mounted on
different types of cameras in different use cases. The measurement uses a single chart for both still images and videos, the
texture dead leaves chart. Although the proposed implementation of the protocol uses a motion platform, the
measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured
in different directions and is weighted by a contrast sensitivity function (simulating the human visual system accuracy) to
obtain an acutance. The sharpness improvement due to the image stabilization system is a good measurement of
performance as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel
accuracy to determine a homographic deformation between the current frame and a reference position. This model
describes the apparent global motion well: translations, but also rotations about the optical axis and distortion due to
the electronic rolling shutter that equips most CMOS sensors. The protocol is applied to all types of cameras, such as
DSCs, DSLRs, and smartphones.
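The still-image acutance computation mentioned above can be sketched as follows. The band-pass CSF below is the classical Mannos–Sakrison form; using it, and the uniform frequency sampling, are illustrative assumptions rather than the exact weighting of the CIPA draft.

```python
import math

# Sketch of a CSF-weighted acutance; the CSF model and sampling grid are
# illustrative assumptions, not the exact CIPA weighting.

def csf(f):
    """Mannos-Sakrison contrast sensitivity model, f in cycles/degree."""
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-(0.114 * f) ** 1.1)

def acutance(mtf, freqs):
    """CSF-weighted average of MTF samples taken at the given frequencies."""
    weights = [csf(f) for f in freqs]
    return sum(m * w for m, w in zip(mtf, weights)) / sum(weights)
```

By construction a perfect system (MTF equal to 1 at every frequency) has acutance 1, and any stabilization-induced blur lowers the value, which is what makes the sharpness improvement measurable.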
Digital sensors have clearly conquered the photography mass market. However, some photographers with very high
expectations still use silver halide film. Are they merely nostalgics reluctant to adopt new technology, or is there more than meets the
eye? The answer is not so easy if we note that, at the end of the golden age, films were actually scanned before
printing. Nowadays film users have adopted digital technology and scan their film to take advantage of digital
processing afterwards. It is therefore legitimate to evaluate silver halide film "with a digital eye", with the assumption
that processing can be applied as for a digital camera. The article describes in detail the operations needed to
treat the film as a RAW digital sensor. In particular, we have to account for the film characteristic curve, the
autocorrelation of the noise (related to film grain), and the sampling of the digital sensor (related to the Bayer filter array).
We also describe the protocol that was established, from shooting to scanning. We then present and interpret the results for sensor
response, signal-to-noise ratio, and dynamic range.
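The characteristic-curve step can be sketched minimally as follows. The linear D–logH model and its parameters (gamma, base density) are illustrative assumptions for the straight-line portion of the curve, not values from the protocol.

```python
import math

# Illustrative sketch of linearizing scanned film via the characteristic
# curve, so densities can be treated like RAW values. The linear D-logH
# model and its parameters are assumptions for illustration only.

def density_from_exposure(h, gamma=0.6, d_min=0.2, h0=1.0):
    """Straight-line portion of the D-logH characteristic curve."""
    return d_min + gamma * math.log10(h / h0)

def exposure_from_density(d, gamma=0.6, d_min=0.2, h0=1.0):
    """Invert the curve to recover relative exposure from density."""
    return h0 * 10 ** ((d - d_min) / gamma)
```

Once exposures are recovered this way, noise and dynamic-range measurements can be run on the linearized data just as on a digital RAW file.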
Extended depth of field (EDOF) cameras have recently emerged as a low-cost alternative to autofocus lenses. Different
methods, either based on longitudinal chromatic aberrations or wavefront coding have been proposed and have reached
the market. The purpose of this article is to study the theoretical performance and limitations of wavefront coding
approaches. The idea of these methods is to introduce a phase element making a trade-off between sharpness at the
optimal focus position and the variation of the blur spot with respect to the object distance. We will show that there are
theoretical bounds to this trade-off: knowing the aperture and the minimal MTF value for a suitable image quality, the
pixel pitch imposes the maximal depth of field. We analyze the limits of the depth-of-field extension for pixel
pitches from 1.75μm to 1.1μm, particularly with regard to the increasing influence of diffraction.
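The growing influence of diffraction can be illustrated numerically (assumed values: f/2.8 aperture and a 550 nm wavelength; this is a back-of-the-envelope sketch, not the paper's derivation): as the pitch shrinks, the sensor Nyquist frequency approaches the incoherent diffraction cutoff, leaving less MTF headroom for the wavefront-coding trade-off.

```python
# Rough numerical illustration of diffraction vs. pixel pitch
# (assumed values: f/2.8 aperture, 550 nm wavelength).

WAVELENGTH_MM = 550e-6  # 550 nm expressed in mm
F_NUMBER = 2.8

def nyquist_freq(pitch_um):
    """Sensor Nyquist frequency in cycles/mm for a given pixel pitch."""
    return 1.0 / (2.0 * pitch_um * 1e-3)

def diffraction_cutoff():
    """Incoherent diffraction cutoff frequency in cycles/mm: 1/(lambda*N)."""
    return 1.0 / (WAVELENGTH_MM * F_NUMBER)

# For 1.75 um pixels, Nyquist sits at about 44% of the cutoff; for 1.1 um
# pixels it rises to about 70%, so diffraction eats much more of the
# usable MTF.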
We describe the procedure to evaluate the image quality of a camera in terms of texture preservation. We use a
stochastic model coming from stochastic geometry, and known as the dead leaves model. It intrinsically reproduces
occlusion phenomena, producing edges at any scale and any orientation, possibly with a low level of contrast. An
advantage of this synthetic model is that it provides a ground truth in terms of image statistics. In particular, its power
spectrum is a power law, as for many natural textures. Therefore, we can define a texture MTF as the ratio of the Fourier
transform of the camera picture to the Fourier transform of the original target, and we fully describe the procedure to
compute it. We will compare the results with the traditional MTF (computed on a slanted edge as defined in the ISO
12233 standard) and will show that the texture MTF is indeed more appropriate for describing fine detail rendering.
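The texture MTF definition above reduces, in one dimension, to a per-frequency ratio of Fourier magnitudes. The sketch below is illustrative: a real measurement operates on 2-D images and radially averages the spectrum, and the naive DFT here is only for clarity.

```python
import cmath

# Minimal 1-D sketch of the texture MTF: the ratio of the Fourier
# magnitude of the captured signal to that of the reference target.

def dft_mag(x):
    """Magnitude of the DFT of a real sequence (naive O(n^2) version)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def texture_mtf(captured, target, eps=1e-12):
    """Per-frequency ratio |F(captured)| / |F(target)|."""
    cm, tm = dft_mag(captured), dft_mag(target)
    return [c / max(t, eps) for c, t in zip(cm, tm)]
```

An unblurred capture reproduces the target exactly, so its texture MTF is 1 at every frequency where the target has energy; any texture-destroying processing pulls the ratio below 1 at high frequencies.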
The aim of the paper is to define an objective measurement for evaluating the performance of a digital camera. The
challenge is to combine different flaws involving geometry (such as distortion or lateral chromatic aberration), light (such as
luminance and color shading), and statistical phenomena (such as noise). We introduce the concept of information capacity, which
accounts for all the main defects that can be observed in digital images and that can be due either to the optics or to the
sensor. The information capacity describes the potential of the camera to produce good images. In particular, digital
processing can correct some flaws (like distortion). Our definition of information takes possible correction into account
and the fact that processing can neither retrieve lost information nor create it. This paper extends some of our
previous work, in which the information capacity was defined only for RAW sensors. Here the concept is extended to cameras
with optical defects such as distortion, lateral and longitudinal chromatic aberration, or lens shading.
In this paper, we numerically quantify the information capacity of a sensor by examining the different factors that can
limit this capacity, namely sensor spectral response, noise, and sensor blur (due to fill factor, crosstalk, and diffraction
for a given aperture). In particular, we compare the effectiveness of the raw color space for different kinds of sensors. We also
define an intrinsic notion of color sensitivity that generalizes some of our previous work. We also discuss
how metamerism can be represented for a sensor.
A method for evaluating texture quality as shot by a camera is presented. It is shown that the usual sharpness measurements
are not completely satisfactory for this task. A new target based on random geometry, using the so-called
dead leaves model, is proposed. It contains objects of any size at any orientation and shares common statistics with natural
images. Experiments show that the correlation between objective measurements derived from this target and
subjective measurements conducted in the Camera Phone Image Quality initiative is excellent.
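A toy generator conveys the idea behind the dead leaves target (an illustrative sketch of the model, not DxO's actual chart generation): disks with a power-law radius distribution and random gray levels are painted one over another, so later disks occlude earlier ones, producing edges at every scale and orientation.

```python
import random

# Toy dead-leaves generator; parameters and radius law (density ~ r^-3)
# are illustrative assumptions.

def dead_leaves(size=64, n_disks=500, r_min=2.0, r_max=20.0, seed=0):
    rng = random.Random(seed)
    img = [[0.5] * size for _ in range(size)]  # mid-gray background
    for _ in range(n_disks):
        # Inverse-transform sampling of r with density proportional to r^-3
        # on [r_min, r_max].
        u = rng.random()
        r = 1.0 / ((1 - u) / r_min ** 2 + u / r_max ** 2) ** 0.5
        cx, cy = rng.uniform(0, size), rng.uniform(0, size)
        g = rng.random()  # random gray level for this disk
        for y in range(max(0, int(cy - r)), min(size, int(cy + r) + 1)):
            for x in range(max(0, int(cx - r)), min(size, int(cx + r) + 1)):
                if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                    img[y][x] = g  # occlusion: the last disk wins
    return img
```

The r^-3 radius law is what gives the resulting image its scale-invariant, power-law spectrum, the property the measurement relies on.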
KEYWORDS: Chromatic aberrations, Point spread functions, Digital signal processing, Optical signal processing, Computational imaging, Imaging systems, Cameras, Image processing, Lens design, Modulation transfer functions
In this paper we present an approach to extend the Depth-of-Field (DoF) for cell-phone miniature cameras by concurrently
optimizing optical system and post-capture digital processing techniques. Our lens design seeks to increase the
longitudinal chromatic aberration in a desired fashion such that, for a given object distance, at least one color plane of the
RGB image contains the in-focus scene information. Typically, red is made sharp for objects at infinity, green for
intermediate distances, and blue for close distances. Comparing sharpness across colors gives an estimation of the object
distance and therefore allows choosing the right set of digital filters as a function of the object distance. Then, by
copying the high frequencies of the sharpest color onto the other colors, we show theoretically and experimentally that it
is possible to achieve a sharp image for all the colors within a larger range of DoF. We compare our technique with other
approaches that also aim to increase the DoF such as Wavefront coding.
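The frequency-transfer step described above can be sketched in one dimension (an illustrative toy; the real pipeline works on 2-D images, with per-region sharpness estimation across the three channels):

```python
# 1-D toy of copying the high frequencies of the sharpest channel onto a
# blurred one: out = lowpass(blurred) + (sharp - lowpass(sharp)).
# Filter choice (box blur) and radius are illustrative assumptions.

def box_blur(x, radius=1):
    """Simple moving-average low-pass filter (edges clamped)."""
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def transfer_high_freq(blurred, sharp, radius=1):
    """Keep the low frequencies of `blurred`, graft on the high
    frequencies of `sharp`."""
    lb, ls = box_blur(blurred, radius), box_blur(sharp, radius)
    return [b + (s - l) for b, s, l in zip(lb, sharp, ls)]
```

Since chrominance varies slowly compared with luminance in most scenes, replacing only the high frequencies preserves the colors of the blurred channel while restoring its detail.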
In this paper we present an approach to obtain an extended Depth-of-Field (DoF) for cell-phone miniature cameras by
jointly optimizing optical system and post-capture digital processing techniques. Using a computational imaging
approach, we demonstrate how to increase, to a useful operating range, the effective DoF of a specifically designed fixed
focus lens operating e.g. at f/2.8. This is achieved with a lens design where the longitudinal chromatic aberration has
been increased. This increase is controlled so as to have, for any distance within the extended DoF, at least one colour
channel of a RGB image which contains the in-focus scene information (e.g. high frequencies). By determining the
sharpest colour (for each region in the digital image) and reflecting its sharpness on the others, we show that it is possible
to get a sharp image for all colours through the merged DoF of the three of them. We compare our technique with other approaches that also aim to increase the DoF, such as Wavefront coding.
A general trend in the CMOS image sensor market is for increasing resolution (by having a larger number of pixels)
while keeping a small form factor by shrinking photosite size. This article discusses the impact of this trend on some of
the main attributes of image quality. The first example is image sharpness. A smaller pitch theoretically allows a larger
limiting resolution which is derived from the Modulation Transfer Function (MTF). But recent sensor technologies
(1.75μm, and soon 1.45μm) with typical aperture f/2.8 are clearly reaching the size of the diffraction blur spot. A second
example is the impact on pixel light sensitivity and image sensor noise. For photonic noise, the Signal-to-Noise-Ratio
(SNR) is typically a decreasing function of the resolution. To evaluate whether shrinking pixel size could be beneficial to
the image quality, the tradeoff between spatial resolution and light sensitivity is examined by comparing the image information capacity of sensors with varying pixel size. A theoretical analysis that takes into consideration measured and predictive models of pixel performance degradation and improvement associated with CMOS imager technology scaling is presented. This analysis is completed by a benchmarking of recent commercial sensors with different pixel sizes.
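The photon-noise part of the trade-off admits a simple back-of-the-envelope illustration (the illumination figure is an assumed number, not a measured one): if the collected photon count scales with photosite area, the shot-noise-limited SNR scales linearly with the pixel pitch.

```python
import math

# Shot-noise-limited SNR vs. pixel pitch; photons_per_um2 is an assumed
# illumination level for illustration only.

def shot_noise_snr_db(pitch_um, photons_per_um2=1000.0):
    """Shot-noise-limited SNR in dB for a square pixel of the given pitch.

    Mean photon count N scales with area; Poisson noise is sqrt(N), so
    SNR = N / sqrt(N) = sqrt(N), i.e. proportional to the pitch.
    """
    n = photons_per_um2 * pitch_um ** 2
    return 20 * math.log10(math.sqrt(n))
```

Under this model, shrinking the pitch from 1.75μm to 1.1μm costs about 4 dB of shot-noise SNR at identical illumination, which is why the information-capacity comparison in the text is needed to judge whether the resolution gain compensates.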
This article explains the cause of the color fringing phenomenon that can be noticed in photographs, particularly on the edges of backlit objects. The nature of color fringing is optical, and particularly related to the difference of blur spots at different wavelengths. Therefore color fringing can be observed in both digital and silver halide photography. The hypothesis that lateral chromatic aberration is the only cause of color fringing is discarded. The factors that can influence the intensity of color fringing are carefully studied, some of them being specific to digital photography. A protocol to measure color fringing with very good repeatability is described, as well as a means to predict color fringing from optical designs.
This article proposes new measurements for evaluating the image quality of a camera, particularly the reproduction of colors. The concept of gamut is usually a topic of interest, but it is much better suited to output devices than to capture devices (sensors). Moreover, it does not take other important characteristics of the camera into account, such as noise. In contrast, color sensitivity is a global measurement relating the raw noise to the spectral sensitivities of the sensor. It provides an easy ranking of cameras. For an in-depth analysis of noise vs. color rendering, the concept of Gamut SNR is introduced, describing the set of colors achievable for a given SNR (Signal-to-Noise Ratio). This representation provides a convenient visualization of which part of the gamut is most affected by noise and can be useful for camera tuning as well.
For a given noise at the photosite level and a given output color space, the spectral sensitivities of a sensor
constrain the color processing and therefore impact the level of noise in the output. In particular, this noise may
be very different from the usually documented photosite noise. A key phenomenon is the appearance of strong
correlations between channels, which makes individual-channel measures (including the classical signal-to-noise
ratio, SNR) misleading. We evaluate existing chains and isolated sensors by several indicators including the
previously developed color sensitivity. We finally apply this approach to the understanding of good spectral
sensitivities by considering hypothetical spectral sensitivities and simulating their performance.
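How those inter-channel correlations arise can be shown with standard covariance propagation (the 3x3 matrix values below are illustrative, not a real camera's color matrix): a color-correction matrix M maps raw channels to output channels, so an uncorrelated raw noise covariance S becomes M S Mᵀ, which has off-diagonal terms even though S had none.

```python
# Noise covariance propagation through a color matrix; the matrix values
# are illustrative assumptions, not a real camera calibration.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [[a[j][i] for j in range(3)] for i in range(3)]

def propagate_covariance(m, s):
    """Output noise covariance M S M^T for color matrix m, raw covariance s."""
    return mat_mul(mat_mul(m, s), transpose(m))

# An sRGB-style matrix with negative off-diagonal terms (illustrative):
M = [[1.8, -0.6, -0.2],
     [-0.4, 1.7, -0.3],
     [-0.1, -0.7, 1.8]]
S_raw = [[1.0, 0.0, 0.0],  # uncorrelated raw noise, unit variance
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]
S_out = propagate_covariance(M, S_raw)
```

The off-diagonal terms of `S_out` are nonzero, which is exactly why per-channel SNR figures can be misleading after color processing: the channels no longer fluctuate independently.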
Software-based grouping of multiplexed video based on video content, as opposed to the signal generated by multiplexers, is described. The method is based on an energy-minimization approach. The algorithm automatically determines the number of multiplexed camera views, and the frames are then grouped by camera view. The algorithm requires no threshold on the differences between camera views and does not depend on the presence of quiet zones. The method also compensates for interference noise, local and global motion, and contrast changes.
We define a variational method to perform frame fusion. The process has three steps: we first estimate the velocities and occlusions using optical flow and a spatial constraint on the velocities based on the L1 norm of the divergence. We then collect non-occluded points from the sequence and estimate their locations at a chosen time, at which we perform the fusion. From this list of points, we reconstruct the super-frame by minimizing a total variation energy that forces the super-frame to look like each frame of the sequence (after shifting) and selects among the least oscillatory solutions. We display some examples.
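A one-dimensional toy of the total-variation fusion step may help fix ideas (purely illustrative: the paper's energy also handles shifting and occlusions, and the smoothed TV term and step sizes below are assumptions): we minimize a data term tying the super-frame to each input frame plus a TV regularizer, by plain gradient descent.

```python
import math

# Toy 1-D TV fusion: minimize sum_k |u - x_k|^2 + lam * TV(u) with a
# smoothed TV term. Parameters are illustrative assumptions.

def tv_fuse(frames, lam=0.1, eps=1e-3, steps=500, lr=0.05):
    n = len(frames[0])
    # Initialize with the per-pixel mean of the frames.
    u = [sum(f[i] for f in frames) / len(frames) for i in range(n)]
    for _ in range(steps):
        grad = [0.0] * n
        # Data term: gradient of sum_k (u - x_k)^2.
        for i in range(n):
            grad[i] += sum(2 * (u[i] - f[i]) for f in frames)
        # Smoothed TV term: gradient of sum_i sqrt((u[i+1]-u[i])^2 + eps^2).
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = d / math.sqrt(d * d + eps * eps)
            grad[i] -= lam * g
            grad[i + 1] += lam * g
        u = [u[i] - lr * grad[i] for i in range(n)]
    return u
```

The data term pulls the solution toward the (registered) frames while the TV term penalizes oscillations, which is what "selects among the least oscillatory solutions" means in practice.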