We propose a system that uses depth information for video stabilization. The system uses 2D homographies as frame-pair transforms, estimated from keypoints at a depth of interest. This makes the estimation more robust, as the points lie approximately on a plane. The depth of interest can be determined automatically from the depth histogram, inferred from user input such as tap-to-focus, or selected directly by the user (tap-to-stabilize). The proposed system can stabilize videos on the fly in a single pass and is especially suited for mobile phones with multiple cameras that can compute depth maps automatically during image acquisition.
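A minimal sketch of the core estimation step, assuming matched keypoints with per-point depths are already available. The function names, the relative depth tolerance, and the plain least-squares fit (no RANSAC) are all illustrative simplifications, not the system's actual implementation:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: least-squares homography mapping src -> dst.
    src, dst: (N, 2) arrays of matched point coordinates, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last right-singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def homography_at_depth(src, dst, depth, depth_of_interest, tol=0.1):
    """Keep only keypoints near the depth of interest (approximately planar),
    then fit the frame-pair homography. `tol` is a relative depth tolerance
    (an assumption for this sketch)."""
    mask = np.abs(depth - depth_of_interest) <= tol * depth_of_interest
    if mask.sum() < 4:  # a homography needs at least 4 correspondences
        return None
    return fit_homography(src[mask], dst[mask])
```

Restricting the fit to one depth plane is what makes a single homography a valid frame-pair model: points off the plane would violate the planar assumption and corrupt the estimate.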
We present a methodology to compare image sensors with traditional Bayer RGB layouts to sensors with alternative
layouts containing white pixels. We focused on the sensors’ resolving powers, which we measured in the form of a
modulation transfer function for variations in both luma and chroma channels. We present the design of the test chart,
the acquisition of images, the image analysis, and an interpretation of results. We demonstrate the approach using the
example of two sensors that differ only in their color filter arrays. We confirmed that the sensor with white pixels and
the corresponding demosaicing result in a higher resolving power in the luma channel, but a lower resolving power in
the chroma channels when compared to the traditional Bayer sensor.
Skin colors are important for a broad range of imaging applications to assure quality and naturalness. We discuss the impact of various metadata on skin colors in images, i.e., how the presence of a metadata attribute influences the expected skin color distribution for a given image. For this purpose, we employ a statistical framework to automatically build color models from image datasets crawled from the web. We assess both technical and semantic metadata and show that semantic metadata has a more significant impact. This suggests that semantic metadata holds important cues for the processing of skin colors. Further, we demonstrate that the refined skin color models from our automatic framework improve the accuracy of skin detection.
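As one simple instance of such a statistical color model (the paper's framework is richer and conditioned on metadata), a single Gaussian can be fit to skin pixels in normalized rg chromaticity; all names below are illustrative:

```python
import numpy as np

def fit_skin_model(skin_rgb):
    """Fit a single-Gaussian skin-color model in normalized rg chromaticity.
    skin_rgb: (N, 3) array of RGB values sampled from skin regions.
    Returns the mean chromaticity and inverse covariance."""
    s = skin_rgb.sum(axis=1, keepdims=True) + 1e-12
    rg = (skin_rgb / s)[:, :2]                # (r, g) chromaticities
    mu = rg.mean(axis=0)
    cov = np.cov(rg.T) + 1e-9 * np.eye(2)     # regularize for invertibility
    return mu, np.linalg.inv(cov)

def skin_score(rgb, mu, icov):
    """Mahalanobis distance of a pixel's chromaticity to the skin model;
    smaller means more skin-like under the model."""
    s = rgb.sum(axis=-1, keepdims=True) + 1e-12
    d = (rgb / s)[..., :2] - mu
    return np.einsum('...i,ij,...j->...', d, icov, d)
```

Normalizing to chromaticity discards overall intensity, which makes such a model less sensitive to illumination level; thresholding the score yields a basic skin detector that refined, metadata-aware models can improve upon.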
We present a novel framework for automatically determining whether or not to apply black point compensation
(BPC) in image reproduction. Visually salient objects have a larger influence on determining image quality
than the number of dark pixels in an image, and thus should drive the use of BPC. We propose a simple and
efficient algorithmic implementation to determine when to apply BPC based on low-level saliency estimation.
We evaluate our algorithm with a psychophysical experiment on an image data set printed with or without BPC
on a Canon printer. We find that our algorithm correctly predicts the observers' preferences in all cases
where the saliency maps are unambiguous and accurate.
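A toy decision rule in the spirit of this abstract: recommend BPC when a substantial share of the saliency mass falls on dark pixels, whose detail BPC would preserve. Both thresholds and all names are illustrative, not the paper's actual values or algorithm:

```python
import numpy as np

def should_apply_bpc(image_gray, saliency, dark_thresh=0.1, frac_thresh=0.2):
    """Decide whether to apply black point compensation from a saliency map.
    image_gray and saliency: arrays scaled to [0, 1].
    Returns True when dark pixels carry a large share of the saliency mass,
    i.e., when visually important content lives in the shadows."""
    dark = image_gray < dark_thresh
    share = saliency[dark].sum() / max(saliency.sum(), 1e-12)
    return bool(share > frac_thresh)
```

The key idea from the abstract survives even in this sketch: the decision is driven by where salient content sits, not by how many dark pixels the image contains overall.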
The capacity of a printing system to accurately reproduce details has an impact on the quality of printed images. The ability of a system to reproduce details is captured in its modulation transfer function (MTF). In the first part of this work, we compare three existing methods to measure the MTF of a printing system. After a thorough investigation, we select the method from Jang and Allebach and propose to modify it. We demonstrate that our proposed modification improves the measurement precision and the simplicity of implementation. Then we discuss the advantages and drawbacks of the different methods depending on the intended usage of the MTF and why Jang and Allebach's method best matches our needs. In the second part, we propose to improve the quality of printed images by compensating for the MTF of the printing system. The MTF is adaptively compensated in the Fourier domain, depending both on frequency and local mean values. Results of a category judgment experiment show significant improvement as the printed MTF-compensated images obtain the best scores.
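As a rough illustration of the frequency-domain compensation (ignoring the local-mean adaptivity described above), each spatial frequency can be boosted by the inverse of the measured MTF, with the gain capped to bound noise amplification. The MTF function and cap value below are illustrative assumptions, not the paper's measured data:

```python
import numpy as np

def compensate_mtf(image, mtf_of_freq, max_gain=20.0):
    """Pre-compensate an image for a system MTF by inverse filtering in the
    Fourier domain. `mtf_of_freq` maps radial frequency in cycles/sample
    (0 to ~0.5) to an MTF value in (0, 1]. The gain is capped at `max_gain`
    so that near-zero MTF values do not blow up noise."""
    F = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    radial = np.hypot(fx, fy)                      # radial spatial frequency
    gain = np.minimum(1.0 / mtf_of_freq(radial), max_gain)
    return np.real(np.fft.ifft2(F * gain))
```

Printing the compensated image then cancels (approximately) the attenuation the printer applies, which is the mechanism behind the improved scores reported above.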
In this paper we compare three existing methods to measure the Modulation Transfer Function (MTF) of a
printing system. Although all three methods use very distinct approaches, the MTF values computed by two of
these methods strongly agree, lending credibility to these methods. Additionally, we propose an improvement to
one of these two methods, initially proposed by Jang & Allebach. We demonstrate that our proposed modification
improves the measurement precision and the simplicity of implementation. Finally, we discuss the pros and cons of
the methods depending on the intended usage of the MTF.
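The core of a sinusoidal-patch measurement such as Jang and Allebach's can be sketched as follows: compare the modulation surviving in the printed-and-scanned patch against the modulation of the sinusoid placed on the test page. The function names, the row averaging, and the use of a simple min/max modulation estimate are our simplifications of the actual method:

```python
import numpy as np

def modulation(signal):
    """Michelson modulation (max - min) / (max + min) of a 1-D trace."""
    lo, hi = signal.min(), signal.max()
    return (hi - lo) / (hi + lo)

def mtf_at_frequency(captured_patch, input_modulation):
    """MTF at one spatial frequency: modulation measured in the captured
    sinusoidal patch, normalized by the modulation of the sinusoid that was
    printed on the chart. captured_patch: 2-D array, sinusoid along rows."""
    profile = captured_patch.mean(axis=0)   # average rows to suppress noise
    return modulation(profile) / input_modulation
```

Repeating this over a series of patches at different frequencies (and, for an adaptive model, different local mean levels) traces out the full MTF curve.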
Preliminary experiments have shown that the quality of printed images depends on the capacity of the printing
system to accurately reproduce details.<sup>1</sup> We propose to improve the quality of printed images by compensating
for the Modulation Transfer Function (MTF) of the printing system. The MTF of the printing system is
measured using the method proposed by Jang and Allebach,<sup>2</sup> in which test pages consisting of series of patches
with different 1D sinusoidal modulations (modified to improve the accuracy of the results<sup>3</sup>) are printed, scanned
and analyzed. Then the MTF is adaptively compensated in the Fourier domain, depending both on frequency
and local mean values. Results of a category judgment experiment show significant improvement as the printed
MTF-compensated images obtain the best scores.
We explore two recent methods for measuring the Modulation Transfer Function of a printing system.<sup>1,2</sup> We
investigate the dependency of the measurement on the amplitude of the sinusoidal patches used in the method proposed
in<sup>1</sup> and show that for amplitudes that are too small the MTF measurement is not trustworthy. For the method proposed
in<sup>2</sup> we discuss the underlying theory, in particular the use of a significance test for a statistical analysis.
Finally, we compare both methods with respect to our application, the processing and printing of photographic
images.