We propose a complete digital camera workflow to capture and render
high dynamic range (HDR) static scenes, from RAW sensor data to an
output-referred encoded image. In traditional digital camera
processing, demosaicing is one of the first operations done after
scene analysis. It is followed by rendering operations, such as
color correction and tone mapping. In our workflow, which is based
on a model of retinal processing, most of the rendering steps are
performed before demosaicing. This reduces the complexity of the
computation, as only one third of the pixels are processed. This is
especially important because our tone mapping operator applies both local and
global tone corrections, which are usually needed to render high dynamic range
scenes well. Our algorithms efficiently process HDR images with
different keys and different content.
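As a toy illustration of the ordering idea, a global tone curve can be applied directly to the Bayer mosaic, where each pixel holds a single sensor value, before any demosaicing. The log-style curve and the 12-bit input range below are assumptions for illustration, not the paper's retinal model, which also includes local corrections.

```python
import numpy as np

def tone_map_cfa(raw, eps=1e-6):
    """Apply a simple global tone curve directly to a Bayer CFA mosaic.

    `raw` holds one sensor value per pixel, so tone mapping here touches
    one third of the values a demosaiced RGB image would contain. The
    compressive log-style curve is a toy stand-in for the paper's operator.
    """
    raw = raw.astype(np.float64)
    scaled = raw / (raw.max() + eps) * 255.0
    return np.log1p(scaled) / np.log1p(255.0)  # maps values into [0, 1]

# hypothetical 12-bit RAW mosaic; demosaicing would follow this step
mosaic = np.linspace(0, 4095, 16).reshape(4, 4)
out = tone_map_cfa(mosaic)
```

Because the curve here is global and monotonic, applying it before or after demosaicing gives similar results; the computational saving matters most for the local corrections the paper describes.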
If multiple images of a scene are available instead of a single image, we can use the additional information
conveyed by the set of images to generate a higher quality image. This can be done along multiple dimensions.
Super-resolution algorithms use a set of shifted and rotated low resolution images to create a high resolution
image. High dynamic range imaging techniques combine images with different exposure times to generate an
image with a higher dynamic range. In this paper, we present a novel method to combine both techniques and
construct a high resolution, high dynamic range image from a set of shifted images with varying exposure times.
We first estimate the camera response function, and convert each of the input images to an exposure invariant
space. Next, we estimate the motion between the input images. Finally, we reconstruct a high resolution, high
dynamic range image by interpolating from the non-uniformly sampled pixels. Applications of such an
approach can be found in various domains, such as surveillance and consumer digital cameras.
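The conversion to an exposure-invariant space can be sketched as follows. This sketch assumes an idealized power-law camera response, whereas the paper estimates the actual response function from the input set; the gamma value and exposure times are illustrative.

```python
import numpy as np

def to_exposure_invariant(z, exposure_time, gamma=2.2):
    """Map pixel values z in [0, 1] to relative scene radiance.

    Assumes an idealized response z = (E * t)^(1/gamma), where E is the
    scene radiance and t the exposure time. Inverting the response and
    dividing by t yields a value comparable across different exposures.
    """
    return np.power(np.asarray(z, dtype=np.float64), gamma) / exposure_time

gamma = 2.2
# simulated observations of the same scene point (E = 1.0) at two exposures
z_short = (1.0 * 0.01) ** (1 / gamma)
z_long = (1.0 * 0.04) ** (1 / gamma)
e_short = to_exposure_invariant(z_short, 0.01)
e_long = to_exposure_invariant(z_long, 0.04)
# both estimates recover the same relative radiance
```

Once all input images live in this common space, motion estimation and the non-uniform interpolation can operate on radiometrically consistent values.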
Proc. SPIE. 6492, Human Vision and Electronic Imaging XII
KEYWORDS: Image processing algorithms and systems, Light emitting diodes, Image segmentation, Linear filtering, LCDs, Image filtering, High dynamic range imaging, Associative arrays, Binary data, RGB color model
We address the problem of re-rendering to high dynamic range (HDR) displays images that were originally
tone-mapped to standard displays. Because these new HDR displays have a much larger dynamic range than standard
displays, an image rendered for a standard monitor is likely to look too bright when shown on an HDR monitor.
Moreover, because of the operations performed during capture and rendering to standard displays, the specular
highlights are likely to have been clipped or compressed, which causes a loss of realism. We propose a tone
scale function, focused on the representation of specular highlights, for re-rendering images that were first
tone-mapped to standard displays. The shape of the tone scale function depends on the segmentation of the input image
into its diffuse and specular components. In this article, we describe a method to perform this segmentation
automatically. Our method detects specular highlights by using two low-pass filters of different sizes combined
with morphological operators. The results show that our method successfully detects small and medium-sized
specular highlights. The locations of specular highlights define a mask used for the construction of the tone scale
function. We then propose two ways of applying the tone scale: a global version that applies the same curve
to each pixel in the image, and a local version that uses the spatial information given by the mask to apply the
tone scale differently to diffuse and specular pixels.
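The two-filter detection idea can be sketched as follows: a pixel that stays bright under a small blur but clearly exceeds a large-neighborhood average is flagged as specular, and morphological operators clean the resulting mask. The filter sizes and threshold factor below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy import ndimage

def detect_speculars(lum, sigma_small=2.0, sigma_large=15.0, k=1.2):
    """Detect small and medium-sized specular highlights in a luminance image.

    A sketch of the two-low-pass approach: compare a small-scale blur
    against a scaled large-scale blur, then apply morphological opening
    and closing to remove isolated pixels and fill small holes.
    """
    small = ndimage.gaussian_filter(lum, sigma_small)
    large = ndimage.gaussian_filter(lum, sigma_large)
    mask = small > k * large
    mask = ndimage.binary_opening(mask)  # drop isolated false positives
    mask = ndimage.binary_closing(mask)  # fill small holes in highlights
    return mask

# synthetic test image: diffuse background with one small bright highlight
lum = np.full((64, 64), 0.2)
lum[30:35, 30:35] = 1.0
mask = detect_speculars(lum)
```

Comparing two smoothing scales rather than thresholding raw intensity makes the detector respond to locally bright spots instead of globally bright diffuse regions.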
Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is because the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique: either details in the dim room will be hidden in shadow, or the objects seen through the window will be too bright. The image has to be processed locally to resemble more closely what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation. We take inspiration from a model of color vision called Retinex, which determines the perceived color from the spatial relationships of the captured signals and has been used as a computational model for image rendering. In this article, we propose a new Retinex-inspired solution based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning, which makes the method more flexible than existing ones. The presented results show that our method suitably enhances high dynamic range images.
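The single-filter idea can be sketched as a center/surround log ratio on the luminance channel, with the surround size derived from the image dimensions to mimic image-dependent parameters. The Gaussian surround, the 5% size rule, and the final rescaling are assumptions for illustration; the paper's filter and parameter rules differ.

```python
import numpy as np
from scipy import ndimage

def retinex_enhance(lum, sigma=None, eps=1e-6):
    """Retinex-inspired enhancement using a single filter on luminance.

    Each pixel is compared (in the log domain) to a Gaussian-weighted
    surround, so dim regions next to bright ones are lifted locally.
    When sigma is None it is derived from the image size, so no manual
    parameter tuning is needed.
    """
    lum = lum.astype(np.float64)
    if sigma is None:
        sigma = 0.05 * max(lum.shape)  # image-dependent surround size
    surround = ndimage.gaussian_filter(lum, sigma)
    out = np.log(lum + eps) - np.log(surround + eps)
    # rescale to [0, 1] for display
    return (out - out.min()) / (out.max() - out.min() + eps)

# synthetic high dynamic range luminance ramp (~1:100 contrast)
img = np.outer(np.linspace(0.01, 1.0, 32), np.ones(32))
enhanced = retinex_enhance(img)
```

Operating on the luminance channel alone keeps chromatic relationships intact; the color channels would be rebuilt from the enhanced luminance afterwards.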