In this paper, transparent flow surface reconstruction based on wavefront distortion is investigated. A camera lens focuses the image formed by the micro-lens array onto the camera imaging plane. The irradiance of the captured image is transformed to the frequency domain, and the x and y spatial components are separated. A rigid spectral translation followed by low-pass filtering isolates a single frequency component of the image intensity. The index of refraction is estimated from the inverse Fourier transform of this spatial frequency spectrum. The proposed method is evaluated on synthetic data with randomly generated index-of-refraction values and applied to visualize volumetric fuel-injection data.
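The shift-and-filter step described above can be sketched in one dimension. The code below is a minimal illustration, not the paper's implementation: it assumes a sinusoidal background carrier at a known spatial frequency `f0`, phase-modulates it with a synthetic distortion, and recovers that modulation by translating the carrier to baseband and low-pass filtering.

```python
import numpy as np

# 1-D illustration (hypothetical signal, not the paper's data): a sinusoidal
# background carrier is phase-modulated by a synthetic refractive distortion.
N = 1000
x = np.arange(N)
f0 = 0.1                                        # carrier frequency, cycles/pixel
phase_true = 0.5 * np.sin(2 * np.pi * x / N)    # slowly varying distortion
irradiance = 1.0 + np.cos(2 * np.pi * f0 * x + phase_true)

# Rigid spectral translation: shift the carrier component to baseband.
shifted = irradiance * np.exp(-2j * np.pi * f0 * x)

# Low-pass filter in the frequency domain to keep a single component.
spec = np.fft.fft(shifted)
spec[np.abs(np.fft.fftfreq(N)) > 0.05] = 0.0    # ideal low-pass, cutoff < f0
phase_rec = np.angle(np.fft.ifft(spec))         # recovered distortion phase
```

The angle of the filtered baseband signal recovers the distortion phase; in the paper, this per-pixel phase is what feeds the index-of-refraction estimate.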
This paper discusses a method to reconstruct a transparent flow surface from a single camera shot with the aid of a micro-lens array. An intentionally prepared high-frequency background placed behind the refractive flow is captured, and a curl-free optical flow algorithm is applied between pairs of images taken by different micro-lenses. The computed raw optical flow vector is a blend of motion parallax and the background deformation caused by the underlying flow. Subtracting the motion parallax, obtained by calibration, from the total optical flow vector yields the background deformation vector. The deflection vectors in each image are then used to reconstruct the flow profile. A synthetic data set of fuel injection was used to evaluate the accuracy of the proposed algorithm, and good agreement was achieved between the test and reconstructed data. Finally, real light field data of hot air created by a lighter flame is used to reconstruct and show a hot-air plume surface.
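The parallax-subtraction and curl-free steps can be sketched as follows. This is an assumption-laden toy, not the paper's solver: the deformation field is synthesized as the gradient of a smooth potential (hence curl-free), the parallax is taken as a constant per lens pair, and the curl-free constraint is enforced by an FFT-based Helmholtz projection with periodic boundaries.

```python
import numpy as np

def curl_free_projection(u, v):
    """Project a 2-D vector field onto its curl-free (gradient) part via an
    FFT-based Helmholtz decomposition, assuming periodic boundaries."""
    U, V = np.fft.fft2(u), np.fft.fft2(v)
    ky = np.fft.fftfreq(u.shape[0])[:, None]
    kx = np.fft.fftfreq(u.shape[1])[None, :]
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                               # avoid division by zero at DC
    dot = (kx * U + ky * V) / k2
    Ucf, Vcf = kx * dot, ky * dot
    Ucf[0, 0], Vcf[0, 0] = U[0, 0], V[0, 0]      # keep the mean translation
    return np.fft.ifft2(Ucf).real, np.fft.ifft2(Vcf).real

# Synthetic deformation field: the gradient of a smooth potential, so it is
# curl-free by construction, plus a constant motion parallax per lens pair.
H, W = 64, 64
y, x = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
gu = (2 * np.pi / W) * np.cos(2 * np.pi * x / W) * np.cos(2 * np.pi * y / H)
gv = -(2 * np.pi / H) * np.sin(2 * np.pi * x / W) * np.sin(2 * np.pi * y / H)
raw_u, raw_v = 3.0 + gu, -1.5 + gv               # what the flow solver returns

# Subtract the calibrated parallax, then enforce the curl-free constraint.
du, dv = curl_free_projection(raw_u - 3.0, raw_v - (-1.5))
```

Because the synthetic deformation is already a gradient field, the projection leaves it unchanged; on real flow estimates, the same projection suppresses the rotational noise component before surface integration.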
This paper focuses on resolving long-standing limitations of parallax barriers by applying formal optimization
methods. We consider two generalizations of conventional parallax barriers. First, we consider general two-layer
architectures, supporting high-speed temporal variation with arbitrary opacities on each layer. Second,
we consider general multi-layer architectures containing three or more light-attenuating layers. This line of
research has led to two new attenuation-based displays. The High-Rank 3D (HR3D) display contains a stacked
pair of LCD panels; rather than using heuristically-defined parallax barriers, both layers are jointly-optimized
using low-rank light field factorization, resulting in increased brightness, refresh rate, and battery life for mobile
applications. The Layered 3D display extends this approach to multi-layered displays composed of compact
volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field when illuminated
by a uniform backlight. We further introduce Polarization Fields as an optically and computationally efficient
extension of Layered 3D to multi-layer LCDs. Together, these projects reveal new generalizations to
parallax barrier concepts, enabled by the application of formal optimization methods to multi-layer attenuation-based
designs in a manner that uniquely leverages the compressive nature of 3D scenes for display applications.
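The joint optimization behind HR3D can be sketched in highly simplified form. The toy code below is a stand-in for the papers' method (which operates on full 4D light fields with display-specific constraints): it factors a 2-D "light-field matrix", whose rows index front-layer pixels and columns rear-layer pixels, into nonnegative low-rank factors by standard multiplicative NMF updates, with each rank-1 term playing the role of one time-multiplexed mask pair.

```python
import numpy as np

def lowrank_masks(L, rank, iters=500, seed=0):
    """Very simplified sketch: factor a nonnegative matrix L into F @ G with
    nonnegative rank-`rank` factors via multiplicative NMF updates. Each
    column of F (row of G) stands in for one time-multiplexed mask pair."""
    rng = np.random.default_rng(seed)
    m, n = L.shape
    F = rng.random((m, rank))
    G = rng.random((rank, n))
    eps = 1e-9                                   # guard against divide-by-zero
    for _ in range(iters):
        F *= (L @ G.T) / (F @ G @ G.T + eps)
        G *= (F.T @ L) / (F.T @ F @ G + eps)
    return F, G

# Demo on an exactly rank-3 nonnegative target (synthetic, for illustration).
rng = np.random.default_rng(1)
L = rng.random((20, 3)) @ rng.random((3, 20))
F, G = lowrank_masks(L, rank=3)
rel_err = np.linalg.norm(L - F @ G) / np.linalg.norm(L)
```

The rank of the factorization corresponds to the number of time-multiplexed frames, which is why scenes with compressible (low-rank) light fields can be shown brighter and at higher refresh rates than with a fixed parallax barrier.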
Many visual difference predictors (VDPs) have used basic psychophysical data (such as ModelFest) to calibrate the
algorithm parameters and to validate their performance. However, the basic psychophysical data often do not contain
a sufficient number and variety of stimuli to test the more complex components of a VDP. In this paper we calibrate the
Visual Difference Predictor for High Dynamic Range images (HDR-VDP) using radiologists' experimental data for
JPEG2000 compressed CT images which contain complex structures. Then we validate the HDR-VDP in predicting the
presence of perceptible compression artifacts. 240 CT-scan images were encoded and decoded using JPEG2000
compression at four compression ratios (CRs). Five radiologists independently determined whether each image
pair (original and compressed) was distinguishable. A threshold CR for each image, at which
50% of radiologists would detect compression artifacts, was estimated by fitting a psychometric function. The CT
images compressed at the threshold CRs were used to calibrate the HDR-VDP parameters and to validate its prediction
accuracy. Our results showed that the HDR-VDP calibrated for the CT image data gave much better predictions than the
HDR-VDP calibrated to the basic psychophysical data (ModelFest + contrast masking data for sine gratings).
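The threshold-estimation step can be illustrated with a toy fit. The numbers below are invented for illustration (not the radiologists' data): detection proportions at four hypothetical CRs are fitted with a logistic psychometric function by a coarse least-squares grid search, and the 50% point is read off as the threshold CR.

```python
import numpy as np

def psychometric(cr, threshold, slope):
    """Logistic psychometric function: P(detect artifacts) at ratio cr."""
    return 1.0 / (1.0 + np.exp(-(cr - threshold) / slope))

# Invented example data: fraction of readers detecting artifacts per CR.
crs = np.array([4.0, 8.0, 16.0, 32.0])
detected = np.array([0.0, 0.2, 0.8, 1.0])

# Coarse least-squares grid search over threshold and slope parameters.
ts = np.linspace(4.0, 32.0, 281)
ss = np.linspace(0.5, 10.0, 96)
sse, threshold, slope = min(
    (np.sum((psychometric(crs, t, s) - detected) ** 2), t, s)
    for t in ts for s in ss
)
# `threshold` is the CR at which 50% of readers would detect artifacts.
```

By the logistic parameterization, the fitted `threshold` is directly the 50%-detection CR; the images compressed at these per-image thresholds are what calibrate the HDR-VDP.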
Defocus imaging techniques, involving the capture and reconstruction of purposely out-of-focus images, have
recently become feasible due to advances in deconvolution methods. This paper evaluates the feasibility of
defocus imaging as a means of increasing the effective dynamic range of conventional image sensors. Blurring
operations spread the energy of each pixel over the surrounding neighborhood; bright regions transfer energy to
nearby dark regions, reducing dynamic range. However, there is a trade-off between image quality and dynamic
range inherent in all conventional sensors.
The approach involves optically blurring the captured image by turning the lens out of focus, modifying that
blurred image with a filter inserted into the optical path, then recovering the desired image by deconvolution.
We analyze the properties of the setup to determine whether any combination can produce a dynamic range
reduction with acceptable image quality. Our analysis considers both properties of the filter to measure local
contrast reduction, as well as the distribution of image intensity at different scales as a measure of global contrast
reduction. Our results show that while combining state-of-the-art aperture filters and deconvolution methods
can reduce the dynamic range of the defocused image, providing higher image quality than previous methods,
rarely does the loss in image fidelity justify the improvements in dynamic range.
High Dynamic Range displays offer higher brightness, higher contrast, better color reproduction and lower power
consumption compared to conventional displays available today. In addition to these benefits, it is possible to leverage
the unique design of HDR displays to overcome many of the calibration and lifetime degradation problems of liquid
crystal displays, especially those using light emitting diodes. This paper describes a combination of sensor mechanisms
and algorithms that reduce luminance and color variation for both HDR and conventional displays even with the use of
highly variable light elements.
We describe the design of a very high resolution, low-cost scan camera for use in image-based modeling and rendering,
cultural heritage projects, and professional digital photography. Our camera can acquire black-and-white, color, and near-infrared
images with a resolution of over 122 million pixels and can be readily built from off-the-shelf components for less
than $1200. We discuss the construction of the system as well as color calibration and noise removal. Finally, we obtain
quantitative measurements of the light sensitivity and the optical resolution of our camera and compare the image quality
to a commercial digital SLR camera.
SC772: High Dynamic Range Techniques: From Acquisition to Display
This course is motivated by the tremendous recent progress in the development and accessibility of high dynamic range (HDR) technology, which creates many interesting opportunities and challenges. The course presents a complete pipeline for HDR image and video processing, from acquisition, through compression and quality evaluation, to display. Successful examples of the use of HDR technology in research setups and industrial applications are also provided. Where needed, relevant background information on human perception is given, enabling a better understanding of the design choices behind the discussed algorithms and HDR equipment.