Automultiscopic (no glasses, multiview) displays are becoming a viable alternative to 3-D displays with glasses.
However, since these displays require multiple views, the transmission bit rate and storage space
they demand are of concern. In this paper, we describe results of our research on the compression of still multiview images
for display on lenticular or parallax-barrier screens. In one approach, we examine compression of multiplexed
images that, unfortunately, have relatively low spatial correlation and thus are difficult to compress. We also
study compression/decompression of individual views followed by multiplexing at the receiver. However, instead
of using full-resolution views, we apply compression to band-limited and downsampled views in the so-called "N-tile
format". Using lower-resolution images is acceptable since multiplexing at the receiver involves downsampling
from full view resolution anyway. We use three standard compression techniques: JPEG, JPEG-2000 and H.264.
While both JPEG standards work with still images and can be applied directly to an N-tile image, H.264, a video
compression standard, requires N images of the N-tile format to be treated as a short video sequence. We present
numerous experimental results indicating that the H.264 approach achieves significantly better performance than
the other three approaches studied.
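The N-tile packing and receiver-side multiplexing described above can be sketched as follows. This is a minimal Python illustration only: the function names, nearest-neighbor downsampling, and the simple column-interleaving pattern are our own assumptions for exposition; actual lenticular and parallax-barrier screens use multiplexing patterns (often slanted and subpixel-level) determined by the screen geometry.

```python
import numpy as np

def make_ntile(views, tile_h, tile_w):
    """Downsample N full-resolution views and pack them side by side
    into a single N-tile image (illustrative nearest-neighbor sampling)."""
    tiles = []
    for v in views:
        h, w = v.shape[:2]
        ys = (np.arange(tile_h) * h) // tile_h
        xs = (np.arange(tile_w) * w) // tile_w
        tiles.append(v[ys][:, xs])
    return np.hstack(tiles)

def multiplex(ntile, n_views, tile_w):
    """Interleave the N tiles column-wise, as a (hypothetical, unslanted)
    lenticular screen might: output column x takes its pixels from
    view (x mod N), column (x div N) of that view."""
    tiles = [ntile[:, i * tile_w:(i + 1) * tile_w] for i in range(n_views)]
    out = np.empty_like(ntile)
    for x in range(out.shape[1]):
        out[:, x] = tiles[x % n_views][:, x // n_views]
    return out
```

Because the screen samples each view at only 1/N of the horizontal resolution anyway, compressing the band-limited N-tile image (rather than full-resolution views) loses nothing that the display could have shown.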
Intermediate view reconstruction is an essential step in content preparation for multiview 3D displays and free-viewpoint
video. Although many approaches to view reconstruction have been proposed to date, most of them share the need to first model and estimate scene depth, and then estimate the unknown view's texture using this depth and other views. The approach we present in this paper follows this path as well. First, assuming
a reliable disparity (depth) map is known between two views, we present a spline-based approach to unknown-view
texture estimation, and compare its performance with standard disparity-compensated interpolation. A distinguishing feature of the spline-based reconstruction is that all virtual views between the two known views can be reconstructed from a single disparity field, unlike in disparity-compensated interpolation. In the second part
of the paper, we concentrate on the recovery of reliable disparities especially at object boundaries. We outline
an occlusion-aware disparity estimation method that we recently proposed; it jointly computes disparities in
visible areas, inpaints disparities in occluded areas, and implicitly detects those occlusions. We then show how
to combine occlusion-aware disparity estimation with the spline-based view reconstruction presented earlier, and we
experimentally demonstrate its benefits compared to occlusion-unaware disparity-compensated interpolation.
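Standard disparity-compensated interpolation, the baseline against which the spline-based method is compared, can be sketched as follows. This is a hedged Python illustration with nearest-pixel sampling and no occlusion handling; it assumes the disparity field is expressed on the virtual view's own pixel grid, and all names are our own.

```python
import numpy as np

def interpolate_view(left, right, disp, alpha):
    """Disparity-compensated interpolation of a virtual view at position
    alpha in (0, 1) between left (alpha=0) and right (alpha=1) cameras.
    disp[y, x] is the horizontal disparity of the scene point seen at
    virtual-view pixel x: that point appears at x + alpha*d in the left
    image and at x - (1-alpha)*d in the right image."""
    h, w = left.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            d = disp[y, x]
            xl = min(max(int(round(x + alpha * d)), 0), w - 1)
            xr = min(max(int(round(x - (1 - alpha) * d)), 0), w - 1)
            # Blend the two predictions, weighting the nearer camera more.
            out[y, x] = (1 - alpha) * left[y, xl] + alpha * right[y, xr]
    return out
```

Note that under this scheme each intermediate position alpha needs its own disparity field on its own grid; this is precisely the limitation the single-disparity-field spline-based reconstruction avoids.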
The current exploration of Mars by the National Aeronautics and Space Administration (NASA) has produced a vast number of images of the planet's surface. Two rovers, "Spirit" and "Opportunity", are each equipped with a pair of high-resolution cameras, called "PanCam". While most commercial cameras are sensitive to three spectral bands, typically red (R), green (G) and blue (B), the "PanCam" is sensitive to many more bands, since it was designed to deliver additional information to geologists. This is achieved by means of a filter wheel in front of each camera lens. It turns out that slightly different filters are used in the two cameras; while the left camera is equipped with red, green and blue filters, among others, the right camera does not have a green filter on its filter wheel. Therefore, since the G component of the right image is missing, it is currently not possible to view a 3D image of the Martian surface in color. In this paper, we develop a method to reconstruct one missing color component of an image given its remaining color components and all three components of the other image of a stereo pair. The method relies on disparity-compensated prediction. In the first step, a disparity field is estimated using the two available components (R and B). In the second step, the missing component is recovered using disparity-compensated prediction from the same component (G) in the other image of the stereo pair. In ground-truth experiments, we have obtained high PSNR values of the reconstruction error, confirming the efficacy of the approach. Similar reconstructions using images transmitted by the rovers yield a comfortable 3D experience when viewed with shutter glasses.
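The two-step reconstruction can be sketched as follows. This is an illustrative Python sketch under simplifying assumptions of our own: an exhaustive block-matching estimator with a sum-of-absolute-differences (SAD) criterion on the sum of the R and B components, rectified views, and integer disparities; the function names are hypothetical and this is not the paper's actual disparity estimator.

```python
import numpy as np

def estimate_disparity(ref, tgt, block=8, max_d=16):
    """Blockwise SAD matching between two single-channel images.
    For each block of ref (the right view), search the matching block
    in tgt (the left view) shifted right by d = 0..max_d pixels."""
    h, w = ref.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            best, best_d = None, 0
            for d in range(max_d + 1):
                if x + d + block > w:
                    break
                sad = np.abs(ref[y:y+block, x:x+block] -
                             tgt[y:y+block, x+d:x+d+block]).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y:y+block, x:x+block] = best_d
    return disp

def reconstruct_green(right_r, right_b, left_rgb, block=8, max_d=16):
    """Step 1: estimate disparity from the components both views share
    (R and B). Step 2: copy the left view's G component into the right
    view's grid by disparity-compensated prediction."""
    ref = right_r.astype(float) + right_b.astype(float)
    tgt = left_rgb[..., 0].astype(float) + left_rgb[..., 2].astype(float)
    disp = estimate_disparity(ref, tgt, block, max_d)
    h, w = ref.shape
    xs = np.clip(np.arange(w)[None, :] + disp, 0, w - 1)
    return left_rgb[..., 1][np.arange(h)[:, None], xs]
```

On a synthetic rectified pair with a known constant shift, the predicted G channel matches the held-out ground truth exactly away from the image border, which is the kind of ground-truth check the PSNR experiments quantify.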