Proc. SPIE. 10219, Three-Dimensional Imaging, Visualization, and Display 2017
KEYWORDS: Image compression, Digital image processing, Visualization, Image processing, Digital filtering, Digital watermarking, Image quality, Digital imaging, Image filtering, Geometrical optics, 3D visualizations, 3D image processing
A conventional camera loses a large amount of the information obtainable from a scene: it does not record the individual rays passing through each point, but merely keeps the sum of the intensities of all rays passing through it. Plenoptic images can be exploited to provide a 3D representation of the scene, and watermarking such images can help protect their ownership. In this paper we propose a method for watermarking plenoptic images to achieve this aim. The performance of the proposed method is validated by experimental results, and a trade-off is struck between imperceptibility and robustness.
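The abstract does not specify the embedding domain, so as a minimal sketch of the imperceptibility/robustness trade-off only (not the paper's actual method), a keyed spread-spectrum watermark can be embedded additively, with the strength `alpha` trading robustness against visibility; the function names and the synthetic image are illustrative assumptions:

```python
import numpy as np

def embed_watermark(image, key, alpha=2.0):
    """Additively embed a keyed pseudo-random +/-1 pattern (spread spectrum).
    Larger alpha -> more robust detection but less imperceptible."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return image + alpha * pattern

def detect_watermark(image, key):
    """Correlation detector: regenerate the keyed pattern and correlate.
    A score near alpha suggests the watermark is present; near 0, absent."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return float(np.mean((image - image.mean()) * pattern))

# Synthetic grayscale stand-in for one plenoptic image
img = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
marked = embed_watermark(img, key=42, alpha=2.0)
# detect_watermark(marked, 42) exceeds detect_watermark(img, 42) by ~alpha
```

The detector works blind (no original image needed), which is the usual requirement for ownership verification.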
We propose to combine the Kinect and Integral-Imaging technologies for the implementation of an Integral Display. The Kinect device determines, in real time, the (x,y,z) position of the observer relative to the monitor. Thanks to its active IR technology, the Kinect provides the observer position even in dark environments. On the other hand, the SPOC 2.0 algorithm permits the calculation of microimages adapted to the observer's 3D position. The smart combination of these two concepts permits the implementation, for the first time to our knowledge, of an Integral Display that provides the observer with color 3D images of real scenes, viewed with full parallax and adapted dynamically to the observer's 3D position.
Integral imaging (InI) is an auto-stereoscopic technique for capturing and displaying 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, while choosing at will the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of views that simulates a camera travelling through the scene. The application of this method improves the quality of 3D display images and videos.
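The camera-travel effect described above can be sketched with standard view extraction: picking the same pixel (s, t) from every elemental image yields one perspective view, and sweeping (s, t) simulates camera motion. This is the textbook lightfield operation, not necessarily the paper's exact algorithm; the grid geometry below is an assumption:

```python
import numpy as np

def extract_view(integral, n_ei, ei_px, s, t):
    """Return the perspective view formed by pixel (s, t) of every
    elemental image of an n_ei x n_ei integral image (row-major layout,
    each elemental image ei_px x ei_px pixels)."""
    lf = integral.reshape(n_ei, ei_px, n_ei, ei_px)  # axes: [u, s, v, t]
    return lf[:, s, :, t]                            # one n_ei x n_ei view

# Synthetic integral image: 8x8 elemental images of 16x16 pixels each
integral = np.arange(128 * 128, dtype=float).reshape(128, 128)
# Sweeping s while t is fixed simulates a vertical camera travel
frames = [extract_view(integral, 8, 16, s, 8) for s in range(16)]
```

Each frame has only n_ei x n_ei pixels, which is why the FInI-style capture discussed later in these abstracts trades pixels per elemental image for the number of elemental images.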
We propose the fusion of two concepts that are very successful in the area of 3D imaging and sensing. Kinect technology permits the registration, in real time but with low resolution, of accurate depth maps of large, opaque, diffusing 3D scenes. Our proposal consists of transforming the sampled depth map provided by the Kinect into an array of microimages whose position, pitch, and resolution are in good accordance with the characteristics of an integral-imaging monitor. By projecting this information onto such a monitor we are able to produce 3D images with continuous perspective and full parallax.
Plenoptic cameras capture a sampled version of the map of rays emitted by a 3D scene, commonly known as the lightfield. These devices have been proposed for multiple applications, such as calculating different sets of views of the 3D scene, removing occlusions, and changing the focused plane of the scene. They can also capture images that can be projected onto an integral-imaging monitor to display 3D images with full parallax. In this contribution, we report a new algorithm for transforming the plenoptic image so as to choose which part of the 3D scene is reconstructed in front of and behind the microlenses in the 3D display process.
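Moving the reconstructed plane is commonly done by shift-and-add refocusing of the view stack: each view is shifted in proportion to its angular index before summation, and the sign of the shift selects planes in front of or behind the reference. This is a generic sketch of that standard operation, not the authors' specific transformation:

```python
import numpy as np

def refocus(views, shift):
    """Shift-and-add refocusing over a (U, V, H, W) stack of views.
    `shift` (pixels per view index) selects the plane brought into focus;
    its sign chooses planes in front of or behind the reference plane."""
    U, V, H, W = views.shape
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(shift * (u - U // 2)))
            dv = int(round(shift * (v - V // 2)))
            acc += np.roll(views[u, v], (du, dv), axis=(0, 1))
    return acc / (U * V)

views = np.random.default_rng(0).random((5, 5, 32, 32))
zero_shift = refocus(views, 0.0)  # with shift=0, this is the plain mean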
One of the differences between the near-field integral imaging (NInI) and the far-field integral imaging
(FInI), is the ratio between number of elemental images and number of pixels per elemental image. While
in NInI the 3D information is codified in a small number of elemental images (with many pixels each), in
FInI the information is codified in many elemental images (with only a few pixels each). The later codification
is similar that the one needed for projecting the InI field onto a pixelated display when aimed to
build an InI monitor. For this reason, the FInI cameras are specially adapted for capturing the InI field
with display purposes. In this contribution we research the relations between the images captured in NInI
and FInI modes, and develop the algorithm that permits the projection of NInI images onto an InI monitor.
In multi-view three-dimensional imaging, to capture the elemental images of distant objects, the use of a field-like lens
that projects the reference plane onto the microlens array is necessary. In this case, the spatial resolution of reconstructed
images is determined by the spatial density of microlenses in the array. In this paper we report a simple method,
based on the realization of double snapshots, to double the 2D pixel density of reconstructed scenes. Experiments
are reported to support the proposed approach.
An analysis and comparison of the lateral and the depth resolution in the reconstruction of 3D scenes from images obtained
either with a classical two view stereoscopic camera or with an Integral Imaging (InI) pickup setup is presented.
Since the two above systems belong to the general class of multiview imaging systems, the best analytical tool for the
calculation of lateral and depth resolution is the ray-space formalism, and the classical tools of Fourier information
processing. We demonstrate that InI is the optimum system to sampling the spatio-angular information contained in a