A common problem in 3D integral-imaging monitors is the flipping that occurs when the microimages are seen through neighboring microlenses. This effect appears when, at large viewing angles, the light rays emitted by an elemental image do not pass through the corresponding microlens. A usual solution to this problem is to insert a set of physical barriers to avoid this crosstalk. In this contribution we present a purely optical alternative to physical barriers. Our arrangement is based on the Köhler illumination concept, and prevents the rays emitted by one microimage from impinging on a neighboring microlens. The proposed system does not use additional lenses to project the elemental images, so no optical aberrations are introduced.
Determining the precise location of irradiance centroids is a key step for optical triangulation and wavefront sensing
based on wavefront slope measurements (as e.g. in Hartmann-Shack aberrometry). Since most aberrometers include
some kind of optical relay system to reimage the irradiance distributions provided by the wavefront sampling element
onto the irradiance detector, it is essential to ensure that the centroid position and momentum information is preserved
during this operation. In optical systems with ABCD diffraction kernels the centroids propagate according to an effective
geometrical optics rule. However, the presence of finite apertures partially blocking the incoming beam or non-uniform
transmittances unevenly altering its original irradiance distribution may give rise to potentially significant departures
from this simple geometrical picture. The potential magnitude of this bias makes it advisable to take proper steps to
counteract it in the design of aberrometric setups.
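The effective geometrical-optics rule mentioned above can be illustrated with a short sketch: in one transverse dimension, the first-order moments of the beam (centroid position and mean slope) transform through an ABCD system like a paraxial ray, so that ⟨x⟩_out = A⟨x⟩_in + B⟨θ⟩_in. The following Python fragment is only an illustration of that rule for an idealized, unapertured relay; the function names and the 2f–2f relay parameters are our own choices, not part of the original work.

```python
import numpy as np

def centroid(x, irradiance):
    """First moment (centroid) of a 1-D irradiance distribution."""
    return np.sum(x * irradiance) / np.sum(irradiance)

def abcd_relay(d1, f, d2):
    """ABCD matrix of a relay: free space d1, thin lens f, free space d2."""
    space = lambda d: np.array([[1.0, d], [0.0, 1.0]])
    lens = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
    return space(d2) @ lens @ space(d1)

def propagate_centroid(M, x_mean, theta_mean):
    """Geometrical rule for first-order moments: <x>_out = A<x> + B<theta>."""
    A, B = M[0]
    return A * x_mean + B * theta_mean
```

For a 2f–2f imaging relay (A = −1, B = 0) the centroid is simply inverted, as expected. A finite aperture or a non-uniform transmittance would bias the measured centroid away from this prediction, which is precisely the departure discussed above.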
Previously, we reported a digital technique for formation of real, non-distorted, orthoscopic integral images by direct
pickup. However, the technique was constrained to the case of symmetric image capture and display systems. Here, we
report a more general algorithm which allows the pseudoscopic-to-orthoscopic transformation with full control over the
display parameters, so that one can generate a set of synthetic elemental images that suits the characteristics of the
Integral-Imaging monitor and permits control over the depth and size of the reconstructed 3D scene.
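As an illustration of the pixel-mapping idea (not the full algorithm of the paper, which additionally resamples the set to arbitrary display parameters), the simplest member of this family of transformations rotates each elemental image by 180° about its own center. A minimal NumPy sketch, with an array layout chosen by us for illustration:

```python
import numpy as np

def po_convert(elemental_images):
    """Rotate every elemental image 180 degrees about its own center.

    `elemental_images` is a 4-D array (Ny, Nx, py, px): a grid of
    Ny x Nx microimages of py x px pixels each.  Reversing the two
    pixel axes of each microimage is the simplest member of the
    pixel-mapping family of pseudoscopic-to-orthoscopic conversions.
    """
    return elemental_images[:, :, ::-1, ::-1]
```

Applying the mapping twice returns the original set, as expected of a depth inversion.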
Integral imaging (InI) technology was created with the aim of providing the binocular observers of monitors, or matrix
display devices, with auto-stereoscopic images of 3D scenes. However, over the last few years the inventiveness of
researchers has led to many other interesting applications of integral imaging. Examples are the application
of InI to object recognition, the mapping of 3D polarization distributions, and the elimination of occluding signals.
One of the most interesting applications of integral imaging is the production of views focused at different depths of the
3D scene. This application is the natural result of the ability of InI to create focal stacks from a single input image. In
this contribution we present a new algorithm for this optical slicing application, and show that 3D reconstruction
with improved lateral resolution is possible.
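The focal-stack idea can be illustrated with the standard shift-and-sum computational reconstruction, in which the elemental images are superposed with a depth-dependent relative shift so that a chosen depth plane comes into focus. This is a generic sketch under our own array-layout assumptions, not the improved-resolution algorithm of the contribution.

```python
import numpy as np

def refocus(elemental, shift):
    """Shift-and-sum reconstruction of one depth slice.

    elemental: (Ny, Nx, py, px) array of elemental images.
    shift: integer pixel displacement between neighboring elemental
           images; each value of `shift` brings a different depth
           plane of the 3D scene into focus.
    """
    Ny, Nx, py, px = elemental.shape
    H = py + shift * (Ny - 1)
    W = px + shift * (Nx - 1)
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for i in range(Ny):
        for j in range(Nx):
            y0, x0 = i * shift, j * shift
            acc[y0:y0 + py, x0:x0 + px] += elemental[i, j]
            cnt[y0:y0 + py, x0:x0 + px] += 1
    return acc / np.maximum(cnt, 1)  # average where images overlap
```

Sweeping `shift` over a range of values produces the focal stack: out-of-plane details average out while the in-focus plane reinforces.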
Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for
an optimum 3D imaging and display technique is a hot topic that is attracting substantial research effort. Ideally,
3D monitors should provide observers with different perspectives of a 3D scene as they simply vary their head position.
Three-dimensional imaging techniques have the potential to establish a future mass-market in the fields of entertainment
and communications. Integral imaging (InI), which can capture true 3D color images, has been regarded as the
right technology for presenting 3D content to audiences of more than one person.
Due to its advanced degree of development, InI technology could be ready for commercialization in the coming
years. This development is the result of a strong research effort made over the past few years by many groups.
Since Integral Imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the
University of Valencia has been to carry out a thorough study of the principles that govern its operation. It is remarkable
that some of these principles were first recognized and characterized by our group. Other contributions of our
research have addressed some of the classical limitations of InI systems, such as the limited depth of
field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the
production of 3D images with continuous relief, or the limited range of viewing angles of InI monitors.
We present an overview of a method that uses Independent Component Analysis (ICA) and the 3D Integral Imaging (II)
technique to recognize 3D objects at different orientations. This method has been successfully applied to the recognition
and classification of 3D scenes.
Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since
the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two
kinds of procedure have been used: digital and optical. The "3D Imaging and Display Group" at the University
of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing.
Among other achievements, our Group has proposed annular amplitude modulation to enlarge the
depth of field, dynamic focusing to reduce the facet-braiding effect, and the TRES and MATRES devices to
enlarge the viewing angle.
Integral imaging provides three-dimensional (3D) images. This technique works with incoherent
light and requires neither special glasses nor stabilization techniques. Here we present relay systems
for both the acquisition and the display of 3D images. Some other important challenges are revisited.
Integral imaging systems are imaging devices that provide 3D images of 3D objects. When integral imaging systems work in their standard configuration they provide reconstructed images that are pseudoscopic, distorted, and with very poor depth of field. Over the last four years our group has been working on solutions to these drawbacks. Here we present a hybrid technique which, by combining an optical method with digital processing, allows the reconstruction of orthoscopic, undistorted, long-focal-depth integral images. Simulated and real imaging experiments are presented to support our proposal.
Integral imaging systems are imaging devices that provide 3D images of 3D objects. When integral imaging systems work in their standard configuration the reconstructed images they provide are pseudoscopic; that is, reversed in depth. In this paper we present a technique for the formation of real, undistorted, orthoscopic integral images by direct pickup. The technique is based on the use of a proper relay system and a global mapping of the pixels of the elemental-images set. Simulated imaging experiments are presented to support our proposal.
Integral imaging systems are imaging devices that provide 3D images of 3D objects. When integral imaging systems work in their standard configuration the reconstructed images they provide are pseudoscopic; that is, reversed in depth. In this paper we present a technique for the formation of real, undistorted, orthoscopic integral images by direct pickup. The technique is based on a global mapping of the pixels of an elemental-images set. Simulated imaging experiments are presented.
One of the main drawbacks of integral imaging systems is their limited depth of field. With the current state of sensor technology such limitation is imposed by the pixelated structure of cell detectors. Thus, the depth of field can only be optimized by proper selection of system parameters. However, sensor technology is nowadays developing rapidly. As a result, it is certain that in the near future the number of pixels per elemental image will be high enough not to limit the system resolution. In this not-too-distant context, new ideas should be applied to improve the depth of field of integral imaging systems. Here we propose a new method to significantly extend the depth of field. The technique is based on the combined benefits of a proper amplitude modulation of the microlenses and the application of deconvolution tools.
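The deconvolution step can be sketched, for illustration only, as a Wiener inverse filter applied to one microimage; the PSF model, the noise-to-signal ratio, and the function names below are our assumptions, not the paper's actual processing chain.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    """Wiener deconvolution of a blurred microimage.

    `psf` is the (assumed known) point-spread function of one
    amplitude-modulated microlens, zero-padded to the image size and
    centered.  `nsr` is an assumed noise-to-signal power ratio that
    regularizes the inverse filter.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))     # transfer function
    G = np.fft.fft2(image)                     # blurred-image spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

A larger `nsr` suppresses noise amplification at the cost of residual blur; in practice this parameter would be matched to the detector noise.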
One of the main challenges in integral imaging is to overcome the limited depth of field. Although it is widely assumed that this limitation is mainly imposed by diffraction due to lenslet imaging, we show that the most restrictive factor is the pixelated structure of the sensor (CCD). In this context, we take advantage of these sensor constraints and demonstrate that, by proper binary amplitude modulation of the pickup microlenses, the depth of field can be substantially improved with no deterioration of lateral resolution.