Tensor displays are an option in glasses-free three-dimensional (3-D) display technology. To decompose the light-field information that the system must represent, an initial solution has to be chosen. We have analyzed the impact of this initial guess on the multiplicative update rules in terms of peak signal-to-noise ratio, and we propose a method based on depth-map estimation from the input light field. Simulation results were obtained and compared with previous literature. In our experiments, the initial values have a large influence on the results and on convergence to a local minimum. The quality of the output stabilizes after a certain number of iterations, suggesting that a limit on the iteration count should be imposed. We show that the proposed methods outperform the pre-existing ones.
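As an illustration of where the initial guess enters, the following is a minimal sketch of multiplicative updates for a nonnegative two-factor decomposition, with the layered light-field structure of a real tensor display collapsed into a matrix product; the function names, the two-factor simplification, and the PSNR helper are ours, not the paper's code.

    import numpy as np

    def multiplicative_updates(T, A0, B0, iters=50):
        # Lee-Seung style multiplicative updates for T ~= A @ B.
        # A0, B0 are the initial guesses whose influence the abstract studies;
        # a depth-map-based initialization would replace a random A0, B0 here.
        A, B = A0.copy(), B0.copy()
        eps = 1e-12                          # avoid division by zero
        for _ in range(iters):
            A *= (T @ B.T) / (A @ B @ B.T + eps)
            B *= (A.T @ T) / (A.T @ A @ B + eps)
        return A, B

    def psnr(ref, est, peak=1.0):
        # Peak signal-to-noise ratio used to score the reconstruction.
        mse = np.mean((ref - est) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

Because the updates are multiplicative, any entry initialized to zero stays zero, which is one concrete way the initialization constrains the local minimum that is reached.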
The discrete Radon transform (DRT) computes, with linearithmic complexity, the sums of pixels along a set of discrete lines covering all possible slopes and intercepts in an image. In 2006, a method was proposed to compute the inverse DRT that remains exact and fast in spite of being iterative. In this work, the DRT pair is used to build a Ridgelet and a Curvelet transform that perform focus measurement of an image. The shape-from-focus approach based on this DRT pair is then applied to a focal stack to create a depth map of the scene.
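A minimal shape-from-focus sketch over a focal stack is shown below; it uses a modified-Laplacian focus measure as a simple stand-in for the DRT-based Ridgelet/Curvelet measure proposed in the paper, and all names are illustrative.

    import numpy as np
    from scipy.ndimage import convolve, uniform_filter

    def focus_measure(img):
        # Modified Laplacian: strong response where the image is sharp.
        k = np.array([[0., 0., 0.], [-1., 2., -1.], [0., 0., 0.]])
        ml = np.abs(convolve(img, k)) + np.abs(convolve(img, k.T))
        return uniform_filter(ml, size=9)    # aggregate over a local window

    def shape_from_focus(stack):
        # stack: (n_slices, H, W) focal stack -> per-pixel index of the
        # best-focused slice, i.e. a quantized depth map of the scene.
        fm = np.stack([focus_measure(s) for s in stack])
        return fm.argmax(axis=0)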
In this paper, we use information from the light field to obtain a distribution map of the wavefront phase. This distribution is associated with the changes in refractive index that govern the propagation of light through a heterogeneous or turbulent medium. By measuring the wavefront phase from a single shot, it is possible to deconvolve images blurred by the turbulence. When this deconvolution is applied to light fields obtained by plenoptic acquisition, the original optical resolution associated with the objective lens is restored; in other words, it acts as a superresolution technique that works properly even in the presence of turbulence. The wavefront phase can also be estimated from the defocused images associated with the light field: we present here preliminary results using this approach.
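The deconvolution step can be sketched as follows, assuming the pupil-plane phase has already been recovered from the light field; the Wiener regularization and all function names are our illustrative choices, not the paper's pipeline.

    import numpy as np

    def otf_from_phase(phase, pupil):
        # Build the incoherent OTF from a pupil mask and a measured
        # wavefront phase (radians): PSF = |FT(pupil e^{i phase})|^2.
        P = pupil * np.exp(1j * phase)
        psf = np.abs(np.fft.fft2(P)) ** 2
        return np.fft.fft2(psf / psf.sum())  # normalized to 1 at DC

    def wiener_deconvolve(blurred, otf, nsr=1e-3):
        # Wiener filter: F = conj(H) G / (|H|^2 + NSR), where NSR is an
        # assumed noise-to-signal ratio acting as a regularizer.
        G = np.fft.fft2(blurred)
        F = np.conj(otf) * G / (np.abs(otf) ** 2 + nsr)
        return np.real(np.fft.ifft2(F))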
Modern astronomical telescopes take advantage of multi-conjugate adaptive optics, in which wavefront sensors play a key role. A single sensor capable of measuring wavefront phases at any angle of observation would help to improve atmospheric tomographic reconstruction. A new sensor combining both geometric and plenoptic arrangements is proposed, and a simulation demonstrating its working principle is shown. Results show that this sensor is feasible and that single extended objects can be used to perform tomography of atmospheric turbulence.
Plenoptic cameras have been developed over the last years as a passive method for 3D scanning, allowing focal-stack capture from a single shot. But the data recorded by this kind of sensor can also be used to extract the wavefront phases associated with the atmospheric turbulence in an astronomical observation.
The terrestrial atmosphere degrades telescope images due to the refractive index changes associated with the turbulence. Sodium artificial Laser Guide Stars (Na-LGS, 90 km high) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically, taking advantage of the two principal characteristics of plenoptic sensors at the same time: 3D scanning and wavefront sensing. Plenoptic sensors can therefore be studied and used as an alternative wavefront sensor for Adaptive Optics, which is particularly relevant now that Extremely Large Telescope projects are being undertaken.
In this paper, we present the first observational wavefront phases extracted from real astronomical observations, using both point-like and extended objects, and we show that the restored wavefronts match the Kolmogorov atmospheric turbulence model.
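One standard way to test the Kolmogorov agreement claimed above is to compare the empirical phase structure function with its theoretical form D_phi(r) = 6.88 (r/r0)^(5/3); the sketch below assumes a restored phase map sampled on a regular grid and is not necessarily the paper's validation procedure.

    import numpy as np

    def phase_structure_function(phi, max_sep=32):
        # Empirical D_phi(r) = <(phi(x+r) - phi(x))^2>, here along one axis.
        return np.array([np.mean((phi[:, r:] - phi[:, :-r]) ** 2)
                         for r in range(1, max_sep)])

    def fit_r0(d_phi, pixel_scale):
        # Invert D(r) = 6.88 (r/r0)^(5/3) at each separation and average:
        # r0 = r * (6.88 / D(r))^(3/5).
        r = np.arange(1, len(d_phi) + 1) * pixel_scale
        return np.mean(r * (6.88 / d_phi) ** 0.6)

A restored wavefront consistent with Kolmogorov statistics yields a roughly constant r0 estimate across separations.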
Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several superresolution algorithms have been proposed to compensate for the resolution decrease associated with light-field acquisition through a microlens array. A number of multiview stereo algorithms have also been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs).
In this paper, we present our own implementations related to the aforementioned aspects, together with two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO).
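As a concrete example of the kind of processing these implementations start from, the following sketch rearranges a raw lenslet frame into sub-aperture views, the input to the multiview stereo and superresolution stages; it assumes an integer microlens pitch aligned with the pixel grid, which real optics only approximate.

    import numpy as np

    def subaperture_views(raw, pitch):
        # raw: (H, W) sensor image behind a microlens array;
        # pitch: pixels per microlens (assumed integer and axis-aligned).
        H, W = raw.shape
        ny, nx = H // pitch, W // pitch
        lf = raw[:ny * pitch, :nx * pitch].reshape(ny, pitch, nx, pitch)
        # views[v, u] is the scene as seen from angular position (u, v)
        return lf.transpose(1, 3, 0, 2)      # shape (pitch, pitch, ny, nx)

Shifting and summing these views synthesizes the refocused slices of a focal stack, which is what makes single-shot 3D scanning possible.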
The terrestrial atmosphere degrades telescope images due to the refractive index changes associated with the turbulence. Compensating these changes requires high-speed processing, which justifies the use of GPUs and FPGAs. Sodium artificial Laser Guide Stars (Na-LGS, 90 km high) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new link between the wave optics and computer vision fields, one that many authors have called for.
ELT laser guide star wavefront sensors are expected to handle an overwhelmingly large amount of data (1600×1600 pixels at 700 fps). Given the computations involved, the solutions must run on specialized hardware such as Graphics Processing Units (GPUs) or Field Programmable Gate Arrays (FPGAs), among others.
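A back-of-envelope estimate makes that data rate concrete; the 16-bit readout is our assumption, not a figure from the paper.

    pixels = 1600 * 1600          # sensor format quoted above
    fps = 700
    bytes_per_pixel = 2           # assumption: 16-bit readout
    print(pixels * fps * bytes_per_pixel / 1e9)   # ~3.6 GB/s raw pixel stream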
If a Shack-Hartmann wavefront sensor is finally selected, the wavefront slopes can be computed using centroid or correlation algorithms. Most existing developments use centroid algorithms, but precision ought to be taken into account too, and in that respect correlation algorithms are very competitive.
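For reference, the two estimators being compared can be sketched as follows in floating point; the paper's FPGA version works in integer arithmetic, and the sub-pixel refinement used in practice is omitted here.

    import numpy as np
    from numpy.fft import fft2, ifft2

    def centroid_slope(spot):
        # Centre of gravity of one subaperture spot, relative to its centre.
        H, W = spot.shape
        y, x = np.mgrid[:H, :W]
        m = spot.sum()
        return ((x * spot).sum() / m - (W - 1) / 2,
                (y * spot).sum() / m - (H - 1) / 2)

    def correlation_slope(spot, ref):
        # Peak of the circular cross-correlation with a reference subimage,
        # to integer-pixel precision.
        c = np.real(ifft2(fft2(spot) * np.conj(fft2(ref))))
        dy, dx = np.unravel_index(np.argmax(c), c.shape)
        H, W = spot.shape
        if dx > W // 2: dx -= W              # unwrap FFT shifts to signed
        if dy > H // 2: dy -= H
        return dx, dy

Correlation matches the full spot shape against a reference rather than a single brightness-weighted mean, which is why it remains competitive in precision, as noted above.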
This paper presents an FPGA-based wavefront-slope implementation capable of handling the sensor output stream in a massively parallel fashion, using a correlation algorithm previously tested and compared against the centroid algorithm. Processing-time results are shown, and they demonstrate the suitability of FPGA integer arithmetic for this problem.
The selected architecture is based on today's commercially available FPGAs, which have a very limited amount of internal memory. This limits the dimensions used in our implementation, but it also means that there is ample margin to move real-time algorithms from conventional processors to future FPGAs, benefiting from their flexibility, speed, and intrinsically parallel architecture.