Digital holography is a well-established method for three-dimensional imaging that records the wavefront of light originating from an object. From the recorded hologram, a numerical reconstruction process can recover not only the intensity but also the phase distribution of the wavefront. However, reconstructions obtained with traditional methods suffer from artifacts caused by the twin image, the zero-order term, and sensor noise. Here we demonstrate that an end-to-end deep neural network (DNN) can learn to perform both intensity and phase recovery directly from an intensity-only hologram. We show experimentally that these artifacts are effectively suppressed. Moreover, our network requires no preprocessing for initialization and is comparably fast to train and test relative to a recently published learning-based method. In addition, we show that a further performance improvement can be achieved by introducing a sparsity prior.
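As a minimal sketch of the idea, the following PyTorch snippet shows an end-to-end network that maps a single-channel intensity hologram to two output maps (intensity and phase), with an L1 penalty standing in for the sparsity prior. The architecture, layer sizes, class names, and regularization weight are illustrative assumptions, not the authors' actual model.

```python
# Illustrative sketch only: a small encoder-decoder mapping an
# intensity-only hologram to intensity and phase maps.
import torch
import torch.nn as nn

class HologramNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Two output channels: reconstructed intensity and phase.
        self.head = nn.Conv2d(channels, 2, 3, padding=1)

    def forward(self, hologram):
        features = self.encoder(hologram)
        out = self.head(features)
        intensity = torch.relu(out[:, 0:1])         # intensity is non-negative
        phase = torch.pi * torch.tanh(out[:, 1:2])  # phase bounded to (-pi, pi)
        return intensity, phase

def loss_with_sparsity(pred_int, pred_phase, gt_int, gt_phase, lam=1e-3):
    # Data fidelity plus an L1 penalty on the intensity map,
    # a simple stand-in for the sparsity prior mentioned above.
    mse = nn.functional.mse_loss
    return (mse(pred_int, gt_int) + mse(pred_phase, gt_phase)
            + lam * pred_int.abs().mean())

# Smoke test on a random 64x64 "hologram".
net = HologramNet()
holo = torch.rand(1, 1, 64, 64)
intensity, phase = net(holo)
print(intensity.shape, phase.shape)  # torch.Size([1, 1, 64, 64]) twice
```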
In this paper, we present a new design for lightfield acquisition. Compared with conventional lightfield acquisition techniques, the key characteristic of our system is its ability to capture a higher-resolution lightfield with a fixed sensor. Specifically, the architecture places two attenuation masks in the camera, one at the aperture stop and one in the optical path, so that the four-dimensional (4D) lightfield spectrum is encoded and sampled by a two-dimensional (2D) camera sensor in a single snapshot. In post-processing, by exploiting the coherence embedded in the lightfield, we retrieve the desired higher-resolution 4D lightfield via inverse imaging. We demonstrate the performance of the proposed method in simulations based on actual lightfield datasets.
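To make the encode-then-invert pipeline concrete, the NumPy toy below multiplexes a tiny 4D lightfield onto a 2D snapshot through per-pixel mask weights and recovers it by Tikhonov-regularized least squares. The forward matrix, problem sizes, and regularizer are illustrative assumptions; the paper's actual two-mask optics and coherence-exploiting solver are not reproduced here.

```python
# Toy sketch of mask-based lightfield encoding and linear inverse recovery.
import numpy as np

rng = np.random.default_rng(0)

# Tiny 4D lightfield: 8x8 spatial samples, 3x3 angular samples, flattened.
ns, na = 8 * 8, 3 * 3
x_true = rng.random(ns * na)

# Each sensor pixel records one mask-weighted sum over its angular samples,
# emulating how attenuation masks multiplex 4D data onto 2D pixels.
A = np.zeros((ns, ns * na))
for s in range(ns):
    A[s, s * na:(s + 1) * na] = rng.random(na)  # per-pixel mask weights
y = A @ x_true                                  # single 2D snapshot (flattened)

# Recovery by Tikhonov-regularized least squares; a real system would use
# a prior that exploits the coherence (redundancy) of natural lightfields.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(ns * na), A.T @ y)
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Because each pixel here observes only one weighted sum of nine unknowns, plain Tikhonov recovery is severely underdetermined; the point of the sketch is the structure of the linear inverse problem, not its accuracy.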