Active range imaging (RI) systems use actively controlled light sources that emit laser pulses, which are subsequently recorded by an imaging system and used to estimate a depth profile. Classical RI systems are limited by the large number of frames required to obtain high-resolution depth information. In this work, we propose an RI approach, motivated by the recently proposed compressed sensing framework, that dramatically reduces the number of necessary frames. Compressed gated range sensing employs a random gating mechanism along with state-of-the-art reconstruction algorithms to estimate the timing of the reflected pulses and infer distances. In addition to efficiency, the proposed scheme can also identify multiple reflected pulses introduced by semi-transparent elements in the scene such as clouds, smoke, and foliage. Simulations under highly realistic conditions demonstrate that the proposed architecture can accurately recover the depth profile of a scene from as few as 10 frames at a resolution of 100 depth bins, even under very challenging conditions. The results further indicate that the proposed architecture can extract multiple reflected pulses with a minimal increase in the number of frames, in situations where state-of-the-art methods fail to estimate the correct depth signals.
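The gating-plus-sparse-recovery idea can be illustrated in a toy 1-D setting. Everything below is a sketch, not the paper's method: the gate is idealized as a ±1 pattern per depth bin (realizable in principle by differencing two complementary 0/1 gated exposures), the reconstruction algorithm is plain orthogonal matching pursuit, and more frames are used than the 10-frame operating point quoted above so that this noiseless demo is robust.

```python
import numpy as np

def omp(A, y, k):
    """Greedy sparse recovery: pick the atom most correlated with the
    residual, refit the selected atoms by least squares, repeat k times."""
    residual, idx = y.copy(), []
    for _ in range(k):
        corr = np.abs(A.T @ residual)
        corr[idx] = 0.0                    # never reselect an atom
        idx.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(0)
n_bins, n_frames, k = 100, 40, 3           # depth bins, frames, pulses
trials, successes = 10, 0

for _ in range(trials):
    # ground truth: a few pulse returns among the depth bins
    x_true = np.zeros(n_bins)
    support = rng.choice(n_bins, size=k, replace=False)
    x_true[support] = rng.uniform(0.5, 1.0, size=k)
    # idealized random gate: each frame integrates the depth signal
    # through a +/-1 pattern, producing one scalar measurement
    A = rng.choice([-1.0, 1.0], size=(n_frames, n_bins))
    y = A @ x_true
    x_hat = omp(A, y, k)
    if set(np.flatnonzero(x_hat)) == set(support):
        successes += 1

print(f"exact support recovery: {successes}/{trials}")
```

Each frame here collapses the whole depth axis to a single number, yet the sparse pulse positions (and hence the distances) are recoverable from far fewer frames than depth bins, which is the core of the compressed-gating argument.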
Laser gated-viewing advanced range imaging (LGVARI) methods sample range information over a wide depth range, with super-resolution, from only a few frames. Three different methods are investigated: coded range gates, compressed sensing (CS) range imaging, and a hybrid coding-CS method. In contrast to classical range imaging methods based on Nyquist sampling, the range information is not directly visible in the individual images and has to be extracted from the complete sequence by means of computational optics. With LGVARI, it is possible to sample range information from only a few frames (i.e., images) with super-resolution far beyond the limit of the Nyquist sampling theorem. It is shown that all three methods achieve a compression rate below 9%.
In this paper, we introduce a method to stabilize the variance of decimated transforms using one or two variance stabilizing transforms (VSTs). These VSTs are applied to the 3-D Meyer wavelet pyramidal transform, which is the core of the first-generation 3-D curvelets. This allows us to extend these 3-D curvelets to handle Poisson noise, which we apply to the denoising of a simulated
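The motivation for variance stabilization can be shown with the classical single-scale Anscombe transform, a much simpler relative of the multi-scale VSTs developed in the abstract above: Poisson variance grows with the mean, which breaks Gaussian-based thresholding, while 2*sqrt(x + 3/8) makes the variance approximately 1 regardless of intensity. The intensity values below are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Raw Poisson data has variance equal to its mean; after the Anscombe
# transform the variance is approximately constant (close to 1)
results = {}
for lam in (10.0, 30.0, 100.0):
    samples = rng.poisson(lam, size=200_000)
    stabilized = 2.0 * np.sqrt(samples + 3.0 / 8.0)
    results[lam] = (samples.var(), stabilized.var())
    print(f"lambda={lam:6.1f}  raw var={samples.var():8.2f}  "
          f"stabilized var={stabilized.var():.3f}")
```

After stabilization, denoisers designed for (approximately) unit-variance Gaussian noise, such as wavelet or curvelet thresholding, can be applied to Poisson-corrupted data.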
The LOFAR Radio Telescope is a radio interferometer with multiple antennas placed throughout Europe. A radio interferometer samples the image of the sky in the Fourier domain; recovering the image from these samples is an inverse problem. In radio astronomy the CLEAN method has been used for many years to find a solution.
Recent papers have established a link between radio interferometry and compressed sensing, which supports sparse recovery methods to reconstruct an image from interferometric data. The goal of this paper is to study sparse recovery methods on LOFAR data by comparing the accuracy of CLEAN and compressed sensing when applied to simulated LOFAR observations.
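The CLEAN baseline referred to above can be sketched in one dimension. This is a minimal Högbom-style loop, not LOFAR's production imager: the Gaussian "dirty beam" and the source positions are invented for illustration, and real interferometric beams have far messier sidelobe structure.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, n_iter=500, threshold=1e-3):
    """Minimal 1-D Hogbom-style CLEAN: find the residual peak, subtract a
    scaled, shifted copy of the dirty beam, and accumulate the model."""
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    center = len(psf) // 2
    for _ in range(n_iter):
        peak = int(np.argmax(np.abs(residual)))
        if abs(residual[peak]) < threshold:
            break
        amp = gain * residual[peak]
        model[peak] += amp
        # subtract the shifted beam, clipped to the image bounds
        lo = max(0, peak - center)
        hi = min(len(dirty), peak - center + len(psf))
        residual[lo:hi] -= amp * psf[lo - (peak - center): hi - (peak - center)]
    return model, residual

# Toy sky with two point sources, observed through a broad Gaussian
# beam standing in for a real interferometer response
n = 64
sky = np.zeros(n)
sky[20], sky[45] = 1.0, 0.6
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)   # 17-tap beam
dirty = np.convolve(sky, psf, mode="same")

model, residual = hogbom_clean(dirty, psf)
print("recovered sources at bins:", np.flatnonzero(model > 0.1))
```

CLEAN is itself an implicit sparsity prior (the sky is assumed to be a few point sources), which is one reason the compressed-sensing reformulation studied in this paper is a natural fit for interferometric imaging.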
Active Range Imaging (ARI) has recently sparked an enthusiastic interest due to the numerous applications that can benefit from the high quality depth maps that ARI systems offer. One of the most successful ARI techniques employs Time-of-Flight (ToF) cameras which emit and subsequently record laser pulses in order to estimate the distance between the camera and objects in a scene. A limitation of this type of ARI is the requirement for a large number of frames that have to be captured in order to generate high resolution depth maps. In this work, we introduce Compressed Gated Range Sensing (CGRS), a novel approach for ToF-based ARI that utilizes the recently proposed framework of Compressed Sensing (CS) to dramatically reduce the number of necessary frames. The CGRS technique employs a random gating function along with state-of-the-art reconstruction in order to estimate the timing of a returning laser pulse and infer the depth map. To validate our method, software simulations were carried out using a realistic system model. Simulated results suggest that low error reconstruction of a depth map is possible using approximately 20% of the frames that traditional ToF cameras require, while 30% sampling rates can achieve very high fidelity reconstruction.
Range Imaging (RI) has recently sparked enthusiastic interest due to the numerous applications that can benefit from the presence of 3D data. One of the most successful techniques for RI employs Time-of-Flight (ToF) cameras, which emit and subsequently record laser pulses in order to estimate the distance between the camera and an object. A limitation of this class of RI is the large number of frames that must be captured in order to generate high resolution depth maps. In this work, we propose a novel approach for ToF-based RI that utilizes the recently proposed framework of Compressed Sensing to dramatically reduce the number of necessary frames. Our technique employs a random gating function along with state-of-the-art minimization techniques in order to estimate the location of a returning laser pulse and infer the distance. To validate the theoretical motivation, software simulations were carried out. Our simulated results show that reconstruction of a depth map is possible from as few as 10% of the frames that traditional ToF cameras require, with minimal reconstruction error, while 20% sampling rates can achieve almost perfect reconstruction in low resolution regimes. Our experimental results also show that the proposed method is robust to various types of noise and applicable to realistic signal models.
Data loss during acquisition is common. It may be caused by malfunctioning sensors in a CCD camera or any other acquisition system, or arise because only part of the system under study can be observed. This problem has been addressed using diffusion via partial differential equations in 2D and 3D, and more recently using sparse representations in 2D in a process called inpainting. Inpainting exploits sparsity to produce a solution in the masked (unknown) region that is statistically similar to the known data, in the sense of the transforms used, so that the inpainted part cannot be distinguished from the real one. The approach can be applied to any kind of 3D data, whether 3D spatial data, 2D plus time (video), or 2D plus wavelength (multi-spectral imaging). We present inpainting results on 3D data using sparse representations, which may include wavelet transforms, the discrete cosine transform, and 3D curvelet transforms.
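The sparsity-based inpainting described above can be sketched in one dimension. This is an illustrative toy, not the paper's algorithm: the dictionary is a DCT built by hand, the signal and mask are synthetic, and the solver is a simple alternating projection (project onto "k-sparse in the DCT domain", then re-impose the known samples) rather than a full multi-transform scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 128, 5                      # signal length, DCT-domain sparsity

# Orthonormal DCT-II matrix (rows = basis functions)
C = np.sqrt(2.0 / n) * np.cos(np.pi * (np.arange(n) + 0.5)[None, :]
                              * np.arange(n)[:, None] / n)
C[0] /= np.sqrt(2.0)

# Signal that is k-sparse in the DCT domain
alpha = np.zeros(n)
support = rng.choice(np.arange(1, n // 2), size=k, replace=False)
alpha[support] = rng.uniform(1.0, 2.0, size=k) * rng.choice([-1, 1], size=k)
x = C.T @ alpha

# Random mask: roughly 20% of the samples are lost
known = rng.random(n) > 0.2

# Inpainting by alternating projections
est = np.where(known, x, 0.0)
for _ in range(300):
    coeffs = C @ est
    small = np.argsort(np.abs(coeffs))[:-k]    # zero all but the k largest
    coeffs[small] = 0.0
    est = C.T @ coeffs
    est[known] = x[known]                      # keep observed data exact

err = np.linalg.norm((est - x)[~known]) / np.linalg.norm(x[~known])
print(f"relative error on missing samples: {err:.4f}")
```

The same alternating structure carries over to the 3-D case: only the transform changes (3-D wavelets, 3-D DCT, or 3-D curvelets), while the data-consistency step on the unmasked voxels stays identical.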