This paper proposes a multimodal imaging system for reconstructing a dense 3D spectral point cloud. The system consists of an Intel RealSense D415 depth camera, which includes active infrared stereo, and a NuruGo Smart Ultraviolet (UV) camera. RGB and Near Infrared (NIR) images are obtained from the first camera, and UV images from the second. The novelty of this work lies in the application of a per-pixel calibration method using CALTag (High Precision Fiducial Markers for Camera Calibration) that outperforms traditional camera calibration, which is based on a pinhole-camera model and a checkerboard pattern. The new method eliminates both lens distortions and depth distortion with simple calculations on a Graphics Processing Unit (GPU), using a rail calibration system. To this end, the undistorted 3D world coordinates for every single pixel are generated using only six parameters and three linear equations. The traditional pinhole camera model is replaced by two polynomial mapping models: one handles lens distortions and the other handles depth distortions. The use of CALTag instead of traditional checkerboards overcomes calibration failures caused by clipping or occlusion of the calibration pattern. Multiple point clouds capturing an object from different viewpoints are registered using the iterative closest point (ICP) algorithm. Finally, a deep neural network for point set upsampling is used as part of the post-processing to generate a dense 3D point cloud.
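The per-pixel mapping can be sketched as follows. The abstract does not give the exact parameterization, so this sketch assumes that the six parameters are a scale and offset per world coordinate stored for each pixel, giving three linear equations in the measured depth; the function name and array layout are illustrative.

```python
import numpy as np

def build_world_coords(depth, params):
    """Map a depth frame to undistorted 3D world coordinates per pixel.

    depth  : (H, W) array of raw depth values.
    params : (H, W, 6) array of per-pixel calibration parameters
             (ax, bx, ay, by, az, bz), assumed to come from the
             rail-based CALTag calibration. Each world coordinate is
             a linear function of the measured depth d:
                 x = ax * d + bx,  y = ay * d + by,  z = az * d + bz
    Returns an (H, W, 3) point cloud.
    """
    d = depth[..., np.newaxis]        # (H, W, 1)
    scales = params[..., 0::2]        # (H, W, 3) -> ax, ay, az
    offsets = params[..., 1::2]       # (H, W, 3) -> bx, by, bz
    return scales * d + offsets       # three linear equations per pixel
```

Because the mapping is an independent fused multiply-add per pixel, it parallelizes trivially, which is consistent with the paper's claim that the calculations are simple enough to run on a GPU.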
A high frame-rate compressive spectral video system is developed. The four-dimensional targets (two spatial dimensions, time, and spectrum) are captured under multi-spectral LED-modulated illumination. The reflection of the targets passing through an objective lens is modulated by a digital micro-mirror device (DMD) in the spatial domain, and is then collected by an RGB sensor through an imaging lens. The rapidly changing LED illumination patterns and DMD codings provide unique modulations for each temporal frame. Mathematical modeling and simulations are presented, along with experimental results.
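A minimal forward model of one coded frame might look like the following sketch: the per-frame measurement is assumed to be the scene's spectral reflectance weighted by that frame's LED illumination spectrum, masked spatially by the DMD code, and then integrated through the RGB sensor's spectral response. All names, shapes, and the discretization into L spectral bands are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def forward_measurement(reflectance, led_spectrum, dmd_code, rgb_response):
    """Simulate one compressive RGB snapshot.

    reflectance  : (H, W, L) scene spectral reflectance over L bands.
    led_spectrum : (L,) LED illumination intensity for this frame.
    dmd_code     : (H, W) binary DMD spatial modulation pattern.
    rgb_response : (L, 3) spectral response of the RGB sensor channels.

    Returns an (H, W, 3) coded RGB measurement.
    """
    # Spectral modulation by the rapidly changing LED illumination.
    lit = reflectance * led_spectrum           # (H, W, L)
    # Spatial modulation by this frame's DMD code.
    coded = lit * dmd_code[..., np.newaxis]    # (H, W, L)
    # Integration through the RGB color filters.
    return coded @ rgb_response                # (H, W, 3)
```

Varying `led_spectrum` and `dmd_code` from frame to frame gives each temporal frame its own sensing matrix, which is what makes the 4D data cube recoverable from the 2D RGB measurements.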
Coded aperture snapshot spectral imagers (CASSI) use a focal plane array (FPA) to capture a three-dimensional (3D) spectral scene in a single, or a few, two-dimensional (2D) snapshots. Current CASSI systems use a set of fixed coded apertures to modulate the spatio-spectral data cube before the compressive measurement. This paper proposes an adaptive projection method to improve the compressive efficiency of the CASSI system by designing the coded aperture adaptively according to a priori knowledge of the scene. The adaptive coded apertures are constructed from a nonlinear thresholding of the grey-scale map of the scene, which is captured by an aiding RGB camera. Then, the 3D encoded spectral scene is projected onto the 2D FPA. Based on the sparsity assumption, the spectral images can be reconstructed by a compressive sensing algorithm from the FPA measurements. This paper studies and verifies the proposed adaptive coded aperture method on a spatial super-resolution CASSI system, where the resolution of the coded aperture is higher than that of the FPA. It is shown that the adaptive coded apertures provide superior reconstruction performance of the spectral images over random coded apertures.
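The adaptive aperture design step could be sketched as a simple nonlinear thresholding of the aiding camera's grey-scale map. The specific rule below (a per-block median cut producing a binary mask) is an illustrative assumption standing in for the paper's exact construction; the block size and function name are likewise hypothetical.

```python
import numpy as np

def adaptive_coded_aperture(gray, block=8):
    """Build a binary coded aperture from a grey-scale scene map.

    gray  : (H, W) grey-scale image from the aiding RGB camera.
    block : local neighborhood size for the nonlinear threshold.

    Pixels brighter than their local block median open the aperture (1);
    the rest stay closed (0), so the code adapts to scene structure
    instead of being drawn at random.
    """
    H, W = gray.shape
    aperture = np.zeros((H, W), dtype=np.uint8)
    for i in range(0, H, block):
        for j in range(0, W, block):
            patch = gray[i:i + block, j:j + block]
            aperture[i:i + block, j:j + block] = patch > np.median(patch)
    return aperture
```

In the super-resolution setting described above, such a mask would be generated at the (finer) coded-aperture resolution, while the measurements are still integrated on the coarser FPA grid.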