Per-pixel calibration using CALTag and dense 3D point cloud reconstruction (6 September 2019)
This paper proposes a multimodal imaging system for reconstructing a dense 3D spectral point cloud. The system consists of an Intel RealSense D415 depth camera, which includes an active infrared stereo pair, and a NuruGo Smart Ultraviolet (UV) camera; RGB and near-infrared (NIR) images are obtained from the first camera and UV images from the second. The novelty of this work lies in the application of a per-pixel calibration method using CALTag (High Precision Fiducial Markers for Camera Calibration) that outperforms traditional camera calibration, which is based on a pinhole-camera model and a checkerboard pattern. The new method eliminates both lens distortion and depth distortion with simple calculations on a graphics processing unit (GPU), using a rail calibration system. To this end, the undistorted 3D world coordinates for every pixel are generated using only six parameters and three linear equations. The traditional pinhole camera model is replaced by two polynomial mapping models: one handles lens distortion and the other handles depth distortion. Using CALTag instead of a traditional checkerboard overcomes calibration failures caused by clipping or occlusion of the calibration pattern. Multiple point clouds acquired from different viewpoints of an object are registered using the iterative closest point (ICP) algorithm. Finally, a deep neural network for point-set upsampling is used in post-processing to generate a dense 3D point cloud.
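The "six parameters and three linear equations" per pixel can be read as one slope and one intercept per world axis, fitted from rail-system observations at known distances. The sketch below illustrates that idea under this assumption; the parameter names (`ax`, `bx`, ...) and the least-squares fitting step are illustrative, not taken from the paper.

```python
import numpy as np

# Assumed per-pixel mapping (illustrative): for a single pixel, the raw
# depth reading d maps to undistorted world coordinates via three linear
# equations with six parameters in total:
#   X = ax*d + bx,   Y = ay*d + by,   Z = az*d + bz

def calibrate_pixel(depths, world_pts):
    """Fit the six parameters of one pixel by least squares from
    rail-system observations: raw depth readings (N,) and the
    corresponding known world coordinates (N, 3)."""
    A = np.stack([depths, np.ones_like(depths)], axis=1)   # (N, 2)
    params, *_ = np.linalg.lstsq(A, world_pts, rcond=None)  # (2, 3)
    return params  # row 0: slopes (ax, ay, az); row 1: intercepts

def apply_pixel(params, d):
    """Map one raw depth reading to undistorted (X, Y, Z)."""
    return params[0] * d + params[1]

# Synthetic check: a pixel whose true mapping is linear in depth.
d = np.linspace(400.0, 1200.0, 10)          # raw depth readings (mm)
true = np.stack([0.5 * d + 1.0,
                 -0.2 * d + 3.0,
                 1.01 * d - 5.0], axis=1)   # known world coordinates
p = calibrate_pixel(d, true)
xyz = apply_pixel(p, 800.0)
```

Because the model is linear in `d`, a handful of rail positions per pixel suffices to determine the fit, and evaluating it at run time is a fused multiply-add per axis, which is why the mapping is cheap on a GPU.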
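The registration step uses the standard point-to-point ICP loop: match each source point to its nearest destination point, solve the closed-form rigid alignment (Kabsch/SVD), and repeat. This is a minimal generic sketch of that loop, not the paper's implementation; the brute-force nearest-neighbour search is only suitable for small clouds.

```python
import numpy as np

def best_rigid(P, Q):
    """Closed-form rigid transform (R, t) minimizing ||R p_i + t - q_i||
    over matched point sets P, Q of shape (N, 3) (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=30):
    """Align src (N, 3) to dst (M, 3); returns the transformed cloud."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours in dst for every current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    return cur

# Synthetic check: a slightly rotated and shifted copy of a cloud
# should snap back onto the original.
rng = np.random.default_rng(1)
dst = rng.random((200, 3))
ang = 0.05
R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
              [np.sin(ang),  np.cos(ang), 0.0],
              [0.0,          0.0,         1.0]])
src = dst @ R.T + np.array([0.02, -0.03, 0.01])
aligned = icp(src, dst)
```

In practice a multi-view pipeline like the one described would use a spatial index (k-d tree) for the matching step and an outlier-rejection threshold, but the alternating structure is the same.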
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Karelia Pena-Pena, Xiao Ma, Daniel L. Lau, and Gonzalo R. Arce "Per-pixel calibration using CALTag and dense 3D point cloud reconstruction", Proc. SPIE 11137, Applications of Digital Image Processing XLII, 111371C (6 September 2019);
