Three-dimensional (3D) point cloud segmentation plays an important role in autonomous navigation systems, such as mobile robots and autonomous cars. However, segmentation is challenging because of data sparsity, uneven sampling density, irregular format, and lack of color texture. In this paper, we propose a sparse 3D point cloud segmentation method based on 2D image feature extraction with deep learning. First, we jointly calibrate the camera and lidar to obtain the extrinsic parameters (rotation matrix and translation vector). Then, we introduce Convolutional Neural Network (CNN)-based object detectors to generate 2D object region proposals in the RGB image and classify the objects. Finally, using the extrinsic parameters from joint calibration, we extract the points from a 16-line RS-LiDAR-16 scanner that project into the 2D object regions, and further perform fine segmentation of the extracted point cloud according to prior knowledge of the classification features. Experiments demonstrate the effectiveness of the proposed sparse point cloud segmentation method.
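The projection step described above can be sketched as follows; this is a minimal illustration, assuming a pinhole camera model with known intrinsics `K` and the extrinsic pair `(R, t)` from joint calibration. The function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def project_points(points, R, t, K):
    """Project Nx3 lidar points into the image plane using extrinsics
    (R, t) and camera intrinsics K. Returns Nx2 pixel coordinates and
    a mask of points lying in front of the camera."""
    cam = points @ R.T + t          # lidar frame -> camera frame
    in_front = cam[:, 2] > 0        # keep points with positive depth
    uvw = cam @ K.T                 # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]   # perspective divide
    return uv, in_front

def points_in_box(uv, mask, box):
    """Select projected points falling inside a 2D detection box
    given as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    inside = ((uv[:, 0] >= x_min) & (uv[:, 0] <= x_max) &
              (uv[:, 1] >= y_min) & (uv[:, 1] <= y_max))
    return mask & inside
```

The points selected by `points_in_box` for each CNN detection would then be the input to the class-specific fine segmentation stage.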
Phase unwrapping plays an important role in phase measurement profilometry, and the unwrapping results directly affect the measurement accuracy. The development of deep learning theory has opened a new direction for phase unwrapping algorithms. In this paper, a new neural network model based on an improved generative adversarial network (iGAN) is proposed for phase unwrapping. Compared with traditional methods, it effectively suppresses the influence of noise such as shadows, and does not need any reference grating information. In addition, it can realize phase unwrapping from a single image. The algorithm is verified by three-dimensional reconstruction with structured light based on simulation data. The results indicate that the proposed method can successfully unwrap the phase from a single image, and that it suppresses the influence of fringe frequency and shadows well.
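The network architecture itself is beyond a short sketch, but the underlying data relationship is easy to state: the wrapped phase is the absolute phase reduced modulo 2π into (-π, π]. A minimal sketch of how simulated (wrapped, absolute) training pairs for such a supervised unwrapping model could be generated; the surface shape and sizes here are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

def wrap(phi):
    """Wrap a continuous phase map into (-pi, pi]."""
    return np.angle(np.exp(1j * phi))

# Simulate a smooth absolute phase surface (a 2D Gaussian bump),
# then wrap it; the (wrapped, absolute) pair can serve as network
# input and target for supervised phase unwrapping.
y, x = np.mgrid[-1:1:128j, -1:1:128j]
phi_abs = 20.0 * np.exp(-(x**2 + y**2) / 0.3)
phi_wrapped = wrap(phi_abs)
```

A learned unwrapper maps `phi_wrapped` back to `phi_abs` in one shot, which is what allows single-image operation without any reference grating.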
Calibration, which defines the relationship between the phase and depth data, is an important part of fringe projection profilometry. In practice, without a telecentric lens, the inherently nonlinear and spatially varying relationship between the absolute phase of the projected fringe and the object surface depth makes calibration problematic in the measurement of small objects. To address this problem, a flexible, simple telecentric three-dimensional measurement system is proposed. Because the imaged size of an object does not change with depth in telecentric imaging, the absolute phase is linear with the depth and the calibration process becomes simpler. The experimental results indicate that the standard deviation of the calibration result in the z coordinate is within 5 μm, while that in the x and y coordinates is within 3 μm. Three-dimensional shape reconstruction of a ¥1 coin and measurement of the central circle points of the calibration target further verify the validity of the proposed calibration method.
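Under the telecentric assumption stated above, the phase-to-depth mapping reduces to a per-system linear model z = a·φ + b, which can be calibrated by measuring the absolute phase at a few known plane positions (e.g. from a translation stage) and fitting by least squares. A minimal sketch, with illustrative function names not taken from the paper:

```python
import numpy as np

def fit_phase_depth(phases, depths):
    """Least-squares fit of the linear model z = a*phi + b, valid under
    telecentric imaging where absolute phase varies linearly with depth.
    `phases`: absolute phases measured at known plane depths `depths`."""
    A = np.column_stack([phases, np.ones_like(phases)])
    (a, b), *_ = np.linalg.lstsq(A, depths, rcond=None)
    return a, b

def phase_to_depth(phi, a, b):
    """Convert an absolute phase map to depth using the fitted coefficients."""
    return a * phi + b
```

The linearity is what makes this calibration simpler than the polynomial or per-pixel models typically needed with an ordinary perspective lens.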