A three-dimensional (3-D) measurement method for large-scale bending plates is presented. The proposed method, which combines the advantages of laser and vision measurement methods, makes use of a 3-D scanner, a texture projector, and a laser total station. The 3-D scanner measures multiple partial sections of a large-scale bending plate, the texture projector projects a textured pattern onto the plate to enable alignment of the partial scans, and the total station corrects the aligned result to obtain more accurate 3-D data. The performance and effectiveness of the method are evaluated experimentally.
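The total-station correction step described above amounts to registering the aligned scan into the total station's coordinate frame. A common way to do this, given a handful of control points measured by both instruments, is to estimate a rigid transform with the Kabsch (SVD) algorithm. The sketch below is a minimal illustration of that standard technique, not the paper's actual correction procedure:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t mapping src points to dst points
    (Kabsch algorithm via SVD). src, dst: (N, 3) arrays of paired points."""
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Applying the recovered `R`, `t` to the whole aligned point cloud brings it into the control-point (total-station) frame; at least three non-collinear control points are needed.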
This paper presents a three-dimensional (3-D) reconstruction method for outdoor scenes from an image sequence captured by a moving handheld video camera. The proposed method works in a coarse-to-fine manner and can be divided into three steps: First, an initial 3-D shape is reconstructed from the image correspondences obtained by optical flow tracking. Second, an initial depth map is generated from the initial 3-D shape. Finally, the initial depth map is refined by minimizing a global energy function to obtain a more accurate result. Experimental results are demonstrated for a variety of complex scenes.
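The final refinement step can be illustrated with the simplest member of this family of global energies: a quadratic data term that keeps the depth close to the initial map plus a quadratic smoothness term over 4-neighbour edges, solved by Jacobi iterations. This is a hypothetical stand-in for the paper's energy function, whose exact terms are not given in the abstract (boundaries are handled periodically via `np.roll` for brevity):

```python
import numpy as np

def refine_depth(d0, lam=1.0, iters=200):
    """Minimize E(d) = sum_i (d_i - d0_i)^2 + lam * sum_edges (d_i - d_j)^2
    by Jacobi iteration; the system is strictly diagonally dominant,
    so the iteration converges."""
    d = d0.copy()
    for _ in range(iters):
        nb = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
              np.roll(d, 1, 1) + np.roll(d, -1, 1))   # 4-neighbour sum
        d = (d0 + lam * nb) / (1.0 + 4.0 * lam)       # per-pixel optimum
    return d
```

Larger `lam` weights smoothness more heavily; real refinement energies typically add a photoconsistency data term and an edge-preserving (non-quadratic) regularizer instead.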
This paper presents a method to measure dynamic deformable three-dimensional (3-D) surfaces from a calibrated stereo image sequence. The proposed method adopts a patch-based expansion technique that performs 3-D measurement and tracking by computing the motion and disparity independently. Compared with traditional expansion techniques, the proposed technique always spreads from the most reliable region, which makes the measurement more accurate and dense. The performance is evaluated in simulated experiments, and the effectiveness is demonstrated on different real surfaces.
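The "spread from the most reliable region" idea is, at its core, best-first region growing: keep the frontier in a priority queue keyed by a confidence score and always expand the highest-scoring pixel next. The sketch below shows that generic scheduling pattern on a toy confidence grid; the confidence map, threshold, and grid representation are all illustrative assumptions, not the paper's actual matching pipeline:

```python
import heapq

def reliable_expand(conf, seed, thresh=0.5):
    """Best-first expansion over a 2-D grid of matching-confidence scores
    (hypothetical values in [0, 1]): always grow from the most reliable
    frontier cell; cells below thresh block the expansion."""
    h, w = len(conf), len(conf[0])
    visited = {seed}
    heap = [(-conf[seed[0]][seed[1]], seed)]   # max-heap via negated scores
    accepted = []
    while heap:
        _, (y, x) = heapq.heappop(heap)
        accepted.append((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in visited
                    and conf[ny][nx] >= thresh):
                visited.add((ny, nx))
                heapq.heappush(heap, (-conf[ny][nx], (ny, nx)))
    return accepted
```

Because low-confidence cells are never pushed, unreliable regions are reached late or not at all, which is what makes this ordering more robust than raster-order expansion.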
We propose a novel framework to simultaneously and accurately measure the three-dimensional (3-D) shape and 3-D motion of a dynamic deformable surface from a calibrated stereo image sequence. The framework mainly addresses the problems of error accumulation and local illumination change. It performs the measurement with a random gray triangle pattern marked on the object surface via the following steps: first, the triangles in all images are detected using a method we propose; second, the matching triangles between the left and right images of the first stereo frame are located by the proposed local triangle descriptor and an extension of the epipolar constraint to triangles; third, the spatiotemporal triangle correspondences in the subsequent frames are obtained by triangle tracking, and tracking errors are detected and recovered using the local 3-D topology. The performance is evaluated in challenging simulated experiments, and the effectiveness is demonstrated on real surfaces. The experimental results show that the proposed framework is effective and robust against error accumulation and local illumination change.
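One plausible reading of "an extension of the epipolar constraint to triangles" is to require every corresponding vertex pair of a candidate triangle match to satisfy the ordinary point-wise epipolar constraint within a pixel tolerance. The sketch below implements that interpretation as an assumption; the fundamental matrix `F`, tolerance, and vertex ordering are illustrative, not the paper's definitions:

```python
import numpy as np

def epipolar_dist(F, pL, pR):
    """Distance from right-image point pR to the epipolar line l = F @ pL_h
    of left-image point pL: |pR_h . l| / sqrt(l_0^2 + l_1^2)."""
    l = F @ np.append(pL, 1.0)
    return abs(np.append(pR, 1.0) @ l) / np.hypot(l[0], l[1])

def triangle_epipolar_match(F, triL, triR, tol=1.5):
    """Hypothetical triangle-level epipolar test: accept a left/right triangle
    pair only if every corresponding vertex pair is within tol pixels of its
    epipolar line."""
    return all(epipolar_dist(F, a, b) <= tol for a, b in zip(triL, triR))
```

For a rectified stereo pair, `F = [[0,0,0],[0,0,-1],[0,1,0]]` and the test reduces to checking that corresponding vertices lie on (nearly) the same image row.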