3-D optical measurements based on the structured light method have been applied in many fields over the past twenty years, and a variety of fringe patterns that improve imaging efficiency have been investigated.1, 2, 3, 4 Although the spatial resolution and scanning speed can be improved by increasing the density of projected lines, an ambiguity problem arises in stripe identification on complex surfaces.2 Several coding methods1, 3, 4 have been developed to distinguish two adjacent stripe tracks. Although Dipanda and Woo5 provide an efficient correspondence method for 3-D shape reconstruction based on a grid of 361 spots, the illumination pattern in their system is fixed and custom-made. Moreover, little attention has been paid to optimizing the spacing between adjacent light planes in 3-D measurements based on fringe projection.
This work employs spatial geometry to overcome the ambiguity problem in fringe-pattern projection. We compute the spacing between adjacent light planes that avoids identification difficulty within a given measurement depth, and we combine this scheme with the line-shifting idea.4 As a result, parallel-shifting lines of flexible density can be projected directly for practical high-resolution measurement without the conventional coding procedure.
Our system consists of an LCD projector and two CCD cameras. Figure 1 shows two adjacent light planes emitted from the lens center of the projector. The initially defined depth range extends from a virtual farthest position to a virtual nearest position. In the OXZ plane, one light plane intersects the farthest position and the adjacent plane intersects the nearest position; when these two intersection points lie on a single reflected ray passing through the lens center of the left camera, the stripes produced by the two planes cannot be distinguished. However, the reflected ray rotates as the front measurement position moves forward through a certain distance, and the stripes on the target surface illuminated by the two planes can then be distinguished within the bounded depth range.
The depth can be obtained from a conversion formula, Eq. 1, whose terms are the coordinate of the camera's lens center, the position of the imaged spot on the image plane, the deflection angle of the optical axis of the camera, the deflection angle of the light plane, and the focal length of the camera. The z-axis resolution due to the pixel size can then be obtained by calculating the partial derivative of Eq. 1 with respect to the image coordinate, giving Eq. 2.
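The conversion from image coordinate to depth can be illustrated with a standard active-triangulation model. The sketch below is our own reconstruction, not necessarily identical to Eq. 1: it assumes the projector lens center at the origin, the camera lens center at baseline `b` on the X axis, a camera optical axis deflected by `alpha` toward the projector axis, a light plane deflected by `beta`, focal length `f`, and image coordinate `u` (all names are ours); the z-axis resolution is estimated by a one-pixel finite difference rather than the analytic partial derivative of Eq. 2.

```python
import math

def depth(u, b, f, alpha, beta):
    """Triangulated depth of a stripe point (hypothetical model).

    b     : baseline between projector and camera lens centers (mm)
    f     : camera focal length (mm)
    alpha : deflection angle of the camera optical axis (rad)
    beta  : deflection angle of the light plane (rad)
    u     : image-plane coordinate of the imaged spot (mm)
    """
    # Viewing angle of the pixel ray relative to the Z axis.
    gamma = alpha + math.atan(u / f)
    # Ray from projector: x = z*tan(beta); ray from camera: x = b - z*tan(gamma).
    # Intersecting the two rays gives the depth z.
    return b / (math.tan(beta) + math.tan(gamma))

def z_resolution(u, b, f, alpha, beta, pixel=0.01):
    """Depth change caused by a one-pixel shift on the image plane (mm/pixel)."""
    return abs(depth(u + pixel, b, f, alpha, beta) - depth(u, b, f, alpha, beta))
```

With b = 400 mm, f = 8 mm, alpha = 0.3 rad, beta = 0.2 rad, and u = 0, this model gives a depth of roughly 781 mm; all numbers here are illustrative and are not the letter's calibration values.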
Given the width of the stripes and the configuration of the two cameras, the condition for unambiguity involves the maximum pixel number occupied by a stripe on the two image planes, the expected minimum interval in pixels between two adjacent stripes on the image planes, and the worst z-axis resolution computed by Eq. 2. Thus, the maximum measurement depth of the object is required to be smaller than the bound set by this condition.
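One consistent reading of this condition can be expressed numerically. In the sketch below (our interpretation, with hypothetical parameter names), depth variation drifts a stripe across the image plane by depth divided by the worst z-axis resolution; the drift must not consume more than the stripe spacing minus the stripe width and the required minimum gap.

```python
def max_depth(spacing_px, stripe_px, min_gap_px, worst_res):
    """Upper bound on the measurement depth for unambiguous stripes.

    spacing_px : spacing of adjacent stripes on the image plane at the
                 reference depth (pixels) -- hypothetical parameterization
    stripe_px  : maximum pixel number occupied by one stripe
    min_gap_px : expected minimum interval between adjacent stripes (pixels)
    worst_res  : worst z-axis resolution, i.e. depth change per pixel (mm/pixel)
    """
    # Pixel budget left for depth-induced drift after reserving the stripe
    # width and the minimum separating gap.
    budget_px = spacing_px - stripe_px - min_gap_px
    if budget_px <= 0:
        raise ValueError("stripes already ambiguous at the reference depth")
    return budget_px * worst_res
```

For example, 44 px of spacing, a 5-px stripe, a 3-px gap, and a worst resolution of 2.2 mm/px leave a 36-px budget and a maximum depth of about 79 mm; these figures are illustrative only.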
Principle of Optimal Layout
Figure 2 shows the light planes emitted from the projector's lens center in our system, with the central light plane adjusted to be parallel to the OYZ plane. Under the condition of unambiguous corresponding stripes on the image planes, the consecutive optimal widths between adjacent light planes on the reference plane at the right side of the central plane can be derived from the coordinate of the left camera's lens center and its distance to the OXY plane; a similar rule holds for the consecutive widths at the left side, expressed in terms of the displacement of the right camera's lens center.
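The width rule can be re-derived numerically from the same geometry. The sketch below is our reconstruction under stated assumptions, not the letter's own width formulas: the projector lens center sits at the origin, the left camera lens center at (b, 0) with its optical axis deflected by `alpha`, and each next light plane is chosen by bisection so that its stripe's image-plane interval over the depth range [z_near, z_far] stays a fixed gap away from the previous stripe's interval. All names and numeric choices are ours.

```python
import math

def img_u(x, z, b, f, alpha):
    """Image-plane coordinate of world point (x, z) for a camera at (b, 0)
    whose optical axis is deflected by alpha toward the projector axis."""
    gamma = math.atan2(b - x, z)
    return f * math.tan(gamma - alpha)

def next_slope(t_prev, b, f, alpha, z_near, z_far, gap_u):
    """Slope (tan of deflection angle) of the next light plane such that its
    image interval sits gap_u below the previous plane's interval."""
    # The previous plane's lowest image coordinate occurs at the far depth.
    target = img_u(t_prev * z_far, z_far, b, f, alpha) - gap_u
    # The next plane's highest image coordinate occurs at the near depth;
    # it decreases monotonically with the slope, so bisect for it.
    lo, hi = t_prev, t_prev + 0.2
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if img_u(mid * z_near, z_near, b, f, alpha) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def widths_on_reference(n, z_ref, b, f, alpha, z_near, z_far, gap_u):
    """Consecutive widths between adjacent light planes on the reference
    plane z = z_ref, starting from the central plane (slope 0)."""
    slopes = [0.0]
    for _ in range(n):
        slopes.append(next_slope(slopes[-1], b, f, alpha, z_near, z_far, gap_u))
    return [(t2 - t1) * z_ref for t1, t2 in zip(slopes, slopes[1:])]
```

The bisection bracket of 0.2 and all numeric parameters used below are arbitrary illustrative choices; a deeper measurement range forces wider spacing, which matches the letter's argument.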
Note that the consecutive widths on the reference plane on the two sides of the central light plane are equal when the two cameras are designed in a symmetrical layout and the projector's lens center is located on the axis of symmetry; this symmetry gives Eq. 6.
Within the defined depth range, each projective stripe moves over a bounded interval on the left image plane and a corresponding bounded interval on the right image plane. The initial and end positions of each stripe on the image planes can be obtained by having the two cameras photograph a planar board illuminated by a fringe projection, based on the proposed method, at the reference depth.
Most configuration parameters are nearly constant in practical application, but the maximum measurement depth varies among objects, so a set of flexible widths must be calculated within the performance range of the projector. We therefore design different patterns of parallel-shifting lines with flexible widths, aimed at different specifications of scanned objects. The labeled stripes under the first projection are linked into a series of lines; the end points of these lines are detected and linked in consecutive order, and the two image planes are divided into corresponding divisions. Each light stripe under the following projections falls in its corresponding division.
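The division bookkeeping described above can be sketched as a simple lookup. Assuming (our illustration) that the first labeled projection yields sorted boundary coordinates between divisions on each image plane, every stripe center detected in a later projection is assigned to a division by binary search.

```python
import bisect

def assign_to_division(stripe_u, boundaries):
    """Index of the image-plane division containing a stripe center.

    boundaries : sorted coordinates separating adjacent divisions,
                 obtained from the first (labeled) projection
    stripe_u   : detected stripe-center coordinate in a later projection
    """
    return bisect.bisect_right(boundaries, stripe_u)

def assign_all(stripe_centers, boundaries):
    """Map every detected stripe center to its corresponding division."""
    return {u: assign_to_division(u, boundaries) for u in stripe_centers}
```

With boundaries [10.0, 20.0, 30.0], a stripe center at 15.0 is assigned to division 1 and one at 35.0 to division 3.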
The experimental procedure consisted of a calibration stage and a 3-D measurement stage. In the calibration stage, each parameter of the different sensors, including every light plane, was calibrated; in our system, the principal calibrated values were 358.92, 360.21, and 2.17. The configuration parameters of our projector, such as the focal length, the pixel size, and the resolution of the LCD screen, were known in advance. We chose Eq. 6 as the scheme of multiple light planes in our system. The different multistripe projection patterns corresponding to different measurement depths, as well as the line-shifting method, were designed and labeled for accurate 3-D reconstruction.
The effect of the proposed method was observed in the 3-D measurement stage. A picture of an aluminum workpiece, consisting of a wedge part and a sidestep part, was taken by the left camera, as shown in Fig. 3a; the wedge part had a 30-mm depth and the sidestep part a 15-mm depth. Ten projections were made with the same illumination time, and the 3-D shape reconstruction of the workpiece is shown in Fig. 3b. In the next experiment, the first author's face was measured. Figures 3c and 3d are the pictures taken by the left and right cameras in the video capture mode; the total scan required twelve projections. Figure 3e shows the rendered surface of the merged 3-D result from the two cameras, obtained using the Visualization ToolKit (VTK). The measurement errors along the three coordinate axes were evaluated with the current system calibration. The maximum relative errors between the real and measured values of the workpiece using the proposed method and the previous method4 were 0.68% and 0.87%, respectively. Since the stripes could not overlap on the image planes, the image processing in our experiments was straightforward; the 3-D results of the first projection were robust owing to the reliable image divisions, and the proposed method was insensitive to albedo and depth variation.
The layout of the projective light planes in the dual-view multistripe measurement system is optimized in this letter. Each appropriate space of adjacent light planes has been calculated from the known configuration parameters and the restricted measurement depth. Accordingly, these light planes have been assigned to corresponding regions of the image planes. The experiments on high-resolution 3-D shape reconstruction of real objects show the effectiveness of the proposed method. This technique promises to be a valuable tool for real-time or all-field 3-D shape reconstruction.
We wish to acknowledge the support of the National Natural Science Foundation of China (30470488).