Optimal layout of fringe projection for three-dimensional measurements
Abstract
We optimize the layout of each light plane in a dual-view multistripe measurement system by employing spatial geometry analysis to avoid ambiguity. The imaging regions of every light plane can be labeled uniquely on the two image planes within a certain measurement depth. Moreover, fringe patterns of flexible density, matched to different measurement depths, can be projected directly without the conventional coding procedure. Experiments verify the effectiveness of the proposed method applied to high-resolution 3-D measurements.

1. Introduction

3-D optical measurements based on the structured light method have been applied in many fields over the past twenty years. Meanwhile, a variety of fringe patterns offering high imaging efficiency have been investigated.1, 2, 3, 4 Although the spatial resolution and scanning speed can be improved by increasing the density of projected lines, an ambiguity problem arises from stripe identification on complex surfaces.2 Some coding methods1, 3, 4 have been developed to distinguish two adjacent stripe tracks. Although Dipanda and Woo5 provide an efficient correspondence method for 3-D shape reconstruction based on a grid of 361 spots, the illumination pattern in their system is invariable and custom-made. Moreover, little attention has been paid to optimizing the spacing between adjacent light planes in 3-D measurements based on fringe projection.

This work employs spatial geometry to overcome the ambiguity problem in fringe-pattern projection. We compute the spacing between adjacent light planes that avoids identification difficulty within a certain measurement depth, and we combine our method with the line-shifting idea.4 As a result, parallel-shifting lines of flexible density can be projected directly in practical high-resolution measurements without the conventional coding procedure.

2. Unambiguity Condition

Our system consists of an LCD projector and two CCD cameras. Figure 1 shows two adjacent light planes l_m and l_{m+1} emitted from the point A, the lens center of the projector. The initially defined depth D extends from the virtual farthest position D_F to the virtual nearest position D_N. The point d_f is the intersection of l_m and D_F in the OXZ plane, and the point d_n is the intersection of l_{m+1} and D_N in the OXZ plane. If d_f and d_n are linked by a reflected line k_n that passes exactly through the lens center O of the left camera, then d_f and d_n cannot be distinguished. However, k_n rotates to k_n′ when the front measurement position moves from D_N to D_N′ through a distance δ. Note that the stripes on the target surface illuminated by l_m and l_{m+1} can then be distinguished within the bounds D_F and D_N′.

Fig. 1

Geometry of the unambiguous condition.


We can obtain the depth Z by means of the following conversion formula:

(1)

Z = \frac{L}{\tan\left[\alpha - \tan^{-1}(p/f)\right] - \tan\theta},
where L is the X coordinate of A, p is the position of the imaged spot on the image plane, α is the deflection angle of the optical axis of the camera, θ is the deflection angle of l_m, and f is the focal length of the camera. Thus, the Z-axis resolution ΔZ due to the pixel size Δp can be obtained by taking the partial derivative of Eq. (1) with respect to p:

(2)

\Delta Z = \frac{L f \sec^2\left[\alpha - \tan^{-1}(p/f)\right]}{\left\{\tan\left[\alpha - \tan^{-1}(p/f)\right] - \tan\theta\right\}^2 \left(p^2 + f^2\right)}\,\Delta p.
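As a numerical illustration only (not part of the original system), Eqs. (1) and (2) can be evaluated directly; the function names and argument conventions below are ours, with angles in radians and p, f, L in consistent length units:

import math

def depth_from_pixel(p, L, alpha, theta, f):
    # Eq. (1): depth Z recovered from the imaged spot position p.
    return L / (math.tan(alpha - math.atan(p / f)) - math.tan(theta))

def depth_resolution(p, L, alpha, theta, f, dp):
    # Eq. (2): Z-axis resolution for a pixel size dp at position p.
    a = alpha - math.atan(p / f)
    numerator = L * f / math.cos(a) ** 2          # L * f * sec^2(a)
    denominator = (math.tan(a) - math.tan(theta)) ** 2 * (p ** 2 + f ** 2)
    return numerator / denominator * dp

Evaluating depth_resolution at the stripe position imaged from D_F gives the worst-case value ΔZ_{D_F} used in the condition below.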

Given the width of the stripes and the configuration of the two cameras, the condition for unambiguity involving the value of δ can be expressed as follows:

(3)

\delta > (w + m)\,\Delta Z_{D_F},
where w is the maximum number of pixels occupied by d_f on the two image planes, m is the expected minimum interval in pixels between two adjacent stripes on the image planes, and ΔZ_{D_F} is the worst Z-axis resolution at d_f, computed by Eq. (2). Thus, the maximum measurement depth of the object is required to be smaller than D − δ.
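A minimal sketch of this bound, using purely hypothetical values for w, m, and ΔZ_{D_F}:

def min_delta(w, m, dz_df):
    # Eq. (3): delta must exceed (w + m) * dZ_DF.
    return (w + m) * dz_df

# Hypothetical values: stripes span w = 3 pixels, the desired minimum
# inter-stripe gap is m = 2 pixels, and the worst Z-axis resolution at
# D_F is 0.6 mm per pixel.
delta = min_delta(3, 2, 0.6)   # delta must exceed 3.0 mm
# The maximum measurement depth of the object must then stay below D - delta.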

3. Principle of Optimal Layout

Figure 2 shows that there are 2n+1 light planes emitted from A in our system. Here l_m is adjusted to the position of the central light plane, which is parallel to the OYZ plane. The optimal width between l_{m+i−1} and l_{m+i} on D_F is expressed by L_{m+i} (−n ≤ i ≤ n). Under the condition of unambiguous corresponding stripes on the image planes, the consecutive optimal widths L_{m+i} (1 ≤ i ≤ n) on the right side of l_m can be derived as

(4)

L_{m+i} = \frac{L\,D\,(H + H' - D)^{i-1}\,(H')^{i-1}}{(H' - D)^{i}\,(H + H')^{i-1}},
where H is the Z coordinate of A, and H′ is the distance between D_F and the OXY plane. We can obtain a similar rule for the consecutive widths L_{m−i} (1 ≤ i ≤ n) on the left side of l_m:

(5)

L_{m-i} = \frac{R\,D\,(H + H' - D)^{i-1}\,(H + H' - H'')^{i-1}}{(H + H' - H'' - D)^{i}\,(H + H')^{i-1}},
where R is the X displacement between A and O′, the lens center of the right camera, and H″ is the Z displacement between A and O′.

Fig. 2

Optimal layout and corresponding region procedure based on the dual-view method.


Note that the consecutive widths L_{m+i} (−n ≤ i ≤ n) on D_F on the two sides of l_m are equal when the two cameras are designed in a symmetrical layout and A is located on the X axis. Namely,

(6)

L_{m+i} = \frac{L\,D}{H' - D}.
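The following sketch evaluates Eqs. (4) to (6) as reconstructed above; the names Hp and Hpp stand for H′ and H″ and are our own conventions, not the authors':

def width_right(i, L, D, H, Hp):
    # Eq. (4): optimal width L_{m+i} on D_F, right side of l_m (1 <= i <= n).
    return (L * D * (H + Hp - D) ** (i - 1) * Hp ** (i - 1)) / (
        (Hp - D) ** i * (H + Hp) ** (i - 1))

def width_left(i, R, D, H, Hp, Hpp):
    # Eq. (5): optimal width L_{m-i} on D_F, left side of l_m (1 <= i <= n).
    return (R * D * (H + Hp - D) ** (i - 1) * (H + Hp - Hpp) ** (i - 1)) / (
        (H + Hp - Hpp - D) ** i * (H + Hp) ** (i - 1))

# Symmetric layout with A on the X axis (H = Hpp = 0, L = R): both widths
# reduce to L * D / (Hp - D), i.e., Eq. (6), independent of i.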

Within the depth D, the projected stripe corresponding to l_{m+i} (−n ≤ i ≤ n) ranges from l_i to l_{i+1} on the left image plane and from r_{i−1} to r_i on the right image plane. The initial and end positions on the image planes corresponding to l_{m+i} (−n ≤ i ≤ n) can be obtained when the two cameras capture a planar board placed at D_F and illuminated by a fringe projection based on the proposed method.

The value of δ is nearly constant in practical applications. The value of D varies with the maximum measurement depth of different objects, so a set of flexible widths L_{m+i} (−n ≤ i ≤ n) must be calculated within the performance range of the projector. Moreover, we design different patterns of parallel-shifting lines with flexible widths, aimed at different specifications of scanned objects. The labeled stripes under the first projection are linked into a series of lines. Then, the end points of these lines are detected and linked in consecutive order, and the two image planes are divided into the corresponding divisions. Each light stripe under the following projections falls within its corresponding division, as sketched below.
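As an illustrative sketch only (not the authors' implementation), assigning a stripe detected under a later projection to its corresponding division then reduces to a one-dimensional lookup against the division boundaries recorded from the first, labeled projection:

import bisect

def build_divisions(boundaries):
    # Sort the division boundaries (in pixels) traced from the labeled
    # stripes of the first projection.
    return sorted(boundaries)

def division_of(x, boundaries):
    # Index of the division into which an image-plane position x falls.
    return bisect.bisect_right(boundaries, x) - 1

# Hypothetical boundaries from the first projection on one image plane:
divisions = build_divisions([112.0, 158.5, 204.8, 250.3])
print(division_of(180.2, divisions))   # 1: the stripe lies in the second division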

4. Experiments

The experimental procedure consisted of a calibration stage and a 3-D measurement stage. In the calibration stage, every parameter of the different sensors, including each light plane, was calibrated. In our system, H′ was designed to be 1800 mm, and the calibrated values of L, R, H, and H″ were 358.92, 360.21, 2.17, and 1.32 mm, respectively. The configuration parameters of our projector, such as the focal length, the pixel size, and the resolution of the LCD screen, were known in advance. We chose Eq. (6) as the layout scheme of the multiple light planes in our system. Moreover, the different multistripe projection patterns corresponding to different measurement depths, combined with the line-shifting method,4 were designed and labeled for accurate 3-D reconstruction.

The effect of the proposed method was observed in the 3-D measurement stage. A picture of an aluminum workpiece, which consisted of a wedge part and a sidestep part, was taken by the left camera, as shown in Fig. 3a. The wedge part was 80×80 mm² at the top and 90×90 mm² at the base, with a 30-mm depth. The sidestep part was 125×125 mm² with a 15-mm depth. The interval width between two adjacent light planes on D_F was 13.5 mm. Ten projections were made, each with the same illumination time of 80 ms. The 3-D shape reconstruction of the workpiece is shown in Fig. 3b. In the next experiment, the first author's face, as measured, was roughly 150 mm in width, 200 mm in length, and 65 mm in depth. Figures 3c and 3d are the pictures taken by the left and right cameras in video capture mode at 25 frames/s. The total scanning time was 960 ms with twelve projections, while the interval width between two adjacent light planes on D_F was 17 mm. Figure 3e shows the rendered surface of the merged 3-D result from the two cameras, obtained using the Visualization ToolKit (VTK). The respective measurement errors along the X, Y, and Z axes were approximately ±0.4, ±0.4, and ±0.6 mm with the current system calibration. The maximum relative errors between the real and measured values of the workpiece using the proposed method and the previous method4 were 0.68% and 0.87%, respectively. Since the stripes could not overlap on the image planes, the image processing in our experiments was straightforward. The 3-D results of the first projection were very robust owing to the reliable image divisions, and the proposed method was insensitive to albedo and depth variation.

Fig. 3

(a) Workpiece with the projection of the fringe pattern in the left image plane. (b) 3-D shape reconstruction of the scanned workpiece. (c) Face with the projection of the fringe pattern on the left image plane. (d) Face with the projection of the fringe pattern on the right image plane. (e) Rendered surface of the merged 3-D facial data.


5. Conclusions

The layout of the projected light planes in the dual-view multistripe measurement system is optimized in this letter. The appropriate spacing between adjacent light planes has been calculated from the known configuration parameters and the restricted measurement depth. Accordingly, these light planes have been assigned to corresponding regions of the image planes. Experiments on high-resolution 3-D shape reconstruction of real objects show the effectiveness of the proposed method. This technique promises to be a valuable tool for real-time or all-field 3-D shape reconstruction.

Acknowledgment

We wish to acknowledge the support of the National Natural Science Foundation of China (30470488).

References

1. F. Blais, "Review of 20 years of range sensor development," J. Electron. Imaging 13(1), 231–243 (2004), doi:10.1117/1.1631921.

2. M. Chang, W. C. Chang, and K. H. Lin, "High speed three-dimensional profilometry utilizing laser diode arrays," Opt. Eng. 42(12), 3595–3599 (2003), doi:10.1117/1.1621408.

3. J. Batlle, E. Mouaddib, and J. Salvi, "Recent progress in coded structured light as a technique to solve the correspondence problem: a survey," Pattern Recogn. 31(7), 963–982 (1998), doi:10.1016/S0031-3203(97)00074-5.

4. J. Gühring, "Dense 3-D surface acquisition by structured light using off-the-shelf components," Proc. SPIE 4309, 220–231 (2001).

5. A. Dipanda and S. Woo, "Efficient correspondence problem-solving in 3-D shape reconstruction using a structured light system," Opt. Eng. 44(9), 093602 (2005), doi:10.1117/1.2055102.

Victor S. Cheng, Rongqiang Yang, Chun Hui, Yazhu Chen, "Optimal layout of fringe projection for three-dimensional measurements," Optical Engineering 47(5), 050503 (1 May 2008). https://doi.org/10.1117/1.2931577