## 1. Introduction

3-D optical measurements based on the structured light method have been applied in many fields over the past twenty years, and a variety of fringe patterns offering high imaging efficiency have been investigated.^{1, 2, 3, 4} Although the spatial resolution and scanning speed can be improved by increasing the density of projected lines, an ambiguity problem arises in stripe identification on complex surfaces.^{2} Coding methods^{1, 3, 4} have been developed to distinguish two adjacent stripe tracks. Although Dipanda and Woo^{5} provide an efficient correspondence method for 3-D shape reconstruction based on a grid of 361 spots, the illumination pattern in their system is fixed and custom-made. Moreover, little attention has been paid to optimizing the spacing between adjacent light planes in 3-D measurements based on fringe projection.

This work employs spatial geometry to overcome the ambiguity problem in fringe-pattern projection. We compute the spacing between adjacent light planes that avoids identification difficulty within a certain measurement depth, and we combine our method with the line-shifting idea.^{4} As a result, a flexible density of parallel-shifting lines can be projected directly in practical high-resolution measurements without the conventional coding procedure.

## 2. Unambiguity Condition

Our system consists of an LCD projector and two CCD cameras. Figure 1 shows two adjacent light planes $l_m$ and $l_{m-1}$ emitted from the point $A$, the lens center of the projector. The initially defined depth $D$ extends from the virtual farthest position $D_F$ to the virtual nearest position $D_N$. The point $d_f$ is the intersection of $l_m$ and $D_F$ in the *OXZ* plane, and the point $d_n$ is the intersection of $l_{m-1}$ and $D_N$ in the *OXZ* plane. Then $d_f$ and $d_n$ are linked by the reflected line $k_n$, which passes exactly through the lens center $O$ of the left camera, so that $d_f$ and $d_n$ cannot be distinguished. However, $k_n$ rotates to $k_{nl}$ when the front measurement position moves from $D_N$ to $D_N'$ through a distance $\delta$. Note that the stripes on the target surface illuminated by $l_m$ and $l_{m-1}$ can then be distinguished within the bounds $D_F$ and $D_N'$.

We can obtain the depth $Z$ by means of the following conversion formula:

## Eq. 1

$$Z=\frac{L}{\tan [\alpha -{\tan }^{-1}(p/f)]-\tan \theta },$$

## Eq. 2

$$\Delta Z=\frac{Lf\,{\sec }^{2}[\alpha -{\tan }^{-1}(p/f)]}{{\{\tan [\alpha -{\tan }^{-1}(p/f)]-\tan \theta \}}^{2}({p}^{2}+{f}^{2})}\cdot \Delta p.$$

Given the width of the stripes and the configuration of the two cameras, the condition for unambiguity involving the value of $\delta$ can be expressed as follows:

## Eq. 3

$$\delta \geqslant (w+m)\,\Delta Z_{D_F},$$

where $w$ is the maximum pixel number occupied by $d_f$ on the two image planes, $m$ is the expected minimum interval in pixels between two adjacent stripes on the image planes, and $\Delta Z_{D_F}$ is the worst $Z$-axis resolution of $d_f$ computed by Eq. 2. Thus, the maximum measurement depth of the object is required to be smaller than $D-\delta$.
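The conversion formula of Eq. 1 and the resolution of Eq. 2 are direct to evaluate numerically. The sketch below is a minimal implementation; all parameter values used in the test are illustrative, not calibration data from this letter, and `min_front_shift` encodes our assumed reading of the unambiguity condition, namely that $\delta$ must cover $(w+m)$ pixel widths of the worst $Z$-axis resolution.

```python
import math

def depth_from_pixel(p, L, f, alpha, theta):
    """Eq. 1: depth Z from the image coordinate p (in the same metric
    units as the focal length f), baseline L, and angles alpha, theta
    given in radians."""
    return L / (math.tan(alpha - math.atan(p / f)) - math.tan(theta))

def depth_resolution(p, L, f, alpha, theta, dp=1.0):
    """Eq. 2: depth uncertainty dZ caused by a pixel-position error dp.
    This is the analytic derivative dZ/dp of Eq. 1 times dp."""
    u = alpha - math.atan(p / f)
    num = L * f / math.cos(u) ** 2                     # L * f * sec^2(u)
    den = (math.tan(u) - math.tan(theta)) ** 2 * (p ** 2 + f ** 2)
    return num / den * dp

def min_front_shift(w, m, dZ_worst):
    """Assumed form of the unambiguity condition: the front boundary must
    retreat by at least (w + m) pixel widths of the worst Z resolution."""
    return (w + m) * dZ_worst
```

A quick sanity check is that `depth_resolution` agrees with a central-difference derivative of `depth_from_pixel`, which ties Eq. 2 to Eq. 1 term by term.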

## 3. Principle of Optimal Layout

Figure 2 shows that there are $2n+1$ light planes emitted from $A$ in our system. Here $l_m$ is adjusted to the position of the central light plane, which is parallel to the *OYZ* plane. The optimal width between $l_{m+i-1}$ and $l_{m+i}$ on $D_F$ is expressed by $L_{m+i}$ $(-n\leqslant i\leqslant n)$. Under the condition of unambiguous corresponding stripes on the image planes, the consecutive optimal widths $L_{m+i}$ $(1\leqslant i\leqslant n)$ at the right side of $l_m$ can be derived as

## Eq. 4

$$L_{m+i}=\frac{LD{(H+H'-D)}^{i-1}{H}^{i-1}}{{(H-D)}^{i}{(H+H')}^{i-1}},$$

where $H$, $H'$, and $H''$ are configuration distances defined with respect to the *OXY* plane. We can obtain a similar rule for the consecutive widths $L_{m-i}$ $(1\leqslant i\leqslant n)$ at the left side of $l_m$:

## Eq. 5

$$L_{m-i}=\frac{RD{(H+H'-D)}^{i-1}{(H+H'-H'')}^{i-1}}{{(H+H'-H''-D)}^{i}{(H+H')}^{i-1}},$$

Note that the consecutive widths $L_{m+i}$ $(-n\leqslant i\leqslant n)$ on $D_F$ on the two sides of $l_m$ are equal when the two cameras are designed in a symmetrical layout and $A$ is located on the $X$ axis. Namely,

## Eq. 6

$$L_{m+i}=L_{m-i},\quad 1\leqslant i\leqslant n.$$
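Eqs. 4 and 5 can be evaluated directly for a given configuration. The sketch below is a minimal implementation; the function name `optimal_widths` and the sample depth used in the check are our own choices, and `Hp` and `Hpp` stand for $H'$ and $H''$.

```python
def optimal_widths(L, R, D, H, Hp, Hpp, n):
    """Consecutive optimal widths on D_F for the n light planes on each
    side of the central plane l_m. Hp and Hpp denote the configuration
    distances H' and H''."""
    right = [L * D * (H + Hp - D) ** (i - 1) * H ** (i - 1)
             / ((H - D) ** i * (H + Hp) ** (i - 1))
             for i in range(1, n + 1)]                      # Eq. 4
    left = [R * D * (H + Hp - D) ** (i - 1) * (H + Hp - Hpp) ** (i - 1)
            / ((H + Hp - Hpp - D) ** i * (H + Hp) ** (i - 1))
            for i in range(1, n + 1)]                       # Eq. 5
    return left, right
```

In a symmetrical layout ($L=R$ and $H''=H'$) the two lists coincide, which matches the equality of the left- and right-side widths stated in the text; the widths also grow with $i$, since each factor ratio $(H+H'-D)H / [(H-D)(H+H')]$ exceeds 1 for $0<D<H$.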

Within the distance $D$, the variable range of the projective stripe corresponding to $l_{m+i}$ $(-n\leqslant i\leqslant n)$ is from $l_i$ to $l_{i+1}$ on the left image plane, but from $r_{i-1}$ to $r_i$ on the right image plane. The initial and end positions on the image planes corresponding to $l_{m+i}$ $(-n\leqslant i\leqslant n)$ can be obtained when the two cameras photograph a planar board at $D_F$ illuminated by a fringe projection based on the proposed method.

The value of $\delta$ is nearly constant in practical applications, but the value of $D$ varies with the maximum measurement depths of different objects, so a set of flexible widths $L_{m+i}$ $(-n\leqslant i\leqslant n)$ must be calculated within the performance range of the projector. Moreover, we design different patterns of parallel-shifting lines with flexible widths, aimed at different specifications of scanned objects. The labeled stripes under the first projection are linked into a series of lines; the end points of these lines are then detected and linked in consecutive order. The two image planes are thereby divided into different corresponding divisions, and each light stripe under the following projections falls into its corresponding division.
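The division step above amounts to a one-dimensional binning of detected stripe coordinates against the recorded division boundaries. A minimal sketch, assuming the sorted start positions of the divisions have already been obtained from the first labeled projection (the helper name and the boundary values in the example are made up):

```python
from bisect import bisect_right

def assign_to_divisions(stripe_xs, division_starts):
    """Map each detected stripe coordinate on an image plane to the index
    of the division that contains it. division_starts must be sorted in
    ascending order, one entry per light-plane division."""
    return [bisect_right(division_starts, x) - 1 for x in stripe_xs]
```

For example, `assign_to_divisions([52.0, 148.5, 301.0], [0, 100, 220, 360])` returns `[0, 1, 2]`, assigning each stripe to the light plane whose division covers it.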

## 4. Experiments

The experimental procedure consisted of a calibration stage and a 3-D measurement stage. In the calibration stage, each parameter of the different sensors, including every light plane, was calibrated. In our system, $H$ was designed at the position $1800\ \mathrm{mm}$, and the calibration values of $L$, $R$, $H'$, and $H''$ were 358.92, 360.21, 2.17, and $1.32\ \mathrm{mm}$, respectively. The configuration parameters of our projector, such as the focal length, the pixel size, and the resolution of the LCD screen, were known in advance. We chose Eq. 6 as the scheme of multiple light planes in our system. Moreover, the different multistripe projection patterns corresponding to different measurement depths, as well as the line-shifting method, were designed and labeled for accurate 3-D reconstruction.

The effect of the proposed method was observed at the 3-D measurement stage. A picture of an aluminum workpiece, which consisted of a wedge part and a sidestep part, was taken by the left camera, as shown in Fig. 3a. The wedge part was $80\times 80\ \mathrm{mm}^2$ at the top and $90\times 90\ \mathrm{mm}^2$ at the base, with a 30-mm depth. The sidestep part was $125\times 125\ \mathrm{mm}^2$ with a 15-mm depth. The interval width between two adjacent light planes on $D_F$ was $13.5\ \mathrm{mm}$. Ten projections were made, each with the same illumination time of $80\ \mathrm{ms}$. The 3-D shape reconstruction of the workpiece is shown in Fig. 3b.

In the next experiment, the first author's face, as measured, was roughly $150\ \mathrm{mm}$ in width, $200\ \mathrm{mm}$ in length, and $65\ \mathrm{mm}$ in depth. Figures 3c and 3d are the pictures taken by the left and right cameras in the video capture mode at $25\ \mathrm{frames/s}$. The total scanning time was $960\ \mathrm{ms}$ with twelve projections, while the interval width between two adjacent light planes on $D_F$ was $17\ \mathrm{mm}$. Figure 3e shows the rendered surface of the merged 3-D result from the two cameras, obtained using the Visualization Toolkit (VTK).

The respective measurement errors along the $X$, $Y$, and $Z$ axes were approximately $\pm 0.4$, $\pm 0.4$, and $\pm 0.6\ \mathrm{mm}$ with the current system calibration. The maximum relative errors between the real and measured values of the workpiece using the proposed method and the previous method^{4} were 0.68% and 0.87%, respectively. Since the stripes could not overlap on the image planes, the image processing in our experiments was straightforward. The 3-D results of the first projection were very robust due to the reliable image divisions, and the proposed method was insensitive to albedo and depth variation.

## 5. Conclusions

The layout of the projective light planes in the dual-view multistripe measurement system is optimized in this letter. Each appropriate spacing of adjacent light planes has been calculated from the known configuration parameters and the restricted measurement depth. Accordingly, these light planes have been assigned to corresponding regions of the image planes. The experiments on high-resolution 3-D shape reconstruction of real objects show the effectiveness of the proposed method. This technique promises to be a valuable tool for real-time or full-field 3-D shape reconstruction.

## Acknowledgment

We wish to acknowledge the support of the National Natural Science Foundation of China (30470488).