## 1.

## Introduction

The structured light system (SLS) has been an important contactless three-dimensional (3-D) measurement technology, owing to its advantages of high accuracy and efficiency.^{1–4} A basic SLS consists of one projector and one camera. The projector is used to project some predefined pattern images onto the target surface, and the camera is used to capture the scene synchronously. By extracting the projected features from the captured images, accurate and dense correspondences can be established between the camera and projector reference frames. With the correspondences, 3-D information can be retrieved via the triangulation principle.^{5}^{,}^{6}

For the SLS, calibration of the system parameters is usually the first and crucial step because the calibration result directly determines the final 3-D measurement precision.^{7–9} To perform the triangulation, we need to know the intrinsic parameters of both the projector and camera, as well as the extrinsic parameters between them. There have been several research works addressing this classical problem.^{10–13} The main difficulty in the calibration of the projector-camera-based SLS is how to precisely calibrate the projector device. As the projector cannot “see” the calibration target like the camera, existing camera calibration methods cannot be applied directly. To calibrate the projector, a common approach is to treat the projector as an “inverse” camera. In the implementation, the camera is first calibrated, and then it is used to calibrate the projector device. However, with such a calibration strategy, calibration errors of the camera propagate to the stage of projector calibration^{14} and thus decrease the overall calibration accuracy of the SLS. Moreover, minimization of the two-dimensional (2-D) reprojection error of the reference points is the usual criterion for optimizing the calibration results, especially for the lens distortion parameters.^{15}^{,}^{16} Such an optimization procedure is usually applied to the camera and projector separately and cannot reflect the real 3-D reconstruction accuracy.

In this paper, a 3D-based optimization method is studied to improve the calibration accuracy of the projector-camera-based SLS. The system is first calibrated by traditional means with a printed checkerboard pattern. Then, a planar surface with some precisely printed markers is used for the parameter optimization. Based on the reference plane, 3-D metric error criteria are defined as the planarity error, the distance error, and the angular error. A multiobjective optimization problem is established by considering all system parameters as variables. Using the primary calibration results as initial values, optimal calibration parameters with minimum 3-D measurement errors can be solved. In the experiments, the optimized calibration parameters are evaluated qualitatively and quantitatively. The results show that calibration accuracy can be greatly improved by the proposed approach compared with some classical calibration methods.

This paper is organized as follows: a brief review of existing calibration methods of the projector-camera-based SLS is presented in Sec. 2. In Sec. 3, the calibration procedure and the parameter optimization are introduced. Experimental results are provided and evaluated in Sec. 4. Finally, the conclusion is offered in Sec. 5.

## 2.

## Related Works

Camera calibration is a classical topic in the computer vision domain. The most widely used camera calibration methods are Tsai’s method^{17} and Zhang’s method.^{18} Tsai’s method uses a precise external 3-D calibration object to which a reference coordinate frame is defined. In Zhang’s method, the calibration object can be simplified to a planar surface with some printed patterns. Position and orientation of the calibration plane can be changed freely in the visual field of the camera. With adequate calibration images, the camera’s intrinsic and extrinsic parameters with respect to the calibration plane can be estimated. Such a calibration procedure can be applied to multiple camera-based stereo vision systems.^{19}

However, for the projector-camera-based SLS, both the camera and the projector are required to be accurately calibrated. Calibration of the camera can follow traditional means. The projector cannot see the calibration object, so the methods for camera calibration cannot be applied to it directly. In previous works, a popular approach is to use the camera calibration information to calibrate the projector device. The procedure contains two steps: (1) the calibration plane with printed patterns is imaged by the camera and (2) the calibration plane is kept static while another pattern is projected onto the calibration plane and then imaged by the camera. By changing the position and pose of the calibration plane, a group of image pairs can be captured. The images with only printed patterns are used for the camera’s calibration to obtain its intrinsic (e.g., focal length, principal point, and lens distortions) and extrinsic (e.g., rotation and translation vectors with respect to the calibration plane) parameters. As a result, 3-D information of the calibration plane at each calibration position can be calculated with respect to the camera reference frame. Thus, 3-D coordinates of the projected pattern features can be calculated. As the image coordinates of the projected pattern features are known *a priori*, the intrinsic and extrinsic parameters of the projector can be estimated via traditional camera calibration procedures.

In Ref. 20, a printed checkerboard pattern was used for the calibration of a projector-camera-based SLS. The calibration plane contained two regions: one region with a printed pattern was used for the camera calibration, and the other was blank and used as the projector screen. Four corners of the plane were marked with colors to ease feature detection. In Ref. 21, a planar calibration object with 140 uniformly distributed physical markers was used. These markers were precisely measured with a known spacing and were used to calibrate the camera first. Then, a series of sinusoidal phase-shifting patterns was projected onto the object and captured by the camera. With the phase decoding procedure, one-to-one correspondences can be established between the projector and the camera. By interpolating the image positions of the markers on the projector’s image plane, their projector coordinates can be calculated. A similar idea was also reported in Ref. 22, which extended this approach so that the projector can be treated as if it can “capture” images. In this method, three sinusoidal phase-shifting fringe patterns were projected onto the object sequentially and captured by the camera. To construct the one-to-one correspondence between the camera and projector, both vertical and horizontal fringe patterns were used. Thus, the calibration of the projector can be implemented on the regenerated projector images. As a continuation of this work, an improved calibration approach was introduced in Ref. 23 to deal with the projector defocus problem. The authors showed that one-to-one correspondence between the projector and camera cannot be established in the spatial domain under defocused pattern projection. However, the mapping in the phase domain remained invariant between the central points of a projector pixel and a camera pixel. Without considering the nonlinear distortion of the projector, an improved calibration result was obtained via traditional calibration methods.
In Ref. 24, a planar board with some evenly distributed circular markers was placed on a motion table and used to calibrate the projector-camera-based SLS. By defining the calibration board as the world coordinate system, 3-D coordinates of the circular markers can be precisely calculated and used for the calibration of the camera. To calibrate the projector, gray code and phase-shifting patterns were also used. The sum of the reprojection errors of all the reference points onto the camera and projector image planes was used to optimize the calibration parameters of the camera and projector. In Ref. 25, dense correspondences between the projector and camera were first generated by gray code and phase-shifting patterns. Then, the intrinsic and extrinsic parameters of the projector and camera were estimated by decomposing a radial fundamental matrix, and the 2-D reprojection error was adopted for the parameter optimization.

In Ref. 26, a calibration method for the fringe projection profilometry system was studied. Unlike previous stereo vision calibration methods, the bundle adjustment strategy was introduced into the calibration procedure and used to adjust the coordinates of the benchmarks. The results showed that side effects due to benchmark inaccuracy could be effectively reduced, resulting in reliable calibration parameters. In Ref. 27, a nonlinear iterative optimization method was proposed to correct the errors caused by lens distortion. Simulated and experimental results showed that the calibration accuracy can be improved compared with the conventional linear model method. In Ref. 28, a residual error compensation scheme was proposed to improve the calibration accuracy. The compensation scheme was applied to a reference plane onto which some circular control points were projected by the projector. Planarity of the control points was used to rectify the remaining distortions that are not predicted by the projector lens distortion model. With such a feedback scheme, the systematic error and robustness could be improved. Instead of using projected features, a reference plane with some precisely printed markers was used for the rectification of primary calibration parameters.^{29} Based on this work, a more comprehensive framework for the optimization of the projector-camera-based SLS parameters is investigated and evaluated in this paper.

In addition, there are some projector-camera-based SLS calibration tools that are widely used in the research domain, such as the “Procam-calib” tool^{30}^{,}^{31} and the “SLS-calib” tool.^{32}^{,}^{33} The Procam-calib tool first calibrates the camera via Zhang’s method. Then, a checkerboard pattern is projected on the calibration board, and the corners of the projected pattern are detected. By applying the ray-plane intersection method, the 3-D position of each projected corner can be calculated and used for the calibration of the projector. The SLS-calib tool improves on this by using local homographies to individually translate each checkerboard corner from the camera plane to the projector plane. Each local homography is only valid within its neighborhood and is used to translate only one corner point. In this way, all pattern corner points can be transferred from the camera to the projector plane independently of each other, thus decreasing the effects of lens distortion. The experimental results showed that local homographies can successfully handle projector lens distortion and improve the overall calibration accuracy.
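To make the local-homography idea concrete, the sketch below is our own illustrative NumPy implementation, not code from the SLS-calib tool; the function names and the neighborhood radius are hypothetical choices. It fits a homography to the decoded camera-to-projector correspondences in a small neighborhood of one checkerboard corner and maps only that corner to the projector plane.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: homography H mapping src -> dst (N >= 4 pairs)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)

def corner_to_projector(corner_cam, cam_pts, proj_pts, radius=30.0):
    """Map one checkerboard corner from the camera plane to the projector
    plane with a local homography fitted only to nearby correspondences.

    cam_pts, proj_pts : (N, 2) decoded camera->projector correspondences,
        e.g., from gray-code/phase-shift decoding.
    radius : neighborhood size in camera pixels (an assumed value).
    """
    d = np.linalg.norm(cam_pts - corner_cam, axis=1)
    near = d < radius
    if near.sum() < 4:  # a homography needs at least 4 point pairs
        raise ValueError("not enough correspondences near the corner")
    H = fit_homography(cam_pts[near], proj_pts[near])
    p = H @ np.array([corner_cam[0], corner_cam[1], 1.0])
    return p[:2] / p[2]  # inhomogeneous projector coordinates
```

Because each corner gets its own homography, lens distortion only needs to be locally negligible, which is the key advantage reported for this strategy.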

## 3.

## Calibration and Optimization of Projector-Camera-Based Structured Light System Parameters

The proposed calibration approach for the projector-camera-based SLS consists of two steps: primary calibration and parameter optimization in 3-D space. To model the system, a full pinhole model that contains radial and tangential lens distortions is adopted for both the camera and projector. The calibration procedures described in Refs. 30–33 can be used for the primary calibration of the system. Then, a reference plane with some precisely printed markers is used for the optimization of the primary calibration parameters.

## 3.1.

### Primary Calibration of the Structured Light System

The geometric model of the projector-camera-based SLS is depicted in Fig. 1. The parameters to be estimated include the intrinsic parameters of both the camera and projector, as well as the extrinsic parameters between the camera and projector. To describe the system model more accurately, radial and tangential distortions are considered for both the camera and projector.

Let ${M}_{c}={[\begin{array}{ccc}{X}_{c}& {Y}_{c}& {Z}_{c}\end{array}]}^{\mathrm{T}}$ denote the 3-D coordinate of a spatial point with respect to the camera reference frame, and its corresponding image pixel coordinate on the camera plane can be denoted as ${m}_{c}={[\begin{array}{cc}{u}_{c}& {v}_{c}\end{array}]}^{\mathrm{T}}$. According to the pinhole model, the normalized form of ${m}_{c}$ can be written as

## (1)

$${\tilde{m}}_{c}=\left[\begin{array}{c}{\tilde{u}}_{c}\\ {\tilde{v}}_{c}\end{array}\right]=\left[\begin{array}{c}{X}_{c}/{Z}_{c}\\ {Y}_{c}/{Z}_{c}\end{array}\right].$$

Considering the radial and tangential lens distortions, the distorted expression of ${\tilde{m}}_{c}$ can be written as

## (2)

$$L({\tilde{m}}_{c})={\tilde{m}}_{c}\xb7(1+{k}_{c1}{r}_{c}^{2}+{k}_{c2}{r}_{c}^{4}+{k}_{c3}{r}_{c}^{6})+{\mathrm{\Delta}}_{t}({\tilde{m}}_{c}),$$

where ${r}_{c}^{2}={\tilde{u}}_{c}^{2}+{\tilde{v}}_{c}^{2}$, ${k}_{c1}$, ${k}_{c2}$, and ${k}_{c3}$ are the radial distortion coefficients, and the tangential distortion term is

## (3)

$${\mathrm{\Delta}}_{t}({\tilde{m}}_{c})=\left[\begin{array}{c}2{k}_{c4}{\tilde{u}}_{c}{\tilde{v}}_{c}+{k}_{c5}({r}_{c}^{2}+2{\tilde{u}}_{c}^{2})\\ {k}_{c4}({r}_{c}^{2}+2{\tilde{v}}_{c}^{2})+2{k}_{c5}{\tilde{u}}_{c}{\tilde{v}}_{c}\end{array}\right].$$

The pixel coordinate on the camera image plane is then obtained with the camera intrinsic matrix ${\mathbf{K}}_{c}$ as

## (4)

$${\overline{x}}_{c}=\left[\begin{array}{c}{x}_{c}\\ {y}_{c}\\ 1\end{array}\right]={\mathbf{K}}_{c}\xb7L({\tilde{m}}_{c}),$$

## (5)

$${\mathbf{K}}_{c}=\left[\begin{array}{ccc}{f}_{x}^{c}& {\alpha}_{c}\xb7{f}_{x}^{c}& {c}_{x}^{c}\\ 0& {f}_{y}^{c}& {c}_{y}^{c}\\ 0& 0& 1\end{array}\right],$$

where ${f}_{x}^{c}$ and ${f}_{y}^{c}$ are the focal lengths, $({c}_{x}^{c},{c}_{y}^{c})$ is the principal point, and ${\alpha}_{c}$ is the skewness factor.^{18}^{,}^{31}^{,}^{33} The same model is applied to the projector.
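As an illustration of the full projection model of Eqs. (1)–(5), the following sketch (our own NumPy code, assuming zero skew as in the later optimization; the function name is ours) maps a 3-D point in the camera frame to its distorted pixel coordinates.

```python
import numpy as np

def project_point(M_c, K, k):
    """Pinhole projection with radial and tangential distortion.

    M_c : 3-D point (X, Y, Z) in the camera reference frame.
    K   : 3x3 intrinsic matrix (skew assumed zero here).
    k   : (k1, k2, k3, k4, k5); k1-k3 radial, k4-k5 tangential.
    """
    X, Y, Z = M_c
    u, v = X / Z, Y / Z                                   # Eq. (1): normalization
    r2 = u * u + v * v
    radial = 1 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3  # Eq. (2): radial factor
    du = 2 * k[3] * u * v + k[4] * (r2 + 2 * u * u)       # Eq. (3): tangential term
    dv = k[3] * (r2 + 2 * v * v) + 2 * k[4] * u * v
    xd = np.array([u * radial + du, v * radial + dv, 1.0])
    return (K @ xd)[:2]                                   # Eq. (4): pixel coordinates
```

The same function serves the projector by swapping in its intrinsic matrix and distortion vector, matching the paper's use of one model for both devices.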

The extrinsic parameters are expressed by a rotation matrix $\mathbf{R}$ and a translation vector $\mathbf{T}$ as

## (6)

$$\mathbf{R}=\left[\begin{array}{ccc}{R}_{11}& {R}_{12}& {R}_{13}\\ {R}_{21}& {R}_{22}& {R}_{23}\\ {R}_{31}& {R}_{32}& {R}_{33}\end{array}\right],\phantom{\rule[-0.0ex]{2.0em}{0.0ex}}\mathbf{T}=\left[\begin{array}{c}{T}_{1}\\ {T}_{2}\\ {T}_{3}\end{array}\right].$$

Therefore, the coordinates ${M}_{c}$ and ${M}_{p}$ of a spatial point with respect to the camera and projector reference frames are related as

## (7)

$$\left[\begin{array}{c}{M}_{c}\\ 1\end{array}\right]=\left[\begin{array}{cc}\mathbf{R}& \mathbf{T}\\ 0& 1\end{array}\right]\xb7\left[\begin{array}{c}{M}_{p}\\ 1\end{array}\right].$$

There are a total of 12 extrinsic parameters to be estimated between the camera and projector. For each pair of corresponding points ${[\begin{array}{ccc}{x}_{c}& {y}_{c}& 1\end{array}]}^{\mathrm{T}}$ and ${[\begin{array}{ccc}{x}_{p}& {y}_{p}& 1\end{array}]}^{\mathrm{T}}$ on the camera and projector planes, a closed-form expression for the depth (${Z}_{c}$) in the camera reference frame is derived as^{34}

## (8)

$${Z}_{c}=({T}_{1}-{x}_{p}{T}_{3})/\u27e8-{\mathbf{R}}_{[1]}+{x}_{p}{\mathbf{R}}_{[3]},{\overline{x}}_{c}\u27e9,$$

where ${\mathbf{R}}_{[i]}$ denotes the $i$’th row of $\mathbf{R}$.

To calibrate the projector-camera-based SLS, the camera is first calibrated with a printed checkerboard pattern via the method in Ref. 17. The checkerboard pattern corners (${m}_{c}$) are extracted, and their corresponding 3-D points (${M}_{c}$) on the calibration plane can be estimated. Then, a closed-form solution is applied to solve the intrinsic and extrinsic parameters of the camera. By including the lens distortion parameters, the minimization of the reprojection errors is introduced as follows:

## (9)

$$\sum _{i=1}^{n}\sum _{j=1}^{m}{\Vert {m}_{ij}-\mathrm{proj}(\mathbf{K},{\mathbf{R}}_{i},{\mathbf{T}}_{i},{M}_{j})\Vert}^{2},$$

where ${m}_{ij}$ is the observed image of point ${M}_{j}$ in the $i$’th view and $\mathrm{proj}(\xb7)$ denotes the projection model of Eqs. (1)–(5).

To calibrate the projector, the calibrated camera can be used. With the calibration result of the camera, 3-D information of the calibration plane can be calculated. As a result, the 3-D coordinates of the checkerboard corners on the projected patterns can be calculated with respect to the camera reference frame. Therefore, the correspondences $\{{m}_{p},{M}_{p}\}$ can be obtained. Then, the calibration of the projector can be performed following the camera calibration procedure, and the extrinsic parameters $\mathbf{R}$ and $\mathbf{T}$ can be calculated from Eq. (7). In existing calibration methods for the projector-camera-based SLS, optimization of the calibration parameters is applied to the camera and projector separately, with respect to the reprojection errors of the pattern feature points in 2-D image space as given in Eq. (9). The following section describes how the parameters can be optimized in 3-D space to further improve the system calibration accuracy.
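The triangulation of Eq. (8) can be sketched as below. This is our own minimal illustration; it assumes that $\mathbf{R}$ and $\mathbf{T}$ express camera-frame coordinates in the projector frame, which is the convention under which Eq. (8) follows directly from the pinhole model, and the function name is ours.

```python
import numpy as np

def depth_from_correspondence(x_c, x_p, R, T):
    """Recover the camera-frame depth Z_c from one correspondence, Eq. (8).

    x_c : (3,) undistorted homogeneous point on the camera plane.
    x_p : projector x-coordinate of the matched point.
    R, T : extrinsics taking camera coordinates to the projector frame
           (an assumed convention for this sketch).
    """
    num = T[0] - x_p * T[2]                   # T_1 - x_p * T_3
    den = np.dot(-R[0] + x_p * R[2], x_c)     # <-R_[1] + x_p * R_[3], x_c>
    return num / den
```

Multiplying the recovered depth by the normalized camera ray then yields the full 3-D point, which is how the dense reconstruction used throughout the paper proceeds.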

## 3.2.

### Optimization of Primary Calibration Parameters

As described in Sec. 3.1, optimization of the calibration parameters with respect to the 2-D reprojection error criterion has been a standard step in existing calibration methods, not only for cameras but also for projector-camera-based SLSs. However, such a procedure is applied to the camera and projector separately and cannot reflect the real metric errors. In this section, an extra optimization procedure that is performed in 3-D space is introduced to improve the calibration accuracy. The underlying principle of the proposed method is to treat the refinement of all calibration parameters of the projector-camera-based SLS as a global optimization problem. The primary calibration results of Sec. 3.1 are used as the initial values, and objective functions are constructed to minimize the 3-D metric errors. By solving the resulting nonlinear multiobjective optimization problem, the optimal calibration parameters can be obtained. The workflow of the proposed calibration procedure is shown in Fig. 2.

The object used for the optimization is very simple. To guarantee high flatness of the object surface, a flat glass plate with homogeneous reflectance is used. Some markers are uniformly printed on the glass surface at a precise spacing ($D$), as shown in Fig. 3. The reference plane with markers is first scanned by a group of structured light patterns.^{35} According to the coding strategy of Ref. 35, the first image, which contains no pattern information, is white. Based on this image, random downsampling is applied to obtain a group of image points (${p}_{\mathrm{opt}}$). Then, a threshold is applied to separate the marker areas, and the centroids of the markers (${p}_{m}$) can be calculated with subpixel accuracy. With the primary calibration parameters, the 3-D coordinates of ${p}_{\mathrm{opt}}$ and ${p}_{m}$ can be calculated and denoted as ${P}_{\mathrm{opt}}$ and ${P}_{m}$, respectively. Based on the reconstructed 3-D points ${P}_{\mathrm{opt}}$ and ${P}_{m}$, three objective functions are constructed to evaluate the 3-D reconstruction accuracy as follows:

1. Planarity error: The reference plane used for reconstruction can be viewed as a perfect plane. In the absence of calibration and reconstruction errors, the planarity error of ${P}_{\mathrm{opt}}$ should be zero. Based on this prior knowledge, a least-squares plane is fitted to ${P}_{\mathrm{opt}}$. Suppose the number of sampling points is $S$ and the distance from the $i$’th sample point to the fitted plane is ${d}_{i}$; the mean absolute fitting residual (${E}_{p}$) can then be defined as

## (10)

$${E}_{p}=\frac{1}{S}\sum _{i=1}^{S}|{d}_{i}|.$$

2. Distance error: For each marker point ${P}_{j}\in {P}_{m}$, its average distance to all adjacent marker points in the horizontal and vertical directions can be calculated and denoted as ${d}_{j}$. Supposing there are $J$ marker points on the reference plane, the distance error objective function is defined with the printed marker spacing $D$ as

## (11)

$${E}_{d}=\frac{1}{J}\sum _{j=1}^{J}|{d}_{j}-D|.$$

3. Angular error: Considering that an affine distortion may be caused by inaccurate primary calibration parameters, the last objective function evaluates the angles between the marker points. For each marker point ${P}_{j}\in {P}_{m}$, connecting it with all adjacent marker points yields a set of included angles $\theta $. The ground-truth value of $\theta $ is ${\theta}_{0}=90\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{deg}$; with ${\theta}_{j}$ denoting the average of all angles calculated at ${P}_{j}$, the angular error objective function is expressed as

## (12)

$${E}_{\theta}=\frac{1}{J}\sum _{j=1}^{J}|{\theta}_{j}-{\theta}_{0}|.$$
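The three criteria above can be sketched in NumPy as follows. This is a simplified illustration under our own assumptions: the plane is fitted by SVD, and the neighbor distances and right angles are evaluated on a regular grid of marker centers, which simplifies the per-marker averaging described in the text.

```python
import numpy as np

def fit_plane(P):
    """Least-squares plane through points P (N, 3): returns unit normal n
    and offset d such that n . x + d = 0 for points on the plane."""
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)
    n = Vt[-1]                       # direction of least variance
    return n, -n @ c

def planarity_error(P):
    """E_p: mean absolute distance of the points to their fitted plane."""
    n, d = fit_plane(P)
    return float(np.mean(np.abs(P @ n + d)))

def distance_error(P_m, D):
    """E_d: mean absolute deviation of neighbor spacings from the printed
    spacing D; P_m is a (rows, cols, 3) grid of 3-D marker centers."""
    dh = np.linalg.norm(np.diff(P_m, axis=1), axis=2)  # horizontal gaps
    dv = np.linalg.norm(np.diff(P_m, axis=0), axis=2)  # vertical gaps
    return float(np.mean(np.abs(np.concatenate([dh.ravel(), dv.ravel()]) - D)))

def angular_error(P_m):
    """E_theta: mean absolute deviation from 90 deg of the angle between
    the horizontal and vertical edges meeting at each grid marker."""
    h = np.diff(P_m, axis=1)         # edges to right-hand neighbors
    v = np.diff(P_m, axis=0)         # edges to lower neighbors
    rows, cols = P_m.shape[:2]
    angles = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            e1, e2 = h[i, j], v[i, j]
            cosang = e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2))
            angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return float(np.mean(np.abs(np.array(angles) - 90.0)))
```

For a perfectly reconstructed reference plane, all three functions return values near zero, which is exactly the behavior the optimization stage drives toward.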

There are a total of 30 parameters to optimize, i.e., eight intrinsic parameters in $\{{f}_{c},{c}_{c},{f}_{p},{c}_{p}\}$, 12 extrinsic parameters in $\mathbf{R}$ and $\mathbf{T}$, and 10 lens distortion parameters in $\{{k}_{c},{k}_{p}\}$; the sensor skewness factors (${\alpha}_{c}$ and ${\alpha}_{p}$) are assumed to be 0 and are not considered in the optimization. A vector $\mathbf{x}$ is defined to collect all parameters to be optimized as

## (13)

$$\mathbf{x}={[{f}_{c},{c}_{c},{f}_{p},{c}_{p},\mathbf{R},\mathbf{T},{k}_{c},{k}_{p}]}^{\mathrm{T}}.$$

With conventional calibration procedures,^{30–33} we obtain the initial value of $\mathbf{x}$ and denote it as ${\mathbf{x}}^{0}$. Therefore, a multiobjective optimization problem is established as

## (14)

$$\mathbf{min}\{{E}_{p}(\mathbf{x}),\alpha \xb7{E}_{d}(\mathbf{x}),\beta \xb7{E}_{\theta}(\mathbf{x})\},\phantom{\rule{0ex}{0ex}}\mathrm{s.t.}\text{\hspace{0.17em}\hspace{0.17em}}\mathbf{R}\xb7{\mathbf{R}}^{\mathrm{T}}=\mathbf{I},\phantom{\rule{0ex}{0ex}}{l}_{b}\xb7[{x}_{1}^{0}\cdots {x}_{20}^{0}]\le [{x}_{1}\cdots {x}_{20}]\le {l}_{u}\xb7[{x}_{1}^{0}\cdots {x}_{20}^{0}],\phantom{\rule{0ex}{0ex}}{\mathbf{l}}_{kb}\le [{x}_{21}\cdots {x}_{25}]\le {\mathbf{l}}_{ku},\phantom{\rule{0ex}{0ex}}{\mathbf{l}}_{kb}\le [{x}_{26}\cdots {x}_{30}]\le {\mathbf{l}}_{ku},$$

where $\alpha $ and $\beta $ are weighting factors for the three error terms, ${l}_{b}$ and ${l}_{u}$ are relative lower and upper bounds on the intrinsic and extrinsic parameters, and ${\mathbf{l}}_{kb}$ and ${\mathbf{l}}_{ku}$ bound the lens distortion coefficients.

## 4.

## Experimental Results

The experimental setup consists of one camera (Point Grey FL3-U3-32S2C-CS, with a resolution of $2080\times 1552\text{\hspace{0.17em}\hspace{0.17em}}\text{pixels}$, USB 3.0 interface, and 60 fps), one digital light processing (DLP) projector (BenQ GP1, with a resolution of $1024\times 768\text{\hspace{0.17em}\hspace{0.17em}}\text{pixels}$, HDMI interface, and 60 Hz), and a rotation table, as shown in Fig. 4. The camera is fitted with a 10-mm lens. The rotation table is used to realize multiple-view 3-D scanning. The working distance of the system is about 700 mm, and the scanning field is about $400\times 300\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{mm}$. The system is first calibrated with a printed checkerboard pattern via conventional calibration methods,^{30–33} where the 2-D reprojection error criterion is used for the optimization of system parameters. A flat glass plate with homogeneous reflectance is used for the parameter optimization. Some circular markers are uniformly printed on the glass surface with a precise spacing of 100 mm. The structured light method described in Ref. 35 is used for the 3-D scanning. The first experiment is conducted on the reference plane, and the 3-D measurement results with respect to the three error criteria are provided to evaluate the different calibration parameters. The second experiment uses the rotational 3-D scanning system to evaluate the 3-D reconstruction results qualitatively.

## 4.1.

### Calibration Results with and without Optimization

The Procam-calib tool^{30}^{,}^{31} and the SLS-calib tool,^{32}^{,}^{33} two widely used calibration tools for the projector-camera-based SLS, are used for the primary calibration in our work. Figure 5 shows the checkerboard calibration plane and the planar surface with markers used for the parameter optimization. The reference plane is scanned five times at different positions and poses in the working volume. With the downsampling procedure, 10,000 points are randomly selected and reconstructed for the calculation of the planarity error. The calibration parameters from the SLS-calib tool are used as the initial values, and the optimization algorithm is implemented in MATLAB 2012. All the calibration parameters obtained by the Procam-calib tool, the SLS-calib tool, and the proposed method are given in Table 1. From the results, we can see that the major differences among the three calibration results appear in the distortion factors ${k}_{c}$ and ${k}_{p}$.
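The paper's optimization is implemented in MATLAB; purely as an illustration, the multiobjective problem of Eq. (14) can be scalarized into a single weighted objective and refined around the primary calibration with a bound-constrained quasi-Newton method, e.g., SciPy's L-BFGS-B. The function names, the scalarization, and the relative bound below are our own assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_parameters(x0, E_p, E_d, E_theta, alpha=1.0, beta=1.0,
                        rel_bound=0.05):
    """Sketch of the 3-D optimization stage of Eq. (14).

    x0 : primary calibration result (the 30-parameter vector x^0).
    E_p, E_d, E_theta : callables mapping a parameter vector to the
        planarity, distance, and angular errors (they rebuild the 3-D
        points from the scanned reference-plane data internally).
    rel_bound : +/- relative box bound around x0, standing in for the
        l_b, l_u bounds of Eq. (14) (an assumed value).
    """
    def cost(x):
        # Weighted-sum scalarization of the three metric error terms.
        return E_p(x) + alpha * E_d(x) + beta * E_theta(x)

    bounds = [(v - abs(v) * rel_bound - 1e-3, v + abs(v) * rel_bound + 1e-3)
              for v in x0]
    res = minimize(cost, x0, method="L-BFGS-B", bounds=bounds)
    return res.x
```

The box bounds keep the solution close to the physically meaningful primary calibration, mirroring the constraint structure of Eq. (14); the orthogonality of R would additionally need a rotation parameterization (e.g., a rotation vector) in a full implementation.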

## Table 1

Calibration parameters by Procam-calib tool, SLS-calib tool, and the proposed method.

Method | fc (fx, fy) / fp (fx, fy) | cc (cx, cy) / cp (cx, cy) | kc (k1–k5) / kp (k1–k5) | R (row 1; row 2; row 3) | T (T1, T2, T3) |
---|---|---|---|---|---|
Procam-calib tool | 4096.56, 4095.61 / 1433.94, 1426.67 | 1059.85, 826.02 / 517.57, 789.51 | 0.012, 0.181, 0.0, -0.0007, -0.0025 / 0.04, -0.004, 0.0, -0.018, 0.0001 | 0.9515, -0.0740, -0.2986; 0.0081, 0.9763, -0.2164; 0.3075, 0.2035, 0.9295 | 228.08, 11.32, 60.76 |
SLS-calib tool | 4096.12, 4095.28 / 1437.33, 1430.48 | 1057.50, 827.69 / 518.17, 846.25 | 0.017, 0.227, 0.0, 0.0007, 0.0029 / 0.073, 0.0512, 0.0, 0.0096, 0.001 | 0.9514, -0.0732, -0.2989; -0.0057, 0.9669, -0.2551; 0.3077, 0.2444, 0.9195 | 227.31, 10.25, 65.47 |
Proposed method | 4091.07, 4090.99 / 1438.23, 1430.70 | 1058.08, 827.42 / 518.02, 846.30 | -0.019, -0.011, -0.002, 0.0021, -0.0042 / -0.046, -0.01, -0.018, -0.0029, -0.0007 | 0.9523, -0.0733, -0.2967; -0.0288, 0.9669, -0.2551; 0.3072, 0.2442, 0.9189 | 226.41, 10.72, 65.18 |

## 4.2.

### Evaluation of Calibration Accuracy

The reference plane is also used to evaluate the accuracy of the different calibration parameters. The plane is placed at positions and poses different from those used in the optimization stage and is reconstructed with each of the three sets of calibration parameters listed in Table 1. The scanning region is about $400\times 300\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{mm}$. By fitting a plane to each of the three reconstructed point clouds, the distributions of the fitting errors are obtained as displayed in Fig. 6. From the results, we can see that, with the classical calibration parameters, distinct reconstruction errors arise at the plane corners and boundaries, as shown in Figs. 6(a) and 6(b). This is mainly caused by the inaccurately calibrated lens distortion parameters. The values of the metric error terms ${E}_{p}$, ${E}_{d}$, and ${E}_{\theta}$ are also calculated and given in Table 2. From the results, we can see that the planarity error with the optimized calibration parameters is improved 5 to 10 times over the classical calibration methods. The distance error with the optimized parameters is very close to 0, compared with values of 0.219 and 0.166 mm for the other two methods. The angular error is also improved. To evaluate the robustness of the optimized calibration parameters, the 3-D reconstruction procedure is repeated with different positions and poses of the target plane, and similar measurement results are obtained.

## Table 2

Evaluation of measurement accuracy by different calibration parameters.

Method | Ep max (mm) | Ep min (mm) | Ep mean (mm) | Ep std. (mm) | Ed (mm) | Eθ (deg) |
---|---|---|---|---|---|---|
Procam-calib tool | 0.680 | -0.851 | 0.144 | 0.196 | 0.219 | 0.126 |
SLS-calib tool | 0.721 | -0.397 | 0.09 | 0.114 | 0.166 | 0.083 |
Proposed method | 0.078 | -0.009 | 0.024 | 0.021 | 0.00024 | 0.069 |

## 4.3.

### Qualitative Evaluation of Calibration Parameters

This experiment evaluates the 3-D reconstruction quality achieved by the different calibration parameters. For objects with free-form surfaces, the calibration accuracy is difficult to evaluate from a single 3-D scan. To make the comparison, a rotation table is introduced to the projector-camera-based SLS, as shown in Fig. 4. The object is rotated with a fixed step of 30 deg, and the rotational axis is calculated by the method in Ref. 37. With a complete scan, 12 surface patches can be obtained. To align these surface patches, a rigid transformation is applied with the calculated rotational axis and the given rotation angle.
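The rigid alignment of the rotationally scanned patches can be sketched as a rotation about the calibrated axis via the Rodrigues rotation formula; this is our own illustrative implementation, with the axis given as a point on the axis plus a unit direction.

```python
import numpy as np

def rotate_about_axis(P, axis_point, axis_dir, angle_deg):
    """Rotate points P (N, 3) about an arbitrary axis in space.

    axis_point : any point on the rotation axis.
    axis_dir   : direction of the axis (normalized internally).
    angle_deg  : rotation angle in degrees.
    """
    k = axis_dir / np.linalg.norm(axis_dir)
    t = np.radians(angle_deg)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])            # cross-product matrix of k
    # Rodrigues formula: R = I + sin(t) K + (1 - cos(t)) K^2
    R = np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)
    return (P - axis_point) @ R.T + axis_point

# A 12-patch scan taken at 30-deg steps could then be merged by rotating
# patch i back by i * 30 deg about the calibrated axis, e.g.:
# merged = np.vstack([rotate_about_axis(p, axis_pt, axis_dir, -i * 30)
#                     for i, p in enumerate(patches)])
```

Because the axis and step angle are known from the turntable calibration, this alignment needs no iterative registration, which is why residual gaps between patches directly expose calibration errors.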

A plaster pot is used in this experiment, as shown in Fig. 7(a). The calibration parameters from the Procam-calib tool, the SLS-calib tool, and the proposed approach are used for the 3-D reconstruction. The registered 3-D models are shown in Figs. 7(b)–7(d). Red lines on the 3-D models indicate gaps between adjacent surface patches, i.e., regions where the reconstructed surface patches are distorted and cannot be well aligned. From Fig. 7(b), we can see that the 3-D model reconstructed with the calibration parameters of the Procam-calib tool has distinct distortions near the surface patch boundaries. In these areas, the adjacent scanned surface patches cannot be well registered. The reconstruction quality is improved with the calibration results of the SLS-calib tool, as shown in Fig. 7(c), but a few surface regions still cannot be well aligned. Figure 7(d) shows the 3-D model reconstructed with the calibration parameters of our method, where most of the surface patches are precisely registered. Figure 8(a) shows another 3-D object, which has abundant surface details, such as the hair and some text carved on it. Some areas of the model are enlarged in Fig. 8(b) for close observation. From the results, we can see that tiny features can be precisely registered, which benefits from the accurate calibration parameters. More experimental results are provided in Fig. 9 to show the high 3-D reconstruction quality brought by the accurate calibration parameters. With the above evaluations, the calibration accuracy of the proposed method is fully demonstrated.

## 5.

## Conclusions

In this work, an accurate and practical calibration method is introduced for the projector-camera-based SLS. In this method, a conventional calibration procedure is first applied for the primary calibration of the system. Then, a planar surface with some printed markers is used for the parameter optimization. All the intrinsic parameters of the camera and projector, together with the extrinsic parameters between them, are refined in a global optimization problem. Compared with classical calibration methods, which optimize the parameters in 2-D image space to minimize the reprojection errors, the proposed optimization approach is executed in 3-D space directly. Three error criteria are introduced as the objective functions, i.e., the planarity error, the distance error, and the angular error. Using the primary calibration parameters as initial values, the nonlinear multiobjective optimization problem can be solved to obtain the optimal calibration parameters.

The first experiment is conducted on the planar surface. The results show that, with the proposed calibration method, the 3-D measurement accuracy can be improved 5 to 10 times compared with classical calibration methods. The second experiment evaluates the 3-D reconstruction results qualitatively. The results show that 3-D models of higher quality can be obtained with the optimized calibration parameters. These comparisons fully demonstrate the improvement in calibration accuracy achieved by the proposed method. The proposed method is simple and easy to implement and can be widely used for SLS calibration to improve measurement accuracy.

## Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Nos. 61375041 and U1613213) and the Shenzhen Science Plan (Nos. JCYJ20140509174140685, JCYJ20150401150223645, and JSGG20150925164740726).

## References

## Biography

**Lei Nie** received his BS degree in computer science from Beihang University in 2010. He is a PhD student at Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences (CAS). His current research interests include machine learning and stereo vision.

**Yuping Ye** received his BS degree in electronics from Wuhan University in 2013. He is a master’s student at Shenzhen Institutes of Advance Technology, CAS. His current research interest is structured light-based 3-D sensing technology.

**Zhan Song** received his PhD in mechanical and automation engineering from Chinese University of Hong Kong, Hong Kong, in 2008. He is currently with the Shenzhen Institutes of Advanced Technology, CAS, as a professor. His current research interests include structured-light-based sensing, image processing, 3-D face recognition, and human-computer interaction.