Improving the accuracy of phase-shifting techniques
Optical Engineering 54(5), 054102 (13 May 2015). doi:10.1117/1.OE.54.5.054102
Abstract
The traditional phase-shifting profilometry technique is based on the projection of digital interference patterns and the computation of the absolute phase map. Recently, a method was proposed that interpolates phase values at corners detected with subpixel accuracy in order to locate them in the projector image, improving the camera–projector calibration. We propose a general strategy to improve the accuracy of the search for correspondences, which can be used to obtain high-precision three-dimensional reconstruction. Experimental results show that our strategy can outperform the precision of the traditional phase-shifting method.

1.

Introduction

Phase-shifting profilometry (PSP) has been used for accurate three-dimensional (3-D) measurement of objects in many applications, such as biometrics, entertainment, and industrial inspection, among others. An important stage in PSP is the camera–projector calibration; thus, improved phase-shifting algorithms have been proposed. Yu et al.1 proposed an algorithm based on interpolating absolute phase values at known feature points with subpixel accuracy in the image data. A similar idea was proposed by Li et al.2 and used to improve the camera and projector calibration. This method, however, has only been applied when a set of feature points can be determined with high accuracy, usually the detected corners of a chessboard calibration pattern. In this paper, we propose a general strategy to enhance the accuracy of the 3-D measurement of objects using phase-shifting techniques by improving the search for correspondences. It is based on the interpolation of absolute phase values over a set of feature points in the image data. We show that for textured objects, this method produces high-precision 3-D surface coordinates for the set of feature points. For textureless objects, a texture image can be projected in order to apply the proposed method. The application of our method is two-fold: improving the accuracy of the camera–projector calibration and improving the accuracy of the phase-to-height conversion algorithm. We validate our method on real data and compare the obtained object profiles with those of the traditional phase-shifting algorithm without phase interpolation.

2.

Principle

The intensities of the phase-shifted3 images at every pixel (x,y) are given by

(1)

I_k(x,y) = I'(x,y) + I''(x,y)\cos[\Phi(x,y) + \delta_k],
where I′(x,y) is the average intensity, I″(x,y) is the intensity modulation, δ_k is the introduced phase shift, and Φ(x,y) is the phase to be determined, for k = 1, …, N. A least-squares solution4 for the phase, with δ_k = 2π(k−1)/N, is given by

(2)

\Phi(x,y) = \tan^{-1}\left[\frac{\sum_{k=1}^{N} I_k \sin(2\pi k/N)}{\sum_{k=1}^{N} I_k \cos(2\pi k/N)}\right].

The phase Φ obtained from Eq. (2) is called the wrapped phase, which is characterized by modulo-2π discontinuities; a continuous phase can be obtained by using an unwrapping algorithm.5 Zhang and Huang6 proposed a method for projector calibration based on absolute phase maps, which can be obtained if a set of reference points between the camera and the projector is known. The absolute phase is computed as Φ_a(x,y) = Φ(x,y) − Φ_0, where Φ_0 is the average of the phase values over the set of reference points. A one-to-one correspondence between the camera and projector pixels is obtained by considering vertical and horizontal absolute phase maps, Φ_v and Φ_h, respectively. Thus, for every camera pixel (x,y), the corresponding projector pixel (x′,y′) is given as

(3)

x' = \frac{p\,\Phi_h(x,y)}{2\pi} + \frac{H}{2}, \qquad y' = \frac{p\,\Phi_v(x,y)}{2\pi} + \frac{W}{2},
where W, H, and p are the width, height, and pitch of the fringe patterns, respectively.
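To make the two formulas concrete, the following NumPy sketch computes the wrapped phase of Eq. (2) and the camera-to-projector correspondence of Eq. (3). It is illustrative only: the shift convention δ_k = 2π(k−1)/N is folded into the arctangent (so the estimator agrees with Eq. (2) up to a fixed phase offset), the constants in projector_pixel (pitch scaling by 2π and half-size offsets) are our reading of Eq. (3), and the function names are ours, not the authors'.

```python
import numpy as np

def wrapped_phase(images):
    """Least-squares wrapped phase from N fringe images with shifts
    delta_k = 2*pi*(k-1)/N; returns values in (-pi, pi]."""
    I = np.asarray(images, dtype=float)        # shape (N, rows, cols)
    delta = 2 * np.pi * np.arange(I.shape[0]) / I.shape[0]
    s = np.tensordot(np.sin(delta), I, axes=(0, 0))
    c = np.tensordot(np.cos(delta), I, axes=(0, 0))
    return -np.arctan2(s, c)                   # wrapped phase map

def projector_pixel(phase_h, phase_v, p, W, H):
    """Eq. (3): map absolute phase maps to projector coordinates,
    for fringe pitch p and a W x H projector."""
    xp = p * phase_h / (2 * np.pi) + H / 2
    yp = p * phase_v / (2 * np.pi) + W / 2
    return xp, yp
```

For N = 4 shifted images of a synthetic phase map, wrapped_phase recovers the input phase exactly wherever it lies inside (−π, π]; outside that interval, the modulo-2π discontinuities appear and unwrapping5 is required.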

3.

Our Proposal

Our proposal consists of two main stages: camera–projector calibration and 3-D reconstruction. All steps to obtain the 3-D reconstruction are shown in Fig. 1. The traditional output of the first stage is the set of intrinsic and extrinsic calibration parameters. Instead, we generate another set of calibration parameters by using the last two steps as follows. Corner detection and interpolation detects the corners (at subpixel accuracy) in each camera image of the calibration pattern. Given the list of corner positions, the corresponding projector pixels are calculated through Eq. (3) and phase interpolation.7 Then, camera–projector calibration8 is performed using this list of corners.
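The phase interpolation step can be sketched as follows: given an absolute phase map and a list of subpixel corner positions, sample the phase between pixels and feed the result to Eq. (3). The paper relies on the interpolation routines of Ref. 7 without fixing a scheme, so bilinear interpolation is used here purely as a stand-in, and interp_bilinear is our name for this hypothetical helper.

```python
import numpy as np

def interp_bilinear(phase, pts):
    """Sample a phase map at subpixel points pts[:, (x, y)] by
    bilinear interpolation of the four surrounding pixels."""
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    x0 = np.clip(np.floor(x).astype(int), 0, phase.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, phase.shape[0] - 2)
    fx, fy = x - x0, y - y0                    # fractional offsets
    return (phase[y0, x0] * (1 - fx) * (1 - fy)
            + phase[y0, x0 + 1] * fx * (1 - fy)
            + phase[y0 + 1, x0] * (1 - fx) * fy
            + phase[y0 + 1, x0 + 1] * fx * fy)
```

On a phase map that is linear in x and y, the interpolated value at a subpixel position is exact, which is the regime that matters locally for a smooth fringe phase.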

Fig. 1

Fundamental stages for the camera–projector calibration and three-dimensional reconstruction.


The second stage is the 3-D reconstruction, which consists of (a) projection of six vertical and horizontal interference patterns onto an object in the scene, together with vertical and horizontal central lines as reference images; (b) computation of the continuous phase and absolute phase of the object; (c) detection of feature points in the captured image data of the object, detailed below; and (d) computation of 3-D coordinates using a phase-to-height algorithm.

The interpolation of feature points is a special step that deserves description. First, an image of the object in the scene is captured. Second, an initial set of corners is detected using the smallest univalue segment assimilating nucleus (SUSAN)9 corner detector. Third, a refinement algorithm10 is applied to the initial set of corners to obtain a set of corners at subpixel accuracy. Finally, the corresponding projector positions are computed for the detected corners using phase interpolation. With these steps, many corners are detected for objects with regular texture. However, the strategy fails to find a significant set of corners on textureless objects. An alternative for this kind of object is to project a texture image onto the object in order to provide it with a regular texture.
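The refinement criterion of Ref. 10 rates a candidate corner by the smaller eigenvalue of the local structure tensor: both eigenvalues are large only where the image gradient varies in two directions. The sketch below computes that response with NumPy; it is a stand-in for illustration, not the SUSAN detector of Ref. 9 nor the exact refinement procedure of Ref. 10.

```python
import numpy as np

def min_eigen_response(img):
    """Shi-Tomasi corner response: the smaller eigenvalue of the
    2x2 structure tensor, box-smoothed over a 3x3 window."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)                  # image gradients

    def box3(a):
        # 3x3 box filter via shifts (wraps at the border;
        # harmless away from it).
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out / 9.0

    a, b, c = box3(gx * gx), box3(gx * gy), box3(gy * gy)
    # Smaller eigenvalue of [[a, b], [b, c]].
    return 0.5 * ((a + c) - np.sqrt((a - c) ** 2 + 4 * b ** 2))
```

On a synthetic step corner the response peaks at the corner and vanishes along straight edges, which is why thresholding it isolates corner candidates.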

4.

Experimental Evaluation

In order to validate the calibration results, we compare two cases: in the first case, projector images were generated to calibrate the projector; in the second case, we use the corners detected in the camera calibration patterns to interpolate the corresponding corners in the projector. Table 1 shows the standard deviation of the corner reprojection error (in pixels) in both the x and y directions for the projector calibration. Row 1 of Table 1 shows the pixel error of the projector calibration using projector images, while rows 2 to 5 show the pixel errors using nearest, linear, cubic, and spline interpolation of the set of corners in the camera. We found that linear interpolation of the detected corners in the camera gives the best result for the calibration of the projector, whereas nearest-neighbor interpolation gives the worst. Figure 2 shows the corner reprojection error for the camera calibration and for the projector calibration using linear interpolation of the camera corners.

Table 1

Pixel error in the x and y directions of the projector calibration.

Method       x (in pixels)   y (in pixels)
Traditional  0.09817         0.14074
Nearest      0.16480         0.19747
Linear       0.08306         0.11522
Cubic        0.08581         0.11739
Spline       0.08695         0.11856

Fig. 2

Reprojection corner error of the calibration for the (a) camera and (b) projector using linear interpolation of corners on the captured calibration patterns.


Figure 3(a) shows a steel plane surface onto which a texture image was projected in order to provide it with a regular texture. Figure 3(b) shows the corners detected on Fig. 3(a), and Fig. 3(c) shows the detected corners at subpixel precision. Figure 3(d) shows the reconstructed profile of the plane surface using the traditional phase-shifting method, and Fig. 3(e) shows the reconstruction using interpolation of the feature points obtained through phase interpolation, shown in Fig. 3(c). The reconstruction results in Figs. 3(d) and 3(e) were obtained using the calibration parameters corresponding to linear interpolation in Table 1. We fit an ideal plane to the point clouds shown in Figs. 3(d) and 3(e) for the traditional and proposed methods, respectively. Table 2 shows quantitative results for both methods; in the last four rows, nearest, linear, cubic, and spline interpolation of the feature points were used. The mean, sum of squared errors (SSE), and standard deviation of the orthogonal distances of every point to the fitted plane are shown. These quantities were obtained by averaging 100 iterations over different samples of 1000 randomly chosen points. We found that the proposed method using linear interpolation of feature points is more accurate than the traditional method, whereas nearest-neighbor interpolation of feature points gives the worst accuracy over all the cases.
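The figures reported in Table 2 can be reproduced for any point cloud with a short NumPy routine: fit a plane by total least squares via the SVD and summarize the orthogonal point-to-plane distances. This is a generic sketch of the evaluation, assuming a total-least-squares plane fit; plane_fit_errors is our name for the helper.

```python
import numpy as np

def plane_fit_errors(points):
    """Fit a plane to an (n, 3) point cloud by total least squares
    (SVD) and return (SSE, mean, std) of the orthogonal distances."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The plane normal is the right singular vector associated
    # with the smallest singular value.
    normal = np.linalg.svd(centered)[2][-1]
    r = np.abs(centered @ normal)              # orthogonal distances
    return float((r ** 2).sum()), float(r.mean()), float(r.std())
```

Averaging this over repeated random samples of 1000 points mirrors the evaluation protocol described above.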

Fig. 3

(a) High precision steel surface, (b) projected texture, (c) subpixel detected corners on image shown in (b), (d) reconstruction using the classic phase-shifting method, and (e) reconstruction using the phase interpolation method for the detected corners shown in (c).


Table 2

The sum of squared errors (SSE), the mean distance, and the standard deviation of the orthogonal distances of 1000 randomly chosen points to a fitted plane are shown. The units of the results are given in millimeters.

Method       SSE      Mean distance   Std
Traditional  10.449   0.081           0.049
Nearest      12.937   0.089           0.061
Linear       9.736    0.079           0.045
Cubic        9.813    0.079           0.047
Spline       9.797    0.079           0.047

5.

Conclusions

A method is proposed to improve the accuracy of traditional phase-shifting techniques. It is based on the detection of a set of feature points in the image space at subpixel accuracy and on phase interpolation to generate their corresponding projector positions. The set of feature points is computed directly for objects with enough texture; for a textureless object, a texture image is projected over the scene. We found experimentally that the proposed method improves the accuracy of the traditional phase-shifting technique.

References

1. H. Yu et al., "3D profilometry system based on absolute phase calibration," Proc. SPIE 6357, 63570M (2006). http://dx.doi.org/10.1117/12.716744

2. Z. Li et al., "Accurate calibration method for a structured light system," Opt. Eng. 47(5), 053604 (2008). http://dx.doi.org/10.1117/1.2931517

3. J. Geng, "Structured-light 3D surface imaging: a tutorial," Adv. Opt. Photonics 3(2), 128–160 (2011). http://dx.doi.org/10.1364/AOP.3.000128

4. Z. Malacara and M. Servín, Eds., Interferogram Analysis for Optical Testing, 2nd ed., CRC Press, Taylor & Francis, Boca Raton, Florida (2010).

5. D. Ghiglia and M. Pritt, Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software, John Wiley and Sons, New York (1998).

6. S. Zhang and P. S. Huang, "Novel method for structured light system calibration," Opt. Eng. 45(8), 083601 (2006). http://dx.doi.org/10.1117/1.2336196

7. W. H. Press et al., Eds., Numerical Recipes in C: The Art of Scientific Computing, 2nd ed., Cambridge University Press, New York (1992).

8. J. Y. Bouguet, "Camera calibration toolbox for MATLAB," www.vision.caltech.edu/bouguetj (1995).

9. S. M. Smith and J. M. Brady, "SUSAN–a new approach to low level image processing," Int. J. Comput. Vision 23(1), 45–78 (1995). http://dx.doi.org/10.1023/A:1007963824710

10. J. Shi and C. Tomasi, "Good features to track," in Proc. 1994 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR'94), pp. 593–600 (1994).

Biographies for the authors are not available.

William Cruz-Santos, Lourdes López-García, Arturo Redondo-Galvan, "Improving the accuracy of phase-shifting techniques," Optical Engineering 54(5), 054102 (13 May 2015). http://dx.doi.org/10.1117/1.OE.54.5.054102
Keywords: Projection systems, Calibration, Cameras, Phase shifts, Corner detection, 3D image reconstruction, Phase shift keying
