Nonlinearity calibrating algorithm for structured light illumination
7 May 2014
Optical Engineering, 53(5), 050501 (2014). doi:10.1117/1.OE.53.5.050501
In structured light illumination (SLI), the nonlinear distortion of the optical devices dramatically degrades the accuracy of three-dimensional reconstruction when only a small number of projected patterns is used. We propose a universal algorithm that calibrates these device nonlinearities so the patterns can be accurately precompensated. Thus, no postprocessing is needed to correct for the distortions, while the number of patterns can be reduced to a minimum. Theoretically, the proposed method can be applied to any SLI pattern strategy. Using a three-pattern SLI method, our experimental results show a 25× to 60× reduction in surface variance for a flat target, depending upon any surface smoothing that might be applied to remove Gaussian noise.
Liu, Wang, Lau, Barner, and Kiamilev: Nonlinearity calibrating algorithm for structured light illumination



In order to achieve real-time three-dimensional (3-D) reconstruction by using structured light illumination (SLI), reducing the number of projected patterns to as few as possible runs into practical limitations of the hardware. For example, Liu et al.1 demonstrated both theoretically and experimentally that gamma distortion is of little or no consequence when scanning with more than eight patterns. For smaller numbers of patterns, Liu et al. proposed postprocessing correction methods, which are very time-consuming.

Wang et al.2 proposed a gamma compensation method that applies an inverse gamma function to the patterns before they are projected, but in practice, the nonlinearity of a projector cannot be described by a single parameter.3 Su et al.4 proposed defocusing the projector to filter out the high-order harmonics of the patterns, but this has the unwanted consequence of reducing the signal power. Although Zhang et al. and Xu et al. developed several lookup-table-based solutions5,6,7 to compensate for the distorted phase, the accuracy of compensation depends greatly on the length of the tables as well as the method of interpolation.

In this letter, we move beyond the gamma-model limitation of these previous studies and allow for arbitrary projector responses, making use of the coding relationship between phase and intensities to propose a robust and universal intensity precompensation algorithm that solves for nonlinearities in grayscale SLI pattern schemes. Our process involves measuring the phase response of the scanner, distorted by nonlinearities in the projector, to produce an intensity precompensation look-up table (IPLUT), and then precompensating the SLI patterns with the IPLUT before scanning such that postprocessing is no longer warranted. Theoretically, this method can be applied to any SLI pattern scheme, but it will be verified here with three-step phase measuring profilometry (PMP). Experimental results will show a 25× to 60× reduction in the root-mean-square (RMS) phase error, depending on noise suppression through smoothing filters.



As an active stereo vision technique, SLI computes the correspondence between a pixel, (x_c, y_c), in a camera and a pixel, (x_p, y_p), in a projector by decoding the structured patterns. In the case of an epipolar-rectified camera/projector pair, deriving the correspondence for (x_c, y_c) requires searching only for the corresponding coordinate y_p. A common expression for the projected intensity coding y_p by visible light intensity is given by

I^p_{y_p} = g(f(y_p)),  (1)

where I^p_{y_p} is the projected intensity for all projector pixels with the row coordinate y_p, f(·) is an ideal mapping of the row coordinate y_p for the selected SLI coding strategy, and g(·) is the nonlinear distortion function of the projector. After I^p_{y_p} is reflected by the scanned object, the captured intensity is modeled as

I^c = α I^p_{y_p} + β,  (2)

where α ∈ [0, 1] is the reflectivity of the scanned object and β is the intensity of the ambient light.

In order to calibrate the nonlinear relationship between I^p_{y_p} and y_p in Eq. (1) via Eq. (2), an intuitive approach is to project a series of constant-intensity patterns ranging from 0 to 2^k − 1 (where k is the number of grayscale bits), and then use the measured output intensities recorded by the camera to build a tone correction curve that linearizes the projector. But this requires a calibration procedure using 2^k patterns. If we instead take advantage of the coding principle of SLI, i.e., the relationship between the phase and the intensities, we can significantly reduce the number of calibration patterns and robustly calibrate the nonlinearities of the SLI system.
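For illustration only, the intuitive tone-curve approach can be sketched in NumPy as follows (a hypothetical implementation with simulated camera readings; the function name and arguments are ours, not part of the letter's experiments):

```python
import numpy as np

def tone_curve(levels, captured_means):
    """Naive projector linearization: for each of the 2**k constant-intensity
    patterns, captured_means holds the mean camera reading; the returned
    curve maps each desired (linear) output to the input level producing it."""
    lo, hi = captured_means.min(), captured_means.max()
    # Desired linear camera response over the measurable range.
    target = np.linspace(lo, hi, len(levels))
    # Invert the measured response by interpolation (assumes monotonicity).
    return np.interp(target, captured_means, levels)
```

This is precisely the 2^k-pattern calibration procedure that the phase-based method avoids.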

For the purpose of eliminating the phase error introduced by the projector nonlinearity g(·) of Eq. (1), we employ uncompensated PMP on a target surface (not necessarily flat nor white) to obtain a reference phase map, where the n'th of N projected patterns is defined as

I^p_n(x_p, y_p) = A^p + B^p cos(2π(f y_p/H − n/N)),  (3)

where A^p and B^p are constants controlling the offset and modulation of the sinusoid, f is the number of sinusoidal periods traversing the pattern from top to bottom, and H is the vertical resolution of the projector. Given a sufficiently large N, the normalized phase, ϕ_r, calculated by

ϕ_r = ϕ/(2πf),  (4)

where ϕ is unwrapped from the wrapped phase ϕ_w computed by

ϕ_w = tan^{−1}[(Σ_{n=0}^{N−1} I^c_n sin(2πn/N))/(Σ_{n=0}^{N−1} I^c_n cos(2πn/N))],  (5)

will vary linearly from 0 to 1 across the captured image.1
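The pattern generation and wrapped-phase computation described above can be sketched in NumPy as follows (a hypothetical illustration, not the authors' implementation; function names, parameter defaults, and the sign convention of the sinusoid are our assumptions):

```python
import numpy as np

def pmp_patterns(N=3, f=8, H=480, W=640, Ap=127.5, Bp=127.5):
    """Generate the N phase-shifted sinusoidal PMP patterns of Eq. (3)."""
    yp = np.arange(H).reshape(-1, 1)                     # row coordinate y_p
    rows = [Ap + Bp * np.cos(2 * np.pi * (f * yp / H - n / N))
            for n in range(N)]
    return np.stack([np.tile(r, (1, W)) for r in rows])  # shape (N, H, W)

def decode_phase(captured):
    """Wrapped phase from N captured images via the arctangent of Eq. (5)."""
    N = captured.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(captured * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(captured * np.cos(2 * np.pi * n / N), axis=0)
    return np.mod(np.arctan2(num, den), 2 * np.pi)       # in [0, 2*pi)
```

With an ideal (linear) projector, the decoded phase, unwrapped and divided by 2πf, varies linearly with y_p/H, which is the reference ϕ_r above.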

In the situation where y_p is linearly coded as

f(y_p) = (2^k − 1) y_p/H,  (6)

the nonlinearly distorted phase ϕ_d, ranging from 0 to 1, can be directly computed from Eq. (2) by canceling α and β, and we can, therefore, derive a look-up table mapping the nonlinearly distorted phase image, ϕ_d, to the reference phase image, ϕ_r, as

IPLUT[(2^k − 1)ϕ_d] = (2^k − 1)ϕ_r,  (7)

which effectively corrects the distortion to produce a linear response. Note that the computed ϕ_r and ϕ_d are related to neither α nor β; therefore, the calibration target need not be flat nor of uniform texture, but it must be large enough to nearly match the illumination area of the projector. Now, employing Eq. (7) as an intensity precompensation step, we apply IPLUT[f(y_p)] to the intensity values, f(y_p), of Eq. (3) for the projected patterns stored in memory, as a one-time process, such that the captured intensities respond linearly and no postprocessing is needed to correct nonlinearities.
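To illustrate the derivation of the look-up table and the one-time precompensation step, here is a minimal NumPy sketch (a hypothetical implementation, not the authors' code; the function names and the k = 8 grayscale assumption are ours):

```python
import numpy as np

def build_iplut(phi_d, phi_r, k=8):
    """Build the intensity precompensation look-up table of Eq. (7).

    phi_d, phi_r: flattened distorted and reference phase maps in [0, 1].
    Returns 2**k entries mapping each ideal intensity f(y_p) to the input
    intensity that the distorted projector needs to reproduce it linearly.
    """
    levels = 2 ** k - 1
    order = np.argsort(phi_d)            # np.interp needs increasing x
    xd = phi_d[order] * levels           # distorted (observed) intensities
    xr = phi_r[order] * levels           # reference (linear) intensities
    ideal = np.arange(levels + 1)
    # Missing entries are filled by interpolation between measured samples.
    return np.round(np.interp(ideal, xd, xr)).astype(np.uint8)

def precompensate(pattern, iplut):
    """One-time precompensation of a stored 8-bit SLI pattern."""
    return iplut[pattern]
```

Applied once to the stored patterns, the projector's own distortion then cancels the precompensation, yielding a linear captured response.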



In our experiments, we calibrate the nonlinearities of our system to construct an IPLUT and then employ three-step PMP (the minimum number of patterns for PMP) to verify the proposed intensity precompensation algorithm. Our SLI system includes an AVT (Burnaby, Canada) Prosilica GC650M camera and a Casio XJ-A155V projector. Our verification targets include a nearly flat and white plasterboard wall, a color-textured pillow [Fig. 1(a)], and a color-textured plush toy bear [Fig. 1(b)]. The textured pillow is used as the nonlinearity calibration target to show that our proposed procedure does not require a classic flat and textureless target; the white wall is used for evaluating the phase-error performance of our algorithm; and the textured bear is used for visually demonstrating the effect of our method.

Fig. 1

Scanned objects: (a) a color-textured pillow and (b) a color-textured bear.


Using our textured pillow along with PMP patterns generated by Eq. (3) with parameters N=60, f=8, and A^p=B^p=127.5 (for k=8), ϕ_r is computed through Eq. (4), with ϕ_d extracted from the pattern coded by Eq. (6). The relationship of ϕ_d versus ϕ_r is shown as the dashed curve in Fig. 2. Via Eq. (7), an IPLUT is then constructed, shown as the solid curve in Fig. 2. We note that this mapping can be derived by any traditional curve-fitting technique, where missing intensity values in either the distorted or the ideal intensity map are interpolated from the available measurements. After the intensities of the directly coded pattern are precompensated through the IPLUT, the new intensity response varies linearly, as illustrated by the dotted line in Fig. 2.

Fig. 2

Original response, intensity precompensation look-up table (IPLUT), and intensity-precompensated response.


Using the nearly flat wall with uncorrected three-step PMP (f=8, N=3), the absolute phase error for the phase computed by Eq. (5) is severely distorted, as illustrated by the solid line in Fig. 3, which shows the cross-section of the phase error along the 320th column. After the IPLUT, derived from the textured pillow, is applied, the phase error is significantly reduced, as illustrated by the dashed line in Fig. 3. Comparing the RMS errors of the two, we see a 25× reduction, from 0.2409 to 0.0096. When evaluated over the whole valid area of the phase image, the RMS phase error is likewise reduced by 25×, from 0.2467 to 0.0098. Moreover, if we suppress noise by averaging 20 groups of scans, as Liu et al. did in Ref. 1, the phase error is further reduced, by a factor of 60.

Fig. 3

Cross-section of phase error at the 320th column.


Now, using the phase computed from the intensity-precompensated three-step PMP patterns, we see a significant improvement in 3-D scan quality for the color-textured bear, as shown in Fig. 4(c), compared with the depth extracted by a first-generation Microsoft (Redmond, Washington) Kinect camera [Fig. 4(a)] and by our SLI system before the nonlinearity is precompensated [Fig. 4(b)]. In Fig. 4(b), an obvious wave-like nonlinear distortion appears in the 3-D reconstruction of the bear, and the RMS error along the depth direction is 0.5930; in Fig. 4(c), this distortion is effectively corrected and the RMS error is decreased by 23× to 0.0258.

Fig. 4

Depth of the bear extracted by (a) the MS Kinect camera, (b) structured light illumination (SLI) with the nonlinearity uncorrected, and (c) SLI with the nonlinearity corrected by the proposed algorithm.




In this letter, we propose a universal intensity precompensation algorithm to correct nonlinear distortion in an SLI system. With this algorithm, the intensities of the projected patterns are precompensated before being projected such that no postprocessing is needed. Thus, we ultimately demonstrate that a minimum number of SLI patterns can be used without extensive correction procedures. Verified by three-step PMP, the RMS phase error is reduced by 25× without noise suppression and by 60× with noise suppression. We believe the proposed algorithm will also benefit one-shot color-coded SLI methods, which we leave as future work.


This work was sponsored, in part, by the research startup funding of Sichuan University, China, under contract #2082204164059, and by the Science and Technology Support Program of Sichuan Province, China, under contract #2014GZ0005.


1. K. Liu et al., "Gamma model and its analysis for phase measuring profilometry," J. Opt. Soc. Am. A 27(3), 553–562 (2010). http://dx.doi.org/10.1364/JOSAA.27.000553

2. Z. Wang, D. A. Nguyen, and J. C. Barnes, "Some practical considerations in fringe projection profilometry," Opt. Laser Eng. 48(2), 218–225 (2010). http://dx.doi.org/10.1016/j.optlaseng.2009.06.005

3. H. Guo, H. He, and M. Chen, "Gamma correction for digital fringe projection profilometry," Appl. Opt. 43(14), 2906–2914 (2004). http://dx.doi.org/10.1364/AO.43.002906

4. X. Su et al., "Automated phase-measuring profilometry using defocused projection of a Ronchi grating," Opt. Commun. 94(6), 561–573 (1992). http://dx.doi.org/10.1016/0030-4018(92)90606-R

5. Y. Xu et al., "Phase error compensation for three-dimensional shape measurement with projector defocusing," Appl. Opt. 50(17), 2572–2581 (2011). http://dx.doi.org/10.1364/AO.50.002572

6. S. Zhang and P. S. Huang, "Phase error compensation for a 3-D shape measurement system based on the phase-shifting method," Opt. Eng. 46(6), 063601 (2007). http://dx.doi.org/10.1117/1.2746814

7. S. Zhang and S. Yau, "Generic nonsinusoidal phase error correction for three-dimensional shape measurement using a digital video projector," Appl. Opt. 46(1), 36–43 (2007). http://dx.doi.org/10.1364/AO.46.000036

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Kai Liu, Shuaijun Wang, Daniel L. Lau, Kenneth E. Barner, Fouad E. Kiamilev, "Nonlinearity calibrating algorithm for structured light illumination," Optical Engineering 53(5), 050501 (7 May 2014). https://doi.org/10.1117/1.OE.53.5.050501
