1. Introduction

In order to achieve real-time three-dimensional (3-D) reconstructions using structured light illumination (SLI), reducing the number of projected patterns to as few as possible runs into practical limitations of the hardware. For example, Liu et al.1 demonstrated both theoretically and experimentally that gamma distortion is of little or no consequence when scanning with more than eight patterns; as a result, Liu et al. proposed postprocessing methods, which are very time-consuming. Wang et al.2 proposed a gamma compensation method that applies an inverse gamma function to the patterns before they are projected, but in practice the nonlinearity of a projector cannot be described by a single parameter.3 Su et al.4 proposed defocusing the projector to filter out the high-order harmonics of the patterns, but this has the unwanted consequence of reducing the signal power. Although Zhang et al. and Xu et al. developed several lookup-table-based solutions5–7 to compensate for the distorted phase, the accuracy of their compensation depends greatly on the length of the tables as well as the method of interpolation. In this letter, we set aside the gamma-model limitation of these previous studies to allow for arbitrary projector responses, and we make use of the coding relationship between phase and intensities to propose a robust and universal intensity precompensation algorithm for correcting nonlinearities in grayscale SLI pattern schemes. Our process involves measuring the phase response of the scanner, distorted by nonlinearities in the projector, to produce an intensity precompensation look-up table (IPLUT), and then precompensating the SLI patterns with the IPLUT before scanning such that postprocessing is no longer warranted. Theoretically, this method can be applied to any SLI pattern scheme, but it will be verified here with three-step phase measuring profilometry (PMP).
Experimental results will show a reduction of roughly 96% in the root-mean-square (RMS) of phase error, with further reduction available through noise suppression by smoothing filters.

2. Method

As an active stereo vision technique, SLI computes the correspondence between a pixel, $(x^c, y^c)$, in a camera and a pixel, $(x^p, y^p)$, in a projector by decoding the structured patterns. In the case of an epipolar-rectified camera/projector pair, deriving the correspondence for $(x^c, y^c)$ need only search for the corresponding row coordinate $y^p$. A common expression for the projected intensity coding $y^p$ by visible light intensity is given by

  $I^p(x^p, y^p) = f\{I(y^p)\}$,  (1)

where $I^p$ is the projected intensity for all projector pixels with the row coordinate $y^p$, $I(y^p)$ is an ideal mapping of the row coordinate for the selected SLI coding strategy, and $f(\cdot)$ is the nonlinear distortion function of the projector. After $I^p$ is reflected by the scanned object, the captured intensity is modeled as

  $I^c(x^c, y^c) = \alpha(x^c, y^c)\, I^p(x^p, y^p) + \beta(x^c, y^c)$,  (2)

where $\alpha$ is the reflectivity of the scanned object and $\beta$ is the intensity of ambient light. In order to calibrate the nonlinear relationship between $I(y^p)$ and $I^c$ in Eq. (1) via Eq. (2), an intuitive approach is to project a series of constant-intensity patterns ranging from 0 to $G-1$ ($G$ being the number of grayscale levels), and then use the measured output intensities recorded by the camera to build a tone-correction curve that linearizes the projector. But again, this requires a calibration procedure using $G$ patterns. If we instead take advantage of the coding principle of SLI, i.e., the relationship between the phase and the intensities, we can significantly reduce the number of calibration patterns and robustly calibrate the nonlinearities of the SLI system. For the purpose of eliminating the phase error introduced by the projector nonlinearity $f(\cdot)$ in Eq.
(1), we employ uncompensated PMP on a target surface (not necessarily flat nor white) to obtain a reference phase map, where the $n$'th of $N$ projected patterns is defined as

  $I_n^p(x^p, y^p) = A + B\cos\!\left(\frac{2\pi n}{N} - \frac{2\pi f y^p}{H}\right)$,  (3)

where $A$ and $B$ are constants controlling the offset and modulation of the sinusoid, $f$ is the number of sinusoidal periods traversing the pattern from top to bottom, and $H$ is the vertical resolution of the projector. Given sufficiently large $N$, the normalized phase, $\hat{\phi}$, calculated by

  $\hat{\phi}(x^c, y^c) = \frac{\Phi(x^c, y^c)}{2\pi f}$,  (4)

where $\Phi$ is unwrapped from the wrapped phase $\phi$ computed by

  $\phi = \tan^{-1}\!\left[\frac{\sum_{n=0}^{N-1} I_n^c \sin(2\pi n/N)}{\sum_{n=0}^{N-1} I_n^c \cos(2\pi n/N)}\right]$,  (5)

with $I_n^c$ the intensity captured in response to $I_n^p$, will vary linearly from 0 to 1 across the captured image.1 In the situation where $y^p$ is linearly coded as

  $I^p(y^p) = (G-1)\,\frac{y^p}{H}$,  (6)

the nonlinearly distorted phase $\tilde{\phi}$, ranging from 0 to 1, can be directly computed from Eq. (2) by canceling $\alpha$ and $\beta$, and we can, therefore, derive a look-up table mapping the nonlinearly distorted phase image $\tilde{\phi}$ to the reference phase image $\hat{\phi}$ as

  $\mathrm{IPLUT}\{(G-1)\,\tilde{\phi}(x^c, y^c)\} = (G-1)\,\hat{\phi}(x^c, y^c)$,  (7)

which effectively corrects the distortion to produce a linear response. Note that the computed $\hat{\phi}$ and $\tilde{\phi}$ are related to neither $\alpha$ nor $\beta$; therefore, the calibration target need not be flat nor of uniform texture, only large enough to nearly match the illumination area of the projector. Now, while we could employ Eq. (7) as a phase postprocessing step, we instead apply it to the intensity values, $I_n^p$, of Eq. (3) for the projected patterns stored in memory, as a one-time process, such that the captured intensities respond linearly and no postprocessing is needed.

3. Results

In our experiments, we calibrate the nonlinearities of our system to construct an IPLUT and then employ three-step PMP (the minimum number of patterns) to verify the proposed intensity precompensation algorithm. Our SLI system includes an AVT (Burnaby, Canada) Prosilica GC650M camera and a Casio XJ-A155V projector. Our verification targets include a nearly flat and white plaster-board wall, a color-textured pillow [Fig. 1(a)], and a color-textured plush toy bear [Fig. 1(b)].
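As an illustration of the reference-phase computation above, the following is a minimal numpy sketch under assumed values of N, f, and H; the function names and the spatial-unwrapping shortcut (anchoring the unwrapped phase at row 0) are ours, not the letter's:

```python
import numpy as np

def pmp_patterns(N, f, H, A=127.5, B=127.5):
    """Eq. (3): N phase-shifted sinusoidal patterns over H projector rows."""
    n = np.arange(N).reshape(-1, 1)        # pattern index n = 0..N-1
    y = np.arange(H).reshape(1, -1)        # projector row coordinate y^p
    return A + B * np.cos(2 * np.pi * n / N - 2 * np.pi * f * y / H)

def normalized_phase(I, f):
    """Eqs. (4)-(5): wrapped phase, unwrapped and normalized to [0, 1)."""
    N = I.shape[0]
    n = np.arange(N).reshape(-1, 1)
    s = (I * np.sin(2 * np.pi * n / N)).sum(axis=0)
    c = (I * np.cos(2 * np.pi * n / N)).sum(axis=0)
    phi = np.arctan2(s, c)                 # wrapped phase, Eq. (5)
    Phi = np.unwrap(phi)                   # unwrap along the column
    return (Phi - Phi[0]) / (2 * np.pi * f)  # Eq. (4), anchored at row 0

# With an ideal (distortion-free) projector, the recovered phase is a
# linear ramp y^p / H, as the letter states.
H, f, N = 480, 4, 16
phase = normalized_phase(pmp_patterns(N, f, H), f)
```

With a real scanner, the captured images $I_n^c$ would be passed in place of the ideal patterns; any projector nonlinearity then appears as a periodic ripple riding on this otherwise linear ramp.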
The textured pillow is used for the nonlinearity calibration, to show that our procedure does not require a classic flat and textureless target; the white wall is used for evaluating the phase error of our algorithm; and the textured bear is used for visually demonstrating the effect of our method. Using our textured pillow along with PMP patterns generated by Eq. (3), the reference phase $\hat{\phi}$ is computed through Eq. (4), with the distorted phase $\tilde{\phi}$ extracted from the pattern coded by Eq. (6). The relationship of $\tilde{\phi}$ versus $\hat{\phi}$ is shown as the dashed curve in Fig. 2. Via Eq. (7), an IPLUT is constructed, as shown by the solid curve in Fig. 2. We note that the exact method for deriving this mapping can be any traditional curve-fitting technique, where missing intensity values in either the distorted or ideal intensity map are derived by interpolation from the available measurements. Also, after the intensities of the directly coded pattern are precompensated through the IPLUT, the new intensity response varies linearly, as illustrated by the dotted line in Fig. 2. Using a nearly flat wall with uncorrected three-step PMP, the absolute phase error for the phase computed by Eq. (5) is severely distorted, as illustrated in Fig. 3 (solid line), which is the cross-section of phase error along the 320th column. After the IPLUT, derived from the textured pillow, is applied, the phase error is significantly reduced, as illustrated by the dashed line in Fig. 3. Comparing the RMS of the errors of the two, we see a 96% reduction in error, from 0.2409 to 0.0096. When we instead evaluate the whole valid area of the phase image, the RMS of phase error is reduced from 0.2467 to 0.0098, still by 96%. Moreover, if we suppress noise by averaging 20 groups of scans as Liu et al. did in Ref. 1, the phase error is further reduced by a factor of 60.
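The table construction and the one-time pattern precompensation can be sketched with plain linear interpolation (np.interp). The gamma = 2.2 curve below is purely an assumption standing in for the measured distorted-versus-reference phase pair:

```python
import numpy as np

G = 256                                   # grayscale levels 0..G-1
levels = np.arange(G, dtype=float)

# Stand-in calibration data: reference phase (linear) versus distorted
# phase under an assumed gamma-type projector response (gamma = 2.2).
ref = levels / (G - 1)                    # \hat{phi}, linear in y^p
dist = ref ** 2.2                         # \tilde{phi}, distorted response

# Eq. (7) as a table: for each intended (linear) level, look up the code
# the distorted projector must receive to actually produce it.  Because
# the distortion is monotone, swapping the axes inverts it.
iplut = np.interp(ref, dist, levels)

# One-time precompensation of a pattern held in memory.
pattern = (G - 1) * ref                   # a directly coded linear ramp
precomp = np.interp(pattern, levels, iplut)

# After the projector re-applies its distortion, the response is linear.
response = (G - 1) * (precomp / (G - 1)) ** 2.2
```

In the letter, the pair (dist, ref) comes from the phase maps measured on the pillow, and any curve-fitting technique may replace the linear interpolation used here.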
Now, using the phase computed from the intensity-precompensated three-step PMP patterns, we can see the significant improvement in 3-D scan quality for the color-textured bear, shown in Fig. 4(c), compared with the depth extracted by a first-generation Microsoft (Redmond, Washington) Kinect camera [Fig. 4(a)] and by our SLI system before the nonlinearity is precompensated [Fig. 4(b)]. In Fig. 4(b), obvious wave-like nonlinear distortion appears in the 3-D reconstruction of the bear, and the RMS of error along the depth direction is 0.5930, while in Fig. 4(c) the distortion is effectively corrected and the RMS of error is decreased by 96% to 0.0258.

4. Conclusion

In this letter, we propose a universal intensity precompensation algorithm to correct nonlinear distortion in an SLI system. With the algorithm, the intensities of the projected patterns are precompensated before being projected such that no postprocessing is needed. Thus, we ultimately demonstrate that we can continue to use a minimum number of SLI patterns without extensive correction procedures. Verified by three-step PMP, the RMS of phase error is reduced by 96% without noise suppression, and by a further factor of 60 with noise suppression. We believe the proposed algorithm will also be helpful for one-shot color coding schemes, which we leave as future work.

Acknowledgments

This work was sponsored, in part, by the research startup funding of Sichuan University, China, under contract #2082204164059, and by the Science and Technology Support Program of Sichuan Province, China, under contract #2014GZ0005.

References

K. Liu et al.,
“Gamma model and its analysis for phase measuring profilometry,”
J. Opt. Soc. Am. A 27(3), 553–562 (2010). http://dx.doi.org/10.1364/JOSAA.27.000553
Z. Wang, D. A. Nguyen, and J. C. Barnes,
“Some practical considerations in fringe projection profilometry,”
Opt. Lasers Eng. 48(2), 218–225 (2010). http://dx.doi.org/10.1016/j.optlaseng.2009.06.005
H. Guo, H. He, and M. Chen,
“Gamma correction for digital fringe projection profilometry,”
Appl. Opt. 43(14), 2906–2914 (2004). http://dx.doi.org/10.1364/AO.43.002906
X. Su et al.,
"Automated phase-measuring profilometry using defocused projection of a Ronchi grating,"
Opt. Commun. 94(6), 561–573 (1992). http://dx.doi.org/10.1016/0030-4018(92)90606-R
Y. Xu et al.,
“Phase error compensation for three-dimensional shape measurement with projector defocusing,”
Appl. Opt. 50(17), 2572–2581 (2011). http://dx.doi.org/10.1364/AO.50.00257
S. Zhang and P. S. Huang,
"Phase error compensation for a 3-D shape measurement system based on the phase-shifting method,"
Opt. Eng. 46(6), 063601 (2007). http://dx.doi.org/10.1117/1.2746814
S. Zhang and S. Yau,
“Generic nonsinusoidal phase error correction for three-dimensional shape measurement using a digital video projector,”
Appl. Opt. 46(1), 36–43 (2007). http://dx.doi.org/10.1364/AO.46.000036