## 1. Introduction

Three-dimensional shape sensing plays an important role in machine vision, reverse engineering, automated manufacturing, and other industrial applications. Full-field techniques, such as stereovision,^{1, 2} fringe projection,^{3, 4} and structured-light illumination,^{5, 6} have been recognized as promising methods for the measurement of a surface profile.

One of the major topics in 3-D sensing is the measurement of dynamic objects. For fast-moving objects, 3-D shape measurements always require that the observed images not be blurred by motion. High-speed cameras or stroboscopic illumination are commonly used to obtain unblurred images. Unfortunately, when the object speed exceeds the temporal resolution of the sensor, the image is blurred. Of course, an ultra-short laser pulse can still be used to freeze the motion on the image. However, the illumination intensity might not be sufficient for large-scale measurements, and the cost of such light sources is generally high.

In this paper, we show that projected fringe profilometry^{4} does not need to avoid blurred images. In a typical setup, we use a fringe pattern to illuminate the dynamic object and a CCD camera to record the fringe distribution. Fringes on the obtained image are deformed by the topography of the object and also blurred by motion. Theoretical analysis shows that objects moving within one period of the projected fringes can still be measured directly by projected fringe profilometry. Thus, the cost of the detection system is effectively reduced.

## 2. Theoretical Analysis

Figure 1 shows the system configuration. The $x$-$z$ plane lies in the figure plane, and the $y$ axis is normal to it. A fringe pattern is projected onto the inspected surface. The intensity of the fringes propagating in space is represented as

## Eq. 1

$$I_f(x,z)=a+b\cos\left(\frac{2\pi x}{T_x}+\frac{2\pi z}{T_z}\right),$$

where $a$ is the background intensity, $b$ is the modulation amplitude, and $T_x$ and $T_z$ are the fringe periods along the $x$ and $z$ axes, respectively. The intensity reflected from an inspected surface with reflectivity $R(x,y)$ and depth profile $Z(x,y)$ is then

## Eq. 2

$$I_r(x,y)=aR(x,y)+bR(x,y)\cos\left[\frac{2\pi x}{T_x}+\frac{2\pi Z(x,y)}{T_z}\right].$$

The projected fringes on the surface are observed by the image sensor array. The detection-plane coordinate system $(r,c)$ is defined on the CCD detection plane with the $r$ and $c$ axes parallel to the row and column directions of the sensor array, respectively. The gray level on the recorded image corresponding to $I_r(x,y)$ is described as

## Eq. 3

$$I(r,c)=A_1(r,c)+B_1(r,c)\cos\phi_Z(r,c),$$

where $A_1(r,c)$ is the background or dc gray level, $B_1(r,c)$ is the modulation amplitude, and $\phi_Z(r,c)$ is the measured absolute phase. For a telecentric system, the mapping transformation between the image plane and the $x$-$y$ plane is

## Eq. 4

$$(r,c)=(Mx,My),$$

where $M$ is the magnification of the telecentric lens. A phase value sampled at an object point is assumed equal to that sampled at its image point. This assumption applies when the point spread function of the system is symmetric (coma-free). Thus, Eq. 3 can then be rewritten as

## Eq. 5

$$I(Mx,My)=A_1(r,c)+B_1(r,c)\cos\phi_Z(r,c)=KaR(x,y)+KbR(x,y)\cos\left[\frac{2\pi x}{T_x}+\frac{2\pi Z(x,y)}{T_z}\right],$$

where $K$ is a constant of the detection system. The phase $\phi_Z(r,c)$ can be extracted with the phase-shifting technique or the Fourier transform method. Since both phase-evaluation techniques involve the arctangent operation, the extracted phases have discontinuities with $2\pi$ phase jumps. Unwrapping is inevitable to recover the absolute phases.^{7} Once the unwrapped phase $\phi_Z(r,c)$ is obtained, the depth of the surface point can be directly found from Eq. 5, as given by

## Eq. 6

$$Z(x,y)=\frac{T_z}{2\pi}\phi_Z(r,c)-\frac{T_z}{T_x}x.$$

Now, consider this inspected object moving with speed $({\upsilon}_{x},{\upsilon}_{y},{\upsilon}_{z})$ in the world coordinates. Its depth profile is a function of time and is given by

## Eq. 7

$$Z(x,y,t)=Z_o(x,y)+\nabla Z_o(x,y)\cdot(\hat{x}\upsilon_x+\hat{y}\upsilon_y)t+\upsilon_z t=Z_o(x,y)+\left[\upsilon_x\frac{\partial Z_o(x,y)}{\partial x}+\upsilon_y\frac{\partial Z_o(x,y)}{\partial y}+\upsilon_z\right]t,$$

where $Z_o(x,y)$ is the depth profile at $t=0$. The image sensor array obtains a blurred image within the exposure time $\Delta t$. The gray level of the blurred image, with reference to Eq. 5, can be expressed as

## Eq. 8

$$I_{\text{blurred}}(Mx,My)=A(r,c)+B(r,c)\cos\phi_{\text{blurred}}(r,c)=\int_{t=0}^{t=\Delta t}\left\{KaR(x-\upsilon_x t,y-\upsilon_y t)+KbR(x-\upsilon_x t,y-\upsilon_y t)\cos\left[\frac{2\pi x}{T_x}+\frac{2\pi}{T_z}Z(x,y,t)\right]\right\}\mathrm{d}t.$$

For objects in which $R(x,y)$ varies slowly with $x$ and $y$, Eq. 8 can be represented as

## Eq. 9

$$I_{\text{blurred}}(Mx,My)=K\int_{t=0}^{t=\Delta t}\left\{aR(x,y)+bR(x,y)\cos\left[\frac{2\pi x}{T_x}+\frac{2\pi}{T_z}Z(x,y,t)\right]\right\}\mathrm{d}t.$$

Substituting Eq. 7 into Eq. 9 and letting $\alpha=\upsilon_x\,\partial Z_o(x,y)/\partial x+\upsilon_y\,\partial Z_o(x,y)/\partial y+\upsilon_z$, the integration yields

## Eq. 10

$$I_{\text{blurred}}(Mx,My)=KaR(x,y)\Delta t+KbR(x,y)\Delta t\cdot\mathrm{sinc}\left(\frac{\alpha\Delta t}{T_z}\right)\cos\left\{\frac{2\pi x}{T_x}+\frac{2\pi}{T_z}\left[Z_o(x,y)+\frac{\alpha\Delta t}{2}\right]\right\}.$$

According to Eq. 7, the depth profile at $t=\Delta t/2$ can be expressed as

## Eq. 11

$$Z_1(x,y)\equiv Z(x,y,\Delta t/2)=Z_o(x,y)+\frac{\alpha\Delta t}{2},$$

and therefore Eq. 10 is represented as

## Eq. 12

$$I_{\text{blurred}}(Mx,My)=KaR(x,y)\Delta t+KbR(x,y)\Delta t\cdot\mathrm{sinc}\left(\frac{\alpha\Delta t}{T_z}\right)\cos\left[\frac{2\pi x}{T_x}+\frac{2\pi}{T_z}Z_1(x,y)\right].$$

Once the unwrapped blurred phase $\phi_{\text{blurred}}$ is obtained, the depth profile is retrieved in analogy with Eq. 6, as given by

## Eq. 13

$$Z_1(x,y)=\frac{T_z}{2\pi}\phi_{\text{blurred}}(x,y)-\frac{T_z}{T_x}x.$$

## 3. Experiments

A ball moving with speed $\upsilon_x=0.62\ \mathrm{mm/s}$, $\upsilon_y=0.62\ \mathrm{mm/s}$, and $\upsilon_z=0.13\ \mathrm{mm/s}$ was chosen as the dynamic sample. Its diameter was approximately $40\ \mathrm{mm}$. A sinusoidal fringe pattern, illuminated by a halogen lamp, was projected onto this dynamic sample. A CCD camera with $1024\times 1024$ pixels at 12-bit intensity resolution was used to record the fringe distribution. Fringes were blurred by the linear motion. Figure 2(a) shows the recorded fringe distribution, for which the exposure time was $4.0\ \mathrm{s}$.
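The formation of such a motion-blurred fringe image can be sketched numerically by evaluating the exposure integral of Eq. 8 for a translating sphere. All parameters below (fringe periods, grid size, time sampling) are illustrative stand-ins rather than the actual experimental values, and the surface reflectivity is taken as uniform:

```python
import numpy as np

# Illustrative parameters (not the actual experimental setup).
Tx, Tz = 2.0, 4.0                    # fringe periods along x and z (mm)
vx, vy, vz = 0.62, 0.62, 0.13       # translation speeds (mm/s)
dt = 4.0                            # exposure time (s)
a, b = 0.5, 0.5                     # fringe background and modulation
N = 256                             # simulated sensor resolution

x = np.linspace(-25.0, 25.0, N)
X, Y = np.meshgrid(x, x)

def depth(X, Y, t):
    """Depth profile of a 40-mm ball rigidly translated by (vx, vy, vz)*t."""
    r2 = (X - vx * t) ** 2 + (Y - vy * t) ** 2
    return np.sqrt(np.clip(20.0 ** 2 - r2, 0.0, None)) + vz * t

# Exposure integral of Eq. 8, approximated as an average of instantaneous
# fringe images (uniform reflectivity R = 1 assumed).
steps = 50
I_blurred = np.zeros_like(X)
for t in np.linspace(0.0, dt, steps):
    I_blurred += a + b * np.cos(2 * np.pi * X / Tx + 2 * np.pi * depth(X, Y, t) / Tz)
I_blurred /= steps

print(I_blurred.shape)
```

The fringes in `I_blurred` remain sinusoidal but lose contrast where the depth changes fastest along the motion, which is the attenuation that the sinc factor in Eq. 10 predicts.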

Phase extraction was performed with the Fourier transform method.^{3} Figure 2(b) shows the computed phase $\phi_{\text{blurred}}$, which was wrapped within the interval between $-\pi$ and $\pi$. Unwrapping was a necessary procedure to eliminate the discontinuities. In our experiment, we used Goldstein's algorithm^{7} to restore the absolute phases. With Eq. 13, the depth profile $Z_1(x,y)$ was determined. Figure 3(a) shows the retrieved profile; its 1-D profile is shown in Fig. 3(b).
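The processing chain above can be sketched in one dimension: Fourier-transform phase extraction, unwrapping (NumPy's `np.unwrap` stands in here for the Goldstein algorithm used in the experiment), and depth recovery via Eq. 13. The test surface, fringe periods, and single-point offset calibration are all hypothetical:

```python
import numpy as np

Tx, Tz = 2.0, 4.0                          # fringe periods (mm), illustrative
N = 1024
x = np.linspace(0.0, 100.0, N, endpoint=False)
Z_true = 5.0 * np.exp(-(((x - 50.0) / 15.0) ** 2))   # smooth test surface (mm)

# Recorded fringe row, following Eq. 5 (K = 1, R = 1, a = b = 0.5 assumed).
phi_true = 2 * np.pi * x / Tx + 2 * np.pi * Z_true / Tz
I = 0.5 + 0.5 * np.cos(phi_true)

# Fourier-transform method: keep only the positive carrier lobe near 1/Tx.
F = np.fft.fft(I)
f = np.fft.fftfreq(N, d=x[1] - x[0])
f0 = 1.0 / Tx
analytic = np.fft.ifft(F * ((f > 0.5 * f0) & (f < 1.5 * f0)))

phi = np.unwrap(np.angle(analytic))        # remove the 2*pi discontinuities
phi += phi_true[N // 2] - phi[N // 2]      # absolute-phase offset fixed at one point
Z = (Tz / (2 * np.pi)) * phi - (Tz / Tx) * x   # depth recovery, Eq. 13

err = np.abs(Z - Z_true)[N // 8 : -N // 8]     # ignore window edges
print(float(err.max()))
```

The recovered depth agrees with the synthetic surface away from the window edges, illustrating that the carrier term $(T_z/T_x)x$ must be subtracted after unwrapping.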

A comparison with the static sample was performed as well. The appearance of the projected fringes on the static sample is shown in Fig. 4(a). Equation 6 was employed to retrieve the 3-D shape. Figures 4(b) and 4(c) show the retrieved 3-D shape and its 1-D profile, respectively. The systematic accuracy for a static object was approximately $150\ \mu\mathrm{m}$. The errors came mainly from the spatial sampling density of the CCD camera and from phase extraction. The sampling resolution was approximately $100\ \mu\mathrm{m}$, determined by the field of view and the pixel count of the CCD camera.

The difference between the two profiles (the dynamic case and the static case) is depicted in Fig. 5, in which the shifting displacement has been compensated. The accuracy in the central area of the sample was of the same order as in the static case, implying that our theoretical analysis was correct. However, enormous errors occurred at the edge of the dynamic object.

There were two sources of such errors: (1) variation of the effective exposure time in the boundary area, and (2) ambiguity of phase extraction for surfaces with large depth variation. The exposure time for image pixels that observed the boundary area was unfortunately not constant; it depended strongly on the moving direction and the shape of the boundary. For example, consider a surface point on the boundary observed by the image sensor array. Since the object is dynamic, the observed point moves from point $A$ to point $B$ on the detection plane during the exposure time $\Delta t$. As shown in Fig. 6, for a sensor pixel $C$ located within the interval between $A$ and $B$, the effective exposure time is $\Delta t\cdot \overline{CB}/\overline{AB}$. The effective exposure time in the boundary area was therefore not $\Delta t$, and Eq. 9 was not applicable there. The example shown in Fig. 6 also indicates that the effective exposure time corresponds to the shape of the boundary on the detection plane: the effective exposure time for pixel $D$ in Fig. 6 is different from that for pixel $E$.
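The effective-exposure argument above can be made concrete with a small helper; the points $A$, $B$, and $C$ below are hypothetical image-plane coordinates, not values taken from Fig. 6:

```python
import numpy as np

def effective_exposure(A, B, C, dt):
    """Effective exposure dt * |CB| / |AB| for a pixel C on the track from A to B."""
    A, B, C = (np.asarray(p, dtype=float) for p in (A, B, C))
    return dt * np.linalg.norm(B - C) / np.linalg.norm(B - A)

dt = 4.0                               # nominal exposure time (s)
A, B = (0.0, 0.0), (8.0, 6.0)          # boundary point sweeps from A to B

print(effective_exposure(A, B, (4.0, 3.0), dt))   # midway on AB: dt/2 = 2.0
print(effective_exposure(A, B, A, dt))            # at A: full exposure, 4.0
```

A pixel that the boundary reaches only late in the exposure integrates the surface for a correspondingly shorter time, which is why Eq. 9, which assumes a constant $\Delta t$, fails in the boundary area.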

Errors from the ambiguity of phase extraction occur when the shift of the projected fringes is larger than their period. A displacement of the dynamic object directly causes the projected fringes to shift from one surface point to another. If the shift equals the fringe period, the fringe contrast becomes zero. Mathematically, this occurs when the sinc function in Eq. 10 equals zero, that is, when $\alpha\cdot\Delta t$ is equal to $T_z$. In such a situation, the phase cannot be identified in that area. Moreover, aliasing occurs when the shift is larger than the fringe period, i.e., $\alpha\cdot\Delta t>T_z$. This directly causes a $2\pi$ phase offset when performing the phase unwrapping. Equation 10 for the aliasing area should be modified as

## Eq. 14

$$I_{\text{blurred}}(Mx,My)=KaR(x,y)\Delta t+KbR(x,y)\Delta t\cdot\mathrm{sinc}\left(\frac{\alpha\Delta t}{T_z}\right)\cos\left\{\frac{2\pi x}{T_x}+\frac{2\pi}{T_z}\left[Z_o(x,y)+\frac{\alpha\Delta t}{2}\right]\pm 2\pi\right\},$$

and the retrieved depth profile in Eq. 13 should accordingly be modified as

## Eq. 15

$$Z_1(x,y)=\frac{T_z}{2\pi}[\phi_{\text{blurred}}(x,y)\pm 2\pi]-\frac{T_z}{T_x}x.$$

Since aliasing occurs with

Figure 9(a) shows the recorded blurred image when the sample was moving along the $z$ axis. The moving speed of the sample was $3.4\ \mathrm{mm/s}$, and the exposure time was $1.0\ \mathrm{s}$. The retrieved phase is shown in Fig. 9(b). Since $\upsilon_x$ and $\upsilon_y$ were zero, $\alpha$ reduced to $\upsilon_z$. The fringe contrast over the whole image therefore varied only with $\upsilon_z\Delta t$, not with $x$ or $y$, and phase extraction did not encounter any ambiguity. Sources of errors corresponding to various moving directions are summarized in Table 1.

## Table 1

Sources of errors caused by the moving direction.

| Moving vector | $\hat{x}\upsilon_x$ | $\hat{y}\upsilon_y$ | $\hat{z}\upsilon_z$ | $\hat{x}\upsilon_x+\hat{y}\upsilon_y+\hat{z}\upsilon_z$ |
| --- | --- | --- | --- | --- |
| Distribution of the zero fringe contrast | $\upsilon_x\frac{\partial Z_0(x,y)}{\partial x}\Delta t=T_z$ | $\upsilon_y\frac{\partial Z_0(x,y)}{\partial y}\Delta t=T_z$ | $\upsilon_z\Delta t=T_z$ | $[\upsilon_x\frac{\partial Z_0(x,y)}{\partial x}+\upsilon_y\frac{\partial Z_0(x,y)}{\partial y}+\upsilon_z]\Delta t=T_z$ |
| Area with phase uncertainty | Aliasing area and area with zero fringe contrast | Aliasing area and area with zero fringe contrast | Area with zero fringe contrast | Aliasing area and area with zero fringe contrast |
| Area with enormous measurement errors | Edge area, aliasing area, and area with zero fringe contrast | Edge area, aliasing area, and area with zero fringe contrast | Edge area and area with zero fringe contrast | Edge area, aliasing area, and area with zero fringe contrast |
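The zero-contrast condition tabulated above can be checked numerically: averaging $\cos(\theta_0+2\pi\alpha t/T_z)$ over the exposure multiplies the fringe contrast by $\mathrm{sinc}(\alpha\Delta t/T_z)$ (NumPy's normalized sinc), which vanishes when $\alpha\Delta t=T_z$. The values of $T_z$ and $\Delta t$ below are illustrative:

```python
import numpy as np

Tz = 4.0        # fringe period along z (mm), illustrative
dt = 1.0        # exposure time (s), illustrative

def contrast(alpha, samples=20001):
    """Residual fringe contrast after averaging over the exposure time."""
    t = np.linspace(0.0, dt, samples)
    theta0 = np.linspace(0.0, 2 * np.pi, 64)[:, None]   # initial fringe phases
    blurred = np.mean(np.cos(theta0 + 2 * np.pi * alpha * t / Tz), axis=1)
    return (blurred.max() - blurred.min()) / 2.0

print(contrast(0.0))            # static object: full contrast (~1)
print(contrast(Tz / dt))        # alpha*dt = Tz: contrast collapses (~0)
print(contrast(0.5 * Tz / dt))  # matches |sinc(0.5)|, roughly 0.64
```

The numerical averages reproduce the closed-form sinc attenuation of Eq. 10, including the complete contrast loss at $\alpha\Delta t=T_z$ listed in the first row of Table 1.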

The systematic accuracy for a dynamic object is illustrated in Fig. 10(a), in which a plate moving along the $z$ axis was inspected. The roughness of this plate was approximately $10\ \mu\mathrm{m}$. The moving speed was $2.9\ \mathrm{mm/s}$, and the exposure time was $1.0\ \mathrm{s}$. A comparison with the static plate is shown in Fig. 10(b). The retrieved 3-D shapes for the dynamic and static cases are depicted in Figs. 11(a) and 11(b), respectively. Even though the fringe contrast on the dynamic object was relatively low, its profile could be retrieved with accuracy as high as that of the static one.

Compared with methods that use deblurring algorithms to restore the observed information, the proposed method saves computation time. Compared with approaches using a high-speed camera or stroboscopic illumination to freeze the object's motion, the cost of the proposed system is relatively low. However, the limitations are that the inspected object should be a rigid body and should move linearly within one period of the projected fringes. If the projected fringes shift by more than one period, aliasing will occur. In addition, errors also occur when the inspected object is rotating, since Eq. 9 is not applicable when the moving vector is time-dependent.

## 4. Conclusions

We have presented a discussion on how to retrieve the 3-D shape from an image blurred by motion. With the fringe projection method, objects moving within one period of the projected fringes can be fully described, so it is not necessary to avoid blurred images. Accuracy as high as in the static case can be achieved. This effectively reduces the cost of the detection system. We believe that applications to microelectromechanical systems (MEMS) and biomedical inspections can be realized.

## References

1. “Optimal combination of stereo camera calibration from arbitrary stereo images,” Image Vis. Comput., 9, 27–32 (1990). https://doi.org/10.1016/0262-8856(91)90045-Q
2. “Stretch-correlation as a real-time alternative to feature-based stereo matching algorithms,” Image Vis. Comput., 12, 203–212 (1994). https://doi.org/10.1016/0262-8856(94)90074-4
3. “Fourier transform profilometry for the automatic measurement of 3-D shaped object,” Appl. Opt., 22, 3977–3982 (1983). https://doi.org/10.1364/AO.22.003977
4. “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt., 23, 3105–3108 (1984). https://doi.org/10.1364/AO.23.003105
5. “Color-encoded fringe projection for 3D shape measurements,” Opt. Express, 15, 13167–13181 (2007). https://doi.org/10.1364/OE.15.013167
6. “Projected fringe profilometry using the area-encoded algorithm for spatially isolated and dynamic objects,” Opt. Express, 16, 2590–2596 (2008). https://doi.org/10.1364/OE.16.002590
7. “Comparison of eight unwrapping algorithms applied to Fourier-transform profilometry,” Opt. Lasers Eng., 46, 106–116 (2008). https://doi.org/10.1016/j.optlaseng.2007.09.002

## Biography

**Wei-Hung Su** is an assistant professor in the Department of Material Science and Optoelectronic Engineering at the National Sun Yat-Sen University, Taiwan. He earned a PhD degree and an MS degree in electrical engineering from Pennsylvania State University in 2002 and 1999, respectively. His professional interests are optical metrology, digital image processing, and optical information processing.

**Chao-Kuei Lee** received his PhD in electro-optical engineering from National Chiao Tung University, Taiwan, in 2003. He is currently an assistant professor who directs the Laboratory of Femtosecond and Quantum Modulation with the Institute of Electro-Optical Engineering in National Sun Yat-sen University. His research interests include femtosecond light sources, ultrafast optoelectronics, and coherent quantum control.