## 1. Introduction

In many industries, online control within the manufacturing process is needed to optimize quality and production efficiency. Conventional contact measurement methods are usually slow and invasive, which means that they cannot be used for soft materials and for complex shapes without influencing the controlled parts. An alternative is a noncontact measurement by optical methods like digital holography. With digital holography, high resolution and precise three-dimensional (3-D) images of the manufactured parts can be generated. This technique can also be used to capture data in a single exposure, which is important when doing measurements in a disturbed environment.^{1} Recently, a method has been suggested where knowledge of the object shape, e.g., a CAD model, is used to reduce the number of required images for shape measurement in digital holography.^{2}^{,}^{3} The main idea is to perform online process control of manufactured objects by measuring the shape and comparing it with the CAD model. The method presented in this paper is also intended for such use.

In digital holography, access to the complex wave field and the possibility to numerically reconstruct holograms in different planes^{4}^{,}^{5} introduce a new degree of flexibility to optical metrology. From a single recorded hologram, images with amplitude and phase information for different focal distances can be calculated by numerical propagation of the optical field. By recording two or more holograms with different illumination wavelengths, a phase map can be determined for all these focal distances.^{5} These phase maps will correspond to a measurement with a synthetic^{6} or equivalent wavelength Λ, calculated as

## (1)

$$\mathrm{\Lambda}=\frac{{\lambda}_{1}{\lambda}_{2}}{|{\lambda}_{1}-{\lambda}_{2}|},$$

where ${\lambda}_{1}$ and ${\lambda}_{2}$ are the two recording wavelengths.

The transport of intensity method^{7} is another well-known phase extraction technique for shape measurement. In this method, the intensity distribution is measured at different focal planes, and the transport of intensity equation is utilized to reconstruct the phase distribution.^{8}^{,}^{9} In contrast to digital holography, the technique needs no reference wave during the recording process but instead requires two or more exposures.

In a technique known as digital speckle correlation, the analysis of speckle displacement by correlation techniques is used for measurements of, e.g., deformation, displacement, and strain. It has been shown that this technique provides an effective nondestructive optical measurement and characterization tool.^{10}^{–}^{13} The idea is to utilize the speckle pattern that appears when a coherent beam illuminates a rough surface. A change of the wavelength, a deformation of the object, or a change of the microstructural distribution of the surface will cause the speckle pattern to change.^{11}^{,}^{14} These changes may appear as speckle decorrelation, movements of the speckle pattern, and a change in the phase of the speckles. In general, all these changes appear simultaneously. Detailed information on how to calculate the correlation properties of dynamic speckles is given in Refs. 10, 11, and 14. A few years ago, Yamaguchi et al.^{15} used the peak position of the correlation value in reconstruction distance to calculate the shape of the object. The idea thus utilized is that speckles in a defocused plane tend to move when a change in wavelength is introduced. The correlation function is, however, broad in the depth direction, and the accuracy of that approach is limited. In this paper, a similar approach is taken, but instead of calculating the shape from the maximum of the correlation function in the depth direction, the shape gradient of the object is calculated from the speckle movements at two different focal planes caused by a change in wavelength.

In summary, our approach uses digital holography to measure the phase distribution of the light and can then, by postprocessing and numerical propagation, generate the intensity distribution in as many focal planes as necessary. By using image correlation and speckle movement, our method is also robust to large phase gradients and large movements within the intensity patterns. The advantage of our approach is that, using speckle movement, shape measurement is possible even when the height range of the object exceeds the dynamic range of the synthetic wavelength.

In Sec. 2, it is shown how the speckle movement and the wavelength shift relate to the angle of the local surface normal. In Sec. 3, experimental results are presented, and the shape of a smooth object is determined by integration of the shape gradients. The technique is demonstrated by a measurement on a cylindrical object with a trace milled off.

## 2. Theory

Holograms can be recorded either with the reference wave in parallel with the object light or tilted at an angle. These arrangements are called in-line holography^{16} and off-axis holography,^{17}^{,}^{18} respectively. In-line holography is often used to image and localize particles in microscopy, while off-axis holography is used to simplify the signal processing and in situations where only a single exposure can be used. In the case of digital holography, the off-axis geometry introduces a carrier to provide a simple way to filter out the information, and that is the technique used in this paper. Consider Fig. 1, where LD is a tunable laser diode, R the path of the reference light, FP the focus plane, and OP the plane of zero optical path difference between the reference and object light. EnP and ExP define the entrance pupil and the exit pupil of the imaging system, respectively. Define ${U}_{o}(x,y)={A}_{o}(x,y)\mathrm{exp}[i{\varphi}_{o}(x,y)]$ as the object wave and ${U}_{r}(x,y)={A}_{r}(x,y)\mathrm{exp}[i{\varphi}_{\mathrm{r}}(x,y)]$ as the reference wave in the detector plane. The recorded image can then be represented by

## (2)

$$I(x,y)={A}_{o}^{2}(x,y)+{A}_{r}^{2}+{A}_{r}{A}_{o}(x,y)\mathrm{exp}(i\mathrm{\Delta}\varphi )+{A}_{r}{A}_{o}(x,y)\mathrm{exp}(-i\mathrm{\Delta}\varphi ),$$

where $\mathrm{\Delta}\varphi ={\varphi}_{o}-{\varphi}_{r}$ is the phase difference between the object and reference waves. The interference terms can be separated from the intensity terms^{19} in the off-axis arrangement.

Considering the third term $J(x,y)={U}_{o}(x,y){U}_{r}^{*}(x,y)={A}_{r}{A}_{o}\mathrm{exp}(i\mathrm{\Delta}\varphi )$ in Eq. (2), a modified version, ${\u016c}_{o}(x,y)$, of the object wave is retrieved as

## (3)

$${\u016c}_{o}(x,y)=J(x,y){\u016c}_{r}(x,y),$$

where ${\u016c}_{r}(x,y)$ only contains the variation in phase over the detector caused by the tilt angle and the curvature of the field. In that way, a reference plane is defined in OP where the phase difference between the object and reference waves is zero. The complex optical field ${\u016c}_{o}$ can then be used for numerical refocusing.^{4} In this process, it is important that a constant magnification is kept.

If two fields ${\u016c}_{1}(x,y;{\lambda}_{1})$ and ${\u016c}_{2}(x,y;{\lambda}_{2})$, recorded with different wavelengths, are retrieved, it is important to know their correlation properties, which is the purpose of this section. Consider Figs. 2(a) and 2(b). A diffuse plane surface is illuminated from a monochromatic point source ${P}_{s}({\mathbf{x}}_{s})$ located at ${\mathbf{x}}_{s}$ [Fig. 2(a)]. If position ${\mathbf{x}}_{\perp}$ defines a general scattering point on the surface, the plane wave component that illuminates the scattering point will propagate in direction ${\mathbf{s}}_{s}=({\mathbf{x}}_{\perp}-{\mathbf{x}}_{s})/{L}_{s}$, where ${L}_{s}=|{\mathbf{x}}_{\perp}-{\mathbf{x}}_{s}|$ is the distance between the source and the scattering point, and the directional vector ${\mathbf{s}}_{s}$ points from the source to the scattering point. The random wavelet contributions from a surface patch $\mathrm{\Sigma}$ on the object surface produce a field detected in point ${P}_{p}({\mathbf{x}}_{p})$ at position ${\mathbf{x}}_{p}$ in front of the surface. It is assumed that $\mathrm{\Sigma}$ is much smaller than the illuminated surface, and that the point ${P}_{p}$ is in a plane conjugate to the detection plane of the imaging system, called the focus plane, as shown in Fig. 2(b). We consider ${I}_{o}$ as a constant intensity on the surface, and ${\mathbf{s}}_{p}=({\mathbf{x}}_{p}-{\mathbf{x}}_{\perp})/{L}_{p}$ as a directional vector from the scattering point toward the detection point, where ${L}_{p}=|{\mathbf{x}}_{p}-{\mathbf{x}}_{\perp}|$ is the distance between the scattering and the detection points. Thus, the total length passed by the wave is $L={L}_{s}+{L}_{p}$, and the accumulated phase will be $\varphi (k,{\mathbf{x}}_{\perp},{\mathbf{x}}_{p},{\mathbf{x}}_{s})=kL$, where $k=2\pi /\lambda $ is the wave number.
In the following, the vector $\mathbf{m}({\mathbf{x}}_{\perp},{\mathbf{x}}_{p},{\mathbf{x}}_{s})={\mathbf{s}}_{p}({\mathbf{x}}_{\perp},{\mathbf{x}}_{p})-{\mathbf{s}}_{s}({\mathbf{x}}_{\perp},{\mathbf{x}}_{s})$ known as the sensitivity vector of a surface point will be of importance.
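As a small numerical check (the coordinates below are hypothetical, chosen only to mimic the collimated, near-axis geometry used later in the paper), the sensitivity vector can be computed directly from its definition:

```python
import numpy as np

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

def sensitivity_vector(x_s, x_perp, x_p):
    """m = s_p - s_s for source x_s, scattering point x_perp, detection point x_p."""
    s_s = unit(x_perp - x_s)   # illumination direction: source -> scattering point
    s_p = unit(x_p - x_perp)   # observation direction: scattering point -> detection point
    return s_p - s_s

# Source and detection point on the same side of the surface, both on the z axis,
# so illumination and observation directions are antiparallel: m approaches 2 z-hat.
x_s = np.array([0.0, 0.0, 0.1])      # point source 100 mm in front of the surface
x_p = np.array([0.0, 0.0, 0.1])      # detection point at the same position
x_perp = np.array([0.0, 0.0, 0.0])   # scattering point on the surface
m = sensitivity_vector(x_s, x_perp, x_p)
print(m)  # -> [0. 0. 2.]
```

This reproduces the $\mathbf{m}\approx 2\widehat{z}$ geometry assumed in the experiments.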

The response of a speckle field due to a general change is given in Ref. 14. Here, we will only consider the response due to a change in wavelength. Then the absolute phase difference will be

## (4)

$$\mathrm{\Delta}{\varphi}_{a}=\mathrm{\Delta}kL({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s}),$$

and the phase difference detected at a small displacement ${\mathbf{x}}_{\epsilon}$, with the detection plane moved a distance $\mathrm{\Delta}L$ from the surface, becomes

## (5)

$$\mathrm{\Delta}{\varphi}_{d}=\frac{k}{\mathrm{\Delta}L({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p})}{\mathbf{x}}_{\epsilon}\cdot \mathrm{\Delta}{\mathbf{x}}_{\mathrm{\Sigma}}-\mathrm{\Delta}k\,{\mathbf{x}}_{\epsilon}\cdot {\mathbf{m}}_{\mathrm{\Sigma}}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})=\frac{k}{\mathrm{\Delta}L({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p})}{\mathbf{x}}_{\epsilon}\cdot \left[\mathrm{\Delta}{\mathbf{x}}_{\mathrm{\Sigma}}-\mathrm{\Delta}L({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p}){\mathbf{m}}_{\mathrm{\Sigma}}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})\frac{\mathrm{\Delta}k}{k}\right]=\frac{k}{\mathrm{\Delta}L({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p})}{\mathbf{x}}_{\epsilon}\cdot (\mathrm{\Delta}\mathbf{x}-\mathbf{A}),$$

where the speckle movement $\mathbf{A}$ is given by

## (6)

$$\mathbf{A}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})=\mathrm{\Delta}L({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p})\frac{{\mathbf{m}}_{\mathrm{\Sigma}}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})}{\mathrm{cos}\,{\theta}_{\widehat{X}}}\left(\frac{\mathrm{\Delta}k}{k}\right).$$

Equation (6) calls for some clarifications. First of all, the speckle movement vector $\mathbf{A}$ is the projection of the speckle movement in the conjugate plane of the detector (perpendicular to the optical axis). The vector ${\mathbf{m}}_{\mathrm{\Sigma}}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})$ appearing in Eq. (6) is the projection of the sensitivity vector $\mathbf{m}$ onto the local surface patch $\mathrm{\Sigma}$, and gives a vector that is perpendicular to the surface normal vector $\mathbf{n}$. Hence, the speckle movement is related to the gradient of the surface. The magnitude of ${\mathbf{m}}_{\mathrm{\Sigma}}$ gives the magnitude with which the speckle movement is geared, and its direction gives the direction in which the speckles move. The scaling parameter $\mathrm{cos}\,{\theta}_{\widehat{X}}$ relates the orientation of the detector to the surface patch, where ${\theta}_{\widehat{X}}$ is the angle between $\mathrm{\Delta}\mathbf{x}$ and its projection $\mathrm{\Delta}{\mathbf{x}}_{\mathrm{\Sigma}}$ onto $\mathrm{\Sigma}$.

If an experimental set-up is used such that $\mathbf{m}={\mathbf{s}}_{p}-{\mathbf{s}}_{s}\approx 2\widehat{z}$, where $\widehat{z}$ is a unit vector along the optical axis, and speckle movement in the image plane is considered, only surface variations in the horizontal plane are expected. Then ${\mathbf{m}}_{\mathrm{\Sigma}}=\mathbf{m}\,\mathrm{sin}\,{\theta}_{\hat{x}}$, and Eq. (6) is simplified to

## (7)

$$\mathbf{A}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})=2M\mathrm{\Delta}L({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p})\frac{\mathrm{\Delta}k}{k}\mathrm{tan}\,\theta ,$$

where $M$ is the magnification of the imaging system. Propagating the image to different planes gives different speckle movements as a result of the wavelength shift. The difference in speckle movement $\mathrm{\Delta}\mathbf{A}$ between two propagated planes can be calculated by Eq. (7), with $\mathrm{\Delta}L$ the distance between the two planes. If $\mathrm{\Delta}\mathbf{A}$ is multiplied by the scaling parameter $k/(2M\mathrm{\Delta}L\mathrm{\Delta}k)$, it equals the local phase gradient at the object surface. By solving for $\theta $, the surface normal and the slope of the object shape can be calculated from the speckle movements and the change in wavelength.
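Equation (7) can be inverted for the local slope angle $\theta$. A minimal sketch follows; the parameter values (`M`, `dL`) are illustrative assumptions chosen to resemble, not reproduce, the experiment described later:

```python
import numpy as np

def slope_from_speckle_movement(A, k, dk, dL, M):
    """Invert Eq. (7): A = 2 M dL (dk / k) tan(theta)  =>  theta."""
    return np.arctan(A * k / (2.0 * M * dL * dk))

lam1, lam2 = 647.0e-9, 649.4e-9   # the two recording wavelengths
k = 2.0 * np.pi / lam1
dk = 2.0 * np.pi / lam2 - k       # wave-number shift (negative here)
M, dL = 1.0, 10.0e-3              # assumed unit magnification, 10-mm refocus distance
theta_true = np.deg2rad(10.0)     # a 10-deg local slope

A = 2.0 * M * dL * (dk / k) * np.tan(theta_true)   # forward model, Eq. (7)
theta = slope_from_speckle_movement(A, k, dk, dL, M)
print(np.rad2deg(theta))          # round trip recovers approximately 10 deg
```

With these numbers the speckle movement $|\mathbf{A}|$ is on the order of ten micrometers, consistent with the magnitudes reported in the experiments.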

## 3. Experiments and Results

Consider the experimental set-up^{14} seen in Fig. 1. A laser diode LD (SANYO) illuminates an object with local surface normal $\mathbf{n}$ along a direction ${\mathbf{s}}_{s}$. The temperature of the laser diode can be controlled in order to tune the wavelength. The object is a half cylinder with a diameter of 48 mm, which was cut to have a measurement object of height 15 mm, as shown in Fig. 3. The cylindrical object had a trace of depth 1 mm and a width of 10 mm milled off. The cylinder was manufactured with an accuracy of 10 *μ*m. The object is imaged by an optical system along a direction ${\mathbf{s}}_{p}$ onto a digital camera (Sony XCL 5005) with a resolution of $2456\times 2058$ pixels, a pixel size of $3.45\times 3.45\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mu \text{m}$, a dynamic range of 12 bits, and an output frame rate of 15 Hz. Part of the light from the laser diode is decoupled and redirected toward the detector with an angle of $\sim 10\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{deg}$ with respect to the direction of the object light. From the detector plane the reference light is seen to originate from a point on the exit pupil plane outside the opening. The off-axis method is therefore utilized, and the complex amplitude field filtered out. Using this method, our numerical aperture is restricted to ${\mathrm{NA}}_{1}<\lambda /8a$, where $a$ is the sampling pitch on the detector. The zero optical path difference plane OP between the object light and reference light is indicated in Fig. 1. FP is the plane in which the optical system is focused during the measurement. Only the section of $11.6\times 11.6\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\mathrm{mm}}^{2}$ indicated by a black square in Fig. 3 is measured, which includes both the trace and a cylindrical part. 
In these measurements, approximately collimated illumination is used, and the sensitivity vector is $\mathbf{m}={\mathrm{s}}_{p}-{\mathrm{s}}_{s}\approx 2$ $\widehat{z}$, where $\widehat{z}$ is a unit vector along the optical axis of the imaging. Thus, constant phase is expected over planes perpendicular with the optical axis. The part of the section that includes the cylindrical shape has a surface normal that varies in the horizontal plane. Therefore, the speckle movement will vary only in the direction of the horizontal plane and change the sign on either side of the central direction.

Two holograms, one with a laser diode temperature of 15°C corresponding to a wavelength of 647.0 nm and the other with a temperature of 19°C corresponding to 649.4 nm, are acquired with FP coinciding with the surface of the trace. Controlling the wavelength by temperature changes is straightforward but slightly inaccurate, and the wavelength may vary by a few tenths of a nanometer.
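With these two wavelengths, the synthetic wavelength $\Lambda = \lambda_1\lambda_2/|\lambda_1-\lambda_2|$ is readily evaluated:

```python
lam1, lam2 = 647.0e-9, 649.4e-9             # wavelengths at 15 and 19 deg C
synthetic = lam1 * lam2 / abs(lam1 - lam2)  # synthetic (equivalent) wavelength
print(round(synthetic * 1e6, 1), "um")      # -> 175.1 um
```

A height variation exceeding this range wraps the interferometric phase, which is the situation where the speckle-movement approach of this paper remains applicable.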

The complex amplitudes from the recorded holograms were acquired and used to calculate two more sets of complex amplitudes using numerical re-focusing. One field is set 10 mm behind the object and the other 10 mm in front of the object. We, hence, have the set $\{{U}_{1-},{U}_{2-},{U}_{10},{U}_{20},{U}_{1+},\phantom{\rule{0ex}{0ex}}{U}_{2+}\}$ of six complex amplitudes acquired at two wavelengths (denoted 1/2) and originating from three planes (denoted $-/0/+$), which will be used to acquire information about the shape of the object. The wrapped phase of $\langle {\u016c}_{1}^{*}{\u016c}_{2}\rangle $ in the three planes is shown in the upper row of Fig. 4. Moving from left to right, these phase maps relate to a plane behind the object, on the object, and in front of the object, respectively. It is worth mentioning that the position of the focus plane plays a crucial role in regions with a steep slope, while the quality of the fringes is reasonably unaffected by the position of the focus plane in regions where the sensitivity vector is roughly parallel with the surface normal. This is clear from the sudden degradation of the fringe contrast as a result of going from the trace out to the cylindrical part in the upper right part of the images.
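The numerical refocusing used to generate these extra planes is not spelled out in the paper; a minimal sketch using the angular spectrum method (an assumed implementation, on a square grid with pixel pitch `dx` and ignoring magnification bookkeeping) could look like:

```python
import numpy as np

def angular_spectrum(U, wavelength, dx, dz):
    """Propagate the sampled complex field U a distance dz (angular spectrum method)."""
    n = U.shape[0]
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx)
    kz_sq = (2 * np.pi / wavelength) ** 2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.where(kz_sq > 0, np.exp(1j * kz * dz), 0.0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(U) * H)

# Round trip: refocusing 10 mm forward and 10 mm back recovers the field.
rng = np.random.default_rng(0)
U0 = np.exp(1j * 2 * np.pi * rng.random((128, 128)))   # speckle-like random phase field
U1 = angular_spectrum(U0, 647.0e-9, 3.45e-6, +10.0e-3)
U2 = angular_spectrum(U1, 647.0e-9, 3.45e-6, -10.0e-3)
print(np.max(np.abs(U2 - U0)) < 1e-9)  # -> True
```

The pixel pitch of 3.45 μm matches the camera used here; the constant-magnification requirement mentioned in Sec. 2 is assumed to be handled separately.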

The lower row of Fig. 4 shows the corresponding speckle movement obtained by calculating $\langle \mathrm{\Delta}{I}_{1}\mathrm{\Delta}{I}_{2}\rangle $ using speckle correlation (digital speckle photography) in the three planes, respectively. At the part of the cylinder where the surface normal is parallel with the optical axis and on the trace part, the speckle movement is always close to zero. However, the speckle movement magnitude shows an increase toward the right when defocus is introduced. This result is in accordance with the theoretical relation given in Eq. (7). At the upper plane part and at the left part of the cylinder, the surface normal is (almost) parallel with the optical axis, which gives a $\theta $ close to zero. Hence, $\mathbf{A}({\mathbf{x}}_{\perp},{\mathbf{x}}_{p},{\mathbf{x}}_{s})$ will be close to zero, as is seen in Fig. 4. We also note that the sign of the speckle motion changes as the focus plane moves from one side of the object to the other. In front of the object, the speckles move toward the left, while they move to the right behind the object. As shown by Yamaguchi et al.,^{15} this is an effect that may be utilized to locate the position of the object surface. The technique thus obtained is very similar to the technique of determining shape from the projection of a random pattern.^{20}
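The speckle movements themselves are found by locating the cross-correlation peak between corresponding subimages, as in digital speckle photography. A minimal integer-pixel version is sketched below (real evaluations use subpixel peak interpolation, which is omitted here):

```python
import numpy as np

def integer_speckle_shift(I1, I2):
    """Integer-pixel displacement of speckle pattern I2 relative to I1,
    from the peak of the circular cross-correlation (computed via FFT)."""
    n, m = I1.shape
    # Subtract means to suppress the dc peak of the correlation surface
    F = np.conj(np.fft.fft2(I1 - I1.mean())) * np.fft.fft2(I2 - I2.mean())
    c = np.fft.ifft2(F).real
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    dy = iy if iy <= n // 2 else iy - n   # map FFT indices to signed shifts
    dx = ix if ix <= m // 2 else ix - m
    return int(dy), int(dx)

rng = np.random.default_rng(1)
speckle = rng.random((64, 64))
moved = np.roll(speckle, shift=(3, -5), axis=(0, 1))   # known displacement
print(integer_speckle_shift(speckle, moved))  # -> (3, -5)
```

In practice, this evaluation is repeated over a grid of subimages to produce displacement maps like those in the lower row of Fig. 4.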

To study the relation between the speckle movements and the focus distance $\mathrm{\Delta}L$, 20 more sets of complex fields were calculated by numerical re-focusing, in the range from 10 mm behind the object to 10 mm in front of the object. Speckle movements along a vertical line at $x$-position 4.7 mm (line a in Fig. 3) were calculated. The movements at the flat part and at the cylindrical part of the object were plotted versus $\mathrm{\Delta}L$ (see Fig. 5). Note that the flat part with $\theta $ close to zero gives small movements. For the cylinder, the slope is approximately $\theta =10\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{deg}$, and the theoretical speckle movement was calculated from Eq. (7) and plotted as a line. As can be seen in Fig. 5, the experimental results match the theoretical line. The standard deviation of the experimentally measured speckle movements at the cylindrical part was less than 0.7 *μ*m corresponding to 0.2 pixels. We may compare this result with the theoretical expression for the accuracy,^{21}

If $\mathrm{\Delta}\mathbf{A}$ is multiplied by the scaling parameter $k/(2M\mathrm{\Delta}L\mathrm{\Delta}k)$, it equals the local phase gradient at the object surface. These phase gradients may be integrated, for example using cumulative trapezoidal numerical integration, to get the phase of the measurement. As seen in Fig. 6(a), this results in an unwrapped phase map that equals the interferometric phase difference $\mathrm{\Delta}{\varphi}_{a}$ up to a constant. Figure 6(b) shows the unwrapped phase obtained from the middle image of Fig. 4. For the unwrapping, the technique by Volkov and Zhu was used.^{22} As seen, the results are almost identical. These phase maps may then be transformed to shape by simple scaling.
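The cumulative trapezoidal step can be sketched as follows (one row only; a quadratic test phase is used here in place of measured gradients):

```python
import numpy as np

def integrate_gradient(g, dx):
    """Cumulative trapezoidal integration of gradient samples g with spacing dx.
    Returns the phase up to an additive constant (set to zero at the start)."""
    steps = (g[1:] + g[:-1]) / 2.0 * dx
    return np.concatenate(([0.0], np.cumsum(steps)))

# Check against a known phase phi = x**2, whose gradient is 2x
x = np.linspace(0.0, 1.0, 101)
phi = integrate_gradient(2.0 * x, x[1] - x[0])
print(np.allclose(phi, x**2))  # -> True (trapezoid is exact for a linear gradient)
```

The arbitrary additive constant of the integration corresponds to the constant offset noted in connection with Fig. 6(a).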

Figure 7 shows the measured shape along the two rows that are defined by b and c in Fig. 3 (dashed at the flat part and solid at the cylindrical part). The theoretical shape of the cylinder is plotted as well (dotted). It is seen that the measured shape corresponds to the theoretical shape except for a linear scaling factor. This scaling can be due to an imprecise reading of the wavelength or to light that was not precisely collimated. If the wavelength shift is adjusted by as little as 0.5 nm, the measurement matches the theoretical shape perfectly. For the proposed method, it is necessary to have either lasers with well-defined wavelengths or the possibility to measure the wavelengths accurately. To estimate the accuracy of the measurement, we plot the difference between the theoretical shape and the measured one with the adjusted wavelength (see Fig. 8). The standard deviation of this difference is 4.2 *μ*m. In Fig. 9, a 3-D display of the measured object shape from the speckle movements is shown.

The use of measured surface slopes to estimate shape is not unique to the method presented here. In fact, it may be compared with other methods that give the surface slope, such as photometric stereo and deflectometric methods.^{23}^{–}^{25} These are well known for precise measurement of small local shapes due to their derivative nature. By using a digital holographic recording and numerical propagation, the proposed technique only requires two images and no mechanical movements. As both interferometric data and the speckle movements are obtained from the same recording, these can be combined to achieve even better results.

## 4. Conclusion

Holographic contouring is a very precise measurement method, but it is based on the extraction of phase by direct comparison of speckle patterns and hence is sensitive to speckle decorrelation and speckle movements. In this paper, these speckle movements are utilized to calculate the shape. The theoretical relation between the object surface normal and the speckle movements has been presented and results in a linear relation between surface slope and defocus. It has also been experimentally shown how measurements of speckle movements can be used to calculate the phase distribution and the object shape. By using holographic recordings, the re-focusing can be done numerically and without any mechanical movements, which ideally means that only one recording needs to be acquired. From a measurement on a cylindrical test object, it was shown that the measurement accuracy is on the order of a few micrometers.

## Acknowledgments

This research is financially supported by VINNOVA (the Swedish Governmental Agency for Innovation) and was a part of the HOLOPRO project. The authors would also like to thank Dr. Henrik Lycksam, Dr. Erik Olsson, and Dr. Per Gren for their valuable discussions.

## References

## Biography

**Davood Khodadad** received his BS and MS degrees in bioelectrical engineering from Sahand University of Technology, Tabriz, and Tehran University of Medical Sciences, Tehran, in 2008 and 2011, respectively. He is currently active as a PhD student in the Division of Experimental Mechanics at Luleå University of Technology, Sweden. His research interests include noncontact optical metrology, imaging and image formation, and signal and image processing. His research is currently focused on development of pulsed multispectral digital holography for three-dimensional imaging.

**Emil Hällstig** received his PhD in physics engineering at Uppsala University, Sweden, in 2004. The work was done at the Swedish Defence Research Agency (FOI), and included active optics and especially nonmechanical laser beam steering for a novel free-space optical link. He has, since 2000, also worked at the company Optronic as optical specialist and project manager, and for the last 10 years has been responsible for the research activities at Optronic. He also has held a position as guest researcher at Luleå University of Technology since 2004, and the research focuses on optical metrology and digital holography.

**Mikael Sjödahl** received his MSc in mechanical engineering and his PhD in experimental mechanics from the Lulea University of Technology, Sweden, in 1989 and 1995, respectively. He is currently holding the chair of experimental mechanics at the Lulea University of Technology and a professorship at University West, Sweden. He has authored or coauthored over 100 papers in international journals and contributed to two books. His interests include fundamental speckle behavior, coherent optical metrology, nondestructive testing, and multidimensional signal processing.