## 1. Introduction

In a way, speckles are the ultimate manifestation of coherence in a wave field, as they show up (become visible) only in coherent fields. In many situations the appearance of speckles is a problem, as the underlying intensity contrast deteriorates, but speckles are also what carry the coherent information in a random wave field. In this article, only speckles produced in reflection are considered, but they appear wherever coherent scattering is present. The geometry of this problem is shown in Fig. 1. A monochromatic beam propagates from a source located at ${P}_{s}$ to the surface $\mathrm{\Sigma}$, where it is reflected toward the detection point ${P}_{p}$. If, after reflection, the amplitude and phase of the beam are known, the field is perfectly deterministic and the field in ${P}_{p}$ can be calculated using the diffraction integral. If we now add a completely random phase screen, a complete set of spatial frequencies is generated, including both evanescent and homogeneous components. Different polarization components are also generated, but we disregard that effect from now on. As a complete set of spatial frequencies is generated, we expect the propagating field reflected off the phase screen to be a diffuse field, no longer a beam. What we have constructed is a diffusor. When viewed from any direction, the wave appears to originate from the random phase screen, not from the source generating the wave (such as a light source). Therefore, we may call the diffusor a secondary source and the actual source a primary source. It is this property of producing a complete set of spatial frequencies that is utilized in optical imaging metrology. However, as the diffraction integral now involves random components that are added together, the result is the random field called a speckle pattern, the static properties of which have been analyzed extensively by Goodman^{1} and others.

In metrology, the change in a given speckle pattern because of a change in any of the generating variables is generally of primary interest. For example, if we change the wavelength of the coherent beam, deform the object, or somehow change the microstructural distribution of scatterers, the phase, position, and microstructure of the speckle pattern change. It is this phase change, speckle movement, or decorrelation that is utilized in metrology. Typical techniques are referred to as speckle interferometry,^{2} speckle or image correlation,^{3,4} and dynamic light scattering.^{5} Theoretically, the expected response in a speckle pattern due to some change in the system may be analyzed by calculating the modified mutual coherence of the field. The most notable contributions are from Yamaguchi,^{6–8} who has analyzed the effects of a surface deformation and wavelength shift over a plane object. From the results by Yamaguchi and others, it is concluded that speckles in a free-space geometry behave much like a grating and that the movements are generated by relative phase changes over the surface patch of integration. In Sec. 2.1, the modified mutual coherence function is generalized to include variations in wave number, object deformation, and object orientation in a free-space geometry. The term *multispectral speckles* is used to emphasize the fact that two (or more) distinct wavelengths are used, as opposed to a broadband source. Hence, the correlation properties between two fully developed speckle patterns are analyzed and used. In Sec. 2.2, the results from the analysis of the free-space geometry are used to analyze the behavior of speckles in an imaging system, and in Sec. 2.3, the effect of adding a smooth reference wave is included. Special attention is given to the sensitivity of the phase evaluation. Section 3 discusses properties of the results that are often encountered in speckle metrology.
The results are put into perspective in the final section of the article.

## 2. Correlation Properties of Dynamic Speckles

Our starting point will be the geometry sketched in Fig. 1. A monochromatic point source ${P}_{s}({\mathbf{x}}_{s})$ situated at position ${\mathbf{x}}_{s}$ illuminates a plane diffuse surface. In this article, I will limit the discussion to surface scattering, meaning that each photon has undergone only one scattering event. A general scattering point on this surface is defined by position ${\mathbf{x}}_{\perp}$ so that the plane wave component illuminating the scattering point propagates in direction ${\mathbf{s}}_{s}=({\mathbf{x}}_{\perp}-{\mathbf{x}}_{s})/{L}_{s}$, where ${L}_{s}=|{\mathbf{x}}_{\perp}-{\mathbf{x}}_{s}|$ is the distance between the source and the scattering point and the directional vector ${\mathbf{s}}_{s}$ points from the source to the scattering point. The resulting field detected in point ${P}_{p}({\mathbf{x}}_{p})$ at position ${\mathbf{x}}_{p}$ in front of the surface is the result of integrating the random contributions from a domain $\mathrm{\Sigma}$ on the surface defined by the solid angle $\mathrm{\Omega}$. I will assume $\mathrm{\Sigma}$ to be much smaller than the illuminated surface area, as in an optical imaging system (e.g., where $\mathrm{\Omega}$ is limited by the numerical aperture of the imaging system). The intensity ${I}_{0}$ on the surface may hence be considered constant. The directional vector ${\mathbf{s}}_{p}=({\mathbf{x}}_{p}-{\mathbf{x}}_{\perp})/{L}_{p}$ points from the scattering point toward the detection point, where ${L}_{p}=|{\mathbf{x}}_{p}-{\mathbf{x}}_{\perp}|$ is the distance between the scattering point and the detection point. Hence, the total length covered by a wave is $L={L}_{s}+{L}_{p}$ and the accumulated phase becomes $\varphi (k,{\mathbf{x}}_{\perp},{\mathbf{x}}_{p},{\mathbf{x}}_{s})=kL$, where the wave number $k=2\pi \nu /c$. By virtue of the diffraction integral, the field $U(k,{\mathbf{x}}_{\perp},{\mathbf{x}}_{p},{\mathbf{x}}_{s})$ in point ${P}_{p}$ is given by

## (1)

$$U(k,{\mathbf{x}}_{\perp},{\mathbf{x}}_{p},{\mathbf{x}}_{s})=\sqrt{{I}_{0}}{\int}_{\mathrm{\Omega}}g({\mathbf{s}}_{\perp})\mathrm{exp}[i\varphi (k,{\mathbf{x}}_{\perp},{\mathbf{x}}_{p},{\mathbf{x}}_{s})]{\mathrm{d}}^{2}{\mathbf{s}}_{\perp},$$

## (2)

$${\mathrm{\Gamma}}_{12}(\mathrm{\Delta}\mathbf{x})=\langle {U}_{1}^{*}{U}_{2}\rangle ={I}_{0}{\gamma}_{12}{\int}_{\mathrm{\Sigma}}\mathrm{exp}[i\delta \varphi (k,{\mathbf{x}}_{\perp},{\mathbf{x}}_{p},{\mathbf{x}}_{s})]{\mathrm{d}}^{2}{\mathbf{x}}_{\perp},$$

where ${\gamma}_{12}$ is the microscopic coherence function.^{9} In the following, I will not be concerned with the microscopic coherence function, as other effects often dominate. Equation (2) is the fundamental equation for this section and will be analyzed in some detail in this article.
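As a minimal numerical sketch of the free-space geometry defined above, the directional vectors ${\mathbf{s}}_{s}$, ${\mathbf{s}}_{p}$ and the accumulated phase $\varphi = kL$ may be evaluated as follows; all positions and the wavelength are hypothetical values chosen only for illustration, not taken from the article:

```python
import numpy as np

# Free-space geometry of Fig. 1; positions are in metres and are
# illustrative assumptions, not values from the article.
def path_geometry(x_s, x_perp, x_p):
    """Return (s_s, s_p, L) for a source at x_s, a scattering point at
    x_perp, and a detection point at x_p."""
    Ls = np.linalg.norm(x_perp - x_s)      # source -> scattering point
    Lp = np.linalg.norm(x_p - x_perp)      # scattering point -> detector
    s_s = (x_perp - x_s) / Ls              # unit vector toward the surface
    s_p = (x_p - x_perp) / Lp              # unit vector toward the detector
    return s_s, s_p, Ls + Lp

x_s = np.array([0.8, 0.0, 0.6])            # source, 1 m from the surface point
x_perp = np.array([0.0, 0.0, 0.0])         # scattering point on the surface
x_p = np.array([0.0, 0.0, 0.5])            # detection point 0.5 m in front

s_s, s_p, L = path_geometry(x_s, x_perp, x_p)
k = 2 * np.pi / 633e-9                     # wave number for a 633 nm line
phi = k * L                                # accumulated phase, phi = k L
```

The phase of each random contribution in Eq. (1) is built from exactly this path length, so the sketch also shows where a wave-number change $\mathrm{\Delta}k$ enters: through the product $kL$.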

### 2.1. Phase and Phase Gradients of Dynamic Speckles

The most important variable in Eq. (2) is the differential of the phase $\varphi (k,{\mathbf{x}}_{\perp},{\mathbf{x}}_{p},{\mathbf{x}}_{s})=kL=k[{L}_{s}({\mathbf{x}}_{\perp},{\mathbf{x}}_{s})+{L}_{p}({\mathbf{x}}_{\perp},{\mathbf{x}}_{p})]$, where the first three variables are allowed to vary. In the following, I will use the Taylor expansion to first order to approximate a variation of the dependent variables. For the vector variables, we need to calculate directional derivatives of the form $F(\mathbf{x}+\mathbf{v})\approx F(\mathbf{x})+\mathbf{v}\cdot \nabla F(\mathbf{x})$, where the last term gives the change in the function because of a small movement $\mathbf{v}$ from $\mathbf{x}$. The expression $\mathbf{v}\cdot \nabla$ produces a scalar differential operator that operates on the function $F(\mathbf{x})$, which may be either a scalar or a vector. In the following discussion, four types of expressions will appear:

## (3)

$$\mathbf{v}\cdot {\nabla}_{\mathbf{x}}L=\mathbf{v}\cdot \mathbf{s},\\ \mathbf{v}\cdot {\nabla}_{\mathbf{x}}\mathbf{s}=\frac{1}{L}[\mathbf{v}-(\mathbf{v}\cdot \mathbf{s})\mathbf{s}],\\ \mathbf{v}\cdot {\nabla}_{\mathbf{x}}(\mathbf{a}\cdot \mathbf{b})=[\mathbf{v}\cdot {\nabla}_{\mathbf{x}}\mathbf{a}]\cdot \mathbf{b}+\mathbf{a}\cdot [\mathbf{v}\cdot {\nabla}_{\mathbf{x}}\mathbf{b}],\\ \mathbf{v}\cdot {\nabla}_{\mathbf{x}}\mathbf{a}(\mathbf{x})=\mathbf{v}\cdot J[\mathbf{a}(\mathbf{x})],$$

## (4)

$$\delta \varphi (k,{\mathbf{x}}_{\perp},{\mathbf{x}}_{p},{\mathbf{x}}_{s})\approx \mathrm{\Delta}kL({\mathbf{x}}_{\perp},{\mathbf{x}}_{p},{\mathbf{x}}_{s})-k\mathbf{m}({\mathbf{x}}_{\perp},{\mathbf{x}}_{p},{\mathbf{x}}_{s})\cdot \mathbf{a}({\mathbf{x}}_{\perp})+k{\mathbf{s}}_{p}({\mathbf{x}}_{\perp},{\mathbf{x}}_{p})\cdot \mathrm{\Delta}\mathbf{x},$$

where $\mathbf{m}={\mathbf{s}}_{p}-{\mathbf{s}}_{s}$ is known as the *sensitivity vector* of the setup. We see that the phase changes in proportion to the distance traveled by the wave when the wave number changes, due to an object point movement $\mathbf{a}({\mathbf{x}}_{\perp})$ in relation to the sensitivity vector, and due to a change in detection point $\mathrm{\Delta}\mathbf{x}$ in relation to the observation direction.
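The directional-derivative identities in Eq. (3) are easy to check numerically. The following sketch verifies the first two against finite differences for $L(\mathbf{x})=|\mathbf{x}-{\mathbf{x}}_{0}|$ and $\mathbf{s}(\mathbf{x})=(\mathbf{x}-{\mathbf{x}}_{0})/L(\mathbf{x})$; the test point, direction, and step size are arbitrary choices:

```python
import numpy as np

# Finite-difference check of the first two identities in Eq. (3),
# with L(x) = |x - x0| and s(x) = (x - x0)/L(x); x0 is an arbitrary point.
rng = np.random.default_rng(1)
x0 = np.array([0.2, -0.1, 0.05])
x = np.array([1.0, 0.7, 2.0])
v = rng.normal(size=3)                      # arbitrary movement direction
h = 1e-7                                    # finite-difference step

L = lambda x: np.linalg.norm(x - x0)
s = lambda x: (x - x0) / L(x)

# v . grad L should equal v . s
num_dL = (L(x + h * v) - L(x)) / h
assert np.isclose(num_dL, v @ s(x), atol=1e-5)

# v . grad s should equal [v - (v.s) s] / L
num_ds = (s(x + h * v) - s(x)) / h
ana_ds = (v - (v @ s(x)) * s(x)) / L(x)
assert np.allclose(num_ds, ana_ds, atol=1e-5)
```

The second identity is the key to the speckle-movement terms below: a transverse movement changes the direction vector only by its component perpendicular to $\mathbf{s}$, scaled by $1/L$.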

The next thing to consider is the integral over the surface patch $\mathrm{\Sigma}$ in Eq. (2). To handle that, I introduce the central position ${\mathbf{x}}_{\perp 0}$ within $\mathrm{\Sigma}$ and the local variable ${\mathbf{x}}_{\u03f5}$ confined to the surface patch so that ${\mathbf{x}}_{\perp}={\mathbf{x}}_{\perp 0}+{\mathbf{x}}_{\u03f5}$. By virtue of the Taylor expansion, I then get

## (5)

$$\delta \varphi (k,{\mathbf{x}}_{\perp},{\mathbf{x}}_{p},{\mathbf{x}}_{s})\approx \delta \varphi (k,{\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})+{\mathbf{x}}_{\epsilon}\cdot {\nabla}_{{\mathbf{x}}_{\perp}}(\delta \varphi ),$$

## (6)

$$\mathrm{\Delta}{\varphi}_{a}=\delta \varphi (k,{\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})=\mathrm{\Delta}kL({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})-k\mathbf{m}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})\cdot \mathbf{a}({\mathbf{x}}_{\perp 0})+k{\mathbf{s}}_{p}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p})\cdot \mathrm{\Delta}\mathbf{x},$$

## (7)

$$\mathrm{\Delta}{\varphi}_{d}=\frac{k}{{L}_{p}}{\mathbf{x}}_{\epsilon}\cdot \mathrm{\Delta}{\mathbf{x}}_{\mathrm{\Sigma}}-\mathrm{\Delta}k\,{\mathbf{x}}_{\epsilon}\cdot {\mathbf{m}}_{\mathrm{\Sigma}}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})\\ -\frac{k}{{L}_{p}}[{\mathbf{x}}_{\epsilon}\cdot {\mathbf{a}}_{\mathrm{\Sigma}}({\mathbf{x}}_{\perp 0})-{\mathbf{x}}_{\epsilon}\cdot {\mathbf{s}}_{p\mathrm{\Sigma}}{\mathbf{a}}_{p}({\mathbf{x}}_{\perp 0})]\\ +\frac{k}{{L}_{s}}[{\mathbf{x}}_{\epsilon}\cdot {\mathbf{a}}_{\mathrm{\Sigma}}({\mathbf{x}}_{\perp 0})-{\mathbf{x}}_{\epsilon}\cdot {\mathbf{s}}_{s\mathrm{\Sigma}}{\mathbf{a}}_{s}({\mathbf{x}}_{\perp 0})]\\ -k{\mathbf{x}}_{\epsilon}\cdot \mathbf{m}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})\cdot {J}_{\mathrm{\Sigma}}(\mathbf{a}),$$

## (8)

$$\mathbf{A}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})=\left[1+\frac{{L}_{p}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p})}{{L}_{s}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{s})}\right]{\mathbf{a}}_{X}\\ -{a}_{Z}\left[{\mathbf{s}}_{pX}-\frac{{L}_{p}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p})}{{L}_{s}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{s})}{\mathbf{s}}_{sX}\right]\\ +\frac{{L}_{p}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p})}{\cos {\theta}_{\widehat{X}}}\left[{\mathbf{m}}_{\mathrm{\Sigma}}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})\frac{\mathrm{\Delta}k}{k}+\mathbf{m}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})\cdot {J}_{\mathrm{\Sigma}}(\mathbf{a})\right].$$

With the aid of the above, we may rewrite Eq. (2) as

## (9)

$${\mathrm{\Gamma}}_{12}(\mathrm{\Delta}\mathbf{x})={I}_{0}\,\mathrm{exp}[i\mathrm{\Delta}{\varphi}_{a}]{\gamma}_{12}{\gamma}_{s}(\mathrm{\Delta}\mathbf{x}),$$

## (10)

$${\gamma}_{s}(\mathrm{\Delta}\mathbf{x})={\int}_{\mathrm{\Sigma}}\mathrm{exp}\left[i\frac{k}{{L}_{p}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p})}\{\mathrm{\Delta}\mathbf{x}-\mathbf{A}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})\}\cdot {\mathbf{x}}_{\epsilon}\right]{\mathrm{d}}^{2}{\mathbf{x}}_{\epsilon}.$$

A few final remarks about the speckle movements are called for. For a source positioned very far away (${L}_{s}\gg {L}_{p}$), as for a plane wave, we see in Eq. (8) that the speckles move in accordance with the surface movement. The speckles therefore appear to be glued onto the object surface and follow its movement. Additionally, an extra term ${L}_{p}\nabla {L}_{\mathrm{\Sigma}}$ appears, where $\nabla {L}_{\mathrm{\Sigma}}$ refers to phase changes over the integration patch $\mathrm{\Sigma}$ as seen from the plane of the detector. That term represents speckle movement caused by gradients in the system. The sensitivity to gradients is geared by the distance between the object surface and the detection point, so the response grows linearly with distance from the surface. We further see that the sensitivity to these gradients is determined by the sensitivity vector of the setup in relation to the direction of the local surface normal. The term ${J}_{\mathrm{\Sigma}}(\mathbf{a})$ is of significant interest in metrology. It is a second-rank tensor (although not a proper tensor), whose symmetric part is the strain tensor and whose antisymmetric part is the rotation tensor. As the differentiation is performed along the surface patch, all dependence on a variation perpendicular to the surface vanishes (e.g., the ${\epsilon}_{zz}$ component of the strain tensor). The movement of the speckles in a defocused plane therefore depends on the deformation of the object surface rather than on the movement itself. Which components of these tensors gear the speckle movement is determined by the sensitivity vector.
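The "glued-on" behavior of the bulk part of Eq. (8) can be illustrated numerically. The sketch below evaluates only the first two terms of Eq. (8) (no wavelength shift, no strain); the helper `speckle_movement` and all geometry values are hypothetical illustration choices:

```python
import numpy as np

# Bulk (first two) terms of Eq. (8):
#   A = (1 + Lp/Ls) a_X - a_Z [s_pX - (Lp/Ls) s_sX],
# assuming dk = 0 and J_Sigma(a) = 0. All numbers are illustrative.
def speckle_movement(a_X, a_Z, Lp, Ls, s_pX, s_sX):
    ratio = Lp / Ls
    return (1 + ratio) * np.asarray(a_X, float) \
        - a_Z * (np.asarray(s_pX, float) - ratio * np.asarray(s_sX, float))

# Collimated illumination (Ls -> infinity) and telecentric viewing
# (transverse components s_pX = s_sX = 0):
A = speckle_movement(a_X=[1e-6, 0.0], a_Z=5e-6, Lp=0.3, Ls=1e12,
                     s_pX=[0.0, 0.0], s_sX=[0.0, 0.0])
# A ~ [1e-6, 0]: the speckles follow the in-plane surface movement,
# as if glued onto the surface; the out-of-plane part a_Z drops out.
```

With a finite ${L}_{s}$ or an inclined observation direction, the same function shows how the out-of-plane component ${a}_{Z}$ starts to leak into the transverse speckle movement.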

### 2.2. Speckle Correlation in an Imaging System

We will now turn to the correlation properties of speckles in an imaging system. Consider a general optical system positioned in front of an object surface that is illuminated by an expanded laser beam, as sketched in Fig. 2. Here, I will assume that the entrance pupil of the optical system is positioned a distance $L$ from the object surface and that the detector is placed a distance ${z}_{2}$ from the exit pupil. Hence, the conjugate plane appears a distance ${z}_{1}$ in front of the entrance pupil, giving the numerical aperture $N{A}_{0}$ for the rays entering the optical system. I will call this plane the *focus plane* of the optical system. In general, therefore, a defocus $\mathrm{\Delta}L({\mathbf{x}}_{\perp 0})=L({\mathbf{x}}_{\perp 0})-{z}_{1}$ is present in the system, which may vary from point to point over the object. Further, a magnification $m=-{z}_{2}/{z}_{1}$ between the focus plane and the detection plane is present. The role of the detection point ${\mathbf{x}}_{p}$ is now played by the focus plane, and we may write down the speckle movement in the plane of the detector directly as follows:

## (11)

$${\mathbf{A}}_{X}(\mathbf{X},{\mathbf{x}}_{p},{\mathbf{x}}_{s})=m\mathbf{A}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s}),$$

## (12)

$$|{\mathrm{\Gamma}}_{12}(\mathrm{\Delta}\mathbf{X})|={I}_{0}{\gamma}_{12}\left|{\int}_{\mathrm{\Omega}}P({\mathbf{x}}_{d})P({\mathbf{x}}_{d}-{\mathbf{A}}_{P}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{d},{\mathbf{x}}_{s}))\mathrm{exp}\left[i\frac{2\pi}{\lambda}\{\mathrm{\Delta}\mathbf{X}-{\mathbf{A}}_{X}(\mathbf{X},{\mathbf{x}}_{p},{\mathbf{x}}_{s})\}\cdot {\mathbf{x}}_{d}\right]{\mathrm{d}}^{2}{\mathbf{x}}_{d}\right|.$$

Now if two images ${I}_{1}({\mathbf{X}}_{1})$ and ${I}_{2}({\mathbf{X}}_{2})$ are recorded, we may form the cross-covariance $\langle \mathrm{\Delta}{I}_{1}({\mathbf{X}}_{1})\mathrm{\Delta}{I}_{2}({\mathbf{X}}_{2})\rangle $ between them, where $\mathrm{\Delta}{I}_{i}$ is the zero-mean intensity variation of image $i$. This results in a correlation function ${|{\mathrm{\Gamma}}_{12}(\mathrm{\Delta}\mathbf{X})|}^{2}$, where the height of the peak gives the statistical similarity between the two patterns, its width gives the speckle size, and its position gives the movement between the two patterns. Hence, by locating the position of the cross-covariance peak in relation to the zero position, the speckle movement ${\mathbf{A}}_{X}(\mathbf{X},{\mathbf{x}}_{p})$ is found, and if the normalized peak height $\gamma ={|{\mathrm{\Gamma}}_{12}(\mathrm{\Delta}\mathbf{X})|}_{\mathrm{max}}^{2}/{I}_{0}^{2}={\gamma}_{12}^{2}{\gamma}_{P}^{2}$ is calculated, a measure of the microstructural dynamics is obtained. This effect is utilized in techniques known as image correlation,^{3} digital speckle photography,^{4} and particle image velocimetry.^{10} The image is then divided into a number of subimages, and the local cross-covariance is determined. The result is a vector field of speckle movements and a scalar field of correlation values that may be related to the deformation of the object. For this technique to work properly, it is important that aliasing is not introduced in the analysis, which means that the images need to be properly sampled.
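A minimal sketch of this subimage procedure, with a synthetic random pattern standing in for recorded speckle data and an FFT-based circular cross-covariance, could look as follows; the subimage size and the imposed shift are arbitrary illustration values:

```python
import numpy as np

# Locate the speckle movement between two subimages as the peak of
# their cross-covariance, computed with FFTs. A synthetic random
# pattern stands in for recorded speckle intensity data.
rng = np.random.default_rng(0)
N = 64
pattern = rng.random((N + 16, N + 16))       # synthetic "speckle" intensity

shift = (3, 5)                               # imposed movement in pixels
I1 = pattern[8:8 + N, 8:8 + N]
I2 = pattern[8 - shift[0]:8 - shift[0] + N, 8 - shift[1]:8 - shift[1] + N]

dI1 = I1 - I1.mean()                         # zero-mean intensity variations
dI2 = I2 - I2.mean()
xcov = np.fft.ifft2(np.conj(np.fft.fft2(dI1)) * np.fft.fft2(dI2)).real

peak = np.unravel_index(np.argmax(xcov), xcov.shape)
move = [(p + N // 2) % N - N // 2 for p in peak]  # wrap to a signed shift
print(move)   # [3, 5]
```

In practice, the peak is interpolated to subpixel precision and the normalized peak height is kept as the correlation value $\gamma$; the sketch only recovers the integer-pixel movement.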

### 2.3. Correlation Properties of Interferometric Speckles

Consider two images ${I}_{1}({\mathbf{X}}_{1})$ and ${I}_{2}({\mathbf{X}}_{2})$ recorded with a change in the system between the recordings and with a smooth reference wave added. Following any of the standard routes of interferometric detection, the two complex fields may then be extracted from the recordings.

The correlation between the extracted fields again takes the form of a *modified mutual coherence function*. With these modifications, the results from Sec. 2 may be adopted right away. In digital holographic interferometry, the phase change $\mathrm{\Delta}{\varphi}_{a}$ is usually the primary source of information. This phase change is usually detected in one of two modalities. The most common is to acquire the phase change in a fixed detector position, meaning that $\mathrm{\Delta}\mathbf{X}=0$. The coherence is then obtained from Eq. (12) by setting $\mathrm{\Delta}\mathbf{X}=0$. The other technique is to track the correlated speckles on the detector and calculate the interference between these.

The coherence is then obtained from Eq. (12) by setting $\mathrm{\Delta}\mathbf{X}={\mathbf{A}}_{X}$.^{11} As the speckles usually are small and the speckle movements may become significant, the difference in fringe contrast between these two ways of calculating the phase difference may become very large. For example, if the in-plane movement of the speckles becomes larger than the in-plane speckle size, the coherence becomes zero in the first case, while it may remain close to unity in the latter case. However, this comes at the cost of computational complexity.

As a final remark before moving on to some examples, the complementary information provided by the phase difference $\mathrm{\Delta}{\varphi}_{a}$ and the speckle movement ${\mathbf{A}}_{X}$ deserves a comment. As the speckle movement stems from phase gradients over the plane of integration, as is clear from the expansion in Eq. (5), it is sensitive to any gradients in $\mathrm{\Delta}{\varphi}_{a}$. By detecting the speckle motion in two planes [e.g., $L({Z}_{1})$ and $L({Z}_{2})$], the interferometric phase term may be restored. In principle, therefore, interferometric information is provided by speckle movements alone, and the sometimes-cumbersome operation of adding a reference beam may be avoided. This is essentially the same idea as that adopted by many researchers within X-ray phase-contrast imaging, where two intensity images recorded in two different planes are used to calculate the phase distribution with the aid of the transport-of-intensity equation. As argued by Paganin and Nugent, this technique has many advantages, the most striking of which is that problems associated with phase wrapping are avoided altogether.^{12} In the next section, a few simple examples are discussed in relation to the results given here.
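The two-plane idea can be sketched in one dimension. The phase profile, plane separation, and the linear relation between defocus and movement used below are illustrative assumptions in the spirit of the discussion above, not the article's exact expressions:

```python
import numpy as np

# Recover a 1-D phase profile from simulated speckle movements in two
# defocus planes. Assumption: the movement difference between planes
# separated by dL is (dL/k) times the transverse phase gradient.
k = 2 * np.pi / 633e-9
x = np.linspace(-5e-3, 5e-3, 401)
dx = x[1] - x[0]
phi = 20 * np.exp(-(x / 2e-3) ** 2)          # "true" phase bump, radians

grad_phi = np.gradient(phi, dx)              # transverse phase gradient
dL = 0.03                                    # 3 cm separation between planes
dA = dL * grad_phi / k                       # simulated movement difference

# Invert: gradient from the movements, then integrate (free constant).
grad_rec = k * dA / dL
phi_rec = np.cumsum(grad_rec) * dx
phi_rec -= phi_rec.mean() - phi.mean()       # fix the integration constant

residual = np.max(np.abs(phi_rec - phi))     # small integration error
```

Note that the integration is insensitive to $2\pi$ ambiguities, which is exactly the phase-unwrapping advantage mentioned above; the price is the free integration constant and the accumulation of gradient noise.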

## 3. Discussion

The two main results from the previous section are given by Eqs. (6) and (8), respectively. Interferometric phase differences are caused by changes in wave number in relation to some relative propagation length, by surface movements in relation to the sensitivity vector of the setup, and by movements of the detection point in relation to the detection vector. At this first-order level of approximation, therefore, no cross-talk between wave-number shift and deformation is considered. The speckle movements can be divided into two separate parts. The first part relates to the bulk movement of the surface, possibly geared by the curvatures of the illumination and detection wavefronts, respectively. In the case of collimated illumination, this bulk part adds a component to the total speckle movement that follows the surface movement as if the speckles were glued onto the surface. It is also obvious that these components have no direct correspondence with the phase difference between the two fields.

The second part of the speckle movement, on the other hand, is proportional to the gradient of the phase field measured with a speckle interferometer. As the gradient of a scalar field is a vector field, the phase gradient determines both the direction and the magnitude of the speckle movement. The scaling parameter that turns this into actual movement is the distance between the generating surface and the plane of detection. In the case of an imaging system, two distances need to be considered. The distance between the object surface and the entrance pupil scales the movement of the correlation cells that enter the optical system. Such movements result in permanent decorrelation of the speckle structure and less accuracy in the measurements. The distance between the object surface and the focus plane (the plane conjugate to the detection plane) scales the movement of the speckles on the detector. The movements therefore have different signs when the focus plane is placed in front of or behind the object surface, respectively, but the structure of the movement is the same. This movement may be compensated for when forming phase images to maximize the fringe contrast in the interferograms. A few typical consequences of the results from Sec. 2 are given below. The phase fields shown in the following figures are formed by calculating the phase of ${U}_{1}^{*}(\mathbf{X}){U}_{2}(\mathbf{X})$, while the speckle movement fields may be generated from image correlation between the intensities ${I}_{1}(\mathbf{X})={|{U}_{1}(\mathbf{X})|}^{2}$ and ${I}_{2}(\mathbf{X})={|{U}_{2}(\mathbf{X})|}^{2}$, respectively.

Three typical situations often encountered in practical speckle metrology experiments will be discussed. Consider first a setup consisting of a plate oriented parallel to the detector of an imaging system. For simplicity, we will consider unit magnification and illumination along the optical axis of the setup. If we further assume collimated illumination and telecentric imaging, ${L}_{s}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{s})\to \infty $ and ${\mathbf{s}}_{pX}\to 0$, respectively. With these choices, the sensitivity vector $\mathbf{m}({\mathbf{x}}_{\perp 0},{\mathbf{x}}_{p},{\mathbf{x}}_{s})=2\widehat{\mathbf{z}}$, where $\widehat{\mathbf{z}}$ is parallel to the surface normal and $\cos {\theta}_{\widehat{X}}=1$. The response in a speckle interferometer due to a $3.3\lambda $ deformation of the central point is shown in the left side of Fig. 3. It is seen that the out-of-plane movement has a Gaussian shape that drops most rapidly where the distance between the phase jumps is the smallest. The corresponding speckle movement generation strength (the speckle movement per unit defocus distance) is shown in the right side of Fig. 3.

It is seen that the speckle movement is perpendicular to the phase planes in the left image and that the magnitude is inversely proportional to the distance between the phase planes. The change in sign occurs because of the inversion caused by the imaging. Two things are of general interest in relation to these results. First, the speckle decorrelation caused by movements over the entrance pupil will be most severe in regions with large phase gradients, and hence in regions with dense fringes. This is bad news for phase unwrapping software, which usually needs a certain spatial region without wrapping to perform well. Second, it is seen that the speckle movement in a defocused plane may be used to calculate the phase gradients, provided the distance to the object surface is known. Further, if the phase gradients are known, it is a trivial task to integrate them to get the actual deformation. As detection of an intensity image is often significantly less challenging and less error prone than analysis of an interferometric setup, speckle movements in a defocused plane are an attractive and more robust alternative to proper phase measurements in a disturbed environment. One example is an investigation of percussion hole drilling in different metals that was performed a few years ago.^{13} In the second example, shape measurement with dual-wavelength digital holography is considered. This assumes an optical setup similar to the previous example, but in this case, the object is a diffuse spherical surface. We further assume that the length of the reference arm is tuned such that the zero phase plane coincides with the top surface of the object. Hence, all phase differences due to a change in wave number are relative to this plane. We further assume that speckle fields may be acquired in two different planes separated in depth by a distance $\mathrm{\Delta}L$.
If these two planes are acquired at the same magnification $m$, the difference in speckle movement between them is proportional to the local phase gradients, from which the shape of the object may be reconstructed.

Because of the inclination of the illumination direction, the sensitivity vector becomes $\mathbf{m}=[1/\sqrt{2},0,(1+\sqrt{2})/\sqrt{2}]$, expressed in the coordinate system defined by the orientation of the detector. The projection ${\mathbf{m}}_{\mathrm{\Sigma}}$ onto the object surface is given by the first two components. If the plate is rotated 0.1 mrad around the optical axis, with a center of rotation in the middle of the field of view, the phase field shown in the left part of Fig. 5 is obtained. The recording conditions are the same as in the second example. As the sensitivity vector has a component in the plane of the plate, the corresponding in-plane movement component will generate a phase difference in the interferogram. We see that the phase varies only in the $y$-direction, as expected. For the speckle movements shown in the right part of Fig. 5, two effects are blended. The bulk movement of the surface will generate a rotational pattern centered in the middle of the image. But because of the phase gradient, the center of rotation will move downward in the direction of the phase gradient. In the case shown in Fig. 5, a defocus of 3 cm has been assumed. This last example highlights the principal difference between phase and speckle movements. The speckles will move according to the bulk movement of the surface, as well as according to the phase gradients geared by the defocus distance, while the phase field only carries information about phase variations. Hence, if the speckle movement is detected in two different focus planes and subtracted, the phase gradient field may be reconstructed, but generating the speckle movement field from the phase requires a multitude of different sensitivity vectors.
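The stated sensitivity vector follows directly from $\mathbf{m}={\mathbf{s}}_{p}-{\mathbf{s}}_{s}$. The quick check below assumes illumination inclined 45 deg from the surface normal in the $x$–$z$ plane, arriving from the positive-$x$ side, with telecentric observation along $z$; the sign of the $x$-component depends on this (assumed) geometry:

```python
import numpy as np

# m = s_p - s_s for 45-deg inclined illumination and observation along
# the optical axis. The source is assumed on the positive-x side, so
# s_s (which points toward the surface) has negative x and z components.
s_p = np.array([0.0, 0.0, 1.0])                        # toward the detector
theta = np.pi / 4                                      # 45-deg inclination
s_s = np.array([-np.sin(theta), 0.0, -np.cos(theta)])  # toward the surface
m = s_p - s_s
# Expected: [1/sqrt(2), 0, (1 + sqrt(2))/sqrt(2)]
```

The in-plane projection ${\mathbf{m}}_{\mathrm{\Sigma}}$ is just the first two components of `m`, which is what couples the in-plane rotation to the interferometric phase in this example.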

## 4. Conclusions

In this article, the theory of dynamic speckles in reflection geometry has been reviewed. The object under consideration is allowed to have a general shape, but it should be diffuse and essentially a surface scatterer. It is then shown that the phase of a speckle pattern in general changes because of changes in the setup in relation to the sensitivity vector of the setup, while the speckle movements show a more complex behavior. The speckle movement can in principle be divided into two distinct parts. One part depends on the movement of the object; it is independent of defocus in the system and behaves essentially as a bulk motion. The other part depends on local phase gradients along the surface patch of the object, scaled by any defocus. The phase gradients are generated by object deformations and changes in the wave number of the light and scale according to the local surface normal in relation to the sensitivity vector. This part essentially redirects a given speckle pattern in a different direction; thus, the motion becomes dependent on defocus.

The theory has been demonstrated by three typical applications. The first was an out-of-plane bending of a plate produced by a central point load; the second was a dual-wavelength holographic recording of an object of general shape; and the third was an in-plane rotation of a plate with a sensitivity vector having an in-plane component. In all these cases, it is shown that the interferometric phase gradients and the speckle movements carry equivalent information, but also that the speckle movements are influenced by the bulk movement of the object.

## References

## Biography

**Mikael Sjödahl** received his MSc in mechanical engineering and his PhD in experimental mechanics from the Luleå University of Technology, Sweden, in 1989 and 1995, respectively. He is currently holding the chair of experimental mechanics at the Luleå University of Technology and a professorship at University West, Sweden. He has authored or coauthored over 100 papers in international journals and contributed to two books. His interests include fundamental speckle behavior, coherent optical metrology, nondestructive testing, and multidimensional signal processing.