Dynamic properties of multispectral speckles in digital holography and image correlation
Abstract
I discuss dynamic properties of multispectral speckles in the context of digital holographic interferometry and image correlation. I outline the correlation of speckles in free space, in an imaging system, and, in the case of interferometric detection, caused by reflection off an inclined diffuse surface. It is shown that interferometric phase gradients and speckle movements are closely related; in fact, the phase gradients are the generator of speckle movements in a defocused plane. The theory is exemplified by three typical situations encountered in image-plane digital holographic interferometry.

1.

Introduction

In a way, speckles are the ultimate manifestation of coherence in a wave field, as they show up (become visible) only in coherent fields. In many situations, the appearance of speckles is a problem, as the underlying intensity contrast is deteriorated, but speckles are actually needed to carry coherent information in a random wave field. In this article, only speckles produced by reflection are considered, but they appear wherever coherent scattering is present. The geometry of this problem is shown in Fig. 1. A monochromatic beam propagates from a source located at Ps to the surface Σ, where it is reflected toward the detection point Pp. If, after reflection, the amplitude and phase of the beam are known, the beam is perfectly deterministic and the field at Pp can be calculated using the diffraction integral. If we now add a phase screen that is completely random, a complete set of spatial frequencies is generated, including both evanescent and homogeneous components. Different polarization components are also generated, but we disregard that effect from now on. As a complete set of spatial frequencies is generated, we would expect the propagating field reflected off the phase screen to be a diffuse field, no longer a beam. What we have constructed is a diffusor. When viewed from any direction, the wave would appear to originate from the random phase screen, not from the source generating the wave (such as a light source). Therefore, we may call the diffusor a secondary source and the actual source a primary source. It is this property of producing a complete set of spatial frequencies that is utilized in optical imaging metrology. However, as the diffraction integral now involves random components that are added together, the result is the random field called a speckle pattern, the static properties of which have been analyzed extensively by Goodman1 and others.

Fig. 1

Definition of vectors included in the derivation of dynamic speckle properties in free-space geometry.


In metrology, the change in a given speckle pattern because of a change in any of the generating variables is generally of primary interest. For example, if we change the wavelength of the coherent beam, deform the object, or somehow change the microstructural distribution of scatterers, the phase, position, and microstructure of the speckle pattern change. It is this phase change, speckle movement, or decorrelation that is utilized in metrology. Typical techniques are referred to as speckle interferometry,2 speckle or image correlation,3,4 and dynamic light scattering.5 Theoretically, the expected response in a speckle pattern due to some change in the system may be analyzed by calculating the modified mutual coherence of the field. The most notable contributions are from Yamaguchi,6–8 who has analyzed the effects of a surface deformation and wavelength shift over a plane object. From the results by Yamaguchi and others, it is concluded that speckles in a free-space geometry behave much like a grating and that the movements are generated by relative phase changes over the surface patch of integration. In Sec. 2.1, the modified mutual coherence function is generalized to include variations in wave number, object deformation, and object orientation in a free-space geometry. The term multispectral speckles is used to emphasize the fact that two (or more) distinct wavelengths are used, as opposed to a broadband source. Hence, the correlation properties between two fully developed speckle patterns are analyzed and used. In Sec. 2.2, the results from the analysis of the free-space geometry are used to analyze the behavior of speckles in an imaging system, and in Sec. 2.3, the effect of adding a smooth reference wave is included. Special attention is given to the sensitivity of phase evaluation. Section 3 discusses the properties of the results that are often encountered in speckle metrology. The results are put into perspective in the final section of the article.

2.

Correlation Properties of Dynamic Speckles

Our starting point will be the geometry sketched in Fig. 1. A monochromatic point source Ps(x_s) situated at position x_s illuminates a plane diffuse surface. In this article, I will limit the discussion to surface scattering, meaning that each photon has undergone only one scattering event. A general scattering point on this surface is defined by position x so that the plane wave component illuminating the scattering point propagates in direction s_s = (x − x_s)/L_s, where L_s = |x − x_s| is the length between the source and the scattering point and the directional vector s_s points from the source to the scattering point. The resulting field detected at point Pp(x_p), at position x_p in front of the surface, is the result of integrating the random contributions from a domain Σ on the surface defined by the solid angle Ω. I will assume Σ to be much smaller than the illuminated surface area, as in an optical imaging system (e.g., where Ω is limited by the numerical aperture of the imaging system). The intensity I_0 on the surface may hence be considered constant. The directional vector s_p = (x_p − x)/L_p points from the scattering point toward the detection point, where L_p = |x_p − x| is the length between the scattering point and the detection point. Hence, the total length covered by a wave is L = L_s + L_p, and the accumulated phase becomes φ(k,x,x_p,x_s) = kL, where the wave number k = 2πν/c. By virtue of the diffraction integral, the field U(k,x,x_p,x_s) at point Pp is given by

Eq. (1)

$$U(k,\mathbf{x},\mathbf{x}_p,\mathbf{x}_s) = \sqrt{I_0}\int_\Omega g(\mathbf{s})\exp[i\phi(k,\mathbf{x},\mathbf{x}_p,\mathbf{x}_s)]\,\mathrm{d}^2 s,$$
where g(s) is the phase function of the surface and the integration is taken over all the spatial frequency components s within Ω. Any change in the system will now result in a change in the phase in Eq. (1) so that φ → φ + δφ. The result is a change in the speckle pattern in the neighborhood of Pp that is only partly correlated with the original pattern. If we assume a spatially incoherent source, meaning that the different components of g(s) are random and independent, we may express the correlation between two speckle fields in the vicinity of Pp as

Eq. (2)

$$\Gamma_{12}(\Delta\mathbf{x}) = \langle U_1^* U_2\rangle = I_0\,\gamma_{12}\int_\Sigma \exp[i\,\delta\phi(k,\mathbf{x},\mathbf{x}_p,\mathbf{x}_s)]\,\mathrm{d}^2 x,$$
where Δx is a spatial separation in detection space, the integration over spatial frequencies has been replaced by an integration over the generating surface, and γ12 < 1 is the microscopic coherence function.9 In the following, I will not be concerned with the microscopic coherence function, as other effects often dominate. Equation (2) is the fundamental equation for this section, and the remainder of this article analyzes it in some detail.
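To make Eqs. (1) and (2) concrete, the following minimal sketch evaluates the diffraction integral as a discrete sum over random scatterers and estimates the normalized coherence between two wave numbers at a single detection point. All geometry values (source and detector positions, patch size, wavelength) are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed geometry (metres): point source P_s, patch in z = 0, detector P_p.
k = 2 * np.pi / 633e-9              # wave number for an assumed 633-nm source
x_s = np.array([0.0, 0.0, -0.5])    # primary source
x_p = np.array([0.0, 0.0, 0.5])     # detection point

# Random scatterers over a 2 x 2 mm patch Sigma with random phase g(s).
N = 4000
xy = (rng.random((N, 2)) - 0.5) * 2e-3
pts = np.column_stack([xy, np.zeros(N)])
g = np.exp(2j * np.pi * rng.random(N))

def field(k_val):
    """Field U at x_p as a discrete analog of Eq. (1)."""
    L = np.linalg.norm(pts - x_s, axis=1) + np.linalg.norm(x_p - pts, axis=1)
    return np.sum(g * np.exp(1j * k_val * L))

# Normalized mutual coherence between two wave numbers, cf. Eq. (2).
U1, U2 = field(k), field(k * (1 + 1e-5))
gamma = U1.conj() * U2 / (abs(U1) * abs(U2))
print(abs(gamma), np.angle(gamma))
```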

2.1.

Phase and Phase Gradients of Dynamic Speckles

The most important variable in Eq. (2) is the differential of the phase φ(k,x,x_p,x_s) = kL = k[L_s(x,x_s) + L_p(x,x_p)], where the first three variables are allowed to vary. In the following, I will utilize the Taylor expansion to first order to approximate a variation of the dependent variables. For the vector variables, we need to calculate directional derivatives of the form F(x + v) ≈ F(x) + v·∇F(x), where the last term gives the change in the function because of a small movement v from x. The expression v·∇ produces a scalar differential operator that operates on the function F(x), which may be either a scalar or a vector. In the following discussion, four types of expressions will appear:

Eq. (3)

$$\begin{aligned}
\mathbf{v}\cdot\nabla_x L &= \mathbf{v}\cdot\mathbf{s} = \mathbf{v}\cdot\mathbf{s}_D,\\
\mathbf{v}\cdot\nabla_x \mathbf{s} &= \frac{\mathbf{v} - (\mathbf{v}\cdot\mathbf{s})\mathbf{s}}{L} = \frac{1}{L}\left[\mathbf{v} - (\mathbf{v}\cdot\mathbf{s}_D)\mathbf{s}\right],\\
\mathbf{v}\cdot\nabla_x(\mathbf{a}\cdot\mathbf{b}) &= [\mathbf{v}\cdot\nabla_x \mathbf{a}]\cdot\mathbf{b} + \mathbf{a}\cdot[\mathbf{v}\cdot\nabla_x \mathbf{b}],\\
\mathbf{v}\cdot\nabla_x \mathbf{a}(\mathbf{x}) &= \mathbf{v}\cdot J_D[\mathbf{a}(\mathbf{x})],
\end{aligned}$$
where subscript D refers to a projection onto a plane D in which the vector v is confined (if applicable), ∇_x denotes differentiation with respect to x, the vector L = L s, where s is a unit vector and L is a directed line segment, and the function J_D(a) is the Jacobian of a relative to the plane D. We may then express the phase shift as

$$\delta\phi(k,\mathbf{x},\mathbf{x}_p,\mathbf{x}_s) \approx \Delta k\,\frac{\partial\phi}{\partial k} + \mathbf{a}(\mathbf{x})\cdot\nabla_x\phi + \Delta\mathbf{x}\cdot\nabla_{x_p}\phi,$$

where Δk = k_2 − k_1 indicates a change in the wave number, a(x) = x_2 − x_1 is a movement of a surface point, and Δx = x_p2 − x_p1 indicates a movement of the detection point. With the help of Eq. (3), the phase shift may be expressed as

Eq. (4)

$$\delta\phi(k,\mathbf{x},\mathbf{x}_p,\mathbf{x}_s) \approx \Delta k\,L(\mathbf{x},\mathbf{x}_p,\mathbf{x}_s) - k\,\mathbf{m}(\mathbf{x},\mathbf{x}_p,\mathbf{x}_s)\cdot\mathbf{a}(\mathbf{x}) + k\,\mathbf{s}_p(\mathbf{x},\mathbf{x}_p)\cdot\Delta\mathbf{x},$$
where the vector m(x,x_p,x_s) = s_p(x,x_p) − s_s(x,x_s) is known as the sensitivity vector of the setup. We see that the phase changes due to a change in the wave number in proportion to the distance traveled by the wave, due to an object point movement a(x) in relation to the sensitivity vector, and due to a change Δx in the detection point in relation to the observation direction.
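A direct numerical transcription of Eq. (4) may help fix the conventions. The following sketch computes the first-order phase shift from a wave-number change, an object point movement, and a detection point movement; the function name and argument layout are mine, and all inputs are assumed to be NumPy arrays in a common global frame.

```python
import numpy as np

def phase_shift(dk, a, dx, x, x_p, x_s, k):
    """First-order phase shift of Eq. (4):
    dphi = dk*L - k*(m . a) + k*(s_p . dx), with m = s_p - s_s."""
    L_s = np.linalg.norm(x - x_s)
    s_s = (x - x_s) / L_s                 # source -> scattering point
    L_p = np.linalg.norm(x_p - x)
    s_p = (x_p - x) / L_p                 # scattering point -> detector
    m = s_p - s_s                         # sensitivity vector of the setup
    return dk * (L_s + L_p) - k * (m @ a) + k * (s_p @ dx)

# Example: reflection geometry with source and detector on the same side,
# normal illumination/observation, so m = 2*z_hat and a pure out-of-plane
# movement a_z gives the phase shift -2*k*a_z.
k = 2 * np.pi / 500e-9
x, x_s, x_p = np.zeros(3), np.array([0, 0, 2.0]), np.array([0, 0, 1.0])
print(phase_shift(0.0, np.array([0, 0, 100e-9]), np.zeros(3), x, x_p, x_s, k))
```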

The next thing to consider is the integral over the surface patch Σ in Eq. (2). To handle that, I introduce the central position x_0 within Σ and the local variable x_ε confined to the surface patch so that x = x_0 + x_ε. By virtue of the Taylor expansion, I then get

Eq. (5)

$$\delta\phi(k,\mathbf{x},\mathbf{x}_p,\mathbf{x}_s) \approx \delta\phi(k,\mathbf{x}_0,\mathbf{x}_p,\mathbf{x}_s) + \mathbf{x}_\epsilon\cdot\nabla_x(\delta\phi),$$
where, again, only first-order terms are considered. Two types of phase terms are obtained. The first term,

Eq. (6)

$$\Delta\phi_a = \delta\phi(k,\mathbf{x}_0,\mathbf{x}_p,\mathbf{x}_s) = \Delta k\,L(\mathbf{x}_0,\mathbf{x}_p,\mathbf{x}_s) - k\,\mathbf{m}(\mathbf{x}_0,\mathbf{x}_p,\mathbf{x}_s)\cdot\mathbf{a}(\mathbf{x}_0) + k\,\mathbf{s}_p(\mathbf{x}_0,\mathbf{x}_p)\cdot\Delta\mathbf{x},$$
is an absolute phase term that is independent of the integration variable xϵ and may be moved outside the integral in Eq. (2). This term will later be recognized as the static or absolute phase difference and is the phase difference that is measured in a speckle interferometer. The second term is the differential phase difference that, with the help of the relations in Eq. (3), is expressed as

Eq. (7)

$$\begin{aligned}
\Delta\phi_d = {}& \frac{k}{L_p}\,\mathbf{x}_\epsilon\cdot\Delta\mathbf{x}_\Sigma - \Delta k\,\mathbf{x}_\epsilon\cdot\mathbf{m}_\Sigma(\mathbf{x}_0,\mathbf{x}_p,\mathbf{x}_s)\\
&- \frac{k}{L_p}\left[\mathbf{x}_\epsilon\cdot\mathbf{a}_\Sigma(\mathbf{x}_0) - \mathbf{x}_\epsilon\cdot\mathbf{s}_{p\Sigma}\,a_p(\mathbf{x}_0)\right] + \frac{k}{L_s}\left[\mathbf{x}_\epsilon\cdot\mathbf{a}_\Sigma(\mathbf{x}_0) - \mathbf{x}_\epsilon\cdot\mathbf{s}_{s\Sigma}\,a_s(\mathbf{x}_0)\right]\\
&- k\,\mathbf{x}_\epsilon\cdot\mathbf{m}(\mathbf{x}_0,\mathbf{x}_p,\mathbf{x}_s)\cdot J_\Sigma(\mathbf{a}),
\end{aligned}$$
where full use has been made of the relations in Eq. (3) and a global frame of reference is oriented such that the plane of detection (e.g., an image sensor) becomes roughly perpendicular to s_p, in which case s_p·Δx ≈ 0, and the vector x_ε is confined to the area Σ on the surface patch. The advantage of Eq. (7) is that the expression (k/L_p) x_ε· appears (or can be made to appear through multiplication) in all terms and may therefore be moved outside the parentheses. The expression within the parentheses may then be written as Δx − A(x_0,x_p,x_s), where

Eq. (8)

$$\begin{aligned}
\mathbf{A}(\mathbf{x}_0,\mathbf{x}_p,\mathbf{x}_s) = {}&\left[1 + \frac{L_p(\mathbf{x}_0,\mathbf{x}_p)}{L_s(\mathbf{x}_0,\mathbf{x}_s)}\right]\mathbf{a}_X - a_Z\left[\mathbf{s}_{pX} - \frac{L_p(\mathbf{x}_0,\mathbf{x}_p)}{L_s(\mathbf{x}_0,\mathbf{x}_s)}\,\mathbf{s}_{sX}\right]\\
&+ \frac{L_p(\mathbf{x}_0,\mathbf{x}_p)}{\cos\theta_{\hat X}}\left[\mathbf{m}_\Sigma(\mathbf{x}_0,\mathbf{x}_p,\mathbf{x}_s)\,\frac{\Delta k}{k} + \mathbf{m}(\mathbf{x}_0,\mathbf{x}_p,\mathbf{x}_s)\cdot J_\Sigma(\mathbf{a})\right]
\end{aligned}$$
is the speckle movement in the plane of detection. Equation (8) calls for some clarifications. First, the speckle movement vector A is the projection of the speckle movement onto the plane of the detector (perpendicular to the optical axis). Also, the object displacement vector has been decomposed so that the vector a_X refers to the projection of the displacement vector onto the plane of the detector, while the component a_Z refers to the component parallel with the optical axis (the axial displacement). In the last term, the expression m_Σ(x_0,x_p,x_s) appears in the numerator. This is the projection of the sensitivity vector onto the local surface patch and gives a vector that is perpendicular to the surface normal vector n. The magnitude of m_Σ gives the magnitude with which the speckle movement is geared because of a change in wave number, and its direction is the direction in which the speckles move. Also, note that J_Σ(a) is an improper tensor of rank 2. The components of this tensor are most suitably expressed in the local coordinate system, in which case the last column is zero. Therefore, the local sensitivity vector m is most appropriately expressed in the same local coordinate system, where the multiplication between the two results in a two-component vector corresponding to the two measured directions of speckle movement. The scaling parameter cos θ_X̂ relates the orientation of the detector to the surface patch, where θ_X̂ is the angle between Δx and Δx_Σ. It has an effect similar to the obliquity factor that appears in classical diffraction theory.
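The following sketch transcribes the reconstructed Eq. (8) directly. The decomposition of the inputs (detector-plane projections a_X, s_pX, s_sX; the in-plane sensitivity projection m_Σ; the local sensitivity vector and the 3×2 surface Jacobian J_Σ) follows the clarifications above; the function and argument names are mine.

```python
import numpy as np

def speckle_movement(a_X, a_Z, s_pX, s_sX, L_p, L_s,
                     m_Sigma, m_loc, J_Sigma, dk_over_k, cos_theta):
    """Free-space speckle movement A of Eq. (8) in the detection plane.

    a_X, s_pX, s_sX, m_Sigma : 2-vectors (detector/surface projections)
    a_Z                      : axial displacement component (scalar)
    m_loc                    : sensitivity vector in local surface coords
    J_Sigma                  : 3x2 Jacobian of the displacement field
    """
    ratio = L_p / L_s
    bulk = ((1 + ratio) * np.asarray(a_X)
            - a_Z * (np.asarray(s_pX) - ratio * np.asarray(s_sX)))
    grad = (L_p / cos_theta) * (np.asarray(m_Sigma) * dk_over_k
                                + np.asarray(m_loc) @ np.asarray(J_Sigma))
    return bulk + grad
```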

With the aid of the above, we may rewrite Eq. (2) as

Eq. (9)

$$\Gamma_{12}(\Delta\mathbf{x}) = I_0\,\exp[i\,\Delta\phi_a]\,\gamma_{12}\,\gamma_s(\Delta\mathbf{x}),$$
where the deterministic phase of the coherence function is Δϕa and the speckle correlation function is

Eq. (10)

$$\gamma_s(\Delta\mathbf{x}) = \int_\Sigma \exp\!\left[i\,\frac{k}{L_p(\mathbf{x}_0,\mathbf{x}_p)}\{\Delta\mathbf{x} - \mathbf{A}(\mathbf{x}_0,\mathbf{x}_p,\mathbf{x}_s)\}\cdot\mathbf{x}_\epsilon\right]\mathrm{d}^2 x_\epsilon,$$
where often only the magnitude is of practical interest. As written, the coherence becomes γ12 when Δx = A and drops rapidly away from the correlation peak. In general, as will be discussed in the next section, the coherence will be lower.

A few final remarks about the speckle movements are called for now. For a source positioned very far away (L_s ≫ L_p), as for a plane wave, we see in Eq. (8) that the speckles will move in accordance with the surface movement. The speckles therefore will appear to be glued onto the object surface and follow its movement. Additionally, a term geared by L_p appears that represents phase changes over the integration patch Σ as seen from the plane of the detector. That term represents speckle movement caused by gradients in the system. It is clear that the sensitivity to gradients is geared by the distance between the object surface and the detection point, and therefore the response will grow linearly with distance from the surface. We further see that the sensitivity to these gradients is determined by the sensitivity vector of the setup in relation to the direction of the local surface normal. The term J_Σ(a) is of significant interest in metrology. It is a second-rank tensor (although not a proper tensor), where the symmetric part is the strain tensor and the antisymmetric part is the rotation tensor. As the differentiation is performed along the surface patch, all dependence on a variation perpendicular to the surface vanishes (e.g., the ε_zz component of the strain tensor). The movement of the speckles in a defocused plane, therefore, depends on the deformation of the object surface rather than on the movement itself. Which components of these tensors gear the speckle movement is determined by the sensitivity vector.
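The decomposition of J_Σ(a) mentioned above is simple to express in code. As a minimal sketch (restricted to the two in-plane components, since the perpendicular variation drops out), the symmetric part of the Jacobian is the strain and the antisymmetric part is the rotation:

```python
import numpy as np

def strain_and_rotation(J):
    """Split the in-plane 2x2 displacement Jacobian into its symmetric
    part (strain) and antisymmetric part (rotation)."""
    J = np.asarray(J, dtype=float)
    return 0.5 * (J + J.T), 0.5 * (J - J.T)

# A rigid in-plane rotation of 0.1 mrad carries zero strain:
J = np.array([[0.0, -1e-4],
              [1e-4, 0.0]])
eps, omega = strain_and_rotation(J)
print(eps)    # -> zeros: no strain
print(omega)  # -> the rotation tensor
```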

2.2.

Speckle Correlation in an Imaging System

We will now turn to the correlation properties of speckles in an imaging system. Consider a general optical system positioned in front of an object surface that is illuminated by an expanded laser beam, as sketched in Fig. 2. Here, I will assume that the entrance pupil of the optical system is positioned a distance L from the object surface and that the detector is placed a distance z_2 from the exit pupil. Hence, the conjugate plane appears a distance z_1 in front of the entrance pupil, giving the numerical aperture NA_0 for the rays entering the optical system. I will call this plane the focus plane of the optical system. In general, therefore, a defocus ΔL(x_0) = L(x_0) − z_1 is present in the system, which may vary from point to point over the object. Further, a magnification m = z_2/z_1 between the focus plane and the detection plane is present. The detection point x_p is now situated in the focus plane, and we may write down the speckle movement in the plane of the detector directly as follows:

Eq. (11)

$$\mathbf{A}_X(\mathbf{X},\mathbf{x}_p,\mathbf{x}_s) = -m\,\mathbf{A}(\mathbf{x}_0,\mathbf{x}_p,\mathbf{x}_s),$$
where X is a position in detector space and the focus plane speckle movement A is given by Eq. (8) if the detection point distance L_p(x_0,x_p) is replaced by the defocus distance ΔL(x_0,x_p). We see that if the surface is focused properly, the speckle movement coincides with the surface movement, and if defocus is introduced, any gradients in the setup result in speckle movement. We next turn to the correlation properties of the speckles in the image plane. By virtue of Eq. (10), we may immediately write

Eq. (12)

$$|\Gamma_{12}(\Delta\mathbf{X})| = I_0\,\gamma_{12}\left|\int_\Omega P(\mathbf{x}_d)\,P\big(\mathbf{x}_d - \mathbf{A}_P(\mathbf{x}_0,\mathbf{x}_d,\mathbf{x}_s)\big)\exp\!\left[i\,\frac{2\pi}{\lambda}\{\Delta\mathbf{X} - \mathbf{A}_X(\mathbf{X},\mathbf{x}_p,\mathbf{x}_s)\}\cdot\mathbf{x}_d\right]\mathrm{d}^2 x_d\right|,$$
where Ω is the solid angle in image space that limits the spatial frequencies available to build up a speckle point and x_d is a coordinate on the entrance pupil sphere. The pupil function P(x_d) is unity within Ω and zero outside, and the speckle movement A_P(x_0,x_d,x_s) over the entrance pupil is given by Eq. (8) if the detection point distance L_p(x_0,x_p) is replaced by the distance L(x_0,x_d). Equation (12) is maximized if ΔX = A_X(X,x_p,x_s), giving the correlation parameter γ_P = |Γ12(A)|/I_0 that describes the decorrelation of the imaged speckles as a result of correlation cells moving out of the entrance pupil and being replaced by new, incoherent ones.
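Under the simplifying assumptions of collimated illumination (L_s → ∞) and telecentric imaging, Eqs. (8) and (11) combine into a compact expression for the image-plane speckle movement: the bulk term reduces to the in-plane surface movement, and the gradient term is geared by the defocus instead of L_p. The sketch below encodes that special case; the sign convention (image inversion) and all names are my own.

```python
import numpy as np

def image_plane_movement(m_mag, dL, a_X, m_Sigma, m_loc, J_Sigma,
                         dk_over_k, cos_theta=1.0):
    """Image-plane speckle movement, cf. Eqs. (8) and (11), assuming
    collimated illumination and telecentric imaging. dL is the local
    defocus; m_mag is the magnification z2/z1. In focus (dL = 0) the
    speckles simply follow the (inverted, magnified) surface movement."""
    grad = (dL / cos_theta) * (np.asarray(m_Sigma) * dk_over_k
                               + np.asarray(m_loc) @ np.asarray(J_Sigma))
    return -m_mag * (np.asarray(a_X) + grad)
```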

Fig. 2

Definition of quantities included in the derivation of dynamic image space speckle properties.


Now if two images I_1(X_1) and I_2(X_2) are recorded, we may form the cross-covariance ⟨ΔI_1(X_1)ΔI_2(X_2)⟩ between these two images, where ΔI_i is the zero-mean intensity variation of image i. This results in a correlation function |Γ12(ΔX)|², where the height of the correlation function gives the statistical similarity between the two patterns, its width gives the speckle size, and the position of the peak gives the movement between the two patterns. Hence, by locating the position of the cross-covariance peak in relation to the zero position, the speckle movement A_X(X,x_p,x_s) is found, and if the normalized peak height γ = |Γ12(ΔX)|²_max/I_0² = γ12² γ_P² is calculated, a measure of the microstructural dynamics is obtained. This effect is utilized in the techniques known as image correlation,3 digital speckle photography,4 and particle image velocimetry.10 The image is then divided into a number of subimages, and the local cross-covariance is determined. The result is a vector field of speckle movements and a scalar field of correlation values that may be related to the deformation of the object. For this technique to work properly, it is important that aliasing is not introduced in the analysis, which means that the images need to be properly sampled. The sampling condition may be written as

Eq. (13)

$$\mathrm{NA}_1 < \frac{\lambda}{4a},$$
where a is the sampling pitch of the detector and NA_1 is the numerical aperture on the image side limited by Ω. The frequency resolution becomes 1/(Na), where N is the number of pixels in a given direction. In this case, no reference wave is added, and the central lobe may fill all the available spatial frequency space. The next objective is to change this and consider interferometric detection.
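As an illustration of the subimage procedure described above, the following sketch locates the cross-covariance peak between two subimages to whole-pixel accuracy (practical implementations interpolate to subpixel accuracy) and checks the sampling condition of Eq. (13) for an assumed pixel pitch; the cyclic FFT correlation and all parameter values are illustrative assumptions.

```python
import numpy as np

def subimage_shift(I1, I2):
    """Whole-pixel speckle movement from the cross-covariance peak."""
    d1, d2 = I1 - I1.mean(), I2 - I2.mean()          # zero-mean variations
    # Cross-covariance via FFTs (cyclic correlation).
    c = np.fft.ifft2(np.fft.fft2(d1).conj() * np.fft.fft2(d2)).real
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    ny, nx = c.shape                                  # map to signed shifts
    return (ix if ix <= nx // 2 else ix - nx,
            iy if iy <= ny // 2 else iy - ny)

# Sampling check of Eq. (13) for an assumed 5.5-um pixel pitch:
wavelength, pitch = 633e-9, 5.5e-6
print("image-side NA must stay below", wavelength / (4 * pitch))
```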

2.3.

Correlation Properties of Interferometric Speckles

Consider two images I1(X1) and I2(X2) recorded with a change in the system between the recordings and with an added smooth reference wave. Following any of the standard routes of interferometric detection, the two fields

$$U_1(X_1) = \sqrt{I_R\,I_1(X_1)}\,\exp\{i[\phi_1(X_1,k_1) - \phi_R(X_1,k_1)]\}$$
and
$$U_2(X_2) = \sqrt{I_R\,I_2(X_2)}\,\exp\{i[\phi_2(X_2,k_2) - \phi_R(X_2,k_2)]\}$$
are restored. Three things are important to note about these expressions. First, only the components of the original object field that are coherent with the reference wave are restored in the fields U_1 and U_2, respectively. This means that stray light and components that have changed polarization are filtered out of the field, which in general is a good thing. Second, the object field √(I(X)) exp[iφ_1(X,k)] is modified by the reference field √(I_R) exp[iφ_R(X,k)] in both magnitude and phase. The change in magnitude means that the object field is amplified by an amount √I_R upon detection, so a weak object signal may be amplified by a strong reference wave. The ability to control the strength of the detected signal is a tremendous advantage in many applications, as the dynamic range of the detector is limited by its digitization depth (usually 8 or 12 bits). The change in phase is a little more intriguing. As the reference wave usually originates from a point source in the exit pupil plane, the reference field falling onto the detector generally will be a spherical wave with a radius of curvature equal to z_2. This phase curvature needs to be subtracted from the object field before it can be properly propagated to other detection planes (if this is required). Further, the accumulated plane-wave phase equals φ_R(k) = kL_R, against which the object phase is referenced. The total deterministic phase of U therefore will be kδL, where δL = L_O − L_R is the difference between the length of the object path, going from the source to the detector through the object and imaging system, and the length of the reference arm. Third, as √(I(X)) exp[iφ_O(X,k)] is random, both U_1 and U_2 are random processes, and neither contains much useful information on its own. What needs to be calculated is the coherence Γ12(X_1,X_2) = ⟨U_1*(X_1)U_2(X_2)⟩, known as the modified mutual coherence function. With these modifications, the results from Sec. 2 may be adopted right away. In digital holographic interferometry, the phase change Δφ_a is usually the primary source of information, and it is usually detected in one of two modalities. The most common is to acquire the phase change at a fixed detector position, meaning that ΔX = 0; the coherence is then obtained from Eq. (12) by setting ΔX = 0. The other technique is to track the correlated speckles on the detector and calculate the interference between them.11 The coherence is then obtained from Eq. (12) by setting ΔX = A_X. As the speckles usually are small and the speckle movements may become significant, the difference in fringe contrast between these two ways of calculating the phase difference may become very large. For example, if the in-plane movement of the speckles becomes larger than the in-plane speckle size, the coherence becomes zero in the first case, while it may remain close to unity in the second. However, this comes at the cost of computational complexity.
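The two phase-evaluation modalities can be contrasted in a few lines. The sketch below assumes U1 and U2 are reconstructed complex fields on the same pixel grid and uses a whole-pixel np.roll to track the speckles (a real implementation would resample to subpixel accuracy); the function names and the shift convention are mine.

```python
import numpy as np

def phase_fixed(U1, U2):
    """Phase difference at a fixed detector position (dX = 0)."""
    return np.angle(U1.conj() * U2)

def phase_tracked(U1, U2, shift):
    """Phase difference after shifting field 2 back by the measured
    whole-pixel speckle movement (dX = A_X), e.g., from subimage_shift
    above. shift = (dx, dy) in pixels."""
    U2_back = np.roll(U2, (-shift[1], -shift[0]), axis=(0, 1))
    return np.angle(U1.conj() * U2_back)
```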

As a final remark before moving on to some examples, a comment on the complementary information provided by the phase difference Δφ_a and the speckle movement A_X is in place. As the speckle movement stems from a calculation of phase gradients over a plane of integration, as is clear from the expansion in Eq. (5), the speckle movement is sensitive to any gradients in Δφ_a. By detecting the speckle motion in two planes [e.g., L(Z_1) and L(Z_2)], the interferometric phase term may be restored. In principle, therefore, interferometric information is provided by speckle movements alone, and the sometimes-cumbersome operation of adding a reference beam may be excluded. This is essentially the same idea as that pursued by many researchers within X-ray phase contrast imaging, where the recording of two intensity images in two different planes is used to calculate the phase distribution with the aid of the transport-of-intensity equation. As argued by Paganin and Nugent, this technique has many advantages, the most striking of which is that problems associated with phase wrapping are avoided altogether.12 In the next section, a few simple examples are discussed in relation to the results given here.
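A sketch of this reference-free route: the difference between the speckle-movement fields measured in two planes separated by ΔL is proportional to the interferometric phase gradient, which can then be integrated into a phase map. The scaling follows the gradient term of Eq. (8) (movement ≈ L_p∇(Δφ_a)/k, so ∇(Δφ_a) ≈ k(A_2 − A_1)/ΔL); the naive path integration and all names are my own simplifications.

```python
import numpy as np

def gradient_from_two_planes(A1, A2, dL, k):
    """Phase-gradient field from speckle-movement fields measured in two
    planes separated by dL (A1, A2 in length units, on the same grid)."""
    return k * (A2 - A1) / dL

def integrate_gradient(gx, gy, step):
    """Naive path integration of a 2-D gradient field into a phase map:
    integrate the first row along x, then each column along y. Good
    enough for smooth, noise-free fields; a least-squares integration
    is preferable in practice."""
    row0 = np.cumsum(gx[:1, :], axis=1) * step
    return np.cumsum(gy, axis=0) * step + row0
```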

3.

Discussion

The two main results from the previous section are given by Eqs. (6) and (8), respectively. It is seen that interferometric phase differences are caused by changes in wave number in relation to some relative propagation length, by surface movements in relation to the sensitivity vector of the setup, and by movements of the detection point in relation to the observation direction. At this first-order level of approximation, therefore, no cross-talk between wave number shift and deformation is considered. The speckle movements can be divided into two separate parts. The first part relates to the bulk movement of the surface, possibly geared by the curvatures of the illumination and detection wavefronts, respectively. In the case of collimated illumination, therefore, this bulk part will add a component to the total speckle movement that follows the surface movement as if the speckles were glued onto the surface. It is also obvious that these components have no direct correspondence with the phase difference between the two fields.

The second part of the speckle movement, on the other hand, is proportional to the gradient of the phase field measured with a speckle interferometer. As the gradient of a scalar field is a vector field, the phase gradient sets both the direction and the magnitude of the speckle movement. The scaling parameter that turns this sensitivity into actual movement is the distance between the generating surface and the plane of detection. In the case of an imaging system, two distances need to be considered. The distance between the object surface and the entrance pupil scales the movement of the correlation cells that enter the optical system. Such movements result in permanent decorrelation of the speckle structure and less accuracy in the measurements. The distance between the object surface and the focus plane (the plane conjugate to the detection plane) scales the movement of the speckles on the detector. The movements, therefore, have different signs when the focus plane is placed in front of or behind the object surface, but the structure of the movement will be the same. This movement may be compensated for when forming phase images to maximize the fringe contrast in the interferograms. A few typical consequences of the results from Sec. 2 are given below. The phase fields shown in the following figures are formed by calculating the phase of U_1*(X)U_2(X), while the speckle movement fields may be generated from image correlation between the intensity fields I_1(X) = |U_1(X)|² and I_2(X) = |U_2(X)|², respectively.

Three typical situations often encountered in practical speckle metrology experiments will be discussed. Consider first a setup consisting of a plate oriented parallel to the detector of an imaging system. For simplicity, we will consider unit magnification and illumination along the optical axis of the setup. If we further assume collimated illumination and telecentric imaging, L_s(x_0,x_s) → ∞ and s_pX ≈ 0, respectively. With these choices, the sensitivity vector m(x_0,x_p,x_s) = 2ẑ, where ẑ is parallel to the surface normal, and cos θ_X̂ = 1. The response in a speckle interferometer due to a 3.3λ deformation of the central point is shown in Fig. 3(a). It is seen that the out-of-plane movement has a Gaussian shape and that the phase drops most rapidly where the distance between the phase jumps is the smallest. The corresponding speckle movement generation strength (the speckle movement per unit defocus distance) is shown in Fig. 3(b).
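A sketch of this first example, assuming a Gaussian deflection profile (the 3.3λ peak value is from the text; the field size and Gaussian width are my assumptions): with m = 2ẑ the phase is Δφ_a = 2kw, and the speckle movement per unit defocus follows its in-plane gradient.

```python
import numpy as np

lam = 500e-9                          # assumed wavelength
k = 2 * np.pi / lam
x = np.linspace(-0.025, 0.025, 256)   # assumed 5 x 5 cm field
X, Y = np.meshgrid(x, x)

# Out-of-plane Gaussian deflection with a 3.3-lambda peak.
w = 3.3 * lam * np.exp(-(X**2 + Y**2) / (2 * 0.008**2))

phi = 2 * k * w                        # phase response for m = 2*z_hat
fringes = np.angle(np.exp(1j * phi))   # wrapped phase, cf. Fig. 3(a)
gy, gx = np.gradient(phi, x, x)        # in-plane phase gradient; the speckle
                                       # movement per unit defocus is
                                       # (gx, gy)/k, cf. Fig. 3(b)
```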

Fig. 3

Phase response (a) and corresponding speckle motion field (b) due to an out-of-plane point load in the center of a plate.


It is seen that the speckle movement is perpendicular to the phase planes in Fig. 3(a) and that its magnitude is inversely proportional to the distance between the phase planes. The change in sign occurs because of the inversion caused by the imaging. Two things are of general interest in relation to these results. First, the speckle decorrelation caused by movements over the entrance pupil will be most severe in regions with large phase gradients, and hence in regions with dense fringes. This is bad news for phase unwrapping software, which usually needs a certain spatial region without wrapping to perform well. Second, it is seen that the speckle movement in a defocused plane may be used to calculate the phase gradients, provided the distance to the object surface is known. Further, if the phase gradients are known, it is a trivial task to integrate them to get the actual deformation. As the detection of an intensity image often is significantly less challenging and less error prone than an interferometric analysis, speckle movements in a defocused plane are an attractive and more robust alternative to proper phase measurements in a disturbed environment. One example is an investigation of percussion hole drilling in different metals that was performed a few years ago.13 In the second example, shape measurement with dual-wavelength digital holography is considered. This assumes an optical setup similar to the previous example, but in this case, the object is a diffuse spherical surface. We further assume that the length of the reference arm is tuned such that the zero phase plane coincides with the top surface of the object. Hence, all phase differences due to a change in wave number are relative to this plane. We further assume that speckle fields may be acquired in two different planes separated in depth by a distance ΔL. If these two planes are acquired at the same magnification m, the difference in speckle movement between these two planes may be expressed as

$$\mathbf{A}(\mathbf{x}_0,\mathbf{x}_p,\mathbf{x}_s) = 2m\,\Delta L(\mathbf{x}_0,\mathbf{x}_p)\tan\theta\,\frac{\Delta k}{k},$$
where θ is the angle between the local surface normal and the optical axis. Figure 4 shows the response in phase and relative speckle movement of a measurement on a spherical surface with a radius of 2 dm over a 5 × 5-cm area, due to a wavelength shift of 1 nm from the 500-nm wavelength (see the sketch below). It is seen in Fig. 4(a) that the phase drops more and more rapidly the farther from the center of the image one moves. The same trend is seen in Fig. 4(b), where the difference in speckle movement between two planes separated by a unit distance is shown. Comparing these two images, it can be concluded that the speckles move according to the gradients in the phase field, in both magnitude and direction. The reason for using two speckle fields in this case is that for a generally shaped object, the defocus distance will be unknown, and the speckle movement cannot be transformed directly to a local defocus value unless tan θ is known. From the results in Fig. 4, it may be concluded that for a generally shaped object, parts of the measurement field will always be out of focus, generating image-plane speckle movements. To avoid decorrelation induced by image-plane speckle movements, it is advantageous to place the focus in a plane containing large phase gradients, in this case as far back as possible, corresponding to the outer part of the measurement field. However, that will not prevent the coherence cells from moving in the entrance pupil plane, and fringe contrast might still become poor on steep slopes. This problem may be solved only with multiple recordings involving a set of small wavelength shifts and/or a set of different illumination directions. The last example involves the same setup as before, but in this case, the illumination makes a 45-deg angle to the optical axis in the x-z plane. As in the first example, the object is a plate placed parallel to the focus plane.
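The sketch referenced above reproduces the second example numerically, using the stated geometry (R = 2 dm sphere, 5 × 5 cm field, 1-nm shift from 500 nm, zero phase plane at the top of the sphere); the grid size and variable names are my assumptions.

```python
import numpy as np

R, lam, dlam = 0.2, 500e-9, 1e-9          # sphere radius (m), wavelengths
k = 2 * np.pi / lam
dk = -2 * np.pi * dlam / lam**2            # wave-number shift for a +1-nm shift

x = np.linspace(-0.025, 0.025, 256)        # 5 x 5 cm field
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2

depth = R - np.sqrt(R**2 - r2)             # depth below the zero phase plane
phi = 2 * dk * depth                       # round-trip phase, cf. Fig. 4(a)

tan_theta = np.sqrt(r2 / (R**2 - r2))      # local surface slope
dA_per_dL = 2 * abs(dk / k) * tan_theta    # movement difference per unit
                                           # plane separation (m = 1), Fig. 4(b)
```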

Fig. 4

Phase response (a) and corresponding speckle motion field (b) due to a wavelength shift.


Because of the inclination of the illumination direction, the sensitivity vector becomes m = [1/√2, 0, (1 + √2)/√2], expressed in the coordinate system defined by the orientation of the detector. The projection m_Σ onto the object surface is given by the first two components. If the plate is rotated 0.1 mrad around the optical axis, with the center of rotation in the middle of the field of view, the phase field shown in Fig. 5(a) is obtained. The recording conditions are the same as in the second example. As the sensitivity vector has a component in the plane of the plate, the corresponding in-plane displacement component will generate a phase difference in the interferogram. We see that the phase varies only in the y-direction, as expected. For the speckle movements shown in Fig. 5(b), two effects are blended. The bulk movement of the surface will generate a rotational pattern centered in the middle of the image, but because of the phase gradient, the center of rotation will move downward in the direction of the phase gradient. In the case shown in Fig. 5, a defocus of 3 cm has been assumed. This last example highlights the principal difference between phase and speckle movements. The speckles will move according to the bulk movement of the surface, as well as according to the phase gradients geared by the defocus distance, while the phase field only carries information about phase variations. Hence, if the speckle movement is detected in two different focus planes and subtracted, the phase gradient field may be reconstructed, but generating the speckle movement field from the phase requires a multitude of different sensitivity vectors.
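A sketch of this last example under the stated geometry (45-deg illumination in the x-z plane, observation along the optical axis): the sensitivity vector follows from m = s_p − s_s, and the phase of a 0.1-mrad in-plane rotation couples only through the in-plane part of m. The source side and the field size are my assumptions.

```python
import numpy as np

# Sensitivity vector for 45-deg illumination in the x-z plane.
s_s = np.array([-1.0, 0.0, -1.0]) / np.sqrt(2)   # source -> surface
s_p = np.array([0.0, 0.0, 1.0])                  # surface -> detector
m = s_p - s_s                                    # [1/sqrt(2), 0, 1 + 1/sqrt(2)]

# In-plane rigid rotation of 0.1 mrad about the optical axis.
theta = 1e-4
x = np.linspace(-0.025, 0.025, 128)              # assumed field of view
X, Y = np.meshgrid(x, x)
aX, aY = -theta * Y, theta * X                   # rotational displacement field

k = 2 * np.pi / 500e-9
phi = -k * (m[0] * aX + m[1] * aY)               # varies along y only, Fig. 5(a)
```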

Fig. 5

Phase response (a) and corresponding speckle motion field (b) due to an in-plane rotation around the center of a plate.


4.

Conclusions

In this article, the theory of dynamic speckles in reflection geometry has been reviewed. The object under consideration is allowed to have a general shape, but it should be diffuse and essentially a surface scatterer. It is then shown that the phase in a speckle pattern in general changes because of changes in the setup in relation to the sensitivity vector of the setup, while the speckle movements have a more complex behavior. The speckle movement can in principle be divided into two distinct parts. One part depends on the movement of the object; it is independent of defocus in the system and behaves essentially as a bulk motion. The other part depends on local phase gradients along the surface patch of the object, scaled by any defocus present. The phase gradients are generated by object deformations and changes in the wave number of the light and scale according to the local surface normal in relation to the sensitivity vector. This part is essentially a redirecting part that sends off a given speckle pattern in a different direction. Thus, the motion becomes dependent on defocus.

The theory has been demonstrated by three typical applications. The first was an out-of-plane bending of a plate produced by a central point load; the second was a dual-wavelength holographic recording of a generally shaped object; and the third was an in-plane rotation of a plate with a sensitivity vector having an in-plane component. In all these cases, it is shown that the interferometric phase gradients and the speckle movements carry the same information, but also that the speckle movements are influenced by the bulk movement of the object.

References

1. 

J. W. Goodman, "Statistical properties of laser speckle patterns," in Laser Speckle and Related Phenomena, pp. 9–75, Springer-Verlag, Berlin, Heidelberg (1975).

2. 

P. K. Rastogi, "Measurement of static surface displacements, derivative of displacements, and three-dimensional surface shapes—examples of applications to non-destructive testing," in Digital Speckle Pattern Interferometry and Related Techniques, pp. 171–224, Wiley, Chichester (2001).

3. 

M. A. Sutton, J.-J. Orteu, and H. W. Schreier, Image Correlation for Shape, Motion and Deformation Measurements, Springer, New York (2009).

4. 

M. Sjodahl, "Digital speckle photography," in Digital Speckle Pattern Interferometry and Related Techniques, pp. 289–336, Wiley, Chichester (2001).

5. 

W. Brown, Dynamic Light Scattering: The Method and Some Applications, Clarendon Press, Oxford (1993).

6. 

I. Yamaguchi, "Speckle displacement and decorrelation in the diffraction and image fields for small object deformation," Opt. Acta 28(10), 1359–1376 (1981). http://dx.doi.org/10.1080/713820454

7. 

I. Yamaguchi, "Fringe formation in deformation and vibration measurements using laser light," in Progress in Optics, Vol. XXII, pp. 272–340, Elsevier, Amsterdam (1985).

8. 

I. Yamaguchi, A. Yamamoto, and S. Kuwamura, "Speckle decorrelation in surface profilometry by wavelength scanning interferometry," Appl. Opt. 37(28), 6721–6728 (1998). http://dx.doi.org/10.1364/AO.37.006721

9. 

J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications, Roberts and Company, Englewood, CO (2007).

10. 

K. D. Hinsch, "Particle image velocimetry," in Speckle Metrology, pp. 235–324, Marcel Dekker, New York (1993).

11. 

A. Andersson, A. Runnemalm, and M. Sjodahl, "Digital speckle pattern interferometry: fringe retrieval for large in-plane deformations with digital speckle photography," Appl. Opt. 38(25), 5408–5412 (1999). http://dx.doi.org/10.1364/AO.38.005408

12. 

D. Paganin and K. A. Nugent, "Non-interferometric phase imaging with partially coherent light," Phys. Rev. Lett. 80(12), 2586–2589 (1998). http://dx.doi.org/10.1103/PhysRevLett.80.2586

13. 

N. Miroshnikova et al., "Percussion hole drilling of metals with a fourth-harmonic Nd:YAG laser studied by defocused laser speckle correlation," Appl. Opt. 44(17), 3403–3408 (2005). http://dx.doi.org/10.1364/AO.44.003403

Biography


Mikael Sjodahl received his MSc in mechanical engineering and his PhD in experimental mechanics from the Lulea University of Technology, Sweden, in 1989 and 1995, respectively. He currently holds the chair of experimental mechanics at the Lulea University of Technology and a professorship at University West, Sweden. He has authored or coauthored over 100 papers in international journals and contributed to two books. His interests include fundamental speckle behavior, coherent optical metrology, nondestructive testing, and multidimensional signal processing.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Mikael Sjodahl "Dynamic properties of multispectral speckles in digital holography and image correlation," Optical Engineering 52(10), 101908 (23 May 2013). https://doi.org/10.1117/1.OE.52.10.101908
Keywords: Speckle; Digital holography; Sensors; Digital image correlation; Digital imaging; Interferometry; Speckle pattern