In a way, speckles are the ultimate manifestation of coherence in a wave field, as they show up (become visible) only in coherent fields. In many situations, the appearance of speckles is a problem, as the underlying intensity contrast is deteriorated, but speckles are actually needed to carry coherent information in a random wave field. In this article, only speckles produced in reflection are considered, but they appear wherever coherent scattering is present. The geometry of this problem is shown in Fig. 1. A monochromatic beam propagates from a source to the surface, where it is reflected toward the detection point. If, after reflection, the amplitude and phase of the beam are known, the beam is perfectly deterministic and the field at the detection point can be solved for using the diffraction integral. If we now add a phase screen that is completely random, a complete set of spatial frequencies is generated, including both evanescent and homogeneous components. Different polarization components are generated as well, but we disregard that effect from now on. As a complete set of spatial frequencies is generated, we expect the propagating field reflected off the phase screen to be a diffuse field, no longer a beam. What we have constructed is a diffusor. When viewed from any direction, the wave appears to originate from the random phase screen, not from the source generating the wave (such as a light source). Therefore, we may call the diffusor a secondary source and the actual source a primary source. It is this property of producing a complete set of spatial frequencies that is utilized in optical imaging metrology. However, as the diffraction integral now involves random components that are added together, the result is the random field called a speckle pattern, the static properties of which have been analyzed extensively by Goodman1 and others.
In metrology, the change in a given speckle pattern because of a change in any of the generating variables is generally of primary interest. For example, if we change the wavelength of the coherent beam, deform the object, or somehow change the microstructural distribution of scatterers, the phase, position, and microstructure of the speckle pattern change. It is this phase change, speckle movement, or decorrelation that is utilized in metrology. Typical techniques are referred to as speckle interferometry,2 speckle or image correlation,3,4 and dynamic light scattering.5 Theoretically, the expected response in a speckle pattern due to some change in the system may be analyzed by calculating the modified mutual coherence of the field. The most notable contribution to this is from Yamaguchi,6–8 who has analyzed the effects of a surface deformation and wavelength shift over a plane object. From the results by Yamaguchi and others, it is concluded that speckles in a free-space geometry behave much like a grating and that the movements are generated by relative phase changes over the surface patch of integration. In Sec. 2.1, the modified mutual coherence function is generalized to include variations in wave number, object deformation, and object orientation in a free-space geometry. The term multispectral speckles is used to emphasize the fact that two (or more) distinct wavelengths are used, as opposed to a broadband source. Hence, the correlation properties between two fully developed speckle patterns are analyzed and used. In Sec. 2.2, the results from the analysis of the free-space geometry are used to analyze the behavior of speckles in an imaging system, and in Sec. 2.3, the effect of adding a smooth reference wave is included. Special attention is given to the sensitivity of phase evaluation. Section 3 discusses the properties of the results that are often encountered in speckle metrology. The results are put into perspective in the final section of the article.
Correlation Properties of Dynamic Speckles
Our starting point will be the geometry sketched in Fig. 1. A monochromatic point source illuminates a plane diffuse surface. In this article, I will limit the discussion to surface scattering, meaning that each photon has undergone only one scattering event. A general scattering point on this surface is illuminated by a plane wave component that propagates along the directional vector pointing from the source to the scattering point, a distance Ls away. The resulting field detected at a point in front of the surface is the result of integrating the random contributions from a domain on the surface defined by the solid angle Ω. I will assume this domain to be much smaller than the illuminated surface area, as in an optical imaging system (e.g., where Ω is limited by the numerical aperture of the imaging system). The intensity on the surface may hence be considered constant. A second directional vector points from the scattering point toward the detection point, a distance Ld away. Hence, the total length covered by a wave is L = Ls + Ld, and the accumulated phase becomes φ = kL, where the wave number k = 2π/λ. By virtue of the diffraction integral, the field in the detection point is given by Eq. (1). A change in any of the generating variables alters the accumulated phases in Eq. (1), and the result is a change in the speckle pattern in the neighborhood of the detection point that is only partly correlated with the original pattern. If we assume a spatially incoherent source, meaning that the random contributions from different scattering points are independent, we may express the correlation between two speckle fields in the vicinity of the detection point as Eq. (2).9 In the following, I will not be concerned with the microscopic coherence function, as other effects often dominate. Equation (2) is the fundamental equation for this section, and this article shall analyze it in some detail.
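The random-phasor-sum picture behind Eq. (2) is easy to verify numerically. The sketch below is a toy model, not the diffraction integral itself: it adds many unit-amplitude contributions with independent uniform phases at each detection point and checks that the resulting intensity shows the unit contrast characteristic of fully developed speckle (the point and scatterer counts are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum many unit-amplitude phasors with uniformly random phases at each
# detection point: the random-walk model of a fully developed speckle field.
n_points = 20_000   # number of independent detection points
n_scatter = 200     # scattering contributions per point
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_points, n_scatter))
field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_scatter)
intensity = np.abs(field) ** 2

# Fully developed speckle has negative-exponential intensity statistics,
# i.e., std(I) equals mean(I), so the contrast is unity.
contrast = intensity.std() / intensity.mean()
print(f"speckle contrast = {contrast:.3f}")
```

The contrast approaches 1 as the number of sampled points grows, in agreement with the classical first-order statistics analyzed by Goodman.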
Phase and Phase Gradients of Dynamic Speckles
The most important variable in Eq. (2) is the differential of the phase, in which the first three variables are allowed to vary. In the following, I will utilize the Taylor expansion to the first order to approximate a variation of the dependent variables. For the vector variables, we need to calculate directional derivatives, in which a scalar differential operator of the form (a · ∇) gives the change in a function because of a small movement from the point of expansion; the function operated on may be either a scalar or a vector. In the following discussion, four types of expressions will appear; they are collected in Eq. (3) and allow the phase shift to be expressed in compact form.
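Directional derivatives of this kind can be checked numerically. The snippet below is a small illustration with a hypothetical scalar field f(x, y) = x² + y², comparing the discrete operator (a · ∇)f against its analytic value 2aₓx + 2aᵧy.

```python
import numpy as np

# Hypothetical scalar field f(x, y) = x**2 + y**2 on a regular grid.
x, y = np.meshgrid(np.linspace(-1, 1, 401), np.linspace(-1, 1, 401))
f = x**2 + y**2
a = np.array([0.3, -0.5])  # arbitrary (not normalized) direction vector

# np.gradient returns derivatives along axis 0 (y) and axis 1 (x).
dfdy, dfdx = np.gradient(f, y[:, 0], x[0, :])
numeric = a[0] * dfdx + a[1] * dfdy        # (a . grad) f, discrete
analytic = 2 * a[0] * x + 2 * a[1] * y     # (a . grad) f, exact

err = np.max(np.abs(numeric - analytic))
print(f"max error = {err:.2e}")
```

Central differences are exact for a quadratic field, so the residual error comes only from the one-sided differences at the grid edges.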
The next thing to consider is the integral over the surface patch in Eq. (2). To handle that, I introduce the central position within the patch and a local variable confined to the surface patch, measured relative to this central position. By virtue of the Taylor expansion, I then obtain two terms. The first term will later be recognized as the static or absolute phase difference and is the phase difference that is measured in a speckle interferometer. The second term is the differential phase difference that, with the help of the relations in Eq. (3), is expressed as in Eq. (7), provided that a global frame of reference is oriented such that the plane of detection (e.g., an image sensor) becomes roughly perpendicular to the viewing direction, in which case the local variable is confined to the area of the surface patch. The advantage with Eq. (7) is that a common expression appears (or can be made to appear through multiplication) in all terms and may therefore be moved outside the parentheses. The expression within the parentheses may then be written as in Eq. (8), which calls for some clarifications. First, the speckle movement vector is the projection of the speckle movement onto the plane of the detector (perpendicular to the optical axis). Also, the object displacement vector has been split so that one part refers to the projection of the displacement vector onto the plane of the detector, while the remaining component refers to the component parallel with the optical axis (the axial displacement). In the last term, the projection of the sensitivity vector onto the local surface patch appears in the numerator; this is a vector perpendicular to the surface normal vector. Its magnitude gives the strength with which the speckle movement is geared by a change in wave number, and its direction is the direction in which the speckles move. Also, note that the projection operator is an improper tensor of rank 2. The components of this tensor are most suitably expressed in terms of the local coordinate system, in which case the last column is zero.
Therefore, the local sensitivity vector is most appropriately expressed in the same local coordinate system, where the multiplication between the two results in a two-component vector corresponding to the two measured directions of speckle movement. A scaling parameter relates the orientation of the detector to the surface patch through the angle between the two planes. It has an effect similar to the obliquity factor that appears in classical diffraction theory.
With the aid of the above, we may rewrite Eq. (2) as Eq. (10).
A few final remarks about the speckle movements are called for now. For a source positioned very far away, as for a plane wave, we see in Eq. (8) that the speckles will move in accordance with the surface movement. The speckles therefore will appear to be glued onto the object surface and follow its movement. Additionally, an extra term appears that refers to phase changes over the integration patch as seen from the plane of the detector. That term represents speckle movement caused by gradients in the system. It is clear that the sensitivity to gradients is geared by the distance between the object surface and the detection point, and therefore the response grows linearly with distance from the surface. We further see that the sensitivity for these gradients is determined by the sensitivity vector of the setup in relation to the direction of the local surface normal. The gradient term is of significant interest in metrology. It involves a second-rank tensor (although not a proper tensor), whose symmetric part is the strain tensor and whose antisymmetric part is the rotation tensor. As the differentiation is performed along the surface patch, all dependence on a variation perpendicular to the surface vanishes (e.g., the out-of-plane components of the strain tensor). The movement of the speckles in a defocused plane therefore depends on the deformation of the object surface rather than on the movement itself. Which components of these tensors gear the speckle movement is determined by the sensitivity vector.
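As a schematic, one-dimensional illustration of this behavior (all numbers hypothetical), the sketch below models the speckle shift as the bulk surface shift plus a gradient term: in the grating analogy, a phase gradient g deflects the light by roughly g/k, so the lateral speckle shift grows linearly with the detection distance.

```python
import numpy as np

wavelength = 633e-9                 # m, hypothetical laser wavelength
k = 2 * np.pi / wavelength          # wave number
surface_shift = 2e-6                # m, in-plane bulk motion (the "glued" term)
phase_gradient = 40.0               # rad/m, hypothetical gradient over the patch
distances = np.array([0.0, 0.1, 0.2, 0.4])  # m, distances to the detection plane

# Bulk term independent of distance; gradient term geared linearly by it.
speckle_shift = surface_shift + distances * phase_gradient / k
print(speckle_shift)
```

At zero distance only the bulk term survives, which is consistent with the observation below that a properly focused imaging system sees the surface movement alone.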
Speckle Correlation in an Imaging System
We will now turn to the correlation properties of speckles in an imaging system. Consider a general optical system positioned in front of an object surface that is illuminated by an expanded laser beam, as sketched in Fig. 2. Here, I will assume that the entrance pupil of the optical system is positioned a given distance from the object surface and that the detector is placed a given distance from the exit pupil. The conjugate plane then appears a corresponding distance in front of the entrance pupil, which sets the numerical aperture for the rays entering the optical system. I will call this plane the focus plane of the optical system. In general, therefore, a defocus is present in the system, which may vary from point to point over the object. Further, a magnification between the focus plane and the detection plane is present. The detection point now lies in the focus plane, and we may write down the speckle movement in the plane of the detector directly from Eq. (8) if the detection point distance is replaced by the defocus distance. We see that if the surface is focused properly, the speckle movement coincides with the surface movement, and if defocus is introduced, any gradients in the setup result in speckle movement. We next turn to the correlation properties of the speckles in the image plane. By virtue of Eq. (10), we may immediately write Eq. (12) if the detection point distance is replaced by the distance to the entrance pupil. Equation (12) is maximized if the speckle movement is compensated for, giving the correlation parameter that describes the decorrelation of the imaged speckles as a result of correlation cells moving out of the entrance pupil and being replaced by new incoherent ones.
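A common way to model this loss of correlation is the normalized overlap area of two equal circular pupils whose centers are separated by the correlation-cell movement. The sketch below implements that geometric overlap function (the pupil radius and the shift values are hypothetical).

```python
import numpy as np

def pupil_overlap(shift, radius):
    """Normalized intersection area of two circles of equal radius whose
    centers are separated by `shift`; a simple model for the correlation
    lost when coherence cells move across the entrance pupil."""
    x = np.clip(shift / (2.0 * radius), 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x * x))

radius = 2e-3                               # m, hypothetical pupil radius
shifts = np.array([0.0, 1e-3, 2e-3, 4e-3])  # m, correlation-cell movements
gamma = pupil_overlap(shifts, radius)
print(gamma)  # 1 at zero shift, 0 once the pupils no longer overlap
```

The function decreases monotonically and reaches zero when the shift equals the pupil diameter, i.e., when every original correlation cell has been replaced by a new incoherent one.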
Now, if two images are recorded, we may form the cross-covariance between them from the zero-mean intensity variations of the images. This results in a correlation function in which the height of the peak gives the statistical similarity between the two patterns, its width gives the speckle size, and its position gives the movement between the two patterns. Hence, by locating the position of the cross-covariance peak in relation to the zero position, the speckle movement is found, and if the normalized peak height is calculated, a measure of the microstructural dynamics is obtained. This effect is utilized in the techniques known as image correlation,3 digital speckle photography,4 and particle image velocimetry.10 The image is then divided into a number of subimages, and the local cross-covariance is determined. The result is a vector field of speckle movements and a scalar field of correlation values that may be related to the deformation of the object. For this technique to work properly, it is important that aliasing is not introduced in the analysis, which means that the images need to be properly sampled: the smallest speckles must be sampled by at least two detector pixels in each direction.
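A minimal sketch of this peak-location step is given below. Two synthetic subimages related by a known cyclic shift are correlated via the FFT, and the covariance peak recovers the movement (the subimage size, smoothing kernel, and shift are all hypothetical).

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a synthetic "speckle" subimage: low-pass filtered noise so that the
# grains span a few pixels (i.e., the pattern is properly sampled).
n = 64
base = rng.standard_normal((n, n))
kernel = np.ones((3, 3)) / 9.0
img1 = np.real(np.fft.ifft2(np.fft.fft2(base) * np.fft.fft2(kernel, (n, n))))
img2 = np.roll(img1, shift=(3, 5), axis=(0, 1))  # known speckle movement

# Zero-mean cross-covariance via the correlation theorem.
f1 = np.fft.fft2(img1 - img1.mean())
f2 = np.fft.fft2(img2 - img2.mean())
cov = np.real(np.fft.ifft2(f1 * np.conj(f2)))

# The peak index maps (cyclically) to the displacement between the patterns.
peak = np.unravel_index(np.argmax(cov), cov.shape)
movement = tuple(-((p + n // 2) % n - n // 2) for p in peak)
print(movement)
```

In practice, each subimage of a speckle photograph is processed this way, giving one movement vector and one correlation value per subimage; subpixel accuracy is then obtained by interpolating around the peak.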
Correlation Properties of Interferometric Speckles
Consider two images recorded with a change in the system between the recordings and with an added smooth reference wave. Following any of the standard routes of interferometric detection,2 the two complex fields may be recovered right away. In digital holographic interferometry, the phase change is usually the primary source of information. This phase change is usually detected in one of two modalities. The most common is to acquire the phase change in a fixed detector position, ignoring any speckle movement. The coherence is then obtained from Eq. (12) evaluated at zero speckle displacement. The other technique is to track the correlated speckles on the detector and calculate the interference between these.11 The coherence is then obtained from Eq. (12) evaluated at the displaced speckle position. As the speckles usually are small and the speckle movements may become significant, the difference in fringe contrast between these two ways of calculating the phase difference may become very large. For example, if the in-plane movement of the speckles becomes larger than the in-plane speckle size, the coherence becomes zero in the first case, while it may remain close to unity in the latter. However, this comes at the cost of computational complexity.
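The difference between the two modalities can be illustrated with synthetic fields. In the sketch below (one-pixel speckles, a 4-pixel movement, and a uniform 0.7-rad phase change, all hypothetical), the fixed-position product decorrelates almost completely, while tracking the moved speckles before forming the product retains full contrast and the correct phase.

```python
import numpy as np

rng = np.random.default_rng(2)

shape = (128, 128)
# Fully developed speckle field with one-pixel grains.
e1 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
# Second field: speckles moved 4 pixels and given a 0.7-rad phase change.
e2 = np.roll(e1, shift=4, axis=1) * np.exp(1j * 0.7)

norm = np.mean(np.abs(e1) ** 2)

# Modality 1: fixed detector position (speckle movement ignored).
coh_fixed = np.abs(np.mean(e2 * np.conj(e1))) / norm
# Modality 2: track the correlated speckles before interfering.
e2_tracked = np.roll(e2, shift=-4, axis=1)
coh_tracked = np.abs(np.mean(e2_tracked * np.conj(e1))) / norm
phase = np.angle(np.mean(e2_tracked * np.conj(e1)))

print(coh_fixed, coh_tracked, phase)
```

Because the movement here exceeds the speckle size, the fixed-position coherence collapses toward zero while the tracked coherence stays at unity, exactly the situation described above.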
As a final remark before moving on to some examples, a comment on the complementary information provided by the phase difference and the speckle movement is in place. As the speckle movement stems from phase gradients over a plane of integration, as is clear from the expansion in Eq. (5), the speckle movement is sensitive to any gradients in the phase. By detecting the speckle motion in two planes separated in depth, the interferometric phase term may be restored. In principle, therefore, interferometric information is provided from speckle movements alone, and the sometimes-cumbersome operation of adding a reference beam may be excluded. This is essentially the same idea as that taken by many researchers within X-ray phase contrast imaging, where the recording of two intensity images in two different planes is used to calculate the phase distribution with the aid of the transport-of-intensity equation. As argued by Paganin and Nugent, this technique has many advantages, the most striking of which is that problems associated with phase wrapping are avoided entirely.12 In the next section, a few simple examples are discussed in relation to the results given here.
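A one-dimensional sketch of this two-plane idea is given below (all numbers hypothetical). The movement in each plane is modeled as bulk plus distance times gradient term; the subtraction removes the bulk part, and the recovered gradient is integrated back to a phase profile without any reference wave or unwrapping step.

```python
import numpy as np

x = np.linspace(-0.01, 0.01, 201)      # m, coordinate along the surface
phase = 1e4 * x**2                      # rad, "unknown" phase to recover
grad_true = np.gradient(phase, x)       # rad/m

wavelength = 633e-9
k = 2 * np.pi / wavelength
bulk = 3e-6                             # m, in-plane bulk surface motion
z1, z2 = 0.05, 0.15                     # m, the two detection planes

# Schematic speckle movements in the two planes (bulk + geared gradient).
shift1 = bulk + z1 * grad_true / k
shift2 = bulk + z2 * grad_true / k

# Subtracting the planes cancels the bulk term and isolates the gradient.
grad_rec = (shift2 - shift1) * k / (z2 - z1)
phase_rec = np.cumsum(grad_rec) * (x[1] - x[0])
phase_rec -= phase_rec[100] - phase[100]   # fix the integration constant
```

The recovered profile is an unwrapped phase by construction, which is the main advantage over interferometric detection noted above.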
The two main results from the previous section are given by Eqs. (6) and (8), respectively. It is seen that interferometric phase differences are caused by changes in wave number in relation to some relative propagation length, by surface movements in relation to the sensitivity vector of the setup, and by movements of the detection point in relation to the detection vector, respectively. At this first-order level of approximation, therefore, no cross-talk between wave number shift and deformation is considered. The speckle movements can be divided into two separate parts. The first part relates to the bulk movement of the surface, possibly geared by the curvatures of the illumination and detection wavefronts, respectively. In the case of collimated illumination, therefore, this bulk part will add a component to the total speckle movement that follows the surface movement as if the speckles were glued onto the surface. It is also obvious that these components have no direct correspondence with the phase difference between the two fields.
The second part of the speckle movement, on the other hand, becomes proportional to the gradient field of the phase field measured with a speckle interferometer. As the gradient of a scalar field is a vector field, the phase gradient determines both the direction and the magnitude of the speckle movement. The scaling parameter that turns this sensitivity into actual movement is the distance between the generating surface and the plane of detection. In the case of an imaging system, two distances need to be considered. The distance between the object surface and the entrance pupil scales the movement of the correlation cells that enter the optical system. Such movements result in permanent decorrelation of the speckle structure and less accuracy in the measurements. The distance between the object surface and the focus plane (the plane conjugate to the detection plane) scales the movement of the speckles on the detector. The movements therefore have different signs when the focus plane is placed in front of or behind the object surface, respectively, but the structure of the movement is the same. This movement may be compensated for when forming phase images to maximize the fringe contrast in the interferograms. A few typical consequences of the results from Sec. 2 are given below. The phase fields shown in the following figures are formed by calculating the phase of the product between one field and the complex conjugate of the other, while the speckle movement fields may be generated from image correlation between the corresponding intensity patterns.
Three typical situations often encountered in practical speckle metrology experiments will be discussed. Consider first a setup consisting of a plate oriented parallel to the detector of an imaging system. For simplicity, we will consider unit magnification and illumination along the optical axis of the setup. If we further assume collimated illumination and telecentric imaging, the illumination and observation directions are constant over the object. With these choices, the sensitivity vector is parallel to the surface normal and has magnitude 2k. The response in a speckle interferometer due to a deformation of the central point is shown in the left side of Fig. 3. It is seen that the out-of-plane movement has a Gaussian shape that drops most rapidly where the distance between the phase jumps is smallest. The corresponding speckle movement generation strength (the speckle movement per unit defocus distance) is shown in the right side of Fig. 3.
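For this first setup, with illumination and observation both along the surface normal, the interference phase is simply twice the wave number times the out-of-plane deformation. The sketch below counts the fringes produced by such a deformation (the Gaussian amplitude and width are hypothetical).

```python
import numpy as np

wavelength = 633e-9                  # m, hypothetical laser wavelength
k = 2 * np.pi / wavelength

x = np.linspace(-0.02, 0.02, 401)    # m, coordinate across the plate
w = 1e-6 * np.exp(-(x / 0.005) ** 2) # m, Gaussian out-of-plane deformation
phase = 2.0 * k * w                  # rad, out-of-plane sensitivity 2k

n_fringes = phase.max() / (2.0 * np.pi)  # fringe count from edge to center
print(f"{n_fringes:.2f} fringes")
```

Each fringe corresponds to half a wavelength of out-of-plane motion, so a 1-µm peak at 633 nm yields about 3.2 fringes.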
It is seen that the speckle movement is perpendicular to the phase planes in the left image and that its magnitude is inversely proportional to the distance between the phase planes. The change in sign occurs because of the inversion caused by the imaging. Two things are of general interest in relation to these results. First, the speckle decorrelation caused by movements over the entrance pupil will be most severe in regions with large phase gradients, and hence in regions with dense fringes. This is bad news for phase unwrapping software, which usually needs a certain spatial region without wrapping to perform well. Second, it is seen that the speckle movement in a defocused plane may be used to calculate the phase gradients, provided the distance to the object surface is known. Further, if the phase gradients are known, it is a trivial task to integrate them to get the actual deformation. As detection of an intensity image is often significantly less challenging and less error prone than a full interferometric analysis, speckle movements in a defocused plane are an attractive and more robust alternative to proper phase measurements in a disturbed environment. One example is an investigation of percussion hole drilling in different metals that was performed a few years ago.13 In the second example, shape measurement with dual-wavelength digital holography is considered. This assumes an optical setup similar to the previous example, but in this case, the object is a diffuse spherical surface. We further assume that the length of the reference arm is tuned such that the zero-phase plane coincides with the top surface of the object. Hence, all phase differences due to a change in wave number are relative to this plane. We further assume that speckle fields may be acquired in two different planes separated in depth by a known distance.
If these two planes are acquired at the same magnification, the difference in speckle movement between them becomes proportional to the phase gradient scaled by the plane separation. Figure 4 shows the response in phase and relative speckle movement for a measurement on a spherical surface with a radius of 2 dm due to a wavelength shift of 1 nm from the 500-nm wavelength. It is seen in the left part of the image that the phase drops more and more rapidly the farther away from the center of the image one moves. The same trend is seen in the right part of Fig. 4, where the difference in speckle movement between two planes separated by a unit distance is shown. Comparing these two images, it can be concluded that the speckles move according to the gradients in the phase fields in both magnitude and direction. The reason for using two speckle fields in this case is that, for a generally shaped object, the defocus distance will be unknown, and the speckle movement cannot be transformed directly into a local defocus value unless this distance is known. From the results in Fig. 4, it may be concluded that for a generally shaped object, parts of the measurement field will always be out of focus, generating image plane speckle movements. To avoid decorrelation induced by image plane speckle movements, it is advantageous to place the focus in a plane containing large phase gradients, in this case as far back as possible, corresponding to the outer part of the measurement field. However, that will not prevent the coherence cells from moving in the entrance pupil plane, and the fringe contrast might still become poor on steep slopes. This problem may be solved only with multiple recordings involving a set of small wavelength shifts and/or a set of different illumination directions. The last example involves the same setup as before, but in this case, the illumination makes a 45-deg angle with the optical axis in the x–z plane. As in the first example, the object is a plate placed parallel with the focus plane.
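The contouring effect of the wavelength shift can be sketched numerically. Below, the 1-nm shift from 500 nm maps the depth below the zero-phase plane to a phase at the synthetic wavelength λ₁λ₂/|λ₁ − λ₂| ≈ 0.25 mm, and the phase gradient over a spherical cap of radius 2 dm grows toward the edge of the field, as in Fig. 4 (the lateral extent of the field is hypothetical).

```python
import numpy as np

lam1, lam2 = 500e-9, 501e-9                 # m, the two wavelengths
dk = 2 * np.pi / lam1 - 2 * np.pi / lam2    # wave-number shift
lam_synth = lam1 * lam2 / abs(lam1 - lam2)  # synthetic wavelength, ~0.25 mm

radius = 0.2                                # m, sphere radius (2 dm)
r = np.linspace(0.0, 0.05, 201)             # m, lateral distance from apex
z = r**2 / (2.0 * radius)                   # sag below the top surface
phase = 2.0 * dk * z                        # rad, double-pass phase difference

grad = np.gradient(phase, r)                # grows linearly toward the edge
print(f"synthetic wavelength = {lam_synth * 1e3:.3f} mm")
```

Since the sag of a sphere grows quadratically with the lateral distance, its gradient, and hence the generated speckle movement, grows linearly toward the rim, which is why the outer parts of the field decorrelate first.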
Because of the inclination of the illumination direction, the sensitivity vector acquires an in-plane component when expressed in the coordinate system defined by the orientation of the detector. The projection onto the object surface is given by the first two components. If the plate is rotated 0.1 mrad around the optical axis, with the center of rotation in the middle of the field of view, the phase field shown in the left part of Fig. 5 is obtained. The recording conditions are the same as in the second example. As the sensitivity vector has a component in the plane of the plate, the in-plane motion along this component generates a phase difference in the interferogram. We see that the phase varies only in the x-direction, as expected. For the speckle movements shown in the right part of Fig. 5, two effects are blended. The bulk movement of the surface generates a rotational pattern centered in the middle of the image, but because of the phase gradient, the center of rotation moves downward in the direction of the phase gradient. In the case shown in Fig. 5, a defocus of 3 cm has been assumed. This last example highlights the principal difference between phase and speckle movements. The speckles move according to the bulk movement of the surface, as well as according to the phase gradients geared by the defocus distance, while the phase field carries information only about phase variations. Hence, if the speckle movement is detected in two different focus planes and subtracted, the phase gradient field may be reconstructed, but generating the speckle movement field from the phase requires a multitude of different sensitivity vectors.
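The shifted center of rotation can be reproduced with a toy movement field (all numbers hypothetical): a rotational speckle-movement field for a 0.1-mrad in-plane rotation plus a constant gradient-driven term, whose combined zero lies away from the geometric center.

```python
import numpy as np

omega = 1e-4                      # rad, in-plane rotation (0.1 mrad)
extra = np.array([0.0, -2e-7])    # m, constant defocus*gradient contribution

y, x = np.meshgrid(np.linspace(-0.01, 0.01, 201),
                   np.linspace(-0.01, 0.01, 201), indexing="ij")
vx = -omega * y + extra[0]        # rotational field plus constant term
vy = omega * x + extra[1]

# The apparent center of rotation is where the total movement vanishes:
# analytically x_c = -extra[1]/omega, y_c = extra[0]/omega.
speed = np.hypot(vx, vy)
iy, ix = np.unravel_index(np.argmin(speed), speed.shape)
center = (x[iy, ix], y[iy, ix])
print(center)
```

Even a small constant term displaces the apparent rotation center by extra/omega, illustrating how the bulk and gradient contributions blend in the measured movement field.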
In this article, the theory of dynamic speckles in a reflection geometry has been reviewed. The object under consideration is allowed to have a general shape, but it should be diffuse and essentially a surface scatterer. It is then shown that the phase in a speckle pattern in general changes because of changes in the setup in relation to the sensitivity vector of the setup, while the speckle movements have a more complex behavior. The speckle movement can in principle be divided into two distinct parts. One part depends on the movement of the object; it is independent of defocus in the system and behaves essentially as a bulk motion. The other part depends on local phase gradients along the surface patch of the object scaled by any defocus. The phase gradients are generated by object deformations and changes in the wave number of the light and scale according to the local surface normal in relation to the sensitivity vector. This part is essentially a redirecting part that sends off a given speckle pattern in a different direction. Thus, the motion becomes dependent on defocus.
The theory has been demonstrated by three typical applications. The first was an out-of-plane bending of a plate produced by a central point load; the second was a dual-wavelength holographic recording of a generally shaped object; and the third was an in-plane rotation of a plate with a sensitivity vector having an in-plane component. In all these cases, it is shown that the interferometric phase gradients and the speckle movements behave consistently with each other, but also that the speckle movements are influenced by the bulk movement of the object.
1. J. W. Goodman, “Statistical properties of laser speckle patterns,” in Laser Speckle and Related Phenomena, J. C. Dainty, Ed., pp. 9–75, Springer-Verlag, Berlin Heidelberg (1975).
2. P. K. Rastogi, “Measurement of static surface displacements, derivative of displacements, and three-dimensional surface shapes—examples of applications to non-destructive testing,” in Digital Speckle Pattern Interferometry and Related Techniques, P. K. Rastogi, Ed., pp. 171–224, Wiley, Chichester (2001).
3. M. A. Sutton, J.-J. Orteu, and H. W. Schreier, Image Correlation for Shape, Motion and Deformation Measurements, Springer, New York (2009).
4. M. Sjodahl, “Digital speckle photography,” in Digital Speckle Pattern Interferometry and Related Techniques, P. K. Rastogi, Ed., pp. 289–336, Wiley, Chichester (2001).
5. W. Brown, Dynamic Light Scattering: The Method and Some Applications, Clarendon Press, Oxford (1993).
6. I. Yamaguchi, “Speckle displacement and decorrelation in the diffraction and image fields for small object deformation,” Opt. Acta 28(10), 1359–1376 (1981). http://dx.doi.org/10.1080/713820454
7. I. Yamaguchi, “Fringe formation in deformation and vibration measurements using laser light,” in Progress in Optics, XXII, E. Wolf, Ed., pp. 272–340, Elsevier, Amsterdam (1985).
8. I. Yamaguchi, A. Yamamoto, and S. Kuwamura, “Speckle decorrelation in surface profilometry by wavelength scanning interferometry,” Appl. Opt. 37(28), 6721–6728 (1998). http://dx.doi.org/10.1364/AO.37.006721
9. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications, Roberts and Company, Englewood, CO (2007).
10. K. D. Hinsch, “Particle image velocimetry,” in Speckle Metrology, R. S. Sirohi, Ed., pp. 235–324, Marcel Dekker, New York (1993).
11. A. Andersson, A. Runnemalm, and M. Sjodahl, “Digital speckle pattern interferometry: fringe retrieval for large in-plane deformations with digital speckle photography,” Appl. Opt. 38(25), 5408–5412 (1999). http://dx.doi.org/10.1364/AO.38.005408
12. D. Paganin and K. A. Nugent, “Non-interferometric phase imaging with partially-coherent light,” Phys. Rev. Lett. 80(12), 2586–2589 (1998). http://dx.doi.org/10.1103/PhysRevLett.80.2586
Mikael Sjodahl received his MSc in mechanical engineering and his PhD in experimental mechanics from the Lulea University of Technology, Sweden, in 1989 and 1995, respectively. He currently holds the chair of experimental mechanics at the Lulea University of Technology and a professorship at University West, Sweden. He has authored or coauthored over 100 papers in international journals and contributed to two books. His interests include fundamental speckle behavior, coherent optical metrology, nondestructive testing, and multidimensional signal processing.