Coherent diffractive imaging (CDI) is an important tool for reconstructing the complex-valued transmission function of an object from its far-field diffraction pattern and has been widely applied in the material and biological sciences.1,2 Miao et al.3 first experimentally demonstrated imaging of a submicrometer-sized noncrystalline specimen using CDI. Many CDI approaches have been developed in the past decades; they can be divided into two types: iterative methods4–9 and noniterative methods.10–14 However, most traditional CDI methods cannot be directly applied with spatially partially coherent illumination without proper modification, which limits their application at short wavelengths, e.g., in the x-ray and electron regimes, or in unstable experimental environments. For example, degradation of the spatial coherence may also be caused by mechanical movement of the sample and the experimental setup, or by fluctuation of the ambient medium, e.g., atmospheric turbulence.15–17
Iterative algorithms retrieve the complex-valued transmission function of an object by propagating the field back and forth between the object plane and the far-field diffraction plane, and imposing constraints on the field in both planes. Gerchberg and Saxton pioneered iterative algorithms in 1972 by proposing a straightforward method using two intensities, measured in the object plane and in the far field, respectively.4 Iterative algorithms using only one intensity measurement of the far-field diffraction pattern, as proposed by Fienup,5,6 require prior knowledge of the support of the object for imposing the support constraint in the object plane. Recently, ptychographic algorithms have become an essential technique for imaging nanoscale objects using short-wavelength sources.8 Their key feature is that the illumination areas at neighboring shift positions must overlap, and this overlap improves the convergence of the ptychographic algorithms.7,8 Recent developments in ptychography allow for simultaneous reconstruction of the probe, which significantly reduces the complexity of the experimental setup compared to other CDI methods.9
For spatially partially coherent illumination, the propagation of light is described using the mutual coherence function (MCF) instead of the field. Much effort has been devoted to adapting CDI methods to spatially partially coherent illumination.15–24 The modification of iterative algorithms was first reported by Whitehead et al.15 using mode decomposition of the MCF.16 Later, ptychographic algorithms were also modified to work with spatially partially coherent illumination by decomposing the MCF into orthogonal modes.17–21 Thibault and Menzel17 proposed a mixed-state model from the quantum perspective, which effectively deals with a series of multistate mixing problems, including partially coherent illumination, and enables more applications of ptychography, such as continuous-scan ptychography18 and dynamic imaging of a vibrating sample.19 Furthermore, ptychographic imaging with the simultaneous presence of both multiple probe states and multiple object states has also been demonstrated.21 However, the accuracy of mode decomposition depends on the number of modes used to represent the MCF, and this number increases as the spatial coherence of the illumination decreases.
Compared to iterative methods, noniterative methods10,11,14 do not suffer from issues such as stagnation or nonuniqueness of the solution to the diffractive imaging problem, especially when the illumination becomes spatially partially coherent.21,24 In holographic methods, the field transmitted by the object is perturbed such that the object’s transmission function can be directly extracted from the inverse Fourier transform of the diffraction pattern.14 This perturbation can be achieved by introducing a pinhole, e.g., Fourier transform holography (FTH),25–27 or by changing the transmission function at a point of the object, e.g., Zernike quantitative phase imaging.28 The performance of applying holographic methods to spatially partially coherent illumination has been discussed in Ref. 14. Alternative methods extract the object information from three-dimensional autocorrelation functions obtained by inverse Fourier transforming a three-dimensional data set (e.g., a data set measured by varying the focus12 or another optical parameter13).
It has been demonstrated that noniterative methods can avoid errors due to truncating the number of modes used to represent the MCF.12–14 However, the degree of spatial coherence of the illumination limits the field of view (FOV) of the reconstructed object. To be precise, what is reconstructed is a product of the object’s transmission function and a correlation function of the illumination. This correlation function has a maximum at the perturbation point, and its value decreases, at a rate that depends on the degree of spatial coherence, as the distance between the perturbation point and the observation point increases. Therefore, the lower the illumination’s degree of spatial coherence, the smaller the region of the object that can be reconstructed reliably.
In this paper, we propose a noniterative method based on a pinhole array mask (PAM). We place the PAM between the object and the detector and measure the far-field diffraction pattern of the spatially partially coherent field transmitted by the PAM. The PAM consists of a periodic array of measurement pinholes and an extra reference pinhole, which is analogous to the perturbation point in FTH.14 The reference pinhole creates interference between the field it transmits and the fields transmitted by the measurement pinholes, and from this interference pattern we can directly retrieve the correlation function of the incident light, with respect to the reference pinhole, at the locations of the measurement pinholes.
In FTH, since the reference pinhole is placed far from the measurement window, the FOV is rather small due to the low correlation. Our method is advantageous compared to FTH, because splitting the measurement window into a periodic array of measurement pinholes keeps the reference pinhole close to all measurement pinholes and thus maintains a high correlation that results in a large FOV.
Our method places the object at a certain distance before the PAM, instead of superposing the object with the PAM. In practice, this not only offers flexibility for arranging the experimental setup but also allows us to adjust the sampling of the reconstructed object. When the propagation distance satisfies the condition for Fresnel or Fraunhofer diffraction, an object with finite support can be reconstructed from the retrieved correlation function, and its sampling is related to the sampling of the PAM by the Shannon–Nyquist sampling theorem.
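The relation between the PAM sampling and the object sampling can be made concrete with a short sketch. The helper below is a hedged illustration: `object_sampling` is our own name, and every numerical value is an assumption for demonstration, not an experimental parameter of the paper. It only applies the standard Fourier-transform reciprocity underlying the Shannon–Nyquist argument.

```python
# Fourier-reciprocity bookkeeping behind the Shannon-Nyquist argument.
# All numerical values below are illustrative assumptions.
def object_sampling(lam, z, p, m):
    """For Fraunhofer propagation over distance z, a correlation function
    sampled with pitch p by m pinholes corresponds to an object plane of
    extent lam*z/p, sampled with spacing lam*z/(m*p)."""
    fov = lam * z / p      # reconstructable object extent
    dx = fov / m           # object-plane sample spacing
    return fov, dx

# Example: HeNe wavelength, 10 cm propagation, 100 um pitch, 64 pinholes
fov, dx = object_sampling(lam=632.8e-9, z=0.1, p=100e-6, m=64)  # -> fov ~ 0.63 mm
```

A finer PAM pitch thus enlarges the reconstructable extent, while more pinholes refine the object-plane sampling.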
Because our method reconstructs the product of the object’s transmission function and the illumination’s correlation function, it can be used not only for object reconstruction but also for characterizing the spatial coherence structure. This makes it useful for a broad range of applications in coherent optics, such as the measurement of optical coherence singularities29,30 and free-space optical communication through turbulent media.31,32
The experimental setup of our method is sketched in Fig. 1, which shows a transmissive object illuminated by spatially partially coherent light. In our method, unlike traditional CDI algorithms, we place a PAM between the object and the detector. The location of the PAM is chosen such that light propagation from the object to the PAM and from the PAM to the detector obeys either Fresnel or Fraunhofer propagation and hence can be described using Fourier transforms. By doing so, we can divide our method into two steps: (1) retrieving the correlation function of the incident light in the PAM plane using a noniterative approach and (2) reconstructing the product of the object’s transmission function and the illumination’s correlation function using a differential method, which requires two diffraction patterns corresponding to the object with and without a transmission perturbation, respectively. It is worth noting that this differential method is not necessary for completely spatially coherent illumination.
Retrieving the Correlation Function in the PAM Plane
Let the coordinate of the PAM plane be denoted by . The PAM consists of a reference pinhole at the origin, shown by the gray square in Fig. 1(b), and a periodic array of measurement pinholes around the reference pinhole, shown by the white squares in Fig. 1(b). The center of the periodic array is shifted relative to the reference pinhole by a certain offset, depicted by the white spot at the corner of the reference pinhole. We assume that the reference pinhole and the measurement pinholes are identical and are so small that each pinhole can be approximated by a delta function. This assumption allows us to write the transmission function of the PAM as
The incident light transmitted by the PAM generates a diffraction pattern in the detector plane. We denote the MCF of the incident light in the PAM plane by , which describes the correlation between the fields at and . Because the light propagation from the PAM to the detector satisfies the condition for either Fresnel or Fraunhofer propagation, we can express the diffraction pattern measured by the detector using the Fourier transform as follows:
1. The first term is , which is a constant multiplied by a delta function that appears only at the origin of the coordinate system .
2. The second term is located on the periodic array defined by , which is depicted by the gray squares in Fig. 1(c). This periodic array has the same pitch as the periodic array of the measurement pinholes but zero offset relative to the origin.
3. The third term is located on the periodic array defined by , which is depicted by the blue squares in Fig. 1(c).
4. The fourth term is located on the periodic array defined by , which is depicted by the red squares in Fig. 1(c).
The role of the reference pinhole of the PAM is analogous to the perturbation point in FTH,14 namely to create interference between the incident light transmitted by the reference pinhole and by the measurement pinholes. The interference induces the third term and the fourth term in Eq. (3), which are referred to as the “interference terms.” The first term and the second term in Eq. (3) are the autocorrelation of the reference pinhole and of the measurement pinholes, respectively, and hence are referred to as the “autocorrelation terms.”
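The four-term structure can be verified with a small numerical sketch. The grid size, pitch, and offset below are illustrative assumptions, not the experimental values. The inverse Fourier transform of the far-field intensity of the bare PAM is its autocorrelation, whose peaks fall exactly on the lattices described above: the self-pair peak at the origin, the measurement–measurement lattice with zero offset, and the two interference lattices shifted by plus and minus the offset.

```python
import numpy as np

n, pitch, d = 256, 16, 4                          # illustrative sizes (pixels)
mask = np.zeros((n, n))
c = n // 2
mask[c, c] = 1.0                                  # reference pinhole at the origin
for i in range(-c // pitch, c // pitch):          # measurement lattice, offset (d, d)
    for j in range(-c // pitch, c // pitch):
        mask[c + d + i * pitch, c + d + j * pitch] = 1.0

# Far-field intensity of the bare PAM, then inverse FT -> autocorrelation.
# acf[s] counts pinhole pairs separated (circularly) by shift s:
#   s = (0, 0)              -> first term (all self-pairs)
#   s on the zero-offset lattice -> second term (measurement-measurement)
#   s = +(d, d), -(d, d) relative to the lattice -> interference terms
acf = np.real(np.fft.ifft2(np.abs(np.fft.fft2(mask)) ** 2))
```

Because the interference peaks sit at a nonzero offset from the autocorrelation lattice, the two contributions never coincide, which is what makes the filtering step below possible.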
The layout of the four terms of Eq. (3) is illustrated by the schematic plot in Fig. 1(c), which shows that the periodic arrays of the autocorrelation term (the gray squares) and the two interference terms (the blue and red squares) have the same pitch but different offsets. This allows us to separate the two interference terms from the autocorrelation term by multiplying the inverse Fourier transform of the diffraction pattern by a spatial filter. The expression for this spatial filter is given by
Note that is a function of the locations of the measurement pinholes on the periodic array defined by . The sampling interval of is given by the pitch of the PAM . For rectangular pinholes with size , the sampling interval should be such that the autocorrelation term and the two interference terms do not overlap as shown in Fig. 1(c). However, this sampling interval is usually larger than the diffraction-limited sampling interval according to the Shannon–Nyquist criterion: (, ), where is the size of the detector.
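A minimal sketch of such a spatial filter is given below. The function name and all sizes (grid, pitch, offset, window) are illustrative assumptions: the filter is simply a boolean comb that keeps small windows centered on one of the shifted interference lattices and suppresses everything else, including the zero-offset autocorrelation lattice.

```python
import numpy as np

def interference_filter(n=256, pitch=16, offset=(4, 4), win=3):
    """Boolean spatial filter that keeps only one interference lattice in
    the inverse-FT plane: win-pixel windows centred on offset + (i,j)*pitch.
    Grid size, pitch, offset, and window are illustrative assumptions."""
    f = np.zeros((n, n), dtype=bool)
    c, half = n // 2, win // 2
    for i in range(-n // pitch, n // pitch + 1):
        for j in range(-n // pitch, n // pitch + 1):
            y = c + offset[0] + i * pitch
            x = c + offset[1] + j * pitch
            if half <= y < n - half and half <= x < n - half:
                f[y - half:y + half + 1, x - half:x + half + 1] = True
    return f
```

Multiplying the inverse Fourier transform of the measured pattern by such a mask retains the interference term alone, sampled at the pitch of the PAM.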
Reconstructing the Transmission Function of the Object and the Correlation Function of the Illumination
The sampling of the reconstructed object is finer in a plane at a distance before the PAM plane than in the PAM plane itself. We denote the coordinate of the object plane by , which is the original sampling grid normalized by , and the complex-valued transmission function of the object by . In our method, the MCF in the object plane and the MCF in the PAM plane are related by a Fourier transform in the case of either Fresnel or Fraunhofer propagation. Here, we give the example of Fraunhofer propagation as follows:
In Eq. (5), by setting and , we can obtain an expression for computing the retrieved correlation function in the PAM plane. Then, as illustrated in Fig. 1(a), we can reconstruct the modulated object’s transmission function .
We shall note that the modulation depends not only on the MCF of the illumination but also on the transmission function of the object . To eliminate this modulation, we use a differential approach. This approach requires two measurements: one with a point perturbation of the transmission function of the object at and the other without perturbing the object. The perturbation is achieved by changing either the amplitude or the phase of the transmission for inside the object, or by introducing an extra spot that lets light pass through for outside the object. It is worth mentioning that for completely spatially coherent illumination, the MCF of the illumination, , becomes a constant, and there is no need to use the differential approach to eliminate the modulation .
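The differential step can be illustrated with a small one-dimensional simulation. Everything below is a hedged sketch with assumed parameters: writing t for the object's transmission function and J for the illumination MCF (here a GSM-type model), `modulated_object` stands for what the correlation-retrieval step delivers, namely t(x) times an unknown modulation M(x). Subtracting the unperturbed result from the perturbed one isolates a term proportional to t(x) times the correlation with the perturbation point, which can then be divided out.

```python
import numpy as np

n = 128
x = np.arange(n) - n // 2                        # object-plane grid (pixels)
t = np.exp(1j * 0.5 * np.pi * (np.abs(x) < 20))  # toy binary phase object
sigma, delta = 40.0, 10.0                        # GSM beam/coherence widths (assumed)
X1, X2 = np.meshgrid(x, x, indexing="ij")
J = np.exp(-(X1**2 + X2**2) / (4 * sigma**2)) * np.exp(-(X1 - X2)**2 / (2 * delta**2))

def modulated_object(t):
    """t(x) times the modulation M(x) = sum_x' conj(t(x')) J(x, x'),
    i.e. what the correlation-retrieval step delivers (up to constants)."""
    return t * (J @ np.conj(t))

xp, eps = 30, 0.5                # perturbation index and (here real) strength
tp = t.copy()
tp[xp] += eps                    # extra transmission spot outside the object

diff = modulated_object(tp) - modulated_object(t)
# Away from xp, diff = conj(eps) * t(x) * J(x, xp), so dividing it out
# recovers t(x) wherever the correlation with xp is strong enough:
recon = diff / (np.conj(eps) * J[:, xp])
```

The division is only reliable where |J(x, xp)| stays above the noise floor, which is exactly the FOV limitation discussed below.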
Substituting the transmission function of the perturbed object , where is a complex-valued constant representing the perturbation, into Eq. (6), we obtain
Results and Discussion
In the experiment, we use a GSM beam and a Laguerre–Gaussian–Schell-model (LGSM) beam as illumination to validate our method. In the object plane, the MCF of the GSM beam takes the standard Gaussian Schell-model form. The experimental setup is shown in Fig. 2. We expand a coherent laser beam at wavelength using a beam expander (BE) and then focus it on a rotating ground-glass disk (RGGD) using lens L1. Because the focal spot follows a Gaussian distribution, the spatially partially coherent light generated by scattering from the RGGD satisfies Gaussian statistics, namely, the correlation between the fields at any pair of points follows a Gaussian distribution. We then collimate the spatially partially coherent light with lens L2. By passing the collimated beam through a Gaussian amplitude filter (GAF), we obtain a GSM beam whose intensity distribution also follows a Gaussian distribution. The MCF of the LGSM beam in the object plane is described in Refs. 33 and 34. In the experimental setup shown in Fig. 2, we need to insert a spiral phase plate between the BE and the focusing lens L1, which produces a dark hollow focal spot on the RGGD. The order of the LGSM beam is determined by the topological charge of the spiral phase plate. When , the spiral phase plate has a constant phase and the LGSM beam reduces to the GSM beam. However, when , the MCFs of the LGSM beam and the GSM beam have the same amplitude but different phases.
In the experiment, the beam width of the Gaussian intensity distribution is determined by the GAF and is set to 0.85 mm, whereas the coherence width of the Gaussian distribution of the correlation function, i.e., the degree of coherence, is determined by the size of the focal spot on the RGGD. We can control it by translating the focusing lens L1 back and forth, which determines the size of the focal spot. The degree of spatial coherence is calibrated using the method proposed in Refs. 35–37.
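The GSM illumination model used here can be sketched in a few lines. The beam width of 0.85 mm is taken from the setup above; the two coherence widths are assumed values for illustration. The snippet shows the standard GSM mutual coherence function, whose correlation with a fixed point decays faster for a smaller coherence width, anticipating the FOV behavior reported below.

```python
import numpy as np

def gsm_mcf(x, sigma, delta):
    """GSM mutual coherence function on a 1-D grid:
    J(x1, x2) = exp(-(x1^2 + x2^2)/(4 sigma^2)) * exp(-(x1 - x2)^2/(2 delta^2)),
    with beam width sigma and coherence width delta."""
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    return np.exp(-(X1**2 + X2**2) / (4 * sigma**2)) * \
           np.exp(-(X1 - X2)**2 / (2 * delta**2))

x = np.linspace(-2, 2, 401)                 # transverse coordinate, mm
J_low = gsm_mcf(x, sigma=0.85, delta=0.1)   # low coherence width (assumed value)
J_high = gsm_mcf(x, sigma=0.85, delta=0.5)  # higher coherence width (assumed value)
ip = 200                                    # index of x = 0, a possible perturbation point
# |J(x, 0)| decays faster for the smaller delta, which is what shrinks the FOV.
```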
For the diffractive imaging experiment, we use a phase object, whose amplitude is flat and whose phase has a binary distribution ( and ) in the shape of a panda, displayed on a phase spatial light modulator (SLM) (Pluto, Holoeye Inc., with resolution size , and frame rate 60 Hz). The pixels inside the support of the object reflect the incident beam back to the beam splitter, whereas the pixels outside the support direct the incident beam in other directions. The beam reflected by the SLM propagates to the PAM. Finally, we measure the far-field diffraction pattern of the light transmitted by the PAM using a charge-coupled device (CCD) camera (Eco445MVGe, SVS-Vistek Inc., with resolution size , and frame rate 30 fps), which is placed at the focal plane of the Fourier transform lens L3. In the experiment, we set the pitch of the PAM and the size of the pinhole to be and , respectively. The object size is , which requires the propagation distance between the object and the PAM to be .
Experimental Results Using GSM Beam Illumination
Equation (12) indicates that for spatially partially coherent illumination, to reconstruct the product of the object’s transmission function and the illumination’s correlation function , our method needs two measurements of the diffraction pattern: one without the perturbation and the other with the perturbation of the object’s transmission at . The correlation function describes the correlation between the fields at the perturbation point and at other points , and it decreases as the distance between and increases. As a consequence, the reconstructed object’s transmission function has a limited FOV, since the product cannot be reconstructed at locations where the correlation is corrupted by noise.
In the experiment, we place the perturbation point at the head of the panda by . We show the object’s transmission function with and without the perturbation point in Figs. 3(a) and 3(b), and the amplitude and the phase of the reconstructed product for various degrees of spatial coherence in Figs. 3(c1)–3(c3) and 3(d1)–3(d3). Because the MCF of the GSM beam has a uniform phase, the phase of the reconstructed product is given only by the phase of the object. The amplitude of the reconstructed product follows the Gaussian distribution of the MCF of the GSM beam. The panda shape in the amplitude is due to the discontinuity of the phase and to low-pass filtering. As mentioned in Ref. 38, it is the phase jump between the inside and outside areas that enables destructive interference along the outline, leading to the observation of the dark panda shape in a very bright background. In addition, the finite boundaries of the PAM, the Fourier transform lens (L3), and the CCD constitute low-pass filters, which result in the disappearance of the panda contour corresponding to high spatial frequencies.21 We can observe in Fig. 3 that for a lower degree of spatial coherence , the amplitude of the correlation function decreases more rapidly as the distance increases, and hence the FOV of the object’s transmission function is smaller.
Figure 3 shows that, to increase the FOV, we can either increase the degree of spatial coherence or decrease the noise level. In Fig. 4, we demonstrate that, by using more than one perturbation point placed at different locations of the object, the object’s transmission function can still be reconstructed over the whole FOV in the case of the lowest degree of spatial coherence (). This requires us to repeat the measurement and the reconstruction procedure for each perturbation point to reconstruct different parts of the object. By combining the parts reconstructed under low-coherence illumination, we obtain the object’s transmission function as if highly coherent illumination had been used.
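The stitching idea can be sketched as follows. This is a hedged toy model with assumed parameters, not the paper's processing pipeline: each perturbation point xk yields the product t(x)·J(x, xk), which is reliable only where |J(x, xk)| is large, so we divide out the (here real, Gaussian) correlation and blend the parts with |J|-weighted averaging.

```python
import numpy as np

x = np.linspace(-1, 1, 201)
t = np.exp(1j * np.pi / 2 * (np.abs(x) < 0.6))   # toy binary phase object
delta = 0.25                                     # low coherence width (assumed)
xps = [-0.5, 0.0, 0.5]                           # several perturbation points

num = np.zeros(x.size, dtype=complex)
den = np.zeros(x.size)
for xp in xps:
    Jp = np.exp(-(x - xp)**2 / (2 * delta**2))   # correlation w.r.t. this point
    part = t * Jp                                # what one differential measurement yields
    num += (part / Jp) * np.abs(Jp)              # divide out J, weight by its magnitude
    den += np.abs(Jp)
t_stitched = num / den                           # |J|-weighted blend of all parts
```

Weighting by |J| ensures that, near each perturbation point, the reconstruction is dominated by the measurement for which the correlation is strongest.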
Experimental Results Using LGSM Beam Illumination
In Figs. 3 and 4, the phase of the reconstructed product is given only by the object, since the MCF of the GSM beam has a uniform phase. However, for illumination using an LGSM beam, the phase of its MCF is not uniform. Therefore, we need to calibrate the MCF so that we can divide the reconstructed product by it to obtain the object’s transmission function alone. We show the amplitude and the phase of the reconstructed product using the LGSM illumination beam in Fig. 5(a). Compared to the reconstructed results using the GSM illumination beam in Figs. 3 and 4, we can see that now the phase of the object’s transmission function is modulated by the phase of the correlation function of the LGSM beam, and we cannot see the panda in the phase of the reconstructed product. We show the amplitude and the phase of the correlation function, calibrated using an empty window as the object, in Fig. 5(b). In Fig. 5(c), we demonstrate the object’s transmission function obtained by dividing the reconstructed product by the calibrated correlation function. The panda in the phase of the reconstructed object can clearly be seen. This example verifies that our method can be applied to object reconstruction using an illumination beam whose MCF is not known a priori.
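The calibration division can be sketched numerically. The snippet below is an illustration under assumed quantities (a toy object, a toy complex-valued correlation function, and an assumed regularization level), not the paper's exact processing: dividing the reconstructed product by the calibrated correlation is done in regularized form so that noisy regions with small |J| do not blow up.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 201)
t_true = np.exp(1j * np.pi / 2 * (np.abs(x) < 0.6))        # toy phase object
Jcal = np.exp(-(x + 0.2)**2 / 0.08) * np.exp(1j * 2.0 * x)  # toy complex MCF (nonuniform phase)
P = t_true * Jcal + 1e-6 * rng.standard_normal(x.size)      # reconstructed product + noise

reg = 1e-4                                                  # regularization level (assumed)
# Regularized division: multiply by conj(Jcal) and damp small |Jcal|.
t_rec = P * np.conj(Jcal) / (np.abs(Jcal)**2 + reg)
```

Where |Jcal| is large the phase of the correlation function is cleanly removed and only the object's phase survives, mirroring the recovery of the panda phase in Fig. 5(c).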
In summary, we have developed and validated a noniterative method to reconstruct the complex-valued transmission function of an object illuminated by a spatially partially coherent beam, using a PAM placed between the object and the detector. Our method overcomes several challenges of conventional iterative CDI algorithms and holographic methods. In particular, our method does not depend on the mode decomposition of the MCF of the spatially partially coherent light and has the freedom to choose the location of the point where the transmission function of the object is perturbed, which is particularly beneficial for achieving a large FOV when using illumination with a low degree of spatial coherence. Moreover, we have also demonstrated that our method can be used to calibrate the MCF of an arbitrary spatially partially coherent beam. This calibration allows us to reconstruct the object’s transmission function almost as accurately as if completely coherent illumination had been used. The calibration itself can also be used for characterizing spatial coherence properties, which is needed in applications such as the measurement of optical coherence singularities.29,30 Therefore, in addition to diffractive imaging, our method also provides an approach to spatial coherence characterization. Finally, our method is wavelength independent and can be applied over a wide range of wavelengths, from x-rays to infrared light.
This work was supported by the National Natural Science Foundation of China (Nos. 11774250 and 91750201), the National Natural Science Fund for Distinguished Young Scholars (No. 11525418), and the sponsorship of Jiangsu Overseas Research and Training Program for Prominent Young and Middle-aged University Teachers and Presidents. This work is also part of the research program “Novel design shapes for complex optical systems,” with Project No. 12797, which is (partly) financed by the Netherlands Organization for Scientific Research (NWO).
Xingyuan Lu received both her bachelor’s and master’s degrees in physics from Soochow University, China, where she studied light-field manipulation and measurement. Currently, she is a PhD candidate at the Laboratory of Light Manipulation, Soochow University. Her research focuses on the combination of light-field manipulation and quantitative phase imaging.
Yifeng Shao received his bachelor’s degree from Sun Yat-sen University (Guangzhou, China). He subsequently enrolled in the Erasmus Mundus master’s programme, studying at the Institut d’Optique Graduate School (Palaiseau, France) in 2012 and at Delft University of Technology (Delft, Netherlands) in 2013. In September 2013, he became a PhD candidate in the Optics Research Group at TUD. His research topics include optical design, aberration measurement techniques, and spatial coherence effects in imaging applications.
Chengliang Zhao is the cofounder of the Laboratory of Light Manipulation at Soochow University. He received his PhD in physics from Zhejiang University. His research interests include coherent optics, diffractive imaging, phase retrieval, and optical tweezers.
Sander Konijnenberg holds his BSc degree in applied mathematics and applied physics and his MSc degree in applied physics from Delft University of Technology (TUD), Netherlands. Currently, he is doing a PhD in optics at TUD on the topic of computational imaging.
Xinlei Zhu is a graduate student at School of Physical Science and Technology, Soochow University, Suzhou, China. Currently, she is working toward a PhD in light manipulation. Her research interests include the generation of light sources and computer graphics.
Ying Tang is a PhD researcher at Delft University of Technology, Netherlands. He received both his bachelor’s and master’s degrees in optical engineering from Huazhong University of Science and Technology, China. During his master’s, he worked on biomedical imaging. In 2013, he joined the Optics Research Group at TUD, where he has worked on various topics, such as metamaterials, optical communication, optical tweezers, and optical metrology.
Yangjian Cai is a professor at the School of Physical Science and Technology, Soochow University, China. He received his BSc degree and his PhD in physics from Zhejiang University, and holds a second PhD in electromagnetic theory. His research interests include optical coherence and polarization, beam propagation, optical imaging, particle trapping, and the turbulent atmosphere. He has published over 290 papers in refereed international journals, and he is a topical editor for JOSA A.
H. Paul Urbach is a professor and head of the Optics Research Group at TUD. His current research interests are electromagnetic optics, improving resolution using phase- and polarization-sensitive illumination and detection, and the use of new materials in optics. He was president of the European Optical Society from 2014 to 2016 and from 2017 to 2018. He is a member of the board of ICO, scientific director of the Dutch Optics Centre, and a member of the board of PhotonicsNL.