Open Access
Multiwavelength surface contouring from phase-coded noisy diffraction patterns: wavelength-division optical setup
Abstract
We propose an algorithm for absolute phase retrieval from multiwavelength noisy phase-coded diffraction patterns. A lensless optical system is considered with a set of successive single-wavelength experiments (wavelength-division setup). Phase masks are applied for modulation of the multiwavelength object wavefronts. The algorithm uses forward/backward propagation of coherent light beams and sparse encoding of the wavefronts, which leads to complex-domain block-matching three-dimensional filtering. The key element of the algorithm is an original aggregation of the multiwavelength object wavefronts for high-dynamic-range absolute phase reconstruction. Simulation tests demonstrate that the developed approach leads to effective solutions that explicitly use sparsity for noise suppression and provide high-accuracy reconstruction of the object absolute phase from noisy data.

1.

Introduction

We consider lensless diffraction imaging where information on the object phase and amplitude in question is given by the intensities of phase-coded diffraction patterns. The problem of reconstructing a complex-valued object from intensity measurements is well known as phase retrieval, where "phase" emphasizes that this variable, missing in the observations, defines the principal difficulty of the problem, in particular with respect to holography and interferometry, where reference beams allow phase information to be preserved. Phase retrieval belongs to the class of challenging ill-posed problems and is especially difficult for noisy observations. The phase/observation diversity sufficient for reliable reconstruction of the object phase from the intensity patterns is a crucial aspect of the problem. Defocusing and phase modulation are known as effective instruments for solving the problem (e.g., Refs. 1–4). Diffraction optical elements are used to design diffraction phase imaging systems.5

The Gerchberg–Saxton (GS) algorithms6,7 are among the most popular in the field of phase retrieval. These iterative algorithms, originally proposed for noiseless data, use alternating projections between the complex-valued object $u_o$ and the complex-valued wavefronts $u_s$ at the sensor plane. In these iterations, the amplitudes in $u_s$ are replaced by the square roots of the corresponding measurements. The backprojection of $u_s$ to the object $u_o$ is modified according to the prior information on the object, e.g., support size and shape, amplitude value, etc. The GS algorithms exist in various modifications.8,9 A review and analysis of this type of algorithm, as well as further developments, can be found in Ref. 10.

Many publications concern variational formulations of the GS-type algorithms. In particular, the links between the conventional GS and variational techniques are studied in Refs. 11–13. Contrary to the intuitively clear heuristics of the GS algorithms, the variational approaches usually have a strong mathematical background including image formation modeling, formulation of the objective function (criterion), and, finally, numerical techniques solving the corresponding optimization tasks.

In recent years, there has been an increase in demand for multispectral imaging. Multispectral information, obtained by recording object waves at multiple wavelengths irradiated from multiwavelength/color light sources, helps to analyze and recognize objects, to clarify color and tissue distributions of an object, and to dramatically improve the quality of imaging. Multiwavelength digital holography has an enhanced ability for three-dimensional (3-D) wide-range shape measurements using multiwavelength phase unwrapping, due to the recording of quantitative phase information at multiple wavelengths. In phase retrieval, the multiwavelength approach is an effective instrument enabling good phase/data diversity.

Multiwavelength phase retrieval is much less studied than the standard single-wavelength formulation. By the principle of measurement, these works can be separated into two groups. In the first one, the absolute phase is estimated from phase measurements obtained in one way or another. This scenario is typical for interferometry/holography, where reference beams are applied to reveal the phase information, e.g., Refs. 14–17. Phase unwrapping algorithms for two-dimensional (2-D) images with simultaneous processing of multiple noisy complex-exponent observations have been developed based on maximum-likelihood techniques.18

The second group of techniques uses amplitudes or intensities (powers) as measurements. These formulations belong to the class of multiwavelength phase retrieval problems, e.g., Refs. 19–22. The Chinese remainder theorem provides a class of methods with a good theoretical background.23 The reformulation of these approaches for more practical scenarios with noisy data and for robust estimation leads to techniques similar to various forms of maximum likelihood.24,25

An approach to multiwavelength phase retrieval has been proposed in our recent papers.26,27 Phase retrieval in the wavelength-multiplexing (WM) setup is studied in the first of these two papers. A broadband light source radiates all wavelengths (RGB in our tests) simultaneously, and a CMOS sensor equipped with a color filter array (CFA) registers the spectral measurements. In the second paper,27 we consider a wavelength-division (WD) setup with three successive and separate RGB wavelength exposures registered by a broadband CCD sensor without a CFA. The algorithms for these two setups are quite different, in particular because the WM algorithm requires interpolation of the RGB data subsampled by the CFA for the pixels where there are no observations of some of the wavebands. The WD algorithm deals with observations obtained for all pixels for each of the RGB bands separately.

This paper is a further development of the WD setup27 with more details concerning the approach and the algorithm, as well as a simulation study performed for complex-valued objects and parameters of the experiments different from those in Ref. 26. The main results of this paper can be summarized as an extension of the conference paper27 with a demonstration of the efficiency of the proposed algorithm, which is computationally simpler, much more accurate, and faster than the algorithm for WM in Ref. 26. The experimental study of the algorithm is restricted to simulation tests.

The algorithms presented in Refs. 26 and 27, as well as in this paper, can be treated as a development of the sparse phase and amplitude reconstruction (SPAR) algorithm28 and of the maximum-likelihood absolute phase reconstruction for multiwavelength observations.18 The paper is organized as follows. The multiwavelength object and image formation modeling, as well as the Poissonian noise observations, are discussed in Sec. 2. The development of the algorithm for the WD optical setup is the topic of Sec. 3. Simulation tests and results are discussed in Sec. 4.

2.

Image Formation Model

The notation and the image formation model presented in this section follow those in our paper.26 We present these results here for completeness of the presentation and in order to clarify the relations between the problems considered in Ref. 26 and in this paper.

2.1.

Multiwavelength Object and Image Modeling

Let $h_o(x)$, $x \in \mathbb{R}^2$, be the profile of the transparent 2-D object to be reconstructed. A coherent multiwavelength light beam generates the corresponding complex-valued wavefronts $u_{o,\lambda} = b_{o,\lambda}\exp(j\varphi_{o,\lambda})$, $\lambda \in \Lambda$, where $\Lambda$ is a set of wavelengths and $\varphi_{o,\lambda} = \frac{2\pi}{\lambda} h_o (n_\lambda - n_{env})$ are the phase delays corresponding to $h_o$; here $n_\lambda$ is the refractive index of the object and $n_{env}$ is the refractive index of the environment, with $n_{env} = 1$ for air, which is assumed in our tests.

In lensless phase retrieval, the problem at hand is the reconstruction of the profile $h_o(x)$ from noisy observations of diffraction patterns registered at some distance from the object. For a high magnitude variation of $h_o$, the corresponding absolute phases $\varphi_{o,\lambda}$ take values beyond the basic phase interval $[-\pi, \pi)$, and then only wrapped phases can be obtained from $u_{o,\lambda}$ for each separate $\lambda$.

In what follows, we apply the dimensionless relative frequencies $\mu_\lambda = \frac{\lambda_0 (n_\lambda - 1)}{\lambda (n_{\lambda_0} - 1)}$, which allow the object wavefronts to be written as

Eq. (1)

$$u_{o,\lambda} = b_{o,\lambda}\exp(j\mu_\lambda\varphi_o), \quad \lambda \in \Lambda,$$
where $u_{o,\lambda}(x) \in \mathbb{C}$ and $\varphi_o(x) \in \mathbb{R}$ for $x \in \mathbb{R}^2$, $\varphi_o = \frac{2\pi}{\lambda_0} h_o (n_{\lambda_0} - 1)$ is the object absolute phase, and $\Lambda = \{\lambda_0, \lambda_1, \ldots, \lambda_{n_\lambda - 1}\}$ is the set of wavelengths.

Here $\lambda_0 \in \Lambda$ is the reference wavelength and $\varphi_o$ is the absolute phase corresponding to this wavelength. The parameter $\mu_\lambda$ establishes a link between the absolute phase $\varphi_o$ and the wrapped phase $\psi_{o,\lambda}$ of $u_{o,\lambda}$, which can be measured at the $\lambda$-channel. The wrapped phase is related to the true absolute phase $\varphi_o$ as $\mu_\lambda \varphi_o = \psi_{o,\lambda} + 2\pi k_\lambda$, where $k_\lambda$ is an integer and $\psi_{o,\lambda} \in [-\pi, \pi)$. The link between the absolute and wrapped phases is conventionally established by the wrapping operator $\mathcal{W}(\cdot)$ as follows:

Eq. (2)

$$\psi_{o,\lambda} = \mathcal{W}(\mu_\lambda \varphi_o) = \mathrm{mod}(\mu_\lambda \varphi_o + \pi, 2\pi) - \pi.$$
$\mathcal{W}(\cdot)$ decomposes the absolute phase $\mu_\lambda \varphi_o$ into two parts: the fractional part $\psi_{o,\lambda}$ and the integer part defined by $2\pi k_\lambda$.
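
To make the notation above concrete, the following Python sketch computes the relative frequencies $\mu_\lambda$, forms the multiwavelength wavefronts of Eq. (1), and applies the wrapping operator of Eq. (2). The RGB wavelengths and refractive indices anticipate the simulation setup of Sec. 4; the Gaussian profile $h_o$ and all variable names are illustrative placeholders, not part of the original formulation.

```python
# Illustrative sketch of Eqs. (1)-(2); wavelengths/indices as in Sec. 4, profile h_o hypothetical.
import numpy as np

wavelengths = np.array([417e-9, 532e-9, 633e-9])   # the set Lambda (reference: first entry)
n_obj = np.array([1.528, 1.519, 1.515])            # refractive indices of the object
lam0, n0 = wavelengths[0], n_obj[0]

# relative frequencies mu_lambda = lambda_0 (n_lambda - 1) / [lambda (n_lambda0 - 1)]
mu = lam0 * (n_obj - 1.0) / (wavelengths * (n0 - 1.0))   # -> approximately [1.0, 0.7705, 0.6425]

# hypothetical smooth profile h_o giving a 30*pi peak absolute phase at the reference wavelength
xx, yy = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
h_o = 30 * np.pi * lam0 / (2 * np.pi * (n0 - 1.0)) * np.exp(-(xx**2 + yy**2) / 0.2)
phi_o = 2 * np.pi / lam0 * h_o * (n0 - 1.0)              # absolute phase phi_o

def wrap(phase):
    """Wrapping operator W(.) of Eq. (2): mod(phase + pi, 2*pi) - pi."""
    return np.mod(phase + np.pi, 2 * np.pi) - np.pi

b_o = np.ones_like(phi_o)                                # unit amplitudes, as in the tests
u_o = [b_o * np.exp(1j * m * phi_o) for m in mu]         # object wavefronts, Eq. (1)
psi_o = [wrap(m * phi_o) for m in mu]                    # wrapped phases per channel
```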

The image and observation modeling are defined as

Eq. (3)

$$u_{s,\lambda} = P_{s,\lambda,d}\{u_{o,\lambda}\},$$

Eq. (4)

$$y_{s,\lambda} = |u_{s,\lambda}|^2,$$

Eq. (5)

$$z_{s,\lambda} = \mathcal{G}\{y_{s,\lambda}\}, \quad s = 1, \ldots, S, \; \lambda \in \Lambda,$$
where $u_{s,\lambda}$ is the wavefront propagated to the sensor plane; $P_{s,\lambda,d}\{\cdot\}$ is the image (diffraction pattern) formation operator, i.e., the propagation operator from the object to the sensor plane, including in particular the random phase masks used for wavefront modulation; $d$ is the propagation distance; $y_{s,\lambda}$ is the intensity of the wavefront at the sensor plane; $z_{s,\lambda}$ are the noisy observations defined by the generator $\mathcal{G}\{\cdot\}$ of random variables corresponding to $y_{s,\lambda}$; and $S$ is the number of experiments.

The considered multiwavelength phase retrieval problem consists in the reconstruction of $\varphi_o$ and $b_{o,\lambda}$ from the observations $z_{s,\lambda}$ provided that $\mu_\lambda$, $P_{s,\lambda,d}\{\cdot\}$, and $\mathcal{G}\{\cdot\}$ are known. If the phases $\mu_\lambda \varphi_o \in [-\pi, \pi)$, the problem is trivial, as estimates of $\mu_\lambda \varphi_o$ and $b_{o,\lambda}$ can be found by processing the data separately for each $\lambda$ with any phase retrieval algorithm applicable to complex-valued objects, in particular the SPAR algorithm.28 However, if the phases $\mu_\lambda \varphi_o$ go beyond the range $[-\pi, \pi)$, the problem becomes nontrivial and challenging, since phase unwrapping must be embedded in the multiwavelength phase retrieval.

We focus on the reconstruction of the absolute phase $\varphi_o$ from a set of single-wavelength experiments produced in a successive manner and registered by a wide-band gray-scale CCD sensor, instead of using all wavelengths simultaneously with a CFA and demosaicing processing, as is done in the WM algorithm derived in Ref. 26.

The proposed WD algorithm demonstrates faster convergence than the WM algorithm,26 but it makes the experimental realization more complex, as it requires separate experiments and measurements for each wavelength. A single wavelength is used for each experiment, and the obtained observations $\{z_{s,\lambda}\}$ are used jointly for the reconstruction of $\varphi_o(x)$ and $h_o(x)$.

Note also that the proposed approach assumes that the refractive indices of the object are known for the considered set of wavelengths. Only in this case can the problem of absolute phase retrieval and contouring be resolved.

2.2.

Noisy Observation

For noise simulation, we use independent Poisson random variables, since the measurement process in optics amounts to counting the photons hitting the sensor's elements (e.g., Ref. 30). The presented form of the noisy observations corresponds to Refs. 26 and 27.

The probability that a Poissonian random variable $z_{s,\lambda}(x)$ with mean value $y_{s,\lambda}(x)\chi$ takes a given non-negative integer value $k$ is given as

Eq. (6)

$$p[z_{s,\lambda}(x) = k] = \frac{\exp[-y_{s,\lambda}(x)\chi]\,[y_{s,\lambda}(x)\chi]^k}{k!},$$
where $y_{s,\lambda}(x)$ is the intensity of the wavefront at the pixel $x$. The parameter $\chi > 0$ in Eq. (6) can be interpreted as an exposure time and as the sensitivity of the sensor with respect to the input radiation. Defining the observation signal-to-noise ratio (SNR) as the ratio between the square of the mean and the variance of $z_{s,\lambda}(x)$, we have $\mathrm{SNR} = E^2\{z_{s,\lambda}(x)\}/\mathrm{var}\{z_{s,\lambda}(x)\} = y_{s,\lambda}(x)\chi$. Thus, the relative noisiness of the observations becomes stronger as $\chi \to 0$ ($\mathrm{SNR} \to 0$) and approaches zero as $\chi \to \infty$ ($\mathrm{SNR} \to \infty$). The latter case corresponds to the noiseless scenario: $z_{s,\lambda}(x)/\chi \to y_{s,\lambda}(x)$ with probability equal to 1.

Another useful noise characteristic is the mean number of photons per pixel. It is defined as $N_{photon,\lambda} = \sum_x z_{s,\lambda}(x)/N_{sensor}$, where $N_{sensor}$ is the number of sensor pixels. As in the case of SNR, smaller values of $\chi$ lead to smaller $N_{photon}$, i.e., to noisier observations $z_{s,\lambda}(x)$.
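
A minimal sketch of the observation model of Eqs. (3)–(6): the sensor-plane intensity is scaled by $\chi$ and replaced by Poisson counts, from which the per-pixel SNR and the mean photon number can be read off. The propagation operator is replaced here by an orthonormal FFT purely for illustration; the actual operator used in the paper is described in Sec. 4.1.

```python
# Sketch of Eqs. (3)-(6): propagation (placeholder FFT), intensity, Poissonian counts.
import numpy as np

rng = np.random.default_rng(0)

def observe(u_o, chi):
    u_s = np.fft.fft2(u_o, norm="ortho")   # stand-in for P_{s,lambda,d}{u_o}, Eq. (3)
    y_s = np.abs(u_s) ** 2                 # intensity at the sensor, Eq. (4)
    z_s = rng.poisson(chi * y_s)           # noisy photon counts, Eqs. (5)-(6)
    return y_s, z_s

u_o = np.exp(1j * rng.uniform(-np.pi, np.pi, (64, 64)))  # a hypothetical unit-amplitude object
chi = 20.0                                               # exposure/sensitivity parameter
y_s, z_s = observe(u_o, chi)

snr_per_pixel = chi * y_s                  # SNR = E^2{z}/var{z} = y*chi
n_photon = z_s.sum() / z_s.size            # mean number of photons per sensor pixel
```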

3.

Development of Algorithm

We consider the problem of absolute phase retrieval as an estimation of the object wavefront $u_o \in \mathbb{C}^n$ from the noisy observations $\{z_{s,\lambda}\}$. The problem is challenging mainly due to the periodic nature of the likelihood function with respect to the phase $\varphi_o$ and the nonlinearity of the observation model.

Following Ref. 26 and using the same notation, the maximum likelihood approach leads to the criterion function

Eq. (7)

$$L(u_{s,\lambda}, u_{o,\lambda}, b_{o,\lambda}, \varphi_o, \delta_\lambda) = \sum_{\lambda,s,x} l[z_{s,\lambda}(x), |u_{s,\lambda}(x)|^2] + \frac{1}{\gamma_1}\sum_{\lambda,s} \big\| u_{s,\lambda} - P_{s,\lambda,d}\{u_{o,\lambda}\} \big\|_2^2 + \frac{1}{\gamma_2}\sum_{\lambda} \big\| b_{o,\lambda}\exp[j(\mu_\lambda \varphi_o + \delta_\lambda)] - u_{o,\lambda} \big\|_2^2,$$
where $\gamma_1, \gamma_2 > 0$ are regularization parameters and $\|\cdot\|_2^2$ stands for the squared Euclidean norm.

Here we introduce the additive phase errors $\delta_\lambda$, which model the spatially invariant phase offsets appearing for each wavelength due to the invariance of the phase retrieval problem with respect to constant phase shifts. The criterion in Eq. (7) is minimized with respect to $u_{s,\lambda}$, $u_{o,\lambda}$, $b_{o,\lambda}$, and $\varphi_o$ as well as with respect to the phase shifts $\delta_\lambda$.
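
For illustration, the sketch below evaluates the criterion of Eq. (7) for a single experiment $s$ and wavelength $\lambda$, taking $l[z, |u_s|^2]$ as the Poissonian negative log-likelihood up to a term independent of $u_s$. The propagation operator is passed in as a callable; all names are assumptions of this sketch rather than the paper's code.

```python
# Sketch: value of the criterion of Eq. (7) for one experiment s and one wavelength lambda.
import numpy as np

def criterion_term(z, u_s, u_o, b_o, phi_o, delta, mu, chi, gamma1, gamma2, P):
    """P is the propagation operator (a callable); l[z,|u_s|^2] is the Poissonian
    negative log-likelihood up to a constant in u_s."""
    y = np.abs(u_s) ** 2
    fidelity = np.sum(chi * y - z * np.log(chi * y + 1e-12))
    coupling = np.sum(np.abs(u_s - P(u_o)) ** 2) / gamma1
    model = b_o * np.exp(1j * (mu * phi_o + delta))
    prior = np.sum(np.abs(model - u_o) ** 2) / gamma2
    return fidelity + coupling + prior
# The full criterion of Eq. (7) sums such terms over all s and lambda.
```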

The algorithm is composed of the following steps.

  • 1. Minimization with respect to $u_{s,\lambda}$ concerns the first two summands in $L$. In contrast with the minimization for WM in Ref. 26, where the observations are recorded simultaneously for all wavelengths, this step is simplified due to separate processing for each wavelength. The problem is additive over $\lambda$ and $x$, and, respectively, the solution can be obtained separately for each $\lambda$ and each $x$. The corresponding analytical solution is obtained in Ref. 28. This solution defines $u_{s,\lambda}$ as a function of the noisy observation $z_{s,\lambda}$ and the projection $P_{s,\lambda,d}\{u_{o,\lambda}\}$ of $u_{o,\lambda}$ onto the sensor plane. The amplitudes of $u_{s,\lambda}$ are updated according to the given observations, and the phases are preserved.

  • 2. Minimization with respect to $u_{o,\lambda}$ concerns the last two summands of the criterion. It is a quadratic problem with a solution of the form

    Eq. (8)

    $$\hat{u}_{o,\lambda} = \Big(\sum_s P_{s,\lambda,d}^{*} P_{s,\lambda,d} + I\,\gamma_1/\gamma_2\Big)^{-1} \Big[\sum_s P_{s,\lambda,d}^{*} u_{s,\lambda} + (\gamma_1/\gamma_2)\, b_{o,\lambda}\exp(j\mu_\lambda\varphi_o)\Big],$$
    where $I$ stands for the identity operator.

    Here the operator $P_{s,\lambda,d}^{*}$ is the Hermitian adjoint of $P_{s,\lambda,d}$. If the $P_{s,\lambda,d}$ are orthonormal such that $\sum_s P_{s,\lambda,d}^{*} P_{s,\lambda,d}$ is the identity operator, $\sum_s P_{s,\lambda,d}^{*} P_{s,\lambda,d} = I$, then the solution is simplified to the form

    Eq. (9)

    $$\hat{u}_{o,\lambda} = \frac{\sum_s P_{s,\lambda,d}^{*}\{u_{s,\lambda}\} + (\gamma_1/\gamma_2)\, b_{o,\lambda}\exp(j\mu_\lambda\varphi_o)}{1 + \gamma_1/\gamma_2}.$$

  • 3. Minimization with respect to $b_{o,\lambda}$, $\varphi_o$, and $\delta_\lambda$ (the last summand in the criterion) is the nonlinear least-squares fitting of the wavelength-dependent $u_{o,\lambda}$ by the object phase $\varphi_o$ invariant with respect to $\lambda$, the amplitudes $b_{o,\lambda}$, and the spatially invariant phase shifts $\delta_\lambda$. The criterion for this problem can be given in the equivalent form

    Eq. (10)

    $$L_1(b_{o,\lambda}, \varphi_o, \delta_\lambda) = \sum_\lambda \big\| b_{o,\lambda}\exp[j(\mu_\lambda \varphi_o)] - |u_{o,\lambda}|\exp[j(\psi_{o,\lambda} - \delta_\lambda)] \big\|_2^2,$$
    where $\psi_{o,\lambda} = \mathrm{angle}(u_{o,\lambda})$, i.e., the wrapped phase of $u_{o,\lambda}$. In this representation, the phase shifts $\delta_\lambda$ are attributed to the wrapped phases $\psi_{o,\lambda}$ in order to stress that the complex exponents $\exp[j(\psi_{o,\lambda})]$ can be out of phase with $\exp[j(\mu_\lambda \varphi_o)]$; the variables $\delta_\lambda$ serve to compensate this phase difference and to make the phase modeling of the object by $\exp[j(\mu_\lambda \varphi_o)]$ correspond to the complex exponent $\exp[j(\psi_{o,\lambda})]$.

Minimization of $L_1$ with respect to $\varphi_o$ and the phase shifts $\delta_\lambda$ is implemented as a minimization over grids of these variables. The derivation of this algorithm and its details can be found in Ref. 26, where it was done for the WM optical setup. This algorithm was named absolute phase reconstruction, and it is shown in step 4 of Table 1 with the abbreviation APR.
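
A simplified sketch of the APR step (step 4 of Table 1): the absolute phase is estimated pixelwise by a grid search minimizing the multiwavelength misfit of Eq. (10). For brevity, the amplitudes $b_{o,\lambda}$ are approximated by $|u_{o,\lambda}|$ and the phase shifts $\delta_\lambda$ are fixed to zero; the full algorithm of Ref. 26 also searches the $\delta_\lambda$ on a grid. All names and grid parameters are assumptions of this sketch.

```python
# Simplified pixelwise grid search for the absolute phase (APR, step 4 of Table 1).
import numpy as np

def apr_grid_search(u_o_list, mu, phi_grid):
    """u_o_list: complex wavefront estimates u_{o,lambda}, one 2-D array per wavelength.
    Returns the absolute phase minimizing the multiwavelength misfit of Eq. (10),
    with b_{o,lambda} ~ |u_{o,lambda}| and delta_lambda = 0 (simplifications)."""
    amps = [np.abs(u) for u in u_o_list]
    psis = [np.angle(u) for u in u_o_list]
    shape = u_o_list[0].shape
    best_cost = np.full(shape, np.inf)
    best_phi = np.zeros(shape)
    for phi in phi_grid:                       # candidate absolute-phase values
        cost = np.zeros(shape)
        for a, psi, m in zip(amps, psis, mu):
            cost += np.abs(a * np.exp(1j * m * phi) - a * np.exp(1j * psi)) ** 2
        better = cost < best_cost
        best_phi[better] = phi
        best_cost[better] = cost[better]
    return best_phi

# e.g., phi_hat = apr_grid_search([u1, u2, u3], mu=[1.0, 0.7705, 0.6425],
#                                 phi_grid=np.linspace(0, 30 * np.pi, 4096))
```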

Table 1

WD phase retrieval algorithm.

Input: $\{z_{s,\lambda}\}$, $s = 1, \ldots, S$, $\lambda \in \Lambda$
Initialization: $u_{o,\lambda}^{1}$, $\lambda \in \Lambda$
Main iterations: $t = 1, 2, \ldots, T$
1. Forward propagation: $u_{s,\lambda}^{t} = P_{s}\{u_{o,\lambda}^{t}\}$, $s = 1, \ldots, S$, $\lambda \in \Lambda$
2. Noise suppression and update of $u_{s,\lambda}^{t}$: $u_{s,\lambda}^{t+1/2} = \phi_1(u_{s,\lambda}^{t}, z_{s,\lambda})$
3. Backward propagation and filtering: $u_{o,\lambda}^{t+1/2} = \phi_2(u_{s,\lambda}^{t+1/2})$, $u_{o,\lambda}^{t+1} = \mathrm{BM3D}(u_{o,\lambda}^{t+1/2})$
4. Absolute phase retrieval and filtering: $\varphi_o^{t+1/2} = \mathrm{APR}(u_{o,\lambda}^{t+1})$, $\varphi_o^{t+1} = \mathrm{BM3D}(\varphi_o^{t+1/2})$
5. Object wavefront update: $u_{o,\lambda}^{t+1} = |u_{o,\lambda}^{t+1}|\exp(j\varphi_o^{t+1}\mu_\lambda)$, $\lambda \in \Lambda$
Output: $\varphi_o^{T+1}$, $u_{o,\lambda}^{T+1}$

3.1.

Algorithm’s Implementation

Using the above solutions, an iterative algorithm of the structure shown in Table 1 is developed. The initialization with the complex-valued $u_{o,\lambda}^{1}$ is obtained from the observations $\{z_{s,\lambda}\}$ by the SPAR algorithm28 separately for each wavelength. The main iterations start from the forward propagation (step 1), followed by the amplitude update for $u_{s,\lambda}^{t}$ in step 2. The operator $\phi_1$ derived in Ref. 28 is defined as $u_{s,\lambda}^{t+1/2} = \phi_1(u_{s,\lambda}^{t}, z_{s,\lambda})$, which means $u_{s,\lambda}^{t+1/2} = w \cdot u_{s,\lambda}^{t}/|u_{s,\lambda}^{t}|$. Here the ratio $u_{s,\lambda}^{t}/|u_{s,\lambda}^{t}|$ indicates that the variables $u_{s,\lambda}^{t}$ and $u_{s,\lambda}^{t+1/2}$ have identical phases. The amplitude $w$ is calculated as

Eq. (11)

$$w = \frac{|u_{s,\lambda}^{t}| + \sqrt{|u_{s,\lambda}^{t}|^2 + 4 z_{s,\lambda}\gamma_1(1 + \gamma_1\chi)}}{2(1 + \gamma_1\chi)}.$$
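
A direct transcription of the operator $\phi_1$ of step 2, assuming Eq. (11) as reconstructed above: the phase of $u_{s,\lambda}^{t}$ is preserved and its amplitude is replaced by $w$. The function and variable names are ours.

```python
# Operator phi_1 of step 2: keep the phase of u_s, replace its amplitude by w of Eq. (11).
import numpy as np

def phi1(u_s, z_s, gamma1, chi, eps=1e-12):
    b = np.abs(u_s)
    w = (b + np.sqrt(b**2 + 4.0 * z_s * gamma1 * (1.0 + gamma1 * chi))) \
        / (2.0 * (1.0 + gamma1 * chi))
    return w * u_s / (b + eps)   # same phase as u_s, updated amplitude
```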

The backpropagation is realized in step 3, and the operator $\phi_2$ is defined by Eq. (9). The absolute phase reconstruction from the wrapped phases of $u_{o,\lambda}^{t+1}$ is produced in step 4 by the APR algorithm.26 The obtained amplitude and phase are used to update $u_{o,\lambda}^{t+1}$ in step 5.

Steps 3 and 4 are completed by block-matching 3-D (BM3D) filtering.29 In step 3, it is the filtering of the complex-valued $u_{o,\lambda}^{t+1/2}$, produced separately for the wrapped phase and the amplitude of $u_{o,\lambda}^{t+1/2}$. In step 4, this filtering is applied to the absolute phase $\varphi_o^{t+1/2}$. These BM3D filters are derived from groupwise sparsity priors for the filtered variables. This technique is based on the Nash equilibrium formulation for phase retrieval instead of the more conventional constrained optimization with a single criterion function, as in Eq. (7). We do not show this derivation here, as it is quite similar to that developed in Ref. 28. The sparsity rationale assumes that there is a transform of the image/signal such that it can be represented with a small number of transform coefficients or, in slightly different terms, with a small number of basis functions.31 This idea is confirmed and supported by the great success of many sparsity-based techniques developed for image/signal processing problems. Within the sparsity framework, a family of BM3D algorithms has been developed in which both the grouping of similar patches and the transform design are taken into consideration. This type of algorithm, proposed initially for image denoising29 and modified for various problems, demonstrates state-of-the-art performance.32 The details of BM3D as an advanced image filter can be found in Ref. 29.

In step 3 of the proposed algorithm, $u_{o,\lambda}^{t+1} = \mathrm{BM3D}(u_{o,\lambda}^{t+1/2})$, BM3D is applied to the complex-valued variables $u_{o,\lambda}^{t+1/2}$. It is implemented in this paper as independent filtering of the amplitude and the wrapped phase:

Eq. (12)

$$|u_{o,\lambda}^{t+1}| = \mathrm{BM3D}_{ampl}(|u_{o,\lambda}^{t+1/2}|), \qquad \psi_{o,\lambda}^{t+1} = \mathrm{BM3D}_{phase}(\psi_{o,\lambda}^{t+1/2}),$$
where $\psi_{o,\lambda}^{t+1/2} = \mathrm{angle}(u_{o,\lambda}^{t+1/2})$; thus, the updated complex-valued $u_{o,\lambda}^{t+1}$ is calculated as $u_{o,\lambda}^{t+1} = |u_{o,\lambda}^{t+1}|\exp(j\psi_{o,\lambda}^{t+1})$.
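
A sketch of the complex-domain filtering of Eq. (12). A simple Gaussian smoother stands in for the BM3D amplitude and phase filters used in the paper, purely as a placeholder; in particular, it ignores the wrap discontinuities that BM3D handles in the actual algorithm.

```python
# Placeholder for the filtering of Eq. (12): amplitude and wrapped phase denoised separately.
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise2d(img, sigma=1.0):
    return gaussian_filter(img, sigma)   # stand-in for BM3D_ampl / BM3D_phase

def filter_wavefront(u_o):
    amp = denoise2d(np.abs(u_o))         # BM3D_ampl(|u_o|) in Eq. (12)
    psi = denoise2d(np.angle(u_o))       # BM3D_phase(psi_o) in Eq. (12)
    return amp * np.exp(1j * psi)        # updated complex-valued wavefront
```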

In step 4 of the proposed algorithm, $\varphi_o^{t+1} = \mathrm{BM3D}(\varphi_o^{t+1/2})$, BM3D is applied for filtering of the real-valued variable $\varphi_o$. In the presented experiments, the parameters of the algorithm are fixed for all tests: $\gamma_1 = 1/\chi$, where $\chi$ is the parameter of the Poissonian distribution, and $\gamma_1/\gamma_2 = 0.2$. The parameters of the BM3D filters can be found in Ref. 28.

The algorithm presented in Table 1 differs from the WM algorithm26 in steps 2 and 3. The main iterations in Ref. 26 also start from the forward propagation, followed by the amplitude update and by the interpolation for the CFA sensor pixels where there are no observations (step 2). In the WD algorithm, there is no such interpolation. In step 3 of Ref. 26, the wavefront pixels not given in the observations preserve the values obtained in step 2, which are used for the backpropagation. This manipulation with the wavefront reconstruction for the RGB pixels not given in the observations forms the principal difference from the WD algorithm presented here in Table 1. The other steps of the WM and WD algorithms, concerning the filtering and the forward and backward propagation, are identical.

Comparing the presented WD algorithm developed for the WD setup with that developed in Ref. 26 for the WM setup, we may note that faster convergence and better accuracy are obtained for WD, but it requires a larger number of observations, because each experiment gives the data for a single wavelength only, while in WM the observations for all wavelengths are obtained simultaneously.

4.

Simulation Tests

4.1.

Proposed Setup

In the simulation, we model a lensless optical system (Fig. 1), where a thin transparent phase object is illuminated by three monochromatic (RGB) coherent light beams from lasers or LEDs. The wavelengths are $\Lambda = [417, 532, 633]$ nm, with the corresponding refractive indices [1.528, 1.519, 1.515] taken for BK7 optical glass. With the reference wavelength $\lambda_0 = 417$ nm, the relative frequencies take the values $\mu_\lambda = [1, 0.7705, 0.6425]$. The pixel sizes of the CMOS camera and the SLM are 1.4 and 5.6 μm, respectively. The distance $d$ between the object and the CMOS camera is equal to 5 mm.

Fig. 1

Optical setup. R, G, B lasers: red, green, and blue light sources; L1, L2: lenses; SLM: spatial light modulator; CMOS: registration camera.


The free propagation of the wavefronts to the sensor is given by the Rayleigh–Sommerfeld model with the transfer function defined through the angular spectrum (AS).33 For proper numerical AS propagation without aliasing effects, zero-padding of the object must be applied. Its size is obtained from the inequality34

Eq. (13)

$$N \geq \frac{\lambda d}{\Delta x^2},$$
which binds the propagation distance $d$ and the number of pixels $N$ in one dimension of the zero-padded object for given values of the sensor pixel size $\Delta x$ and $\lambda$. For the distance $d = 5$ mm, $\Delta x = 1.4$ μm, $\lambda = 633$ nm, and an object of $100 \times 100$ pixels, the zero-padded object has a support of $1700 \times 1700$ pixels.

The intensities of the light beams registered on the sensor are calculated as $z_{s,\lambda} = \mathcal{G}\{|AS_{\lambda,d}\{M_s \circ u_{o,\lambda}\}|^2\}$, $s = 1, \ldots, S$, $\lambda \in \Lambda$. Here $AS_{\lambda,d}$ denotes the AS propagation operator, $M_s$ are the modulation phase masks inserted before the object and pixelated as the object, and "$\circ$" stands for the pixelwise multiplication of the object and the phase masks. These phase masks enable strong diffraction of the wavefield and are introduced in order to achieve the phase diversity sufficient for the reconstruction of the complex-valued object from intensity measurements. As described in Ref. 33, we use Gaussian random phase masks. Thus, in our simulations the propagation operator $P_{s,\lambda,d}$ in Eq. (3) is implemented as a combination of the AS propagation $AS_{\lambda,d}$ and the modulation phase mask $M_s$.
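
The simulated forward model can be sketched as follows: a Gaussian random phase mask modulates the object, the field is zero-padded according to Eq. (13), propagated by the AS transfer function over the distance $d$, and recorded with Poissonian noise. The mask statistics, the padding placement, and the helper names are simplifications assumed for this sketch and do not reproduce the paper's implementation.

```python
# Sketch of the simulated forward model: phase mask, zero-padding per Eq. (13),
# angular-spectrum propagation, Poissonian recording.
import numpy as np

def as_propagate(u, wavelength, d, dx):
    """Angular-spectrum propagation of the field u over distance d (pixel pitch dx)."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * d), 0.0)      # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(u) * H)

def record_pattern(u_o, wavelength, d, dx, chi, rng):
    n_pad = int(np.ceil(wavelength * d / dx**2))          # Eq. (13): N >= lambda*d/dx^2
    mask = np.exp(1j * 0.5 * np.pi * rng.standard_normal(u_o.shape))  # Gaussian phase mask M_s
    pad = max(n_pad - u_o.shape[0], 0)
    u_mod = np.pad(mask * u_o, (0, pad))                  # zero-padded, modulated object
    u_s = as_propagate(u_mod, wavelength, d, dx)
    return rng.poisson(chi * np.abs(u_s) ** 2)            # noisy diffraction pattern z_{s,lambda}

# For d = 5e-3 m, dx = 1.4e-6 m, wavelength = 633e-9 m, n_pad is about 1615 (~1700 used in the paper).
```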

4.2.

Reconstruction Results

The illustrative reconstructions are presented for two phase objects with invariant amplitudes equal to 1 and the following phases: a Gaussian distribution ($100 \times 100$) and a US Air Force (USAF) resolution test target ($64 \times 64$). The absolute and wrapped phases of these test objects are shown in Fig. 2. The Gaussian and USAF phases are very different: the first has a smooth continuous shape, while the second is discontinuous and binary. Both phases are taken with a high peak value equal to $30\pi$ rad, which corresponds to about 30 reference wavelengths $\lambda_0$ in the variations of the profile $h_o$.

Fig. 2

Wrapped and absolute phases of the investigated objects, Gauss and USAF, for the reference wavelength $\lambda_0 = 417$ nm.


As a result of this high peak value of the absolute phases, the corresponding wrapped phases are very complex, with overlapping fringes for the Gaussian phase and a lack of height information in the USAF test phase. Due to this complexity, unwrapping is not possible using single-frequency wrapped phases only. We show that the proposed multiwavelength WD algorithm is quite successful and is able to reconstruct the absolute phase even from very noisy data.

We demonstrate the performance of the WD algorithm for very difficult scenarios with very noisy Poissonian observations. The noisiness of the observations is characterized by the SNR and by the mean number of photons per sensor pixel, $N_{photon}$.

The accuracy of the object reconstruction is characterized by the relative root-mean-square error (RRMSE) criterion, calculated as the RMSE divided by the root of the mean square power of the signal:

Eq. (14)

$$\mathrm{RRMSE}_\varphi = \frac{\|\hat{\varphi}_{est} - \varphi_{true}\|_2}{\|\varphi_{true}\|_2}.$$
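
In code, the RRMSE of Eq. (14) is simply the ratio of two Euclidean norms; the helper below is a trivial sketch.

```python
import numpy as np

def rrmse(phi_est, phi_true):
    # Eq. (14): ||phi_est - phi_true||_2 / ||phi_true||_2 (Frobenius norm for 2-D arrays)
    return np.linalg.norm(phi_est - phi_true) / np.linalg.norm(phi_true)
```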

Figure 3 shows the performance of the WD algorithm with respect to different noise levels characterized by the parameter $\chi$. The RRMSE curves for the Gaussian and USAF phases demonstrate similar behavior and decrease for growing $\chi$, but the RRMSE curve for the USAF object goes down more sharply and takes values smaller than 0.1 at $\chi = 20$, while the RRMSE curve for the Gaussian phase reaches a close value only at $\chi = 50$. Nevertheless, in both cases, the reconstructions are nearly perfect even for very noisy observed data with SNR values as low as 3.8 and 6.5 dB and very small photon numbers $N_{photon} = 0.75$ and 1.87, respectively.

Fig. 3

RRMSE and SNR as functions of the parameter $\chi$ of the Poissonian distribution: the solid-triangle and dashed-diamond curves (blue in color images) show the RRMSE for the Gaussian and USAF objects, respectively (left y-axis); the solid-circle curve (orange) shows the SNR of the observations (right y-axis).


RRMSEs for the Gaussian phase are shown in Fig. 4 as functions of the number of experiments $S$ and the SNR. The number of experiments $S$ is in the range of 1 to 10, and the SNR ranges from 1 dB (noisiest case) to 25 dB (noiseless case). The nearly horizontal (dark blue) areas correspond to high-accuracy reconstructions with small values of RRMSE; for the other areas the RRMSE values are much higher and the accuracy is not as good. For 50 iterations (left image), high accuracy can be achieved starting from $S = 8$ even for very noisy data with $\mathrm{SNR} = 4$ dB. For numbers of experiments smaller than $S = 4$, the algorithm fails to converge even for low noise levels due to insufficient diversity of the observed diffraction patterns. The smallest number of experiments that can be used for reconstruction is $S = 4$. However, even for $S = 4$, good results can be obtained only after 100 iterations (right image). It follows from Fig. 4 that some improvement in accuracy can be achieved at the price of a larger number of experiments $S$ and a larger number of iterations.

Fig. 4

Gaussian object, RRMSE as a function of SNR and the number of experiments S: (a) after 50 and (b) after 100 iterations.


In comparison, with the WM algorithm26 such small values of RRMSE can be obtained only after a larger number of iterations, about $T = 300$, mainly because 75%, 50%, and 75% of the CFA pixels lack intensity measurements for the red, green, and blue wavelengths, respectively. This information is recovered by the WM algorithm but at the price of a larger number of iterations.

In what follows, we provide the images of the reconstructed absolute phase obtained for S=4 and the iteration number T=100.

Figures 5 and 6 show the absolute phase reconstructions (3-D/2-D images) obtained by the WD algorithm and by the SPAR algorithm reconstructing the wrapped phases for the separate wavelengths, followed by phase unwrapping with the PUMA algorithm.18 The conditions of the experiments are: $\mathrm{SNR} = 6.5$ dB, $N_{photon} = 1.87$ for the Gaussian object and $\mathrm{SNR} = 3.8$ dB, $N_{photon} = 0.75$ for the USAF object. The WD algorithm demonstrates a strong ability to reconstruct the absolute phases, while the single-wavelength-based approach completely fails. The accuracy of the WD reconstruction is very high despite the high level of noise.

Fig. 5

Gaussian phase reconstructions RRMSE=0.0086 for SNR=6.5 dB (Nphoton=1.87), (a) 3-D surfaces and (b) 2-D absolute phases. From left to right: the WD algorithm, and the single wavelength reconstructions for λ1, λ2, and λ3, respectively.


Fig. 6

USAF phase reconstructions RRMSE=0.030 for SNR=3.8 dB (Nphoton=0.75), (a) 3-D surfaces and (b) 2-D absolute phases. From left to right: the WD algorithm, and the single wavelength reconstructions for λ1, λ2, and λ3, respectively.


The wrapped phase pattern for such a high peak value of the object phases is very complex and irregular (see Fig. 2), so that it is not possible to unwrap it with modern 2-D unwrapping algorithms, but the proposed algorithm is able to resolve the problem even in such a complex case. It is an especially challenging task to reconstruct objects such as USAF with large differences between adjacent pixels exceeding $2\pi$, but the WD algorithm successfully reconstructs both objects with high reconstruction quality.

For the simulations, we use MATLAB R2016b on a computer with 32 GB of RAM and a 3.40-GHz Intel® Core™ i7-3770 CPU. The computational complexity of the algorithm is characterized by the processing time. For one iteration, $S = 4$, and $100 \times 100$ pixel images zero-padded to $1700 \times 1700$, this time equals 12 s.

5.

Conclusion

Multiwavelength absolute phase retrieval and surface contouring from noisy intensity observations are considered. The observations are recorded by a broadband gray-scale sensor in a successive manner for each wavelength in the WD optical setup. The maximum likelihood criterion used in the developed variational derivation of the phase retrieval algorithm defines the general intention to reach statistically optimal estimates. Phase retrieval is an ill-posed inverse problem where the observation noise is amplified and transferred to the phase and amplitude as variables of the optimization. Sparse modeling enables a regularization of this inverse problem and efficient suppression of these random errors by BM3D filtering of the phase and amplitude. The efficiency of the developed algorithm is demonstrated by simulation tests for the coded-diffraction-pattern WD scenario.

Disclosures

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Acknowledgments

This work was supported by the Academy of Finland, Project No. 287150, 2015–2019; the Russian Ministry of Education and Science (project within the state mission for institutions of higher education, Agreement No. 3.1893.2017/4.6); and Horizon 2020 TWINN-2015, Grant No. 687328 – HOLO.

References

1. P. F. Almoro et al., "Enhanced wavefront reconstruction by random phase modulation with a phase diffuser," Opt. Lasers Eng. 49(2), 252–257 (2011). https://doi.org/10.1016/j.optlaseng.2010.09.012
2. E. Candes, X. Li, and M. Soltanolkotabi, "Phase retrieval from coded diffraction patterns," Appl. Comput. Harmon. Anal. 39(2), 277–299 (2013). https://doi.org/10.1016/j.acha.2014.09.004
3. P. F. Almoro et al., "Enhanced intensity variation for multiple-plane phase retrieval using a spatial light modulator as a convenient tunable diffuser," Opt. Lett. 41(10), 2161–2164 (2016). https://doi.org/10.1364/OL.41.002161
4. L. Camacho et al., "Quantitative phase microscopy using defocusing by means of a spatial light modulator," Opt. Express 18(7), 6755–6766 (2010). https://doi.org/10.1364/OE.18.006755
5. B. C. Kress and P. Meyrueis, Applied Digital Optics: From Micro-Optics to Nanophotonics, John Wiley & Sons Ltd., United Kingdom (2009).
6. R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik 35(2), 237–246 (1972).
7. J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt. 21(15), 2758–2769 (1982). https://doi.org/10.1364/AO.21.002758
8. I. A. Shevkunov, N. S. Balbekin, and N. V. Petrov, "Comparison of digital holography and iterative phase retrieval methods for wavefront reconstruction," Proc. SPIE 9271, 927128 (2014). https://doi.org/10.1117/12.2071469
9. C. Gaur, B. Mohan, and K. Khare, "Sparsity-assisted solution to the twin image problem in phase retrieval," J. Opt. Soc. Am. A 32(11), 1922–1927 (2015). https://doi.org/10.1364/JOSAA.32.001922
10. C. Guo, S. Liu, and J. T. Sheridan, "Iterative phase retrieval algorithms I: optimization," Appl. Opt. 54(15), 4698–4708 (2015). https://doi.org/10.1364/AO.54.004698
11. J. R. Fienup, "Phase-retrieval algorithms for a complicated optical system," Appl. Opt. 32(10), 1737–1746 (1993). https://doi.org/10.1364/AO.32.001737
12. H. H. Bauschke, P. L. Combettes, and D. R. Luke, "Phase retrieval, error reduction algorithm, and Fienup variants: a view from convex optimization," J. Opt. Soc. Am. A 19(7), 1334–1345 (2002). https://doi.org/10.1364/JOSAA.19.001334
13. E. J. R. Pauwels et al., "On Fienup methods for sparse phase retrieval," IEEE Trans. Signal Process. 66(4), 982–991 (2018). https://doi.org/10.1109/TSP.2017.2780044
14. C. Falldorf et al., "Reduction of speckle noise in multiwavelength contouring," Appl. Opt. 51(34), 8211–8215 (2012). https://doi.org/10.1364/AO.51.008211
15. C. Towers, D. Towers, and J. Jones, "Absolute fringe order calculation using optimised multi-frequency selection in full-field profilometry," Opt. Lasers Eng. 43(7), 788–800 (2005). https://doi.org/10.1016/j.optlaseng.2004.08.005
16. T. Tahara et al., "Multiwavelength digital holography and phase-shifting interferometry selectively extracting wavelength information: phase-division multiplexing (PDM) of wavelengths," Chapter 9 in Holographic Materials and Optical Systems, IntechOpen Limited, United Kingdom (2017).
17. D. Claus, G. Pedrini, and W. Osten, "Iterative phase retrieval based on variable wavefront curvature," Appl. Opt. 56(13), F134–F137 (2017). https://doi.org/10.1364/AO.56.00F134
18. J. Bioucas-Dias and G. Valadão, "Multifrequency absolute phase estimation via graph cuts," in European Signal Processing Conf., 1389–1393 (2009).
19. P. Bao et al., "Phase retrieval using multiple illumination wavelengths," Opt. Lett. 33(4), 309–311 (2008). https://doi.org/10.1364/OL.33.000309
20. N. V. Petrov, V. G. Bespalov, and A. A. Gorodetsky, "Phase retrieval method for multiple wavelength speckle patterns," Proc. SPIE 7387, 73871T (2010). https://doi.org/10.1117/12.871433
21. N. V. Petrov et al., "Image reconstruction using measurements in volume speckle fields formed by different wavelengths," Proc. SPIE 7907, 790718 (2011). https://doi.org/10.1117/12.876151
22. M. Sanz et al., "Improved quantitative phase imaging in lensless microscopy by single-shot multi-wavelength illumination using a fast convergence algorithm," Opt. Express 23(16), 21352–21365 (2015). https://doi.org/10.1364/OE.23.021352
23. K. Falaggis, D. P. Towers, and C. E. Towers, "Algebraic solution for phase unwrapping problems in multiwavelength interferometry," Appl. Opt. 53(17), 3737–3747 (2014). https://doi.org/10.1364/AO.53.003737
24. W. Wang et al., "Maximum likelihood estimation based robust Chinese remainder theorem for real numbers and its fast algorithm," IEEE Trans. Signal Process. 63(13), 3317–3331 (2015). https://doi.org/10.1109/TSP.2015.2413378
25. L. Xiao, X.-G. Xia, and H. Huo, "Towards robustness in residue number systems," IEEE Trans. Signal Process. 65(6), 1497–1510 (2017). https://doi.org/10.1109/TSP.2016.2641398
26. V. Katkovnik et al., "Multiwavelength absolute phase retrieval from noisy diffractive patterns: wavelength multiplexing algorithm," Appl. Sci. 8(5), 719–738 (2018). https://doi.org/10.3390/app8050719
27. V. Y. Katkovnik et al., "Multiwavelength surface contouring from phase-coded diffraction patterns," Proc. SPIE 10677, 106771B (2018). https://doi.org/10.1117/12.2306127
28. V. Katkovnik, "Phase retrieval from noisy data based on sparse approximation of object phase and amplitude," (2017).
29. K. Dabov et al., "Image denoising by sparse 3-D transform-domain collaborative filtering," IEEE Trans. Image Process. 16(8), 2080–2095 (2007). https://doi.org/10.1109/TIP.2007.901238
30. F. Soulez et al., "Proximity operators for phase retrieval," Appl. Opt. 55(26), 7412–7421 (2016). https://doi.org/10.1364/AO.55.007412
31. S. G. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," IEEE Trans. Pattern Anal. Mach. Intell. 11(7), 674–693 (1989). https://doi.org/10.1109/34.192463
32. C. A. Metzler, A. Maleki, and R. G. Baraniuk, "BM3D-PRGAMP: compressive phase retrieval based on BM3D denoising," in IEEE Int. Conf. on Image Processing, 2504–2508 (2016).
33. V. Katkovnik et al., "Computational super-resolution phase retrieval from multiple phase-coded diffraction patterns: simulation study and experiments," Optica 4(7), 786–794 (2017). https://doi.org/10.1364/OPTICA.4.000786
34. N. V. Petrov, M. V. Volkov, and V. G. Bespalov, "Phase retrieval of THz radiation using set of 2D spatial intensity measurements with different wavelengths," Proc. SPIE 8281, 82810J (2012). https://doi.org/10.1117/12.909888

Biography

Vladimir Katkovnik is a senior research fellow of Tampere University of Technology. His research interests include stochastic signal processing, linear and nonlinear filtering, nonparametric estimation, imaging, nonstationary systems, and time-frequency analysis. He published six books and over 300 papers. He is a member of SPIE.

Igor Shevkunov received his PhD in 2013 and is the author of more than 30 papers. His main research interests are digital holography, interferometry, atomic and molecular physics, and lasers.

Nikolay V. Petrov is a senior researcher and associate professor in the Department of Photonics and Optoinformatics, ITMO University, Russia. He received his PhD in 2011 and is the author of more than 50 papers. His main research interests are holography, interferometry, femtosecond optics, and terahertz imaging.

Karen Egiazarian received his PhD in 1994 and is now a professor in the Department of Signal Processing. He is the author or coauthor of four books and about 600 papers. His research focuses mainly on problems within the fields of applied mathematics and image processing.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Vladimir Katkovnik, Igor Shevkunov, Nikolay V. Petrov, and Karen Egiazarian "Multiwavelength surface contouring from phase-coded noisy diffraction patterns: wavelength-division optical setup," Optical Engineering 57(8), 085105 (26 August 2018). https://doi.org/10.1117/1.OE.57.8.085105
Received: 1 May 2018; Accepted: 31 July 2018; Published: 26 August 2018