## 1.

## Introduction

We consider lensless diffraction imaging, where information on the phase and amplitude of the object in question is given by the intensities of phase-coded diffraction patterns. The reconstruction of a complex-valued object from intensity measurements is well known as phase retrieval, where “phase” emphasizes that this variable, missing in the observations, constitutes the principal difficulty of the problem, in particular with respect to holography and interferometry, where reference beams allow phase information to be preserved. Phase retrieval belongs to the class of challenging ill-posed problems and is especially difficult for noisy observations. A phase/observation diversity sufficient for reliable reconstruction of the object phase from the intensity patterns is a crucial point of the problem. Defocusing and phase modulation are known as effective instruments for providing this diversity (e.g., Refs. 1–4). Diffractive optical elements are used to design diffraction phase imaging systems.^{5}

The Gerchberg–Saxton (GS) algorithms^{6}^{,}^{7} are among the most popular in the field of phase retrieval. These iterative algorithms, originally proposed for noiseless data, use alternating projections between the complex-valued object ${u}_{o}$ and the complex-valued wavefronts ${u}_{s}$ at the sensor plane. In these iterations, the amplitudes in ${u}_{s}$ are replaced by the square roots of the corresponding measurements. The backprojection of ${u}_{s}$ to the object ${u}_{o}$ is modified according to the prior information on the object, e.g., support size and shape, amplitude value, etc. The GS algorithms exist in various modifications.^{8}^{,}^{9} A review and analysis of this type of algorithm, as well as further developments, can be found in Ref. 10.

Many publications concern variational formulations for the GS-type algorithms. In particular, the links between the conventional GS and variational techniques are studied in Refs. 11–13. In contrast to the intuitively clear heuristics of the GS algorithms, the variational approaches usually have a strong mathematical background, including image formation modeling, formulation of the objective function (criterion), and, finally, numerical techniques for solving the corresponding optimization tasks.

In recent years, there has been an increase in demand for multispectral imaging. Multispectral information, obtained by recording object waves at multiple wavelengths irradiated from multiwavelength/color light sources, helps to analyze and recognize objects, to clarify the color and tissue distributions of an object, and to dramatically improve the quality of imaging. Multiwavelength digital holography has an enhanced ability for three-dimensional (3-D) wide-range shape measurements using multiwavelength phase unwrapping, due to the recording of quantitative phase information at multiple wavelengths. In phase retrieval, multiple wavelengths are an effective instrument enabling good phase/data diversity.

The multiwavelength phase retrieval has been much less studied than the standard single-wavelength formulation. By the principle of measurement, these works can be separated into two groups. In the first one, the absolute phase is estimated from phase measurements obtained in one way or another. This scenario is typical for interferometry/holography, where reference beams are applied to reveal the phase information, e.g., Refs. 14–17. Phase unwrapping algorithms for two-dimensional (2-D) images with simultaneous processing of multiple noisy complex-exponent observations have been developed based on maximum-likelihood techniques.^{18}

Another group of techniques uses amplitudes or intensities (powers) as measurements. These formulations belong to the class of multiwavelength phase retrieval problems, e.g., Refs. 19–22. The Chinese remainder theorem provides a class of methods with a good theoretical background.^{23} The reformulation of these approaches for more practical scenarios with noisy data and for robust estimation leads to techniques similar to various forms of maximum likelihood.^{24}^{,}^{25}

An approach to multiwavelength phase retrieval has been proposed in our recent papers.^{26}^{,}^{27} Phase retrieval in the wavelength multiplexing (WM) setup is studied in the first of these two papers:^{26} a broadband light source radiates all wavelengths (RGB in our tests) simultaneously, and a CMOS sensor equipped with a color filter array (CFA) registers the spectral measurements. In the second paper,^{27} we consider a wavelength division (WD) setup with three successive and separate RGB wavelength exposures registered by a broadband CCD sensor without a CFA. The algorithms for these two setups are quite different, in particular because the WM algorithm requires interpolation of the RGB data subsampled by the CFA for the pixels where there are no observations of some of the wavebands. The WD algorithm deals with observations obtained at all pixels for each of the RGB bands separately.

This paper is a further development of the WD setup,^{27} with more details concerning the approach and the algorithm, as well as a simulation study performed for complex-valued objects and parameters of the experiments different from those in Ref. 26. The main results of this paper can be summarized as an extension of the conference paper^{27} with a demonstration of the efficiency of the proposed algorithm, which is computationally simpler, much more accurate, and faster than the algorithm for WM in Ref. 26. The experimental study of the algorithm is restricted to simulation tests.

The algorithms presented in Refs. 26 and 27 as well as in this paper can be treated as a development of the sparse phase and amplitude reconstruction (SPAR) algorithm^{28} and the maximum likelihood absolute phase reconstruction for multiwavelength observations.^{18} The paper is organized as follows. The multiwavelength object and image formation modeling as well as the Poissonian noise observations are discussed in Sec. 2. The development of the algorithm for the WD optical setup is a topic of Sec. 3. Simulation tests and results are discussed in Sec. 4.

## 2.

## Image Formation Model

The notation and the image formation model presented in this section follow those in our paper.^{26} We present these results here for completeness of the presentation and in order to clarify the relations between the problems considered in Ref. 26 and in this paper.

## 2.1.

### Multiwavelength Object and Image Modeling

Let ${h}_{o}(x)$, $x\in {R}^{2}$, be a profile of the transparent 2-D object to be reconstructed. A coherent multiwavelength light beam generates the corresponding complex-valued wavefronts ${u}_{o,\lambda}={b}_{o,\lambda}\mathrm{exp}(j{\phi}_{o,\lambda})$, $\lambda \in \mathrm{\Lambda}$, where $\mathrm{\Lambda}$ is a set of wavelengths and ${\phi}_{o,\lambda}=\frac{2\pi}{\lambda}{h}_{o}({n}_{\lambda}-{n}_{\mathrm{env}})$ are the phase delays corresponding to ${h}_{o}$. Here ${n}_{\lambda}$ is the refractive index of the object and ${n}_{\mathrm{env}}$ is the refractive index of the environment; for air, ${n}_{\mathrm{env}}=1$, which is assumed in our tests.

In lensless phase retrieval, the problem at hand is the reconstruction of the profile ${h}_{o}(x)$ from noisy observations of diffraction patterns registered at some distance from the object. For a high magnitude of variation of ${h}_{o}$, the corresponding absolute phases ${\phi}_{o,\lambda}$ take values beyond the basic phase interval $[-\pi ,\pi )$; then, only wrapped phases can be obtained from ${u}_{o,\lambda}$ for each separate $\lambda $.

In what follows, we apply the dimensionless relative frequencies ${\mu}_{\lambda}=\frac{{\lambda}^{\prime}({n}_{\lambda}-1)}{\lambda ({n}_{{\lambda}^{\prime}}-1)}$, which allow the object wavefronts to be written as

## Eq. (1)

$${u}_{o,\lambda}={b}_{o,\lambda}\mathrm{exp}(j{\mu}_{\lambda}{\phi}_{o}),\phantom{\rule[-0.0ex]{1em}{0.0ex}}\lambda \in \mathrm{\Lambda}.$$

Here ${\lambda}^{\prime}\in \mathrm{\Lambda}$ is a reference wavelength and ${\phi}_{o}$ is the absolute phase corresponding to this wavelength. The parameter ${\mu}_{\lambda}$ establishes a link between the absolute phase ${\phi}_{o}$ and the wrapped phase ${\psi}_{o,\lambda}$ of ${u}_{o,\lambda}$, which can be measured at the $\lambda $-channel. The wrapped phase is related to the true absolute phase ${\phi}_{o}$ as ${\mu}_{\lambda}{\phi}_{o}={\psi}_{o,\lambda}+2\pi {k}_{\lambda}$, where ${k}_{\lambda}$ is an integer and ${\psi}_{o,\lambda}\in [-\pi ,\pi )$. The link between the absolute and wrapped phases is conventionally established by the wrapping operator $\mathcal{W}(\cdot )$ as follows:

## Eq. (2)

$${\psi}_{o,\lambda}=\mathcal{W}({\mu}_{\lambda}{\phi}_{o})\equiv \mathrm{mod}({\mu}_{\lambda}{\phi}_{o}+\pi ,2\pi )-\pi .$$

The image and observation modeling are defined as

## Eq. (3)

$${u}_{s,\lambda}={\mathcal{P}}_{s,\lambda ,d}\{{u}_{o,\lambda}\},\phantom{\rule[-0.0ex]{1em}{0.0ex}}s=1,\dots ,S,\lambda \in \mathrm{\Lambda},$$

where ${\mathcal{P}}_{s,\lambda ,d}\{\}$ is the wavefront propagation operator from the object plane to the sensor plane at the distance $d$ for the $s$'th phase-modulation mask, with the noise-free sensor-plane intensities

## Eq. (4)

$${y}_{s,\lambda}=|{u}_{s,\lambda}{|}^{2},$$

observed as

## Eq. (5)

$${z}_{s,\lambda}=\mathcal{G}\{{y}_{s,\lambda}\},\phantom{\rule[-0.0ex]{1em}{0.0ex}}s=1,\dots ,S,\lambda \in \mathrm{\Lambda},$$

where $\mathcal{G}\{\}$ denotes the operator generating the noisy observations. The considered multiwavelength phase retrieval problem consists of the reconstruction of ${\phi}_{o}$ and ${b}_{o,\lambda}$ from the observations ${z}_{s,\lambda}$, provided that ${\mu}_{\lambda}$, ${\mathcal{P}}_{s,\lambda ,d}\{\}$, and $\mathcal{G}\{\}$ are known. If the phases ${\mu}_{\lambda}{\phi}_{o}\in [-\pi ,\pi )$, the problem is trivial, as estimates of ${\mu}_{\lambda}{\phi}_{o}$ and ${b}_{o,\lambda}$ can be found by processing the data separately for each $\lambda $ with any phase retrieval algorithm applicable to complex-valued objects, in particular by the SPAR algorithm.^{29} However, if the phases ${\mu}_{\lambda}{\phi}_{o}$ go beyond the range $[-\pi ,\pi )$, the problem becomes nontrivial and challenging, since phase unwrapping must be embedded in the multiwavelength phase retrieval.
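To make the wrapping relation concrete, here is a minimal numerical sketch (in Python with NumPy; not part of the original algorithm) of the operator $\mathcal{W}(\cdot )$ of Eq. (2) and the fringe-order relation ${\mu}_{\lambda}{\phi}_{o}={\psi}_{o,\lambda}+2\pi {k}_{\lambda}$:

```python
import numpy as np

def wrap(phase):
    """Wrapping operator W(.): map a phase to the base interval [-pi, pi)."""
    return np.mod(phase + np.pi, 2 * np.pi) - np.pi

# Example: an absolute phase of 5.5*pi at the reference channel (mu = 1)
phi_o = 5.5 * np.pi
psi = wrap(phi_o)                               # wrapped phase in [-pi, pi)
k = int(round((phi_o - psi) / (2 * np.pi)))     # integer fringe order k_lambda
# Consistency check: psi + 2*pi*k recovers the absolute phase
assert np.isclose(psi + 2 * np.pi * k, phi_o)
```

Here the fringe order ${k}_{\lambda}$ is known only because the absolute phase was given; recovering it from wrapped data alone is exactly the unwrapping problem discussed above.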

We focus on the reconstruction of the absolute phase ${\phi}_{o}$ from a set of single-wavelength experiments produced in a successive manner and registered by a wide-band gray-scale CCD sensor, instead of using all wavelengths simultaneously with a CFA and demosaicing processing, as is done in the WM algorithm derived in Ref. 26.

The proposed WD algorithm demonstrates a faster convergence than the WM algorithm,^{26} but makes the experimental realization more complex as it requires separate experiments and measurements for each wavelength. A single wavelength is used for each experiment and the obtained observations $\{{z}_{s,\lambda}\}$ are used jointly for reconstruction of ${\phi}_{o}(x)$ and ${h}_{o}(x)$.

Note also that the proposed approach assumes that the refractive indexes of the object are known for the considered set of the wavelengths. Only in this case, the problem of the absolute phase retrieval and contouring can be resolved.

## 2.2.

### Noisy Observation

For noise simulation, we use independent Poisson random variables, since the measurement process in optics amounts to counting the photons hitting the sensor’s elements (e.g., Ref. 30). The presented form of the noisy observations corresponds to Refs. 26 and 27.

The probability that a Poissonian random variable ${z}_{s,\lambda}(x)$ with the mean value ${y}_{s,\lambda}(x)\chi $, where $\chi >0$ is a scaling parameter of the Poisson distribution, takes a given non-negative integer value $k$ is given as

## Eq. (6)

$$p[{z}_{s,\lambda}(x)=k]=\mathrm{exp}[-{y}_{s,\lambda}(x)\chi ]\frac{{[{y}_{s,\lambda}(x)\chi ]}^{k}}{k!}.$$

Another useful noise characteristic is the mean number of photons per pixel, defined as ${N}_{\text{photon},\lambda}=\sum _{x}{z}_{s,\lambda}(x)/{N}_{\text{sensor}}$, where ${N}_{\text{sensor}}$ is the number of sensor pixels. As in the case of SNR, smaller values of $\chi $ lead to smaller ${N}_{\text{photon},\lambda}$, i.e., to noisier observations ${z}_{s,\lambda}(x)$.
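As an illustration of this observation model, the following sketch (with a hypothetical intensity pattern; `chi` plays the role of $\chi $) draws Poisson counts with mean ${y}_{s,\lambda}(x)\chi $ and evaluates ${N}_{\text{photon},\lambda}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noise-free intensity pattern y_{s,lambda}(x) on a 100x100 sensor
y = rng.uniform(0.5, 1.5, size=(100, 100))

chi = 20.0                    # scaling parameter of the Poisson distribution
z = rng.poisson(y * chi)      # observed photon counts with mean y*chi

# Mean number of photons per sensor pixel, N_photon = sum(z) / N_sensor
n_photon = z.sum() / z.size
```

Decreasing `chi` lowers `n_photon` and hence the SNR of the simulated observations.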

## 3.

## Development of Algorithm

We consider the problem of the absolute phase retrieval as an estimation of the object wavefront ${u}_{o}\in {\mathbb{C}}^{n}$ from noisy observations $\{{\mathbf{z}}_{s,\lambda}\}$. The problem is challenging mainly due to the periodic nature of the likelihood function with respect to the phase ${\phi}_{o}$ and the nonlinearity of the observation model.

Following Ref. 26 and using the same notation, the maximum likelihood approach leads to the criterion

## Eq. (7)

$$\mathcal{L}({u}_{s,\lambda},{u}_{o,\lambda},{b}_{o,\lambda},{\phi}_{o},{\delta}_{\lambda})=\sum _{\lambda ,s,x}l[{z}_{s,\lambda}(x),|{u}_{s,\lambda}(x){|}^{2}]+\frac{1}{{\gamma}_{1}}\sum _{\lambda ,s}{\Vert {u}_{s,\lambda}-{\mathcal{P}}_{s,\lambda ,d}\{{u}_{o,\lambda}\}\Vert}_{2}^{2}+\frac{1}{{\gamma}_{2}}\sum _{\lambda}{\Vert {b}_{o,\lambda}\mathrm{exp}[j({\mu}_{\lambda}{\phi}_{o}+{\delta}_{\lambda})]-{u}_{o,\lambda}\Vert}_{2}^{2}.$$

Here we introduce the spatially invariant additive errors ${\delta}_{\lambda}$, which model the constant phase offsets appearing for each wavelength due to the ambiguity of the phase retrieval problem with respect to such shifts. The criterion in Eq. (7) is minimized with respect to ${u}_{s,\lambda}$, ${u}_{o,\lambda}$, ${b}_{o,\lambda}$, and ${\phi}_{o}$, as well as with respect to the phase shifts ${\delta}_{\lambda}$.

The algorithm is composed from the following steps.

1. Minimization with respect to ${u}_{s,\lambda}$ concerns the first two summands in $\mathcal{L}$. In contrast with the minimization for WM in Ref. 26, where the observations are recorded simultaneously for all wavelengths, this step is simplified due to the separate processing for each wavelength. The problem is additive over $\lambda $ and $x$, and the solution can therefore be obtained separately for each $\lambda $ and each $x$. The corresponding analytical solution is obtained in Ref. 28. This solution defines ${u}_{s,\lambda}$ as functions of the noisy observations ${z}_{s,\lambda}$ and the projection ${\mathcal{P}}_{s,\lambda ,d}\{{u}_{o,\lambda}\}$ of ${u}_{o,\lambda}$ on the sensor. The amplitudes of ${u}_{s,\lambda}$ are updated according to the given observations, and the phases are preserved.

2. Minimization with respect to ${u}_{o,\lambda}$ involves the last two summands of the criterion. It is a quadratic problem with the solution of the form

## Eq. (8)

$${\widehat{u}}_{o,\lambda}={(\sum _{s}{\mathcal{P}}_{s,\lambda ,d}^{*}{\mathcal{P}}_{s,\lambda ,d}+I{\gamma}_{1}/{\gamma}_{2})}^{-1}[\sum _{s}{\mathcal{P}}_{s,\lambda ,d}^{*}{u}_{s,\lambda}+{\gamma}_{1}/{\gamma}_{2}\,{b}_{o,\lambda}\mathrm{exp}(j{\mu}_{\lambda}{\phi}_{o})],$$

where $I$ stands for the identity operator and ${\mathcal{P}}_{s,\lambda ,d}^{*}$ is the Hermitian adjoint of ${\mathcal{P}}_{s,\lambda ,d}$. If the operators ${\mathcal{P}}_{s,\lambda ,d}$ are orthonormal, such that ${\sum}_{s}{\mathcal{P}}_{s,\lambda ,d}^{*}{\mathcal{P}}_{s,\lambda ,d}=I$, then the solution is simplified to the form

## Eq. (9)

$${\widehat{u}}_{o,\lambda}=\frac{1}{1+{\gamma}_{1}/{\gamma}_{2}}[\sum _{s}{\mathcal{P}}_{s,\lambda ,d}^{*}{u}_{s,\lambda}+{\gamma}_{1}/{\gamma}_{2}\,{b}_{o,\lambda}\mathrm{exp}(j{\mu}_{\lambda}{\phi}_{o})].$$

3. Minimization with respect to ${b}_{o,\lambda}$, ${\phi}_{o}$, and ${\delta}_{\lambda}$ (the last summand of the criterion) is a nonlinear least-squares fitting of the wavelength-dependent ${u}_{o,\lambda}$ by the object phase ${\phi}_{o}$, invariant with respect to $\lambda $, the amplitudes ${b}_{o,\lambda}$, and the spatially invariant phase shifts ${\delta}_{\lambda}$. The criterion for this problem can be given in the equivalent form

## Eq. (10)

$${\mathcal{L}}_{1}({b}_{o,\lambda},{\phi}_{o},{\delta}_{\lambda})=\sum _{\lambda}{\Vert {b}_{o,\lambda}\mathrm{exp}[j({\mu}_{\lambda}{\phi}_{o})]-|{u}_{o,\lambda}|\mathrm{exp}[j({\psi}_{o,\lambda}-{\delta}_{\lambda})]\Vert}_{2}^{2},$$

where ${\psi}_{o,\lambda}=\text{angle}({u}_{o,\lambda})$ is the wrapped phase of ${u}_{o,\lambda}$. In this representation, the phase shifts ${\delta}_{\lambda}$ are attached to the wrapped phases ${\psi}_{o,\lambda}$ in order to stress that the complex exponents $\mathrm{exp}(j{\psi}_{o,\lambda})$ can be out-of-phase with $\mathrm{exp}(j{\mu}_{\lambda}{\phi}_{o})$; the variables ${\delta}_{\lambda}$ compensate this phase difference so that the object phase model $\mathrm{exp}(j{\mu}_{\lambda}{\phi}_{o})$ matches the complex exponent $\mathrm{exp}(j{\psi}_{o,\lambda})$.

Minimization of ${\mathcal{L}}_{1}$, with respect to ${\phi}_{o}$ and the phase-shifts ${\delta}_{\lambda}$, is implemented as minimization on the grids of these variables. The derivation of this algorithm and its details can be seen in Ref. 26, where it was done for the WM optical setup. This algorithm was named absolute phase reconstruction, and it is shown in step 4 of Table 1 with the abbreviation APR.
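A simplified per-pixel sketch of such a grid minimization (unit amplitudes ${b}_{o,\lambda}=1$ and a hypothetical `apr_grid` helper; the actual APR algorithm of Ref. 26 is more elaborate):

```python
import numpy as np

def apr_grid(psi, mu, phi_grid, delta_grid):
    """Grid search for the absolute phase phi_o minimizing the L1 criterion.

    psi: wrapped phases, shape (n_lambda, n_pixels); mu: relative frequencies.
    Unit amplitudes b_{o,lambda} = 1 are assumed for simplicity.
    """
    best_phi = np.zeros(psi.shape[1])
    best_cost = np.full(psi.shape[1], np.inf)
    for deltas in delta_grid:      # candidate phase-shift tuples delta_lambda
        for phi in phi_grid:       # candidate absolute phase values
            # |exp(j*mu*phi) - exp(j*(psi - delta))|^2 = 2 - 2*cos(mu*phi - psi + delta)
            cost = sum(2.0 - 2.0 * np.cos(m * phi - p + d)
                       for m, p, d in zip(mu, psi, deltas))
            better = cost < best_cost
            best_cost[better] = cost[better]
            best_phi[better] = phi
    return best_phi
```

With the RGB relative frequencies used later in the tests, an absolute phase far outside $[-\pi ,\pi )$ is recovered uniquely from its three wrapped phases, which is impossible from a single wavelength alone.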

## Table 1

WD phase retrieval algorithm.

Input: $\{{z}_{s,\lambda}\},s=1,\dots ,S$, $\lambda \in \mathrm{\Lambda}$ | |

Initialization: ${u}_{o,\lambda}^{1}$, $\lambda \in \mathrm{\Lambda}$ | |

Main iterations: $t=\mathrm{1,2},\dots ,T$ | |

1. | Forward propagation: ${u}_{s,\lambda}^{t}={\mathcal{P}}_{s}{u}_{o,\lambda}^{t},s=1,\dots ,S$, $\lambda \in \mathrm{\Lambda}$ |

2. | Noise suppression and update of ${u}_{s,\lambda}^{t}$: ${u}_{s,\lambda}^{t+1/2}={\mathrm{\varphi}}_{1}({u}_{s,\lambda}^{t},{z}_{s,\lambda})$ |

3. | Backward propagation and filtering: ${u}_{o,\lambda}^{t+1/2}={\mathrm{\varphi}}_{2}({u}_{s}^{t+1/2})$, ${u}_{o,\lambda}^{t+1}=\mathrm{BM}3\mathrm{D}({u}_{o,\lambda}^{t+1/2})$ |

4. | Absolute phase retrieval and filtering: ${\phi}_{o}^{t+1/2}=\mathcal{APR}({u}_{o,\lambda}^{t+1})$, ${\phi}_{o}^{t+1}=\mathrm{BM}3\mathrm{D}({\phi}_{o}^{t+1/2})$ |

5. | Object wavefront update: ${u}_{o,\lambda}^{t+1}=|{u}_{o,\lambda}^{t+1}|\mathrm{exp}(j{\phi}_{o}^{t+1}{\mu}_{\lambda})$, $\lambda \in \mathrm{\Lambda}$ |

Output: ${\phi}_{o}^{T+1}$, ${u}_{o,\lambda}^{T+1}$ |

## 3.1.

### Algorithm’s Implementation

Using the above solutions, an iterative algorithm with the structure shown in Table 1 is developed. The initialization by the complex-valued ${u}_{o,\lambda}^{1}$ is obtained from the observations $\{{z}_{s,\lambda}\}$ by the SPAR algorithm^{28} separately for each wavelength. The main iterations start from the forward propagation (step 1), followed by the amplitude update for ${u}_{s,\lambda}^{t}$ at step 2. The operator ${\mathrm{\varphi}}_{1}$ derived in Ref. 28 is defined as ${u}_{s,\lambda}^{t+1/2}={\mathrm{\varphi}}_{1}({u}_{s,\lambda}^{t},{z}_{s,\lambda})$, which means ${u}_{s,\lambda}^{t+1/2}=w\cdot \frac{{u}_{s,\lambda}^{t}}{|{u}_{s,\lambda}^{t}|}$. Here the ratio $\frac{{u}_{s,\lambda}^{t}}{|{u}_{s,\lambda}^{t}|}$ indicates that the variables ${u}_{s,\lambda}^{t}$ and ${u}_{s,\lambda}^{t+1/2}$ have identical phases. The amplitude $w$ is calculated as

## Eq. (11)

$$w=\frac{|{u}_{s,\lambda}^{t}|+\sqrt{{|{u}_{s,\lambda}^{t}|}^{2}+4{z}_{s,\lambda}{\gamma}_{1}(1+{\gamma}_{1}\chi )}}{2(1+{\gamma}_{1}\chi )}.$$

The backpropagation is realized in step 3, and the operator ${\mathrm{\varphi}}_{2}$ is defined by Eq. (9). The absolute phase reconstruction from the wrapped phases of ${u}_{o,\lambda}^{t+1}$ is produced in step 4 by the APR algorithm.^{26} The obtained amplitude and phase are used to update ${u}_{o,\lambda}^{t+1}$ at step 5.
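Step 2 of Table 1 can be sketched as follows (a Python illustration of Eq. (11); the clipping of zero amplitudes is our own safeguard, not part of the original formulation):

```python
import numpy as np

def amplitude_update(u_s, z, gamma1, chi):
    """Operator varphi_1: replace the amplitudes of u_s by w of Eq. (11),
    keeping the phases unchanged."""
    a = np.abs(u_s)
    w = (a + np.sqrt(a ** 2 + 4.0 * z * gamma1 * (1.0 + gamma1 * chi))) \
        / (2.0 * (1.0 + gamma1 * chi))
    # unit phasor u_s/|u_s|; dark pixels get phase 0 to avoid division by zero
    phasor = np.where(a > 0, u_s / np.maximum(a, 1e-12), 1.0 + 0.0j)
    return w * phasor
```

A quick sanity check: for noiseless counts equal to the Poisson mean, $z=\chi |{u}_{s,\lambda}{|}^{2}$, and ${\gamma}_{1}=1/\chi $ (the setting used below), the update returns the wavefront unchanged.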

Steps 3 and 4 are completed by the block-matching 3-D (BM3D) filtering.^{29} In step 3, it is the filtering of the complex-valued ${u}_{o,\lambda}^{t+1/2}$, applied separately to the wrapped phase and the amplitude of ${u}_{o,\lambda}^{t+1/2}$. In step 4, this filtering is applied to the absolute phase ${\phi}_{o}^{t+1/2}$. These BM3D filters are derived from the groupwise sparsity priors for the filtered variables. This technique is based on the Nash equilibrium formulation for the phase retrieval instead of the more conventional constrained optimization with a single criterion function, as in Eq. (7). We do not show this derivation here, as it is quite similar to that developed in Ref. 28. The sparsity rationale assumes that there is a transform of an image/signal such that it can be represented with a small number of transform coefficients or, in slightly different terms, with a small number of basis functions.^{31} This idea is confirmed and supported by the great success of many sparsity-based techniques developed for image/signal processing problems. Within the sparsity framework, a family of BM3D algorithms has been developed, where both the ideas of grouping similar patches and of transform design are taken into consideration. This type of algorithm, proposed initially for image denoising^{29} and modified for various problems, demonstrates state-of-the-art performance.^{32} The details of BM3D as an advanced image filter can be seen in Ref. 29.

In step 3 of the proposed algorithm, ${u}_{o,\lambda}^{t+1}=\mathrm{BM}3\mathrm{D}({u}_{o,\lambda}^{t+1/2})$, BM3D is applied to the complex-valued variables ${u}_{o,\lambda}^{t+1/2}$. It is implemented in this paper as the independent filtering of the amplitude and the wrapped phase

## Eq. (12)

$$|{u}_{o,\lambda}^{t+1}|={\mathrm{BM}3\mathrm{D}}_{\mathrm{ampl}}(|{u}_{o,\lambda}^{t+1/2}|),\phantom{\rule{0ex}{0ex}}{\psi}_{o,\lambda}^{t+1}={\mathrm{BM}3\mathrm{D}}_{\mathrm{phase}}({\psi}_{o,\lambda}^{t+1/2}).$$

In step 4 of the proposed algorithm, ${\phi}_{o}^{t+1}=\mathrm{BM}3\mathrm{D}({\phi}_{o}^{t+1/2})$, BM3D is applied for the filtering of the real-valued variable ${\phi}_{o}$. In the presented experiments, the parameters of the algorithm are fixed for all tests: ${\gamma}_{1}=1/\chi $, where $\chi $ is the parameter of the Poissonian distribution, and ${\gamma}_{1}/{\gamma}_{2}=0.2$. The parameters of the BM3D filters can be seen in Ref. 28.

The algorithm presented in Table 1 differs from the WM algorithm^{26} in steps 2 and 3. The main iterations in Ref. 26 also start from the forward propagation, followed by the amplitude update and by the interpolation for the CFA sensor pixels where there are no observations (step 2). In the WD algorithm, no such interpolation is needed. In step 3 in Ref. 26, the wavefront pixels not given in the observations preserve the values obtained in step 2, which are used for the backpropagation. This manipulation with the wavefront reconstruction for the RGB pixels not given in the observations constitutes the principal difference from the WD algorithm presented here in Table 1. The other steps of the WM and WD algorithms, concerning filtering and forward and backward propagation, are identical.

Comparing the presented WD algorithm with that developed in Ref. 26 for the WM setup, we may note that faster convergence and better accuracy are obtained for WD, but it requires a larger number of observations, because each experiment gives the data for a single wavelength only, while in WM we obtain the observations for all wavelengths simultaneously.

## 4.

## Simulation Tests

## 4.1.

### Proposed Setup

In simulation, we model a lensless optical system (Fig. 1), where a thin transparent phase object is illuminated by three monochromatic (RGB) coherent light beams from lasers or LEDs. The wavelengths are $\mathrm{\Lambda}=[417,532,633]\,\mathrm{nm}$, with the corresponding refractive indexes [1.528, 1.519, 1.515] taken for BK7 optical glass. With the reference wavelength ${\lambda}^{\prime}=417\,\mathrm{nm}$, the relative frequencies take the values ${\mu}_{\lambda}=[1,0.7705,0.6425]$. The pixel sizes of the CMOS camera and the SLM are 1.4 and $5.6\,\mu \mathrm{m}$, respectively. The distance $d$ between the object and the CMOS camera is equal to 5 mm.
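The quoted relative frequencies follow directly from the definition ${\mu}_{\lambda}={\lambda}^{\prime}({n}_{\lambda}-1)/[\lambda ({n}_{{\lambda}^{\prime}}-1)]$, as this short check shows:

```python
# Relative frequencies mu_lambda for the RGB wavelengths and the BK7
# refractive indexes quoted above, with 417 nm as the reference wavelength.
wavelengths = [417e-9, 532e-9, 633e-9]    # m
n = [1.528, 1.519, 1.515]                 # refractive indexes of BK7
lam_ref, n_ref = wavelengths[0], n[0]

mu = [lam_ref * (nl - 1.0) / (lam * (n_ref - 1.0))
      for lam, nl in zip(wavelengths, n)]
# mu is approximately [1, 0.7705, 0.6425]
```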

The free propagation of the wavefronts to the sensor is given by the Rayleigh–Sommerfeld model with the transfer function defined through the angular spectrum (AS).^{33} For proper numerical AS propagation without aliasing effects, zero-padding of the object must be applied; its size is obtained from the inequality given in Ref. 34.

The intensities of the light beams registered at the sensor are calculated as ${z}_{s,\lambda}=\mathcal{G}\{|\mathcal{A}{\mathcal{S}}_{\lambda ,d}\{{M}_{s}\circ {u}_{o,\lambda}\}{|}^{2}\}$, $s=1,\dots ,S$, $\lambda \in \mathrm{\Lambda}$. Here $\mathcal{A}{\mathcal{S}}_{\lambda ,d}$ denotes the AS propagation operator, ${M}_{s}$ are the modulation phase masks inserted before the object and pixelated in the same way as the object, and “∘” stands for the pixelwise multiplication of the object and the phase masks. These phase masks enable strong diffraction of the wavefield and are introduced in order to achieve the phase diversity sufficient for the reconstruction of a complex-valued object from intensity measurements. As described in Ref. 33, we use Gaussian random phase masks. Thus, in our simulations, the propagation operator ${\mathcal{P}}_{s,\lambda ,d}$ in Eq. (3) is implemented as a combination of the AS propagation $\mathcal{A}{\mathcal{S}}_{\lambda ,d}$ and the modulation phase mask ${M}_{s}$.
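A minimal sketch of this forward model in Python/NumPy (the grid size, the mask statistics, and the omission of zero-padding are illustrative simplifications, not the exact simulation parameters):

```python
import numpy as np

def angular_spectrum(u0, lam, d, dx):
    """Free-space propagation of the field u0 over a distance d via the
    angular-spectrum transfer function; dx is the pixel pitch in meters."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx2, fy2 = np.meshgrid(fx ** 2, fx ** 2)
    arg = 1.0 / lam ** 2 - fx2 - fy2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent waves dropped
    H = np.exp(1j * kz * d)                           # AS transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)

rng = np.random.default_rng(1)
u_o = np.ones((256, 256), dtype=complex)                # unit-amplitude object
M = np.exp(1j * rng.normal(0.0, np.pi / 2, u_o.shape))  # Gaussian random phase mask
u_s = angular_spectrum(M * u_o, lam=417e-9, d=5e-3, dx=1.4e-6)
z = np.abs(u_s) ** 2          # noise-free intensity at the sensor
```

Since the transfer function is phase-only for the propagating spectrum, the total intensity is preserved in this sketch.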

## 4.2.

### Reconstruction Results

The illustrative reconstructions are presented for two phase objects with invariant amplitudes equal to 1 and the following phases: a Gaussian distribution ($100\times 100$) and the US Air Force (USAF) resolution test-target ($64\times 64$). The absolute and wrapped phases of these test-objects are shown in Fig. 2. The Gaussian and USAF phases are very different: the first one has a smooth continuous shape, while the second one is discontinuous and binary. Both phases are taken with a high peak-value equal to $30\pi $ rad, which corresponds to about 30 reference wavelengths ${\lambda}^{\prime}$ in the variations of the profile ${h}_{o}$.

As a result of this high peak-value of the absolute phases, the corresponding wrapped phases are very complex with fringes overlapping for the Gaussian phase and lack of height information in the USAF test-phase. Due to this complexity, the unwrapping is not possible using single frequency wrapped phases only. We show that the proposed multiwavelength WD algorithm is quite successful and is able to reconstruct the absolute phase even from very noisy data.

We demonstrate the performance of the WD algorithm for the very difficult scenarios with very noisy Poissonian observations. The noisiness of observations is characterized by SNR and by the mean number of photons per sensor pixel, ${N}_{\text{photon}}$.

The accuracy of the object reconstruction is characterized by the relative root-mean-square error (RRMSE) criterion, calculated as the RMSE divided by the root of the mean square power of the signal

## Eq. (14)

$${\mathrm{RRMSE}}_{\phi}=\frac{{\Vert {\widehat{\phi}}_{\mathrm{est}}-{\phi}_{\mathrm{true}}\Vert}_{2}}{{\Vert {\phi}_{\mathrm{true}}\Vert}_{2}}.$$

Figure 3 shows the performance of the WD algorithm for different noise levels characterized by the parameter $\chi $. The RRMSE curves for the Gaussian and USAF phases demonstrate a similar behavior and decrease for growing $\chi $, but the curve for the USAF object decreases more sharply and takes values smaller than 0.1 at $\chi =20$, while the curve for the Gaussian phase reaches a close value only at $\chi =50$. Nevertheless, in both cases, the reconstructions are nearly perfect even for very noisy observed data with SNR values as low as 3.8 and 6.5 dB and very small photon numbers ${N}_{\mathrm{photon}}=0.75$ and 1.87, respectively.
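The RRMSE criterion of Eq. (14) reduces to a one-line computation:

```python
import numpy as np

def rrmse(phi_est, phi_true):
    """Relative root-mean-square error of a reconstructed phase, Eq. (14)."""
    return np.linalg.norm(phi_est - phi_true) / np.linalg.norm(phi_true)
```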

RRMSEs for the Gaussian phase are shown in Fig. 4 as functions of the number of experiments $S$ and the SNR. The number of experiments $S$ is in the range of 1 to 10, and the SNR ranges from 1 dB (the noisiest case) to 25 dB (the nearly noiseless case). The nearly horizontal (dark blue) areas correspond to high-accuracy reconstructions with small values of RRMSE; for the other areas, the RRMSE values are much higher and the accuracy is not as good. For 50 iterations (left image), high accuracy can be achieved starting from $S=8$ even for very noisy data with $\mathrm{SNR}=4\,\mathrm{dB}$. For a number of experiments smaller than $S=4$, the algorithm fails to converge even for low noise levels due to an insufficient diversity of the observed diffraction patterns. The smallest number of experiments that can be used for reconstruction is $S=4$. However, even for $S=4$, good results can be obtained only after 100 iterations (right image). It follows from Fig. 4 that some improvement in accuracy can be achieved at the price of a larger number of experiments $S$ and a larger number of iterations.

In comparison, with the WM algorithm^{26} such small RRMSE values can be obtained only after a larger number of iterations, about $T=300$, mainly because 75%, 50%, and 75% of the CFA pixels lack intensity measurements for the red, green, and blue wavelengths, respectively. This information is recovered by the WM algorithm, but at the price of a larger number of iterations.

In what follows, we provide the images of the reconstructed absolute phase obtained for $S=4$ and the iteration number $T=100$.

Figures 5 and 6 show the absolute phase reconstructions (3-D/2-D images) obtained by the WD algorithm and by the SPAR algorithm reconstructing the wrapped phases for the separate wavelengths, followed by phase unwrapping with the PUMA algorithm.^{18} The conditions of the experiments are: $\mathrm{SNR}=6.5\,\mathrm{dB}$, ${N}_{\text{photon}}=1.87$ for the Gaussian object and $\mathrm{SNR}=3.8\,\mathrm{dB}$, ${N}_{\text{photon}}=0.75$ for the USAF object. The WD algorithm demonstrates a strong ability to reconstruct the absolute phases, while the single-wavelength-based approach completely fails. The accuracy of the WD reconstruction is very high despite the high level of the noise.

The wrapped phase pattern for such a high peak-value of the object phases is very complex and irregular (see Fig. 2), so that it is not possible to unwrap it by modern 2-D unwrapping algorithms, but the proposed algorithm is able to resolve the problem even for such a complex case. It is an especially challenging task to reconstruct objects such as USAF, with large differences between adjacent pixels exceeding $2\pi $, but the WD algorithm successfully reconstructs both objects with high quality.

For the simulations, we use MATLAB R2016b on a computer with 32 GB of RAM and a 3.40 GHz Intel^{®} Core^{TM} i7-3770 processor. The computational complexity of the algorithm is characterized by the time required for processing. For one iteration, $S=4$, and $100\times 100$ pixel images, zero-padded to $1700\times 1700$, this time equals 12 s.

## 5.

## Conclusion

The multiwavelength absolute phase retrieval and surface contouring from noisy intensity observations are considered. The observations are recorded by a broadband monochromatic sensor in a successive manner for each wavelength in the WD optical setup. The maximum likelihood criterion used in the developed variational derivation of the phase retrieval algorithm reflects the general intention to reach statistically optimal estimates. Phase retrieval is an ill-posed inverse problem in which the observation noise is amplified and transferred to the phase and amplitude as the variables of optimization. The sparse modeling enables a regularization of this inverse problem and an efficient suppression of these random errors by the BM3D filtering of the phase and amplitude. The efficiency of the developed algorithm is demonstrated by simulation tests for the coded diffraction pattern WD scenario.

## Disclosures

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

## Acknowledgments

This work was supported by the Academy of Finland, Project No. 287150, 2015-2019, Russian Ministry of Education and Science (project within the state mission for institutions of higher education, agreement 3.1893.2017/4.6) and Horizon 2020 TWINN-2015, Grant No. 687328 - HOLO.

## References

## Biography

**Vladimir Katkovnik** is a senior research fellow of Tampere University of Technology. His research interests include stochastic signal processing, linear and nonlinear filtering, nonparametric estimation, imaging, nonstationary systems, and time-frequency analysis. He published six books and over 300 papers. He is a member of SPIE.

**Igor Shevkunov** received his PhD in 2013 and is the author of more than 30 papers. His main research interests are digital holography, interferometry, atomic and molecular physics, and lasers.

**Nikolay V. Petrov** is a senior researcher and associate professor in the Department of Photonics and Optoinformatics, ITMO University, Russia. He received his PhD in 2011 and is the author of more than 50 papers. His main research interests are holography, interferometry, femtosecond optics, and terahertz imaging.