1. Introduction

We consider lensless diffraction imaging, where information on the phase and amplitude of the object in question is given by intensities of phase-coded diffraction patterns. The problem of reconstructing a complex-valued object from intensity measurements is well known as phase retrieval, where "phase" emphasizes that this variable, missing in the observations, defines the principal difficulty of the problem, in particular with respect to holography and interferometry, where reference beams allow phase information to be preserved. Phase retrieval belongs to the class of challenging ill-posed problems and is especially difficult for noisy observations. The phase/observation diversity sufficient for reliable reconstruction of the object phase from the intensity patterns is a crucial aspect of the problem. Defocusing and phase modulation are known as effective instruments for providing this diversity (e.g., Refs. 1–4). Diffractive optical elements are used to design diffraction phase imaging systems.5 The Gerchberg–Saxton (GS) algorithms6,7 are among the most popular in the field of phase retrieval. These iterative algorithms, originally proposed for noiseless data, alternate projections between the complex-valued object and the complex-valued wavefront at the sensor plane. In these iterations, the calculated amplitudes are replaced by square roots of the corresponding items of the measurements. The backprojection to the object plane is modified according to prior information on the object, e.g., support size and shape, amplitude value, etc. The GS algorithms exist in various modifications.8,9 A review and analysis of this type of algorithm, as well as further developments, can be found in Ref. 10. Many publications concern variational formulations for GS-type algorithms. In particular, the links between the conventional GS and variational techniques are studied in Refs. 11–13.
Contrary to the intuitively clear heuristics of the GS algorithms, the variational approaches usually have a strong mathematical background, including image formation modeling, formulation of the objective function (criterion), and finally numerical techniques solving the corresponding optimization tasks. In recent years, there has been an increase in demand for multispectral imaging. Multispectral information, obtained by recording object waves with multiple wavelengths irradiated from multiwavelength/color light sources, helps to analyze and recognize objects, to clarify color and tissue distributions of an object, and to dramatically improve imaging quality. Multiwavelength digital holography has an enhanced ability for three-dimensional (3-D) wide-range shape measurements using multiwavelength phase unwrapping, due to the recording of quantitative phase information with multiple wavelengths. In phase retrieval, multiwavelength illumination is an effective instrument enabling good phase/data diversity. Multiwavelength phase retrieval is much less studied than the standard single-wavelength formulation. By the principle of measurement, the existing works can be separated into two groups. In the first one, the absolute phase is estimated from phase measurements obtained in one way or another. This scenario is typical for interferometry/holography, where reference beams are applied to reveal the phase information, e.g., Refs. 14–17. Phase unwrapping algorithms for two-dimensional (2-D) images with simultaneous processing of multiple noisy complex-exponent observations have been developed based on maximum-likelihood techniques.18 The other group of techniques uses amplitudes or intensities (powers) as measurements. These formulations belong to the class of multiwavelength phase retrieval problems, e.g., Refs. 19–22.
The Chinese remainder theorems provide a class of methods with a good theoretical background.23 The reformulation of these approaches for more practical scenarios with noisy data and for robust estimation leads to techniques similar to various forms of maximum likelihood.24,25 An approach to multiwavelength phase retrieval has been proposed in our recent papers.26,27 Phase retrieval in the wavelength multiplexing (WM) setup is studied in the first of these two papers. A broadband light source radiates all wavelengths (RGB in our tests) simultaneously, and a CMOS sensor equipped with a color filter array (CFA) registers spectral measurements. In the second paper,27 we consider a wavelength division (WD) setup with three successive and separate RGB wavelength exposures registered by a broadband CCD sensor without CFA. The algorithms for these two setups are quite different, in particular because the WM algorithm requires interpolation of the RGB data subsampled by the CFA for pixels where observations of some of the wavebands are missing. The WD algorithm deals with observations obtained at all pixels for each of the RGB bands separately. This paper is a further development of the WD setup27 with more details concerning the approach and the algorithm, as well as a simulation study performed for complex-valued objects and parameters of the experiments different from those in Ref. 26. The main results of this paper can be summarized as an extension of the conference paper27 with a demonstration of the efficiency of the proposed algorithm, which is computationally simpler, much more accurate, and faster than the algorithm for WM in Ref. 26. The experimental study of the algorithm is restricted to simulation tests. The algorithms presented in Refs.
26 and 27, as well as in this paper, can be treated as a development of the sparse phase and amplitude reconstruction (SPAR) algorithm28 and of the maximum likelihood absolute phase reconstruction for multiwavelength observations.18 The paper is organized as follows. The multiwavelength object and image formation modeling, as well as the Poissonian noise observations, are discussed in Sec. 2. The development of the algorithm for the WD optical setup is the topic of Sec. 3. Simulation tests and results are discussed in Sec. 4.

2. Image Formation Model

The notation and the image formation model presented in this section follow those in our paper.26 We present these results here for completeness of the presentation and in order to clarify the relations between the problems considered in Ref. 26 and in this paper.

2.1. Multiwavelength Object and Image Modeling

Let , , be a profile of the transparent 2-D object to be reconstructed. A coherent multiwavelength light beam generates corresponding complex-valued wavefronts , , where is a set of the wavelengths and are phase delays corresponding to , where is the refractive index of the object and is the refractive index of the environment; for air , which is assumed in our tests. In lensless phase retrieval, the problem at hand is the reconstruction of the profile from noisy observations of diffraction patterns registered at some distance from the object. For high magnitude variations of , the corresponding absolute phases take values beyond the basic phase interval , so only wrapped phases can be obtained from for separate . In what follows, we apply dimensionless relative frequencies , which replace the common notation for the object wavefront as where , is the object absolute phase , and is the set of the wavelengths. Here is a reference wavelength and is the absolute phase corresponding to this wavelength. The parameter establishes a link between the absolute phase and the wrapped phase of , which can be measured at the -channel.
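The relations above can be made concrete in a short numerical sketch. In the following Python illustration, the wavelength values and BK7-like refractive indices are our assumptions for illustration only, and the form of the relative frequencies (which here folds in the refractive-index contrast, normalized to the reference channel) is one plausible convention, not necessarily the paper's exact definition:

```python
import numpy as np

# Assumed illustrative values (not necessarily the paper's exact parameters):
wavelengths = np.array([632.8e-9, 532e-9, 417e-9])  # RGB wavelength set, meters
n_object = np.array([1.515, 1.519, 1.528])          # refractive index per wavelength
n_air = 1.0
lambda_ref = wavelengths[0]                          # reference wavelength

def phase_delay(height, wavelength, n_obj):
    """Absolute phase delay of a transparent object of thickness profile `height`."""
    return 2 * np.pi * (n_obj - n_air) * height / wavelength

# Dimensionless relative frequencies linking each channel's absolute phase
# to the reference channel (mu[0] == 1 by construction):
mu = (n_object - n_air) * lambda_ref / ((n_object[0] - n_air) * wavelengths)
```

With this convention, the absolute phase at channel l is approximately mu[l] times the reference-channel absolute phase, which is the link exploited by the multiwavelength reconstruction.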
The wrapped phase is related to the true absolute phase, , as , where is an integer, . The link between the absolute and wrapped phases is conventionally established by the wrapping operator as follows: decomposes the absolute phase into two parts: the fractional part and the integer part defined as . The image and observation modeling are defined as where is a wavefront propagated to the sensor plane, is an image (diffraction pattern) formation operator, i.e., the propagation operator from the object to the sensor plane, including in particular random phase masks used for wavefront modulation, is the propagation distance, is the intensity of the wavefront at the sensor plane, are noisy observations as defined by the generator of random variables corresponding to , and is the number of experiments. The considered multiwavelength phase retrieval problem consists of reconstructing and from the observations provided that , , and are known. If the phases , the problem is trivial, as estimates of and can be found by processing the data separately for each with any phase retrieval algorithm applicable to complex-valued objects, in particular the SPAR algorithm.29 However, if the phases go beyond the range , the problem becomes nontrivial and challenging, since phase unwrapping must be embedded in the multiwavelength phase retrieval. We focus on the reconstruction of the absolute phase from a set of single-wavelength experiments produced in a successive manner and registered by a wide-band gray-scale CCD sensor, instead of using all wavelengths simultaneously with CFA and demosaicing processing, as is done in the WM algorithm derived in Ref. 26. The proposed WD algorithm demonstrates faster convergence than the WM algorithm,26 but makes the experimental realization more complex, as it requires separate experiments and measurements for each wavelength.
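The wrapping operator and the fractional/integer decomposition can be sketched as follows; this is a minimal illustration assuming the conventional wrapping interval [-pi, pi), and the function names are ours:

```python
import numpy as np

def wrap(phi):
    """Wrapping operator W: maps an absolute phase into [-pi, pi)."""
    return np.mod(phi + np.pi, 2 * np.pi) - np.pi

def integer_part(phi_abs):
    """Integer number k of 2*pi jumps: phi_abs = wrap(phi_abs) + 2*pi*k."""
    return np.round((phi_abs - wrap(phi_abs)) / (2 * np.pi)).astype(int)
```

For example, an absolute phase of 2*pi + 0.5 rad wraps to 0.5 rad with integer part k = 1; recovering k from intensity-only data is exactly the unwrapping problem embedded in the multiwavelength retrieval.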
A single wavelength is used for each experiment, and the obtained observations are used jointly for reconstruction of and . Note also that the proposed approach assumes that the refractive indexes of the object are known for the considered set of wavelengths. Only in this case can the problem of absolute phase retrieval and contouring be resolved.

2.2. Noisy Observation

For noise simulation, we use independent Poisson random variables, since a measurement process in optics amounts to counting the photons hitting the sensor's elements (e.g., Ref. 30). The presented form of the noisy observations corresponds to Refs. 26 and 27. The probability that a random Poissonian variable of mean value takes a given non-negative integer value is given as where is the intensity of the wavefront at the pixel . The parameter in Eq. (6) can be interpreted as an exposure time and as the sensitivity of the sensor with respect to the input radiation. Defining the observation signal-to-noise ratio (SNR) as the ratio between the square of the mean and the variance of , we have . Thus, the relative noisiness of observations becomes stronger as and approaches zero when . The latter case corresponds to the noiseless scenario: with probability equal to 1. Another useful noise characteristic is the mean number of photons per pixel. It is defined as , where is the number of sensor pixels. As in the case of SNR, smaller values of lead to smaller , i.e., to noisier observations .

3. Development of Algorithm

We consider the problem of absolute phase retrieval as an estimation of the object wavefront from noisy observations . The problem is challenging mainly due to the periodic nature of the likelihood function with respect to the phase and the nonlinearity of the observation model. Following Ref.
26 and using the same notation, the maximum likelihood approach leads to the criterion function where are regularization parameters and stands for the Hadamard norm. Here, we introduce invariant additive errors , which model the invariant phase errors appearing for each wavelength due to the intrinsic ambiguity of the phase retrieval problem with respect to such errors. The criterion in Eq. (7) is minimized with respect to , , , and , as well as with respect to the phase shifts . The algorithm is composed of the following steps.
Minimization of , with respect to and the phase shifts , is implemented as minimization on grids of these variables. The derivation of this algorithm and its details can be found in Ref. 26, where it was done for the WM optical setup. This algorithm is named absolute phase reconstruction (APR), and it is shown in step 4 of Table 1.

Table 1. WD phase retrieval algorithm.
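The grid minimization behind the APR step can be illustrated loosely as follows. This is not the paper's exact maximum-likelihood criterion (which is derived in Ref. 26); it only sketches, under our own assumptions and naming, the idea of selecting, on a grid of candidate absolute phase values, the candidate whose per-channel wrapped versions best match the measured multiwavelength wrapped phases:

```python
import numpy as np

def wrap(phi):
    """Wrap a phase into [-pi, pi)."""
    return np.mod(phi + np.pi, 2 * np.pi) - np.pi

def apr_pointwise(wrapped, mu, grid):
    """For one pixel: pick the absolute phase on `grid` whose wrapped
    per-channel phases mu[l]*phi best match the measurements `wrapped`.
    wrapped: shape (L,); mu: shape (L,); grid: shape (G,)."""
    pred = wrap(np.outer(grid, mu))              # (G, L) predicted wrapped phases
    err = np.abs(wrap(pred - wrapped[None, :]))  # circular mismatch per channel
    return grid[np.argmin(err.sum(axis=1))]

# Example: recover an absolute phase of 20 rad, far beyond [-pi, pi):
mu = np.array([1.0, 0.85, 0.66])   # assumed relative frequencies
wrapped = wrap(mu * 20.0)          # simulated per-channel wrapped measurements
grid = np.linspace(-40.0, 40.0, 8001)
est = apr_pointwise(wrapped, mu, grid)
```

The multiwavelength diversity (distinct mu values) is what makes the minimum essentially unique over a wide phase range, which a single-wavelength measurement cannot provide.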
3.1. Algorithm's Implementation

Using the above solutions, the iterative algorithm is developed with the structure shown in Table 1. The initialization by the complex-valued is obtained from the observations by the SPAR algorithm28 separately for each wavelength. The main iterations start from the forward propagation (step 1), followed by the amplitude update for at step 2. The operator derived in Ref. 28 is defined as , which means . Here, the ratio denotes that the variables and have identical phases. The amplitude is calculated as The backpropagation is realized in step 3, where the operator is defined by Eq. (9). The absolute phase reconstruction from the wrapped phases of is produced in step 4 by the APR algorithm.26 The obtained amplitude and phase are updated at step 5. Steps 3 and 4 are completed by block-matching 3-D (BM3D) filtering.29 In step 3, it is the filtering of the complex-valued , produced separately for the wrapped phase and the amplitude of . In step 4, this filtering is applied to the absolute phase . These BM3D filters are derived from groupwise sparsity priors for the filtered variables. This technique is based on a Nash equilibrium formulation for phase retrieval instead of the more conventional constrained optimization with a single criterion function, as in Eq. (7). We do not show this derivation here, as it is quite similar to that developed in Ref. 28. The sparsity rationale assumes that there is a transform of the image/signal such that it can be represented with a small number of transform coefficients or, in slightly different terms, with a small number of basis functions.31 This idea is confirmed and supported by the great success of many sparsity-based techniques developed for image/signal processing problems. Within sparse representation theory, a family of BM3D algorithms has been developed in which both the idea of grouping similar patches and the transform design are taken into consideration.
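The amplitude update of step 2 keeps the phase of the propagated wavefront and refreshes its amplitude from the observations. The paper's operator (derived in Ref. 28) accounts for the Poisson statistics; the simplified GS-style replacement below is only an illustrative assumption, with `chi` as our name for the sensor sensitivity parameter:

```python
import numpy as np

def amplitude_update(u_sensor, z, chi):
    """Keep the phase of the propagated wavefront u_sensor and replace its
    amplitude by the square root of the (scaled) photon counts z.
    Simplified GS-style stand-in for the Poisson-aware update of the paper."""
    # exp(1j*angle(u)) is the unit-amplitude phase factor of u.
    return np.sqrt(z / chi) * np.exp(1j * np.angle(u_sensor))
```

For instance, a propagated value 2*exp(1j*0.3) with observed count z = 9 and chi = 1 is mapped to 3*exp(1j*0.3): the amplitude follows the measurement, the phase is preserved.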
This type of algorithm, proposed initially for image denoising29 and modified for various problems, demonstrates state-of-the-art performance.32 The details of BM3D as an advanced image filter can be seen in Ref. 29. In step 3 of the proposed algorithm, , BM3D is applied to the complex-valued variables . It is implemented in this paper as independent filtering of the amplitude and the wrapped phase ; here , thus the updated complex-valued is calculated as . In step 4 of the proposed algorithm, , BM3D is applied for filtering of the real-valued variable . In the presented experiments, the parameters of the algorithm are fixed for all tests: , where is the parameter of the Poissonian distribution, . The parameters of the BM3D filters can be seen in Ref. 28. The algorithm presented in Table 1 differs from the WM algorithm26 in steps 2 and 3. The main iterations in Ref. 26 also start from the forward propagation, followed by the amplitude update and the interpolation for the CFA sensor pixels where there are no observations (step 2). In the WD algorithm, there are no such updates and interpolation. In step 3 in Ref. 26, the wavefront pixels not given in the observations preserve the values obtained in step 2, which are used for backpropagation. This manipulation of the wavefront reconstruction for the RGB pixels not given in the observations forms the principal difference from the WD algorithm presented here in Table 1. The other steps of the WM and WD algorithms, concerning filtering and forward and backward propagation, are identical. Comparing the presented WD algorithm with that developed in Ref. 26 for the WM setup, we may note that faster convergence and better accuracy are obtained for WD, but it requires a larger number of observations, because each experiment gives the data of a single wavelength only, while in WM we obtain the observations for all wavelengths simultaneously.
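The complex-domain filtering of step 3 splits the wavefront into amplitude and wrapped phase, filters each independently, and reassembles the complex field. In the sketch below, a simple periodic box blur is a crude stand-in for the BM3D filter of the paper (an assumption made only to keep the example self-contained); smoothing the phase through its complex exponential respects the wrapping:

```python
import numpy as np

def box_blur(a, size=3):
    """Periodic box blur; a crude stand-in for the BM3D filter."""
    out = np.zeros(a.shape, dtype=np.result_type(a, float))
    k = size // 2
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out = out + np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out / size**2

def filter_complex(u, size=3):
    """Filter amplitude and wrapped phase independently, then reassemble."""
    amp = box_blur(np.abs(u), size)
    # Smoothing exp(1j*phase) instead of the raw phase avoids 2*pi-jump artifacts.
    phase = np.angle(box_blur(np.exp(1j * np.angle(u)), size))
    return amp * np.exp(1j * phase)
```

A constant field passes through unchanged, while noisy amplitude and phase fluctuations are attenuated; BM3D plays this denoising role in the actual algorithm, with far better edge preservation.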
4. Simulation Tests

4.1. Proposed Setup

In simulation, we model a lensless optical system (Fig. 1), where a thin transparent phase object is illuminated by monochromatic three-color (RGB) coherent light beams from lasers or LEDs. The wavelengths are , with the corresponding refractive indexes [1.528, 1.519, 1.515] taken for BK7 optical glass. The reference wavelength is ; then the relative frequencies take the values . The pixel sizes of the CMOS camera and SLM are 1.4 and , respectively. The distance between the object and the CMOS camera is equal to 5 mm. The free propagation of the wavefronts to the sensor is given by the Rayleigh–Sommerfeld model with the transfer function defined through the angular spectrum (AS).33 For proper numerical AS propagation without aliasing effects, zero-padding of the object must be applied. Its size is obtained from the inequality34 which binds the propagation distance and the number of pixels in one dimension of the zero-padded object for given values of the sensor pixel size and . For the distance , , , and object , the zero-padded object has a support of . The intensities of the light beams registered on the sensor are calculated as , , . Here, denotes the AS propagation operator and are the modulation phase masks inserted before the object and pixelated as the object; "∘" stands for the pixelwise multiplication of the object and the phase masks. These phase masks enable strong diffraction of the wavefield and are introduced in order to achieve the phase diversity sufficient for reconstruction of the complex-valued object from intensity measurements. As described in Ref. 33, we use Gaussian random phase masks. Thus, in our simulations, the propagation operator in Eq. (3) is implemented as a combination of the AS propagation and the modulation phase mask .
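The simulation pipeline (phase-mask modulation, AS propagation, Poissonian photon counting) can be sketched as follows. This is a minimal implementation under the stated angular-spectrum model; the numerical parameter values, the mask statistics, and the name `chi` for the sensitivity parameter are our assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def angular_spectrum(u0, wavelength, z, dx):
    """Angular-spectrum (Rayleigh-Sommerfeld) free-space propagation of a
    sampled field u0 over distance z; dx is the pixel pitch. The field is
    assumed to be zero-padded already, per the inequality in the text."""
    n, m = u0.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(m, d=dx), np.fft.fftfreq(n, d=dx))
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Transfer function; evanescent components (arg <= 0) are suppressed.
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * (z / wavelength) * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Illustrative parameters (assumptions, not the paper's exact values):
n, dx, wl, z_dist, chi = 128, 1.4e-6, 633e-9, 5e-3, 100.0
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
obj = np.exp(-(X**2 + Y**2) / (2 * (10 * dx) ** 2)).astype(complex)  # smooth test object
mask = np.exp(1j * rng.normal(0.0, np.pi / 2, size=obj.shape))       # Gaussian random phase mask
u_sensor = angular_spectrum(mask * obj, wl, z_dist, dx)              # coded diffraction field
counts = rng.poisson(chi * np.abs(u_sensor) ** 2).astype(float)      # Poissonian photon counts
```

Since the mask has unit amplitude and, for these parameters, no spatial frequency reaches the evanescent cutoff, the propagation conserves energy; the randomness enters only through the photon counting.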
4.2. Reconstruction Results

The illustrating reconstructions are presented for two phase objects with invariant amplitudes equal to 1 and the phases: a Gaussian distribution () and the US Air Force (USAF) resolution test target (). The absolute and wrapped phases of these test objects are shown in Fig. 2. The Gaussian and USAF phases are very different: the first has a smooth continuous shape, while the second is discontinuous and binary. Both phases are taken with a high peak value equal to rad, which corresponds to about 30 reference wavelengths in variations of the profile . As a result of this high peak value of the absolute phases, the corresponding wrapped phases are very complex, with overlapping fringes for the Gaussian phase and a lack of height information in the USAF test phase. Due to this complexity, unwrapping is not possible using single-frequency wrapped phases only. We show that the proposed multiwavelength WD algorithm is quite successful and is able to reconstruct the absolute phase even from very noisy data. We demonstrate the performance of the WD algorithm for very difficult scenarios with very noisy Poissonian observations. The noisiness of the observations is characterized by the SNR and by the mean number of photons per sensor pixel, . The accuracy of the object reconstruction is characterized by the relative root-mean-square error (RRMSE) criterion, calculated as the RMSE divided by the root of the mean square power of the signal. Figure 3 shows the performance of the WD algorithm with respect to different noise levels characterized by the parameter . The RRMSE curves for the Gaussian and USAF phases demonstrate a similar behavior and decrease for growing values, but the RRMSE curve for the USAF object decreases more sharply and takes values smaller than 0.1 at , while the RRMSE curve for the Gaussian phase takes a close value only at .
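The RRMSE criterion described above follows directly from its definition; a minimal sketch:

```python
import numpy as np

def rrmse(estimate, true):
    """Relative RMSE: RMSE of the error, normalized by the root of the
    mean square power of the true signal."""
    return np.sqrt(np.mean((estimate - true) ** 2)) / np.sqrt(np.mean(true ** 2))
```

For example, an estimate that is off by 10% at every pixel of a unit-amplitude signal yields an RRMSE of 0.1, which matches the accuracy threshold used in the discussion of Fig. 3.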
Nevertheless, in both cases, the reconstructions are nearly perfect even for very noisy observed data with SNR values as low as 3.8 and 6.5 dB and very small photon numbers and 1.87, respectively. RRMSEs for the Gaussian phase are shown in Fig. 4 as functions of the number of experiments and the SNR. The number of experiments is in the range of 1 to 10, and the SNR ranges from 1 (noisiest case) to 25 dB (noiseless case). Nearly horizontal (dark blue) areas correspond to high-accuracy reconstructions with small values of RRMSE; for the other areas, the RRMSE values are much higher and the accuracy is not as good. For 50 iterations (left image), high accuracy can be achieved starting from , even for very noisy data with . For numbers of experiments smaller than , the algorithm fails to converge even for low noise levels, due to insufficient diversity of the observed diffraction patterns. The smallest number of experiments that can be used for reconstruction is . However, even for , good results can be obtained only after 100 iterations (right image). It follows from Fig. 4 that some improvement in accuracy can be achieved at the price of a larger number of experiments and a larger number of iterations. In comparison, with the WM algorithm26 such small RRMSE values can be obtained only after a larger number of iterations, about , mainly because 75%, 50%, and 75% of the CFA pixels lack intensity measurements for the red, green, and blue wavelengths, respectively. This information is recovered by the WM algorithm, but at the price of a larger number of iterations. In what follows, we provide the images of the reconstructed absolute phase obtained for and the iteration number .
Figures 5 and 6 show the absolute phase reconstructions (3-D/2-D images) obtained by the WD algorithm and by the SPAR algorithm, which reconstructs the wrapped phases for the separate wavelengths, followed by phase unwrapping with the PUMA algorithm.18 The conditions of the experiments are , for the Gaussian object and , for the USAF object. The WD algorithm demonstrates a strong ability to reconstruct the absolute phases, while the single-wavelength-based approach completely fails. The accuracy of the WD reconstruction is very high despite the high level of noise. The wrapped phase pattern for such a high peak value of the object phases is very complex and irregular (see Fig. 2), so that it is not possible to unwrap it by modern 2-D unwrapping algorithms, but the proposed algorithm is able to resolve the problem even in such a complex case. It is an especially challenging task to reconstruct objects such as USAF with big differences between adjacent pixels exceeding , but the WD algorithm successfully reconstructs both objects with high reconstruction quality. For the simulations, we use MATLAB R2016b on a computer with 32 GB of RAM and a 3.40 GHz Intel® Core™ i7-3770 CPU. The computational complexity of the algorithm is characterized by the processing time. For one iteration, , and images zero-padded to , this time equals 12 s.

5. Conclusion

The multiwavelength absolute phase retrieval and surface contouring from noisy intensity observations is considered. The observations are recorded by a broadband monochromatic sensor in a successive manner for each wavelength in the WD optical setup. The maximum likelihood criterion used in the developed variational derivation of the phase retrieval algorithm defines the general intention to reach statistically optimal estimates. Phase retrieval is an ill-posed inverse problem where the observation noise is amplified and transferred to the phase and amplitude as variables of optimization.
The sparse modeling enables regularization of this inverse problem and efficient suppression of these random errors by BM3D filtering of the phase and amplitude. The efficiency of the developed algorithm is demonstrated by simulation tests for the coded diffraction pattern WD scenario.

Disclosures

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Acknowledgments

This work was supported by the Academy of Finland (Project No. 287150, 2015-2019), the Russian Ministry of Education and Science (project within the state mission for institutions of higher education, agreement 3.1893.2017/4.6), and Horizon 2020 TWINN-2015 (Grant No. 687328 - HOLO).

References

1. P. F. Almoro et al., "Enhanced wavefront reconstruction by random phase modulation with a phase diffuser," Opt. Lasers Eng. 49(2), 252–257 (2011). https://doi.org/10.1016/j.optlaseng.2010.09.012
2. E. Candes, X. Li, and M. Soltanolkotabi, "Phase retrieval from coded diffraction patterns," Appl. Comput. Harmon. Anal. 39(2), 277–299 (2015). https://doi.org/10.1016/j.acha.2014.09.004
3. P. F. Almoro et al., "Enhanced intensity variation for multiple-plane phase retrieval using a spatial light modulator as a convenient tunable diffuser," Opt. Lett. 41(10), 2161–2164 (2016). https://doi.org/10.1364/OL.41.002161
4. L. Camacho et al., "Quantitative phase microscopy using defocusing by means of a spatial light modulator," Opt. Express 18(7), 6755–6766 (2010). https://doi.org/10.1364/OE.18.006755
5. B. C. Kress and P. Meyrueis, Applied Digital Optics: From Micro-Optics to Nanophotonics, John Wiley & Sons Ltd., United Kingdom (2009).
6. R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik 35(2), 237–246 (1972).
7. J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt. 21(15), 2758–2769 (1982). https://doi.org/10.1364/AO.21.002758
8. I. A. Shevkunov, N. S. Balbekin, and N. V. Petrov, "Comparison of digital holography and iterative phase retrieval methods for wavefront reconstruction," Proc. SPIE 9271, 927128 (2014). https://doi.org/10.1117/12.2071469
9. C. Gaur, B. Mohan, and K. Khare, "Sparsity-assisted solution to the twin image problem in phase retrieval," J. Opt. Soc. Am. A 32(11), 1922–1927 (2015). https://doi.org/10.1364/JOSAA.32.001922
10. C. Guo, S. Liu, and J. T. Sheridan, "Iterative phase retrieval algorithms I: optimization," Appl. Opt. 54(15), 4698–4708 (2015). https://doi.org/10.1364/AO.54.004698
11. J. R. Fienup, "Phase-retrieval algorithms for a complicated optical system," Appl. Opt. 32(10), 1737–1746 (1993). https://doi.org/10.1364/AO.32.001737
12. H. H. Bauschke, P. L. Combettes, and D. R. Luke, "Phase retrieval, error reduction algorithm, and Fienup variants: a view from convex optimization," J. Opt. Soc. Am. A 19(7), 1334–1345 (2002). https://doi.org/10.1364/JOSAA.19.001334
13. E. J. R. Pauwels et al., "On Fienup methods for sparse phase retrieval," IEEE Trans. Signal Process. 66(4), 982–991 (2018). https://doi.org/10.1109/TSP.2017.2780044
14. C. Falldorf et al., "Reduction of speckle noise in multiwavelength contouring," Appl. Opt. 51(34), 8211–8215 (2012). https://doi.org/10.1364/AO.51.008211
15. C. Towers, D. Towers, and J. Jones, "Absolute fringe order calculation using optimised multi-frequency selection in full-field profilometry," Opt. Lasers Eng. 43(7), 788–800 (2005). https://doi.org/10.1016/j.optlaseng.2004.08.005
16. T. Tahara et al., "Multiwavelength digital holography and phase-shifting interferometry selectively extracting wavelength information: phase-division multiplexing (PDM) of wavelengths," in Holographic Materials and Optical Systems, Ch. 9, IntechOpen Limited, United Kingdom (2017).
17. D. Claus, G. Pedrini, and W. Osten, "Iterative phase retrieval based on variable wavefront curvature," Appl. Opt. 56(13), F134–F137 (2017). https://doi.org/10.1364/AO.56.00F134
18. J. Bioucas-Dias and G. Valadão, "Multifrequency absolute phase estimation via graph cuts," in European Signal Processing Conf., 1389–1393 (2009).
19. P. Bao et al., "Phase retrieval using multiple illumination wavelengths," Opt. Lett. 33(4), 309–311 (2008). https://doi.org/10.1364/OL.33.000309
20. N. V. Petrov, V. G. Bespalov, and A. A. Gorodetsky, "Phase retrieval method for multiple wavelength speckle patterns," Proc. SPIE 7387, 73871T (2010). https://doi.org/10.1117/12.871433
21. N. V. Petrov et al., "Image reconstruction using measurements in volume speckle fields formed by different wavelengths," Proc. SPIE 7907, 790718 (2011). https://doi.org/10.1117/12.876151
22. M. Sanz et al., "Improved quantitative phase imaging in lensless microscopy by single-shot multi-wavelength illumination using a fast convergence algorithm," Opt. Express 23(16), 21352–21365 (2015). https://doi.org/10.1364/OE.23.021352
23. K. Falaggis, D. P. Towers, and C. E. Towers, "Algebraic solution for phase unwrapping problems in multiwavelength interferometry," Appl. Opt. 53(17), 3737–3747 (2014). https://doi.org/10.1364/AO.53.003737
24. W. Wang et al., "Maximum likelihood estimation based robust Chinese remainder theorem for real numbers and its fast algorithm," IEEE Trans. Signal Process. 63(13), 3317–3331 (2015). https://doi.org/10.1109/TSP.2015.2413378
25. L. Xiao, X.-G. Xia, and H. Huo, "Towards robustness in residue number systems," IEEE Trans. Signal Process. 65(6), 1497–1510 (2017). https://doi.org/10.1109/TSP.2016.2641398
26. V. Katkovnik et al., "Multiwavelength absolute phase retrieval from noisy diffractive patterns: wavelength multiplexing algorithm," Appl. Sci. 8(5), 719–738 (2018). https://doi.org/10.3390/app8050719
27. V. Y. Katkovnik et al., "Multiwavelength surface contouring from phase-coded diffraction patterns," Proc. SPIE 10677, 106771B (2018). https://doi.org/10.1117/12.2306127
28. V. Katkovnik, "Phase retrieval from noisy data based on sparse approximation of object phase and amplitude," (2017).
29. K. Dabov et al., "Image denoising by sparse 3-D transform-domain collaborative filtering," IEEE Trans. Image Process. 16(8), 2080–2095 (2007). https://doi.org/10.1109/TIP.2007.901238
30. F. Soulez et al., "Proximity operators for phase retrieval," Appl. Opt. 55(26), 7412–7421 (2016). https://doi.org/10.1364/AO.55.007412
31. S. G. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," IEEE Trans. Pattern Anal. Mach. Intell. 11(7), 674–693 (1989). https://doi.org/10.1109/34.192463
32. C. A. Metzler, A. Maleki, and R. G. Baraniuk, "BM3D-PRGAMP: compressive phase retrieval based on BM3D denoising," in IEEE Int. Conf. on Image Processing, 2504–2508 (2016).
33. V. Katkovnik et al., "Computational super-resolution phase retrieval from multiple phase-coded diffraction patterns: simulation study and experiments," Optica 4(7), 786–794 (2017). https://doi.org/10.1364/OPTICA.4.000786
34. N. V. Petrov, M. V. Volkov, and V. G. Bespalov, "Phase retrieval of THz radiation using set of 2D spatial intensity measurements with different wavelengths," Proc. SPIE 8281, 82810J (2012). https://doi.org/10.1117/12.909888
Biography

Vladimir Katkovnik is a senior research fellow at Tampere University of Technology. His research interests include stochastic signal processing, linear and nonlinear filtering, nonparametric estimation, imaging, nonstationary systems, and time-frequency analysis. He has published six books and over 300 papers. He is a member of SPIE.

Igor Shevkunov received his PhD in 2013 and is the author of more than 30 papers. His main research interests are digital holography, interferometry, atomic and molecular physics, and lasers.

Nikolay V. Petrov is a senior researcher and associate professor in the Department of Photonics and Optoinformatics, ITMO University, Russia. He received his PhD in 2011 and is the author of more than 50 papers. His main research interests are holography, interferometry, femtosecond optics, and terahertz imaging.