Deep-turbulence wavefront sensing using digital-holographic detection in the off-axis image plane recording geometry
31 October 2016
Abstract
This paper develops wave-optics simulations which explore the estimation accuracy of digital-holographic detection for wavefront sensing in the presence of distributed-volume or “deep” turbulence and detection noise. Specifically, the analysis models spherical-wave propagation through varying deep-turbulence conditions along a horizontal propagation path and formulates the field-estimated Strehl ratio as a function of the diffraction-limited sampling quotient and signal-to-noise ratio. Such results will allow the reader to assess the number of pixels, pixel field of view, pixel-well depth, and read-noise standard deviation needed from a focal-plane array when using digital-holographic detection in the off-axis image plane recording geometry for deep-turbulence wavefront sensing.

1.

Introduction

Digital-holographic detection shows distinct potential for applications that involve wavefront sensing in the presence of deep turbulence. As shown in Fig. 1, the use of digital-holographic detection in the off-axis image plane recording geometry (IPRG) provides access to an estimate of the amplitude and wrapped phase (i.e., the complex field) that exist in the exit-pupil plane of the imaging system. From the complex-field estimate, we can then pursue a multitude of applications such as atmospheric characterization,1 free-space laser communications,2 and adaptive-optics phase compensation.3

Fig. 1

A description of digital-holographic detection in the off-axis IPRG. Here, a highly coherent master-oscillator (MO) laser is split into two optical trains. The first optical train actively illuminates an unresolved cooperative object. Analogously, the second optical train creates an off-axis local oscillator (LO), so that tilted-spherical-wave illumination is incident on an FPA. The spherical-wave reflections from an unresolved cooperative object then back propagate through deep-turbulence conditions, and upon being imaged onto the FPA coherently interfere with the tilted-spherical-wave illumination from the off-axis LO. In turn, the recorded interference pattern on the FPA is known as a digital hologram, and upon taking a 2-D IFFT, we can obtain an estimate of the wrapped phase (and amplitude) that exists in the exit-pupil plane of the imaging system.


The published literature often makes use of digital-holographic detection in the off-axis pupil plane or on-axis phase shifting recording geometries;4 however, the off-axis IPRG shown in Fig. 1 offers an attractive combination of simplicity and functionality.5 For instance, when considering digital-holographic detection for applications that involve deep-turbulence wavefront sensing, the off-axis IPRG allows for the following multifunction capabilities.

  • Incoherent imaging through passive illumination of an object.

  • Coherent imaging through active illumination of an object.

  • Digital-holographic detection through the interference of a signal with a reference.

  • Estimation of the amplitude and wrapped phase via a two-dimensional (2-D) inverse fast Fourier transform (IFFT) of the hologram irradiance recorded on the focal-plane array (FPA).

From a beam-control standpoint,6 the multifunction capabilities listed above allow for a robust user interface which is not limited to wavefront sensing in the presence of an unresolved cooperative object (cf. Fig. 1). In practice, digital-holographic detection allows for the estimation of the complex field in the presence of an extended noncooperative object via speckle averaging and image sharpening algorithms or the angular diversity created by using multiple transmitters and receivers.7–18 This versatility allows for long-range imaging,19 three-dimensional imaging,20 laser radar,21 and synthetic-aperture imaging.22 In general, the applications are abundant.23,24

With wavefront-sensing applications in mind, the presence of deep turbulence tends to be the “Achilles’ heel” to modern-day solutions [e.g., the Shack–Hartmann wavefront sensor (WFS),25 which provides access to localized wavefront slope estimates]. This is said because coherent-light propagation through deep turbulence causes scintillation, which manifests as time-varying constructive and destructive interference between the object and receiver planes. The log-amplitude variance, which is also referred to as the Rytov number, gives a measure for the strength of the scintillation experienced by the coherent light. As the log-amplitude variance grows above 0.25 (for a spherical wave), total-destructive interference gives rise to branch points in both the coherent light transmitted to the object and the coherent light received from the object. These branch points add a rotational component to the phase function that traditional-least-squares phase reconstruction algorithms cannot account for within the analysis. As such, the rotational component is often referred to as the “hidden phase” due to the foundational work of Fried.26

In converting local wavefront slope estimates into unwrapped phase, the hidden phase gets mapped to the null space of traditional-least-squares phase reconstruction calculations. In turn, the unwrapped phase (i.e., the irrotational component) does not contain the branch points and associated branch cuts, which are unavoidable 2π phase discontinuities within the phase function.27 Note that branch-point-tolerant phase reconstruction algorithms do exist within the published literature;28–31 however, the performance of these algorithms needs to be quantified in hardware.32

In addition to causing scintillation, the horizontal, low-altitude, and long-range propagation paths that are characteristic of deep-turbulence conditions can also lead to increased extinction. This outcome results in reduced transmittance due to molecular and aerosol absorption and scattering all along the propagation path.33,34 In turn, we can concisely say that scintillation and extinction simply lead to low signal-to-noise ratios (SNRs) when performing deep-turbulence wavefront sensing. This is said because scintillation and extinction result in total-destructive interference and light-efficiency losses, respectively, over the field of view (FOV) of the WFS.

Provided enough signal, there are interferometric wavefront-sensing techniques that perform well in the presence of deep turbulence (e.g., the point-diffraction and self-referencing interferometers,35,36 which create a reference by amplitude splitting and spatially filtering the received signal); however, in using these techniques, we cannot realistically approach a shot-noise-limited detection regime. In turn, digital-holographic detection offers a distinct way forward to combat the low SNRs caused by scintillation and extinction. In using digital-holographic detection, we can set the strength of the reference so that it boosts the signal above the read-noise floor of the FPA.

This paper explores the estimation accuracy of digital-holographic detection in the off-axis IPRG for wavefront sensing in the presence of deep turbulence and detection noise. As shown in Fig. 1, the analysis uses an ideal point-source beacon in the object plane to represent the active illumination of an unresolved cooperative object. The resulting spherical wave propagates along a horizontal propagation path through the deep-turbulence conditions that are of interest in this paper. In what follows, Sec. 2 reviews the setup and exploration of the problem space described above in Fig. 1. Section 3 then provides results with discussion, and Sec. 4 concludes this paper. Before moving on to the next section, it is important to note that much of the simulation framework used in this paper originates from an earlier conference paper by Spencer et al.37 It is our belief that this paper greatly extends the work contained in Ref. 37 by including the deleterious effects of detection noise within the analysis.

2.

Setup and Exploration

This section discusses the setup and exploration needed for a series of computational wave-optics experiments which identify the performance of digital-holographic detection in the off-axis IPRG for wavefront sensing in the presence of deep turbulence and detection noise. The analysis uses many of the principles taught by Schmidt and Voelz in relatively recent SPIE Press publications.38,39 In addition, the analysis uses MATLAB® with the help of AOTools and WaveProp.40,41 The Optical Sciences Company (tOSC) created these robust MATLAB® toolboxes specifically for wave-optics simulations of this nature.

As shown in Fig. 1, the goal for the following analysis is to model digital-holographic detection in the off-axis IPRG for the purposes of deep-turbulence wavefront sensing. With Fig. 1 in mind, we need to further define the experimental parameter space. To help orient the reader, Fig. 2 pictorially shows the various planes of interest within the analysis. Note that the entrance-pupil plane effectively collimates the propagated light from the object plane, whereas the exit-pupil plane effectively focuses the propagated light to form the image plane at focus.

Fig. 2

A description of the experimental parameter space used within the computational wave optics experiments.


2.1.

Model Setup and Exploration

Provided Fig. 2 and Appendix A, we can determine the 2-D Fourier transformation of the hologram photoelectron density DH(x2,y2) as

(1)

\tilde{D}_H\!\left(\frac{x_1}{\lambda f},\frac{y_1}{\lambda f}\right)=\frac{\eta T}{h\nu}\,w_x\,\mathrm{sinc}\!\left(\frac{w_x}{\lambda f}x_1\right)w_y\,\mathrm{sinc}\!\left(\frac{w_y}{\lambda f}y_1\right)\left[\frac{1}{\lambda^2 f^2}\,U_S(x_1,y_1)\ast U_S^{*}(x_1,y_1)+|A_R|^2\,\delta(x_1)\,\delta(y_1)+\frac{A_R^{*}e^{jkf}}{j\lambda f}\,U_S(x_1-x_R,y_1-y_R)+\frac{A_R e^{-jkf}}{-j\lambda f}\,U_S^{*}(x_1+x_R,y_1+y_R)\right]\ast\mathrm{comb}\!\left(\frac{x_s}{\lambda f}x_1\right)\mathrm{comb}\!\left(\frac{y_s}{\lambda f}y_1\right)\ast\frac{N x_s}{\lambda f}\,\mathrm{sinc}\!\left(\frac{N x_s}{\lambda f}x_1\right)\frac{M y_s}{\lambda f}\,\mathrm{sinc}\!\left(\frac{M y_s}{\lambda f}y_1\right),
in units of photoelectrons (pe). This result is remarkably physical, as the sampling theorem dictates that a sampled function becomes periodic upon finding its spectrum.42,43 Through 2-D convolution with the separable comb functions and the convolution-sifting property of the impulse function, the terms contained within square brackets in Eq. (1) are repeated at intervals of λf/xs and λf/ys along the x and y axes, respectively. Thus, the final 2-D convolution with the separable narrow sinc functions serves to smooth out these repeated terms, whereas the amplitude modulation with the separable broadened sinc functions serves to dampen out these repeated terms.

To help simplify the analysis to a case that we can easily simulate using N×N computational grids, let us assume that the FPA has adjacent square pixels, so that xs=ys=wx=wy=wp. In so doing, we can rewrite Eq. (1) in terms of the diffraction-limited sampling quotient QI, where

(2)

Q_I=\frac{\lambda f}{D_1\,w_p}.
Physically, there are multiple ways to think about the relationship given in Eq. (2). One way is to say that the diffraction-limited sampling quotient QI is a measure of the number of FPA pixels across the diffraction-limited half width of the incoherent point-spread function (PSF). Remember that for linear shift-invariant imaging systems, the incoherent PSF is the irradiance associated with an imaged point source [i.e., the squared magnitude of Eq. (25) in Appendix A].38 Another way to think about the diffraction-limited sampling quotient, QI, is to say that it is a measure of the number of pixel FOVs, wp/f, contained within the diffraction angle, λ/D1, assuming small angles. In turn, the relationship given in Eq. (2) allows us to vary the sampling with the FPA pixels.
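As a quick numerical check on Eq. (2), the sketch below evaluates QI directly; only D1 = 30 cm and λ = 1 μm come from this paper, while the focal length and pixel pitch are assumed values chosen purely for illustration:

```python
# Only D1 = 30 cm and lambda = 1 um come from the paper; the focal
# length and pixel pitch are assumed values for illustration.
wavelength = 1.0e-6   # m
focal_length = 1.5    # m (assumed)
D1 = 0.30             # m, exit-pupil diameter
w_p = 2.5e-6          # m, pixel pitch (assumed)

# Eq. (2): Q_I = lambda * f / (D1 * w_p)
Q_I = wavelength * focal_length / (D1 * w_p)

# Equivalent view: diffraction angle (lambda/D1) per pixel FOV (w_p/f)
assert abs(Q_I - (wavelength / D1) / (w_p / focal_length)) < 1e-12
print(Q_I)  # ~2.0 for these numbers
```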

Using Eq. (2), we can rewrite Eq. (1) in terms of the diffraction-limited sampling quotient QI, such that

(3)

\tilde{D}_H\!\left(\frac{x_1}{\lambda f},\frac{y_1}{\lambda f}\right)=\frac{\eta T}{h\nu}\,w_p\,\mathrm{sinc}\!\left(\frac{x_1}{Q_I D_1}\right)w_p\,\mathrm{sinc}\!\left(\frac{y_1}{Q_I D_1}\right)\left[\frac{1}{\lambda^2 f^2}\,U_S(x_1,y_1)\ast U_S^{*}(x_1,y_1)+|A_R|^2\,\delta(x_1)\,\delta(y_1)+\frac{A_R^{*}e^{jkf}}{j\lambda f}\,U_S(x_1-x_R,y_1-y_R)+\frac{A_R e^{-jkf}}{-j\lambda f}\,U_S^{*}(x_1+x_R,y_1+y_R)\right]\ast\mathrm{comb}\!\left(\frac{x_1}{Q_I D_1}\right)\mathrm{comb}\!\left(\frac{y_1}{Q_I D_1}\right)\ast\frac{N}{Q_I D_1}\,\mathrm{sinc}\!\left(\frac{N x_1}{Q_I D_1}\right)\frac{N}{Q_I D_1}\,\mathrm{sinc}\!\left(\frac{N y_1}{Q_I D_1}\right).
Here, QID1=λf/wp is the side length of the N×N computational grid in the Fourier plane. Note that as N→∞ [cf. Eq. (37) in Appendix A], we can make use of the convolution-sifting property of the impulse function and neglect the final 2-D convolution in Eq. (3). Accordingly, for large N the smoothing becomes minimal, whereas for small N the smoothing becomes more pronounced. Let us assume that xR=yR=QID1/4, so that the last two terms within the square brackets in Eq. (3) shift diagonally. When QI≥4, the last two terms no longer overlap with the first two terms, which are centered on axis. Correspondingly, when 2≤QI<4, the last two terms are still resolvable within the side length of the N×N computational grid but overlap with the first term. Provided that N is constant, this latter case allows us to obtain more samples across the exit-pupil diameter D1, which in turn minimizes the smoothing caused by the final 2-D convolution in Eq. (3). If the amplitude of the reference is set to be well above the amplitude of the signal (i.e., |AR|≫|AS|), then this functional overlap becomes negligible, a fundamental result obtained in Ref. 37.

Provided Eq. (3), we must use a window function w(x1,y1) to obtain an estimate U^S(x1,y1) of the desired signal complex field US(x1,y1) [cf. Fig. 2 and Eq. (26) in Appendix A]. Specifically,

(4)

\hat{U}_S(x_1,y_1)=w(x_1,y_1)\,\tilde{D}_H\!\left(\frac{x_1}{\lambda f},\frac{y_1}{\lambda f}\right).
In using Eq. (4), we must satisfy Nyquist sampling with the FPA pixels,42 so that the repeated terms within Eq. (3) do not overlap and cause significant aliasing. As such, the Nyquist rate is QID1=λf/wp and the Nyquist interval is 1/(QID1)=wp/(λf) when xR=yR=QID1/4. Assuming that N→∞, QI≥2, |AR|≫|AS|, and

(5)

w(x_1,y_1)=\mathrm{cyl}\!\left[\frac{\sqrt{(x_1-x_R)^2+(y_1-y_R)^2}}{D_1}\right]\Bigg/\left[\frac{\eta T}{h\nu}\,\frac{A_R^{*}e^{jkf}\,w_p^2}{j\lambda f}\,\mathrm{sinc}\!\left(\frac{x_1}{Q_I D_1}\right)\mathrm{sinc}\!\left(\frac{y_1}{Q_I D_1}\right)\right],
Eq. (4) simplifies, such that

(6)

\hat{U}_S(x_1,y_1)\approx U_S(x_1,y_1).
In turn, there is a distinct trade space found in using Eq. (3). We will explore this trade space in the presence of deep turbulence and detection noise in the analysis to come.
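To make this processing chain concrete, the NumPy sketch below (an illustrative stand-in, not the paper's MATLAB/AOTools implementation) walks the full off-axis IPRG pipeline on a synthetic, turbulence-free, noise-free example: it forms an aberrated exit-pupil field, interferes its image-plane transform with a tilted reference whose carrier sits at a quarter of the sampling band (the xR = yR = QID1/4 condition), takes the 2-D IFFT of the hologram, and windows the off-axis cross term. The grid sizes, aberration, and reference strength are all assumptions chosen so that |AR| ≫ |AS| holds:

```python
import numpy as np

N = 256
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x, indexing="xy")

# "Truth" exit-pupil field: circular aperture (diameter N/4 samples)
# with a smooth, assumed aberration phase
pupil = (np.hypot(X, Y) < N // 8).astype(float)
phase = 2.0 * np.sin(2 * np.pi * 3 * X / N + 0.7) * np.cos(2 * np.pi * 2 * Y / N)
U_pupil = pupil * np.exp(1j * phase)

# Image-plane signal field (2-D Fourier transform of the pupil field)
U_img = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U_pupil)))

# Strong off-axis reference: tilted plane wave with its carrier at a
# quarter of the sampling band, and |A_R| >> |A_S|
A_R = 10.0 * np.abs(U_img).max()
ref = A_R * np.exp(1j * 2 * np.pi * (X + Y) / 4)

# Digital hologram (irradiance) and its 2-D IFFT to the Fourier plane;
# mean subtraction removes the on-axis DC term
holo = np.abs(U_img + ref) ** 2
fourier = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(holo - holo.mean())))

def fidelity(a, b):
    # Normalized field overlap, cf. the field-estimated Strehl ratio
    num = np.abs(np.vdot(a, b)) ** 2
    return num / (np.vdot(a, a).real * np.vdot(b, b).real)

# Window the off-axis cross term; both carrier signs are tried because
# the FFT sign convention determines which corner holds U_S vs. U_S*
mask = np.hypot(X, Y) < N // 8
S_F = max(fidelity(U_pupil, np.roll(fourier, (s, s), axis=(0, 1)) * mask)
          for s in (N // 4, -N // 4))
print(S_F)  # close to 1
```

With the strong reference, the windowed cross term reproduces the pupil field almost exactly; the small residual comes from the signal autocorrelation term leaking into the window, which is exactly the functional-overlap effect discussed above.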

Before moving on to the simulation setup and exploration, it is informative to develop a closed-form expression for the analytical SNR. For this purpose, we can approximate the estimated signal power P^S as

(7)

\hat{P}_S\approx\bar{m}_R\,\bar{m}_S,
where

(8)

\bar{m}_R=\frac{\eta T}{h\nu}\,|A_R|^2\,w_p^2
is the mean number of reference photoelectrons detected per pixel and

(9)

\bar{m}_S=\frac{\eta T}{h\nu}\,|A_S|^2\,w_p^2
is the mean number of signal photoelectrons detected per pixel. Now we need to account for the estimated noise power P^N.

Pixel to pixel, the FPA creates photoelectrons via statistically independent (i.e., delta correlated) and zero-mean random processes, so that the variance σ2 is equivalent to the noise power. Here,

(10)

\sigma^2=\bar{m}_S+\bar{m}_R+\bar{m}_B+\sigma_C^2,
where m¯B is the mean number of photoelectrons associated with the background illumination (e.g., from passive illumination from the sun) and σC2 is the variance associated with pixel read noise (i.e., the FPA circuitry). In writing Eq. (10), note that we assume a Poisson-distributed random process for the various sources of illumination that are incident on the FPA. In so doing, the mean number of photoelectrons is equal to the variance of the photoelectrons.44,45 Also note that we assume a Gaussian-distributed random process for the various sources of pixel read noise in the FPA.
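A minimal sketch of this noise model, with per-pixel means chosen only for illustration (the 100-pe read noise matches the value adopted later in Sec. 2.2), confirms that the sampled per-pixel variance matches Eq. (10):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix = 100_000

# Assumed per-pixel means (pe), chosen only for illustration
m_S, m_R, m_B = 50.0, 25_000.0, 0.0
sigma_C = 100.0  # read-noise standard deviation (pe)

# Poisson-distributed shot noise for the illumination sources plus
# Gaussian-distributed read noise, per the assumptions behind Eq. (10)
counts = rng.poisson(m_S + m_R + m_B, n_pix).astype(float)
counts += rng.normal(0.0, sigma_C, n_pix)

# Eq. (10): the per-pixel noise variance is the sum of the Poisson
# means and the read-noise variance
var_pred = m_S + m_R + m_B + sigma_C**2
print(counts.var(), var_pred)  # agree to within sampling error
```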

Provided Eq. (10), the estimated noise power P^N follows from the noise variance σ2 as

(11)

\hat{P}_N=R\,\sigma^2,
where

(12)

R=\frac{\pi}{4\,Q_I^2}
is the ratio of the area associated with the window function w(x1,y1) to the area associated with the side length QID1=λf/wp of the N×N computational grid in the Fourier plane. The analytical SNR then follows from Eqs. (7)–(12) as

(13)

\mathrm{SNR}=\frac{\hat{P}_S}{\hat{P}_N}=\frac{4\,Q_I^2}{\pi}\,\frac{\bar{m}_S\,\bar{m}_R}{\bar{m}_S+\bar{m}_R+\bar{m}_B+\sigma_C^2}.
We will validate the use of this closed-form expression in the simulation setup and exploration to follow.
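Eq. (13) is straightforward to evaluate; the helper below uses the read noise and zero background adopted in Sec. 2.2 as defaults, and illustrates two sanity checks implied by the closed form (near-linearity in m¯S when m¯S is small compared with m¯R + σC², and exact QI² scaling):

```python
from math import pi

def analytical_snr(m_S, m_R, Q_I, m_B=0.0, sigma_C=100.0):
    """Eq. (13): closed-form SNR; defaults use the Sec. 2.2 read noise."""
    return (4 * Q_I**2 / pi) * m_S * m_R / (m_S + m_R + m_B + sigma_C**2)

# Sanity checks implied by the closed form: the SNR is nearly linear in
# m_S when m_S << m_R + sigma_C**2, and scales exactly as Q_I**2.
print(analytical_snr(m_S=1.0, m_R=25e3, Q_I=4))
```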

2.2.

Simulation Setup and Exploration

For all of the computational wave-optics experiments presented throughout this paper, we used N×N computational grids. For example, to simulate the propagation of an ideal point-source beacon through deep-turbulence conditions, we used 4096×4096 grid points and the split-step beam propagation method (BPM).38–41 WaveProp and AOTools made use of a very narrow sinc function with a raised-cosine envelope to simulate an ideal point-source beacon. The sampling of this function and the object-plane side length were automatically set, so that after propagation from the object plane to the entrance-pupil plane, the illuminated region of interest was half the user-defined entrance-pupil plane side length (cf. Fig. 2). Put another way, the simulations satisfied Fresnel scaling [i.e., N=S1S2/(λZ), where S1=16D1 and S2 are the object and entrance-pupil side lengths, respectively]. Altogether, this provided an entrance-pupil plane side length of D1 after cropping out the center 256×256 grid points. As mentioned previously, using ideal thin lenses the entrance-pupil plane effectively collimated the propagated light from the object plane, whereas the exit-pupil plane effectively focused the propagated light to form the image plane at focus (cf. Fig. 2).
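For readers without access to AOTools or WaveProp, a bare-bones split-step BPM can be sketched as follows. This is an illustrative stand-in only, omitting the toolboxes' point-source model, Fresnel-scaling bookkeeping, and boundary treatment; the vacuum step uses the transfer-function (angular-spectrum) form of Fresnel propagation:

```python
import numpy as np

def fresnel_step(u, dx, wavelength, dz):
    """One vacuum step of the Fresnel diffraction integral, evaluated
    in its transfer-function (angular-spectrum) form via 2-D FFTs."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="xy")
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def split_step_bpm(u, dx, wavelength, dz, screens):
    """Split-step BPM: alternate vacuum Fresnel steps with thin phase
    screens (one screen applied after each vacuum step)."""
    for phz in screens:
        u = fresnel_step(u, dx, wavelength, dz)
        u = u * np.exp(1j * phz)
    return u
```

Because the transfer function and the phase screens both have unit modulus, the total optical power on the grid is conserved step to step, which is a useful self-check when building simulations of this kind.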

As listed in Table 1, we used five different horizontal-path scenarios to create the deep-turbulence trade space of interest in this paper. Provided the index of refraction structure parameter Cn2, we determined the log-amplitude variances for a plane wave, σχpw2, and a spherical wave, σχsw2, using the following equations:34

(14)

\sigma_{\chi\,\mathrm{pw}}^2=0.307\,k^{7/6}\,Z^{11/6}\,C_n^2
and

(15)

\sigma_{\chi\,\mathrm{sw}}^2=0.124\,k^{7/6}\,Z^{11/6}\,C_n^2,
where k=2π/λ is again the angular wavenumber, λ=1  μm is the wavelength, and Z=7.5  km is the propagation distance (cf. Fig. 2). In addition, we determined the coherence diameters for a plane wave, r0pw, and a spherical wave, r0sw, using the following equations:34

(16)

r_{0\,\mathrm{pw}}=0.185\left(\frac{\lambda^2}{Z\,C_n^2}\right)^{3/5}
and

(17)

r_{0\,\mathrm{sw}}=0.33\left(\frac{\lambda^2}{Z\,C_n^2}\right)^{3/5}.
Based on Eqs. (14)–(17), the computational wave-optics experiments used 10 phase screens with equal spacing to simulate the propagation of an ideal point-source beacon through deep-turbulence conditions using the BPM. This choice provided low percentage errors (less than 0.5%) between the continuous and discrete calculations using Eqs. (14)–(17).38
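As a check on Eqs. (14)–(17), the short sketch below (an illustrative stand-in for the paper's MATLAB tooling) evaluates the scenario-1 turbulence metrics of Table 1 with λ = 1 μm and Z = 7.5 km:

```python
from math import pi

# Parameters from Sec. 2.1: lambda = 1 um, Z = 7.5 km
wavelength = 1.0e-6
Z = 7.5e3
k = 2 * pi / wavelength  # angular wavenumber

def turbulence_metrics(Cn2):
    """Eqs. (14)-(17): log-amplitude variances and coherence diameters."""
    s_pw = 0.307 * k**(7 / 6) * Z**(11 / 6) * Cn2
    s_sw = 0.124 * k**(7 / 6) * Z**(11 / 6) * Cn2
    r0_pw = 0.185 * (wavelength**2 / (Z * Cn2))**(3 / 5)
    r0_sw = 0.33 * (wavelength**2 / (Z * Cn2))**(3 / 5)
    return s_pw, s_sw, r0_pw, r0_sw

# Scenario 1 of Table 1: Cn2 = 1.00e-15 m^(-2/3)
print(turbulence_metrics(1.00e-15))
```

The printed values reproduce the scenario-1 column of Table 1 to within rounding of the tabulated coefficients.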

Table 1

The deep-turbulence trade space of interest in this paper. Remember that the log-amplitude variance σχ2, which is also referred to as the Rytov number, gives a measure for the strength of the scintillation. As the σχ2 grows above ∼0.25 (for a spherical wave), scintillation gives rise to branch points in the phase function. Also remember that the coherence diameter r0, which is also referred to as the Fried parameter, gives a measure for the achievable imaging resolution. As the ratio of exit-pupil diameter D1 to r0 grows above ∼4 (for a spherical wave), higher-order aberrations beyond tilt start to limit the achievable imaging resolution. Here, D1=30 cm.

Scenario              1             2             3             4             5
Cn2 (m^-2/3)    1.00×10^-15   1.50×10^-15   2.00×10^-15   2.50×10^-15   3.00×10^-15
σχsw2               0.135         0.202         0.270         0.337         0.404
σχpw2               0.333         0.500         0.667         0.833         1.00
r0sw (cm)            9.92          7.78          6.55          5.73          5.14
r0pw (cm)            5.51          4.32          3.63          3.18          2.85

Propagation to the image plane from the exit-pupil plane occurred via a three-step process using WaveProp and AOTools: (1) doubling the number of N×N grid points in the exit-pupil plane with a side length of D1 from 256×256 grid points to 512×512 grid points via zero padding; (2) numerically solving the convolution form of the Fresnel diffraction integral via 2-D FFTs; and (3) cropping out the center 256×256 grid points, so that f=QID1^2/(256λ) (i.e., the image-plane side length was equal to the exit-pupil plane side length). As shown in Fig. 3, by varying the diffraction-limited sampling quotient, QI, the number of FPA pixels across the diffraction-limited imaging bucket, D2, also varied. Here, D2=2.44λf/D1 with D1=30  cm.
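The three-step pad-propagate-crop process can be sketched as follows; this is an illustrative stand-in (not the WaveProp/AOTools code), with the Fresnel step again written in its transfer-function form:

```python
import numpy as np

def pupil_to_image(u, dx, wavelength, f):
    """Three-step exit-pupil-to-image propagation: (1) zero-pad to
    double the grid, (2) solve the convolution form of the Fresnel
    integral via 2-D FFTs, (3) crop the central grid points."""
    n = u.shape[0]
    # (1) double the grid via zero padding
    up = np.pad(u, n // 2)
    # (2) Fresnel propagation over the focal distance f via 2-D FFTs
    m = up.shape[0]
    fx = np.fft.fftfreq(m, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="xy")
    H = np.exp(-1j * np.pi * wavelength * f * (FX**2 + FY**2))
    up = np.fft.ifft2(np.fft.fft2(up) * H)
    # (3) crop out the center n x n grid points
    return up[n // 2:n // 2 + n, n // 2:n // 2 + n]
```

The zero padding suppresses wraparound from the circular convolution implied by the FFTs, and the final crop returns the grid to its original size; cropping can only discard power, never create it.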

Fig. 3

(a, b) The normalized signal and (c, d) the normalized digital hologram in the image plane for a constant SNR, where the analytical SNR is 20. As the diffraction-limited sampling quotient, QI, increases, the number of FPA pixels contained within the diffraction-limited imaging diameter, D2 (white circles), increases proportionally. Note that the results presented here contain no aberrations.


For all of the computational wave-optics experiments presented in this paper (including those contained in Fig. 3), we set the pixel read-noise standard deviation to 100 pe and the pixel well depth to 100×10^3 pe. To simulate different SNRs [cf. Eq. (13)], we neglected background-illumination effects, and we set the amplitude of the reference |AR| to produce a mean number of reference photoelectrons detected per pixel equal to 25% of the pixel well depth (i.e., m¯B=0 and m¯R=25×10^3 pe) [cf. Eq. (8)]. We then scaled the amplitude of the signal |AS| to obtain the appropriate mean number of signal photoelectrons m¯S detected per pixel [cf. Eq. (9)]. As such, the standard deviation of the shot noise varied within the simulations and was the dominant source of detection noise.

Remember that in the IPRG (cf. Figs. 1 and 2), digital-holographic detection provides access to an estimate of the amplitude and wrapped phase (i.e., the complex field) that exist in the exit-pupil plane of the imaging system. We obtained access to this complex-field estimate using the following steps: (1) within the image plane, interfering the signal with the reference [cf. Eq. (29) in Appendix A]; (2) recording the hologram irradiance on the FPA to create a digital hologram with Poisson-distributed shot noise and Gaussian-distributed pixel read noise; (3) taking the 2-D IFFT of the digital hologram to go to the Fourier plane; and (4) within the Fourier plane, windowing the off-axis complex-field estimate. To perform an apples-to-apples comparison, we kept the total FOV constant and varied the number of pixels N across the FPA, such that

(18)

N=\frac{\mathrm{FOV}\,f}{w_p}=\frac{\mathrm{FOV}\,Q_I\,D_1}{\lambda},
where FOV=64λ/D1. This choice ensured that we had the same number of pixels and effective detection noise across our complex-field estimates despite the fact that we varied the diffraction-limited sampling quotient QI within the computational wave-optics experiments. Here again, f=QID1^2/(256λ) was the focal length and QID1=λf/wp was the side length.
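With FOV = 64λ/D1, Eq. (18) reduces to N = 64 QI, which the following check makes explicit:

```python
wavelength = 1.0e-6          # m, as in the paper
D1 = 0.30                    # m, exit-pupil diameter
fov = 64 * wavelength / D1   # total FOV (rad), per Sec. 2.2

def n_pixels(Q_I):
    # Eq. (18): N = FOV * f / w_p = FOV * Q_I * D1 / lambda = 64 * Q_I
    return fov * Q_I * D1 / wavelength

for Q_I in (2, 3, 4):
    print(Q_I, n_pixels(Q_I))  # 128, 192, 256 pixels
```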

To generate results for the entire deep-turbulence trade space (cf. Table 1), we used the field-estimated Strehl ratio SF, such that

(19)

S_F=\frac{\left|\left\langle U_S(x_1,y_1)\,\hat{U}_S^{*}(x_1,y_1)\right\rangle\right|^2}{\left\langle\left|U_S(x_1,y_1)\right|^2\right\rangle\left\langle\left|\hat{U}_S(x_1,y_1)\right|^2\right\rangle},
where US(x1,y1) and U^S(x1,y1) are the “truth” and “estimated” signal complex fields, respectively, and ⟨·⟩ denotes the mean. This performance metric bears some resemblance to a Strehl ratio, which in practice provides a normalized measure for performance. In Eq. (19), if US(x1,y1)=U^S(x1,y1), then SF=1. Else if US(x1,y1)≠U^S(x1,y1), then SF<1. Thus, Eq. (19) is consistent with the general understanding of a Strehl ratio and provides a normalized measure for field-estimated performance. Note that Eq. (19) ultimately stems from the following definition of the on-axis Strehl ratio:40,41

(20)

S=\frac{\left|\left\langle U_S(x_1,y_1)\right\rangle\right|^2}{\left\langle\left|U_S(x_1,y_1)\right|^2\right\rangle}.
Here, we have made use of the fact that the mean of a pupil-plane quantity is equivalent to the on-axis DC term of the 2-D Fourier transformation of that pupil-plane quantity.
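Eq. (19) is simple to implement; the sketch below also illustrates that SF is insensitive to a global (piston) phase and amplitude scaling of the estimate, which is why it isolates genuine field-estimation error:

```python
import numpy as np

def field_estimated_strehl(U, U_hat):
    """Eq. (19): S_F = |<U Uhat*>|^2 / (<|U|^2> <|Uhat|^2>)."""
    num = np.abs(np.mean(U * np.conj(U_hat)))**2
    den = np.mean(np.abs(U)**2) * np.mean(np.abs(U_hat)**2)
    return num / den

# S_F is invariant to a global amplitude/phase factor on the estimate
rng = np.random.default_rng(2)
U = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
print(field_estimated_strehl(U, 3.0 * np.exp(1j * 0.4) * U))  # ~1.0
```

By the Cauchy-Schwarz inequality, SF ≤ 1 always, with equality only when the estimate is proportional to the truth.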

Figures 4(a) and 4(b) show the wrapped phase, and Figs. 4(c) and 4(d) show the normalized amplitude, in the Fourier plane for one independent realization of scenario 5 in Table 1 and detection noise. In Fig. 4, one can identify the complex-field estimate within the white circles of diameter D1. Specifically, as the diffraction-limited sampling quotient QI increases, so does the side length of the Fourier plane; however, the exit-pupil diameter D1 remains constant. By windowing the data found within the white circles in Fig. 4, we obtained the results shown in Fig. 5. Here, we see that as the diffraction-limited sampling quotient, QI, increases, the field-estimated Strehl ratio, SF, decreases.

Fig. 4

(a, b) The wrapped phase and (c, d) normalized amplitudes associated with the Fourier plane for a constant SNR, where the analytical SNR is 20 and the numerical SNR is 21.5. In general, the Fourier plane contains the complex-field estimate (i.e., an estimate of the amplitude and wrapped phase that exists in the exit-pupil plane of the imaging system). The results show that as the diffraction-limited sampling quotient, QI, increases, the complex-field estimates contained within an exit-pupil diameter, D1 (white circles), take up less and less space within the Fourier plane because the side length of the Fourier plane, QID1, increases proportionally.


Fig. 5

(a) The wrapped-phase truth and (b–d) wrapped-phase estimates for a constant SNR, where the analytical SNR is 20 and the numerical SNR is 21.5. In general, by windowing out the appropriate data in the Fourier plane (white circles in Fig. 4), we obtain the complex-field estimate (i.e., an estimate of the amplitude and wrapped phase that exists in the exit-pupil plane of the imaging system). The results contained in (a–d) show that as the diffraction-limited sampling quotient, QI, increases, the field-estimated Strehl ratio, SF, decreases ever so slightly.


To determine the numerical SNR presented in Figs. 4 and 5, we performed the following steps using the numerical data found in Figs. 4(b) and 4(d) corresponding to a diffraction-limited sampling quotient of QI=4.

  • Using the numerical data contained in the bottom-right circle, we computed the mean of the squared magnitude of the complex-field estimate to numerically determine the estimated signal power plus the noise power P^S+N [cf. Eqs. (7) and (11)].

  • Next, using the numerical data contained in the bottom-left circle, we computed the mean of the squared magnitude of the detection noise to numerically determine the estimated noise power P^N.

  • Subtracting the second calculation from the first, we numerically determined the estimated signal power, so that P^S=P^S+N−P^N.

  • The numerically determined SNR then followed as SNR=P^S/P^N.
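The steps above can be sketched as follows, using synthetic Fourier-plane data in place of Figs. 4(b) and 4(d); the masks and power levels are assumptions for illustration only:

```python
import numpy as np

def numerical_snr(fourier_plane, signal_mask, noise_mask):
    """Numerical SNR from two equal-area regions of the Fourier plane:
    one holding signal plus noise, one holding noise only."""
    P_SN = np.mean(np.abs(fourier_plane[signal_mask])**2)  # step 1
    P_N = np.mean(np.abs(fourier_plane[noise_mask])**2)    # step 2
    P_S = P_SN - P_N                                       # step 3
    return P_S / P_N                                       # step 4
```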

We also used these steps to validate the use of the closed-form expression contained in Eq. (13). For this purpose, Fig. 6 presents percentage-error results as a function of the analytical SNR. In Fig. 6, we averaged the results obtained from 20 independent realizations of scenarios 1 and 5 in Table 1 and 20 independent realizations of detection noise. Note that the error bars depict the width of the standard deviation. Also note that we only used numerical data corresponding to a diffraction-limited sampling quotient of QI=4, so that there was no functional overlap contained within the results [cf. Eq. (3)].

Fig. 6

The average percentage error as a function of the analytical SNR for the deep-turbulence trade space presented in Table 1. Here, the results show that as the analytical SNR increases, the average percentage error decreases between the numerical and analytical SNRs. Note that the error bars depict the width of the standard deviation for 400 realizations.


The analysis used multiple image-processing tricks to obtain the results presented in Figs. 3–6. With that said, the first image-processing trick was to subtract the mean from the recorded digital hologram. This removed the on-axis DC term from the numerical data contained in the Fourier plane. Next, the analysis applied a raised-cosine window to the zero-mean digital hologram with eight-pixel-wide tapers at the edges of the FPA. This, combined with zero padding, helped to mitigate the effects of aliasing from using N×N computational grids and 2-D IFFTs.38–41 In practice, the analysis zero-padded the windowed zero-mean digital hologram to ensure that the complex-field estimate in the Fourier plane contained 256×256 grid points within the exit-pupil diameter D1. This outcome provided the same number of grid points as the exit-pupil plane for the sake of computing the field-estimated Strehl ratio SF with the “truth” complex field [cf. Eq. (19)]. Note that these image-processing tricks also apply to the results presented in the next section.
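The mean subtraction, raised-cosine edge taper, and zero padding described above can be sketched as follows (an illustrative stand-in, not the paper's exact windowing code):

```python
import numpy as np

def preprocess_hologram(holo, taper=8, pad_to=None):
    """Mean-subtract, apply a raised-cosine edge taper, and zero-pad a
    digital hologram before taking its 2-D IFFT."""
    h = holo - holo.mean()  # removes the on-axis DC term in the Fourier plane
    n = h.shape[0]
    # Separable raised-cosine (Tukey-like) window with `taper`-pixel edges
    ramp = 0.5 * (1 - np.cos(np.pi * (np.arange(taper) + 0.5) / taper))
    w1d = np.ones(n)
    w1d[:taper], w1d[-taper:] = ramp, ramp[::-1]
    h = h * np.outer(w1d, w1d)
    if pad_to is not None and pad_to > n:
        # Zero padding refines the Fourier-plane sample spacing
        p = (pad_to - n) // 2
        h = np.pad(h, p)
    return h
```

The taper suppresses edge discontinuities that would otherwise leak energy across the Fourier plane, while the zero padding interpolates the complex-field estimate onto a finer grid for comparison against the truth field.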

3.

Results

Figure 7 shows field-estimated Strehl ratio SF results as a function of the diffraction-limited sampling quotient QI. Here, we averaged the results obtained from 20 independent realizations of scenarios 1 to 5 in Table 1 and 20 independent realizations of detection noise. In Fig. 7, the error bars depict the width of the standard deviation. With this in mind, the analytical SNR increases from 1 in Fig. 7(a) to 10, 20, and 100 in Figs. 7(b), 7(c), and 7(d), respectively [cf. Eq. (13)]. Note that as the analytical SNR increases, the performance trends flip-flop. This outcome is due to functional overlap introducing additional shot noise into the complex-field estimate when 2≤QI<4. As QI increases, this functional overlap decreases and the additional shot noise plays less of a role depending on the amount of smoothing [cf. Eq. (3)].

Fig. 7

The average field-estimated Strehl ratio, SF, as a function of the diffraction-limited sampling quotient, QI, for the deep-turbulence trade space presented in Table 1. Here, the analytical SNR increases from 1 in (a) to 10, 20, and 100 in (b–d), respectively. The results contained in (a) and (b) show that as the diffraction-limited sampling quotient, QI, increases, the average field-estimated Strehl ratio, SF, decreases (i.e., for low SNRs, lower QI’s perform better). In contrast, the results contained in (c) and (d) show that as the diffraction-limited sampling quotient, QI, increases, the average field-estimated Strehl ratio, SF, increases (i.e., for high SNRs, higher QI’s perform better). Note that the error bars depict the width of the standard deviation for 400 realizations.


The results shown in Fig. 7 do not agree with the results presented in Ref. 37. This is said because the performance trends are opposite of those found in Ref. 37, particularly for high SNRs. Regardless of the strength of the aberrations, Ref. 37 showed that for a constant number of pixels N across the FPA, the average SF values are always greatest given QI=2. In general, lower QI’s provide more samples across the complex-field estimate, which in turn minimizes the smoothing caused by the final 2-D convolution in Eq. (3). The results presented in Ref. 37, however, did not include the deleterious effects of detection noise.

In the presence of detection noise, lower QI’s also increase the detection-noise sampling, which in turn degrades the complex-field estimate. To combat this effect, we chose to vary the number of pixels N across the FPA to keep the total FOV constant [cf. Eq. (18)]. With respect to Fig. 7, this choice decreases the amount of detection-noise sampling for lower QI’s but increases the amount of smoothing caused by the final 2-D convolution in Eq. (3).

Remember that if the amplitude of the reference is set to be well above the amplitude of the signal (i.e., |AR|≫|AS|), then the functional overlap in Eq. (3) becomes negligible when 2≤QI<4. With that said, Ref. 37 set the amplitude of the reference to be 10 times that of the signal (i.e., |AR|2=100  W/m2 and |AS|2=1  W/m2) [cf. Eqs. (8) and (9)]. Radiometrically speaking, both of these values are impractical given the capabilities of modern-day, high-framerate, and short-wave-infrared (SWIR) FPAs. As such, the results presented in Fig. 7 tell the true story, and the results presented in Ref. 37 tell the story given infinite SNR. Note that we would extend our results out to those obtained in Ref. 37; however, given the parameters of our FPA, we empirically determined that pixel saturation nominally occurs for analytical SNRs greater than 250 [cf. Eq. (13)]. This outcome occurs because of deep-turbulence scintillation (i.e., hotspots due to constructive interference).

The results presented in Fig. 7 ultimately show less than 5% variation in the SF values for the different QI values within each plot. In terms of efficiently using the FPA pixels, the reader might conclude that there are distinct benefits to operating at lower QI’s despite the minor (5%) performance penalty at high SNRs. Before moving on to the next section, it is important to note that provided different FPA parameters, such as a larger pixel well depth, the results presented in Fig. 7 might change; however, the parameters chosen for our FPA are indicative of modern-day, high-framerate, and SWIR FPAs.

4.

Conclusion

The results presented in this paper serve two purposes. The first purpose is to validate the setup and exploration presented in Sec. 2. In turn, the second purpose is to allow the reader to assess the number of pixels, pixel FOV, pixel-well depth, and read-noise standard deviation needed from an FPA when using digital-holographic detection in the off-axis IPRG for deep-turbulence wavefront sensing.

Digital-holographic detection, in general, offers a distinct way forward to combat the low SNRs caused by scintillation and extinction, and the analysis presented throughout this paper supports this claim. In using digital-holographic detection, we can set the strength of the reference so that it boosts the signal above the read-noise floor of the FPA. As such, we can approach a shot-noise-limited detection regime. This last statement is, of course, dependent on the parameters of the FPA, such as the pixel-well depth. Nevertheless, given that scintillation and extinction lead to low SNRs, it is important that we reach the shot-noise limit in order to better perform deep-turbulence wavefront sensing. This outcome will allow future research efforts to better explore the associated branch-point problem.

Appendices

Appendix A

Using the convolution form of the Fresnel diffraction integral (cf. Fig. 2), we can represent the signal complex field $U_S^-(x_2,y_2)$ incident on the FPA as

(21)

$$U_S^-(x_2,y_2)=\frac{e^{jkf}}{j\lambda f}\iint_{-\infty}^{\infty}U_S^+(x_1,y_1)\exp\!\left\{\frac{jk}{2f}\left[(x_2-x_1)^2+(y_2-y_1)^2\right]\right\}dx_1\,dy_1,$$
where $U_S^+(x_1,y_1)$ is the signal complex field leaving the exit-pupil plane. Specifically,

(22)

$$U_S^+(x_1,y_1)=U_S^-(x_1,y_1)\,T_P(x_1,y_1),$$
where $U_S^-(x_1,y_1)$ is the signal complex field incident on the exit-pupil plane, and

(23)

$$T_P(x_1,y_1)=\mathrm{cyl}\!\left(\frac{\sqrt{x_1^2+y_1^2}}{D_1}\right)\exp\!\left[-\frac{jk}{2f}\left(x_1^2+y_1^2\right)\right]$$
is the complex transmittance function of the exit-pupil plane (i.e., a circular aperture placed against a thin lens). In Eq. (23),

(24)

$$\mathrm{cyl}(\rho_1)=\begin{cases}1, & \rho_1<0.5\\0.5, & \rho_1=0.5\\0, & \rho_1>0.5\end{cases}$$
is a cylinder function, where $\rho_1=\sqrt{x_1^2+y_1^2}/D_1$, $D_1$ is the exit-pupil diameter, $k=2\pi/\lambda$ is the angular wavenumber, $\lambda$ is the wavelength, and $f$ is the focal length. Substituting Eq. (22) into Eq. (21), we arrive at the following result:

(25)

$$U_S^-(x_2,y_2)=\frac{e^{jkf}}{j\lambda f}\exp\!\left[\frac{jk}{2f}\left(x_2^2+y_2^2\right)\right]\mathcal{F}\{U_S(x_1,y_1)\}_{\nu_x=x_2/\lambda f,\;\nu_y=y_2/\lambda f},$$
where

(26)

$$U_S(x_1,y_1)=U_S^-(x_1,y_1)\,\mathrm{cyl}\!\left(\frac{\sqrt{x_1^2+y_1^2}}{D_1}\right)$$
is the signal complex field that exists in the exit-pupil plane of the imaging system (cf. Fig. 2), and $\mathcal{F}\{\cdot\}_{\nu_x,\nu_y}$ denotes a 2-D Fourier transformation, such that

(27)

$$\tilde{V}(\nu_x,\nu_y)=\mathcal{F}\{V(x,y)\}_{\nu_x,\nu_y}=\iint_{-\infty}^{\infty}V(x,y)\,e^{-j2\pi(x\nu_x+y\nu_y)}\,dx\,dy.$$
A 2-D inverse Fourier transformation then follows as

(28)

$$V(x,y)=\mathcal{F}^{-1}\{\tilde{V}(\nu_x,\nu_y)\}_{x,y}=\iint_{-\infty}^{\infty}\tilde{V}(\nu_x,\nu_y)\,e^{j2\pi(x\nu_x+y\nu_y)}\,d\nu_x\,d\nu_y.$$
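Numerically, this transform pair is typically realized with centered FFTs. The following Python sketch (an illustrative aside; the grid size and Gaussian test function are our assumptions, not values from the paper) verifies that the discrete analogs of Eqs. (27) and (28) form a self-consistent round trip:

```python
import numpy as np

# Sample a test function on an N x N grid centered on the origin.
N = 256
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
V = np.exp(-(X**2 + Y**2) / (2.0 * 20.0**2))  # arbitrary Gaussian test function

# Discrete, origin-centered analogs of Eqs. (27) and (28).
def F(v):
    """Centered 2-D discrete Fourier transform, analog of Eq. (27)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(v)))

def Finv(v_tilde):
    """Centered 2-D inverse discrete Fourier transform, analog of Eq. (28)."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(v_tilde)))

V_tilde = F(V)          # forward transform
V_back = Finv(V_tilde)  # inverse transform recovers V to machine precision
```

The fftshift/ifftshift sandwich keeps both the spatial and frequency grids centered on the origin, matching the continuous conventions above.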
With Fig. 2 in mind, we can also represent the reference complex field UR(x2,y2) incident on the FPA as resulting from the Fresnel approximation to a tilted spherical wave. Here,

(29)

$$U_R(x_2,y_2)=A_R\exp\!\left[\frac{jk}{2f}\left(x_2^2+y_2^2\right)\right]\exp\!\left(j2\pi\frac{x_Rx_2}{\lambda f}\right)\exp\!\left(j2\pi\frac{y_Ry_2}{\lambda f}\right),$$
where $A_R$ is a complex constant and $(x_R,y_R)$ are the coordinates of the off-axis local oscillator, which is located in the exit-pupil plane.
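To make Eqs. (23), (24), and (29) concrete, the following Python sketch samples the exit-pupil transmittance and the tilted reference field on a discrete grid. All numerical parameters (wavelength, focal length, pupil diameter, LO coordinates, grid size) are illustrative assumptions rather than the paper's simulation values:

```python
import numpy as np

# Illustrative parameters (assumed; not the paper's simulation values).
wavelength = 1.55e-6            # m (SWIR)
f = 0.5                         # focal length, m
D1 = 5e-3                       # exit-pupil diameter, m
xR = yR = 2.5e-3                # off-axis LO coordinates, m
k = 2 * np.pi / wavelength      # angular wavenumber

N = 512
delta = 2 * D1 / N              # grid spacing, m
x = (np.arange(N) - N // 2) * delta
X, Y = np.meshgrid(x, x)

def cyl(rho):
    """Cylinder function of Eq. (24): 1 inside, 0.5 on the rim, 0 outside."""
    return np.where(rho < 0.5, 1.0, np.where(rho == 0.5, 0.5, 0.0))

# Eq. (23): circular aperture placed against a thin lens.
T_P = cyl(np.sqrt(X**2 + Y**2) / D1) * np.exp(-1j * k / (2 * f) * (X**2 + Y**2))

# Eq. (29): tilted spherical wave from the off-axis LO (A_R taken as 1 here).
U_R = np.exp(1j * k / (2 * f) * (X**2 + Y**2)) \
    * np.exp(1j * 2 * np.pi * xR * X / (wavelength * f)) \
    * np.exp(1j * 2 * np.pi * yR * Y / (wavelength * f))
```

Note that both fields are pure phase inside their supports; only the cylinder function limits the magnitude of the pupil transmittance.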

Provided Eqs. (25)–(29), we can determine the hologram irradiance IH(x2,y2) incident on the FPA as

(30)

$$I_H(x_2,y_2)=\left|U_S^-(x_2,y_2)+U_R(x_2,y_2)\right|^2$$
in units of Watts per square meter (W/m2). For all intents and purposes, the FPA will convert the hologram irradiance $I_H(x_2,y_2)$, which is in an analog form, into a form that is suitable for digital image processing. Following the approach taken by Gaskill,42 let us assume that “digitization” takes place at sampling intervals of $x_s$ and $y_s$, which are the x- and y-axes pixel pitches of the FPA (cf. Fig. 2). At any particular pixel, we can then estimate the hologram irradiance $I_H(x_2,y_2)$ by computing its average value over the active area of the pixel, which is centered at $x_2=nx_s$ and $y_2=my_s$, where $n=1$ to $N$ and $m=1$ to $M$. Specifically,

(31)

$$\hat{I}_H(nx_s,my_s)=\iint_{-\infty}^{\infty}I_H(x_2,y_2)\,\frac{1}{w_x}\mathrm{rect}\!\left(\frac{x_2-nx_s}{w_x}\right)\frac{1}{w_y}\mathrm{rect}\!\left(\frac{y_2-my_s}{w_y}\right)dx_2\,dy_2,$$
where $w_x$ and $w_y$ are, respectively, the x- and y-axes pixel widths of the FPA, and

(32)

$$\mathrm{rect}(x)=\begin{cases}0, & |x|>0.5\\0.5, & |x|=0.5\\1, & |x|<0.5\end{cases}$$
is a rectangle function.
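On a discrete simulation grid, the pixel average of Eq. (31) reduces to a block mean when each pixel spans an integer number of grid samples and the fill factor is 100%. Both conditions, along with the random test irradiance, are assumptions of this Python sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
fine_grid = rng.random((512, 512))  # stand-in "analog" hologram irradiance, W/m^2

# Assume each pixel's active area spans s x s fine-grid samples.
s = 8
n_pix = fine_grid.shape[0] // s

# Eq. (31) as a block average over each pixel's active area: reshape the
# fine grid into (pixel row, sample row, pixel col, sample col) and average
# over the within-pixel sample axes.
I_hat = fine_grid.reshape(n_pix, s, n_pix, s).mean(axis=(1, 3))
```

Here `I_hat[m, n]` plays the role of the estimate at $(nx_s, my_s)$; with a fill factor below 100%, the rect windows in Eq. (31) would instead average over only the central $w_x\times w_y$ portion of each pixel pitch.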

Neglecting the effects of pixel edge diffusion in the FPA,45 remember that the number of hologram photoelectrons $m_H(nx_s,my_s)$, at any particular pixel and time interval, is a random process with mean44

(33)

$$\bar{m}_H(nx_s,my_s)=\frac{\eta T}{h\nu}\hat{I}_H(nx_s,my_s)\,w_xw_y.$$
Here, $\eta$ is the quantum efficiency of the FPA, $T$ is the integration time of the FPA, $h\nu$ is the quantized photon energy, and the quantity $w_xw_y$ is the active area of a pixel. Over the entire FPA, it then follows that the hologram photoelectron density $D_H(x_2,y_2)$, in units of photoelectrons per square meter (pe/m2), is simply a sampled version of the analog form of Eq. (33). This declaration leads to the following expressions:

(34)

$$D_H(x_2,y_2)=\bar{m}_H(x_2,y_2)\,\frac{1}{x_s}\mathrm{comb}\!\left(\frac{x_2}{x_s}\right)\frac{1}{y_s}\mathrm{comb}\!\left(\frac{y_2}{y_s}\right)\mathrm{rect}\!\left(\frac{x_2}{Nx_s}\right)\mathrm{rect}\!\left(\frac{y_2}{My_s}\right),$$
where

(35)

$$\bar{m}_H(x_2,y_2)=\frac{\eta T}{h\nu}\iint_{-\infty}^{\infty}I_H(x_2',y_2')\,\mathrm{rect}\!\left(\frac{x_2-x_2'}{w_x}\right)\mathrm{rect}\!\left(\frac{y_2-y_2'}{w_y}\right)dx_2'\,dy_2'=\frac{\eta T}{h\nu}\,I_H(x_2,y_2)*\mathrm{rect}\!\left(\frac{x_2}{w_x}\right)\mathrm{rect}\!\left(\frac{y_2}{w_y}\right)$$
is the analog form of Eq. (33),

(36)

$$\frac{1}{|w|}\mathrm{comb}\!\left(\frac{x}{w}\right)=\sum_{n=-\infty}^{\infty}\delta(x-nw)$$
is a scaled comb function,

(37)

$$\delta(x-x')=\lim_{w\to0}\frac{1}{|w|}\,p\!\left(\frac{x-x'}{w}\right)$$
is an impulse function,43 and $p(x)$ is a pulse-like function {e.g., the rectangle function [cf. Eq. (32)]}. Note that in Eq. (35), $*$ denotes 2-D convolution, such that

(38)

$$V(x,y)*W(x,y)=\iint_{-\infty}^{\infty}V(x',y')\,W(x-x',y-y')\,dx'\,dy',$$
where $x'$ and $y'$ are dummy variables of integration.
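The factorization carried out next [in going from Eq. (34) to Eq. (39)] rests on the Fourier convolution theorem. As a quick numerical sanity check (illustrative only; the random test arrays are our assumption), the discrete analog of Eq. (38) evaluated directly agrees with its FFT-based counterpart:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32
V = rng.random((N, N))
W = rng.random((N, N))

# Circular 2-D convolution via the convolution theorem:
# multiply the spectra, then invert.
conv_fft = np.fft.ifft2(np.fft.fft2(V) * np.fft.fft2(W)).real

# Direct evaluation of one output sample from the discrete (circular)
# analog of Eq. (38), with the dummy variables as loop indices.
n0, m0 = 5, 9
direct = sum(
    V[i, j] * W[(n0 - i) % N, (m0 - j) % N]
    for i in range(N)
    for j in range(N)
)
```

Both routes give the same sample, which is why products in one domain can be traded for convolutions in the other throughout the derivation that follows.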

From Eqs. (30)–(38), we can gain access to an estimate of the signal complex field $U_S(x_1,y_1)$ that exists in the exit-pupil plane of the imaging system [cf. Fig. 2 and Eq. (26)]. First, we let $x_2=\lambda f\nu_x$ and $y_2=\lambda f\nu_y$ and apply a 2-D inverse Fourier transformation to Eq. (34), such that

(39)

$$\mathcal{F}^{-1}\{D_H(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}=\frac{1}{\lambda^2f^2}\tilde{D}_H\!\left(\frac{x_1}{\lambda f},\frac{y_1}{\lambda f}\right)=\frac{\eta T}{h\nu}\mathcal{F}^{-1}\{\tilde{I}_H(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}\,w_x\,\mathrm{sinc}\!\left(\frac{w_x}{\lambda f}x_1\right)w_y\,\mathrm{sinc}\!\left(\frac{w_y}{\lambda f}y_1\right)*\frac{1}{\lambda f}\mathrm{comb}\!\left(\frac{x_s}{\lambda f}x_1\right)\frac{1}{\lambda f}\mathrm{comb}\!\left(\frac{y_s}{\lambda f}y_1\right)*\frac{Nx_s}{\lambda f}\mathrm{sinc}\!\left(\frac{Nx_s}{\lambda f}x_1\right)\frac{My_s}{\lambda f}\mathrm{sinc}\!\left(\frac{My_s}{\lambda f}y_1\right),$$
where $\mathrm{sinc}(x)=\sin(\pi x)/(\pi x)$ is a sinc function. Taking a look at the remaining 2-D inverse Fourier transformation in Eq. (39), we obtain the following relationship:

(40)

$$\mathcal{F}^{-1}\{\tilde{I}_H(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}=\mathcal{F}^{-1}\{|U_S^-(\lambda f\nu_x,\lambda f\nu_y)|^2\}_{x_1,y_1}+\mathcal{F}^{-1}\{|U_R(\lambda f\nu_x,\lambda f\nu_y)|^2\}_{x_1,y_1}+\mathcal{F}^{-1}\{U_S^-(\lambda f\nu_x,\lambda f\nu_y)\,U_R^*(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}+\mathcal{F}^{-1}\{U_R(\lambda f\nu_x,\lambda f\nu_y)\,U_S^{-*}(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1},$$
where the superscript * denotes complex conjugate. From Eqs. (25) and (29), it then follows that

(41)

$$\mathcal{F}^{-1}\{\tilde{I}_H(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}=\frac{1}{\lambda^2f^2}U_S(x_1,y_1)*U_S^*(x_1,y_1)+|A_R|^2\,\delta(x_1)\,\delta(y_1)+A_R^*\,\frac{e^{jkf}}{j\lambda f}\,U_S(x_1-x_R,y_1-y_R)+A_R\left(\frac{e^{jkf}}{j\lambda f}\right)^{\!*}U_S^*(x_1+x_R,y_1+y_R).$$
The first term in Eq. (41) is nothing more than a scaled 2-D autocorrelation of the desired signal complex field $U_S(x_1,y_1)$. This term is centered on axis, and its width is physically twice the exit-pupil diameter $D_1$. The second term in Eq. (41) is also centered on axis and contains separable impulse functions [cf. Eq. (37)]. These impulse functions are at the strength of the uniform irradiance associated with the reference (i.e., $|A_R|^2$). The last two terms in Eq. (41) form a complex-conjugate pair and contain the desired signal complex field $U_S(x_1,y_1)$, both scaled and shifted off axis by the coordinates $(x_R,y_R)$.

Substituting Eq. (41) into Eq. (39), we obtain the following result after rearranging the special functions:

(42)

$$\tilde{D}_H\!\left(\frac{x_1}{\lambda f},\frac{y_1}{\lambda f}\right)=\frac{\eta T}{h\nu}\,w_x\,\mathrm{sinc}\!\left(\frac{w_x}{\lambda f}x_1\right)w_y\,\mathrm{sinc}\!\left(\frac{w_y}{\lambda f}y_1\right)\left[\frac{1}{\lambda^2f^2}U_S(x_1,y_1)*U_S^*(x_1,y_1)+|A_R|^2\,\delta(x_1)\,\delta(y_1)+A_R^*\,\frac{e^{jkf}}{j\lambda f}\,U_S(x_1-x_R,y_1-y_R)+A_R\left(\frac{e^{jkf}}{j\lambda f}\right)^{\!*}U_S^*(x_1+x_R,y_1+y_R)\right]*\mathrm{comb}\!\left(\frac{x_s}{\lambda f}x_1\right)\mathrm{comb}\!\left(\frac{y_s}{\lambda f}y_1\right)*\frac{Nx_s}{\lambda f}\mathrm{sinc}\!\left(\frac{Nx_s}{\lambda f}x_1\right)\frac{My_s}{\lambda f}\mathrm{sinc}\!\left(\frac{My_s}{\lambda f}y_1\right),$$
in units of photoelectrons (pe). This result is repeated above in Eq. (1).
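To visualize how the four terms of Eq. (41) separate, the following Python sketch (idealized and noiseless, with arbitrary grid-level parameters that are our assumptions, not the paper's) forms a digital hologram from a pupil-limited speckle field and a strong off-axis reference, takes the 2-D inverse transform, and recovers the signal complex field from the off-axis term:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256

# Signal complex field in the exit pupil: random phase limited to a disk.
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)                 # X varies along columns, Y along rows
pupil = np.sqrt(X**2 + Y**2) < N // 10   # exit-pupil support
U_S = pupil * np.exp(1j * 2 * np.pi * rng.random((N, N)))

def cdft2(a):   # centered 2-D DFT (forward)
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(a)))

def icdft2(a):  # centered 2-D DFT (inverse)
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(a)))

# FPA-plane fields: focal-plane signal and a strong tilted reference.
u_fpa = cdft2(U_S)
xR = yR = 80                             # off-axis LO coordinates (grid samples)
A_R = 10 * np.abs(u_fpa).max()           # reference well above the signal
U_R = A_R * np.exp(1j * 2 * np.pi * (xR * X + yR * Y) / N)

# Digital hologram and its 2-D inverse transform [cf. Eqs. (30) and (41)].
I_H = np.abs(u_fpa + U_R) ** 2
terms = icdft2(I_H)

# The A_R* U_S term of Eq. (41) lands centered at (xR, yR); a window there
# recovers the pupil-plane signal field, up to the factor A_R.
w = 20
extract = terms[N // 2 + yR - w:N // 2 + yR + w, N // 2 + xR - w:N // 2 + xR + w]
patch = U_S[N // 2 - w:N // 2 + w, N // 2 - w:N // 2 + w]
```

Windowing out the term at $(x_R,y_R)$ and dividing by the reference amplitude yields the complex-field estimate discussed in Sec. 1; the on-axis autocorrelation and impulse terms stay clear of that window only when the off-axis offset and sampling are sufficient, which is the requirement that the sampling-quotient discussion in the main text quantifies.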

Acknowledgments

The authors would like to thank Samuel T. Thurman for his careful review of and insightful comments on a draft of this paper. In addition, the authors would like to thank Paul F. McManamon for his invitation to submit to a special section of Optical Engineering. This research was funded by the High Energy Laser Joint Technology Office. The views expressed in this document are those of the authors and do not necessarily reflect the official policy or position of the Air Force, the Department of Defense, or the U.S. government.

References

1. R. J. Sasiela, Electromagnetic Wave Propagation in Turbulence Evaluation and Application of Mellin Transforms, 2nd ed., SPIE Press, Bellingham, Washington (2007). Google Scholar

2. L. C. Andrews and R. L. Phillips, Laser Beam Propagation through Random Media, 2nd ed., SPIE Press, Bellingham, Washington (2005). Google Scholar

3. R. H. Tyson, Principles of Adaptive Optics, 4th ed., CRC Press, Boca Raton, Florida (2016). Google Scholar

4. T.-C. Poon and J.-P. Liu, Introduction to Modern Digital Holography with MATLAB, Cambridge University Press, New York, New York (2014). Google Scholar

5. S. T. Thurman and A. Bratcher, “Multiplexed synthetic-aperture digital holography,” Appl. Opt. 54(3), 559–568 (2015). http://dx.doi.org/10.1364/AO.54.000559 Google Scholar

6. P. Merritt, Beam Control for Laser Systems, Directed Energy Professional Society, Albuquerque, New Mexico (2012). Google Scholar

7. R. A. Muller and A. Buffington, “Real-time correction of atmospherically degraded telescope images through image sharpening,” J. Opt. Soc. Am. 64(9), 1200–1210 (1974).JOSAAH0030-3941 http://dx.doi.org/10.1364/JOSA.64.001200 Google Scholar

8. J. R. Fienup and J. J. Miller, “Aberration correction by maximizing generalized sharpness metrics,” J. Opt. Soc. Am. A 20(4), 609–620 (2003).JOAOD60740-3232 http://dx.doi.org/10.1364/JOSAA.20.000609 Google Scholar

9. N. J. Miller, M. P. Dierking and B. D. Duncan, “Optical sparse aperture imaging,” Appl. Opt. 46(23), 5933–5943 (2007). http://dx.doi.org/10.1364/AO.46.005933 Google Scholar

10. S. T. Thurman and J. R. Fienup, “Phase-error correction in digital holography,” J. Opt. Soc. Am. A 25(4), 983–994 (2008).JOAOD60740-3232 http://dx.doi.org/10.1364/JOSAA.25.000983 Google Scholar

11. S. T. Thurman and J. R. Fienup, “Correction of anisoplanatic phase errors in digital holography,” J. Opt. Soc. Am. A 25(4), 995–999 (2008).JOAOD60740-3232 http://dx.doi.org/10.1364/JOSAA.25.000995 Google Scholar

12. A. E. Tippie and J. R. Fienup, “Phase-error correction for multiple planes using a sharpness metric,” Opt. Lett. 34(5), 701–703 (2009).OPLEDP0146-9592 http://dx.doi.org/10.1364/OL.34.000701 Google Scholar

13. A. E. Tippie and J. R. Fienup, “Multiple-plane anisoplanatic phase correction in a laboratory digital holography experiment,” Opt. Lett. 35(19), 3291–3293 (2010).OPLEDP0146-9592 http://dx.doi.org/10.1364/OL.35.003291 Google Scholar

14. D. Rabb et al., “Distributed aperture synthesis,” Opt. Exp. 18(10), 10334–10342 (2010).OPEXFF1094-4087 http://dx.doi.org/10.1364/OE.18.010334 Google Scholar

15. D. J. Rabb et al., “Multi-transmitter aperture synthesis,” Opt. Exp. 18(24), 24937–24945 (2010).OPEXFF1094-4087 http://dx.doi.org/10.1364/OE.18.024937 Google Scholar

16. D. J. Rabb, J. W. Stafford and D. F. Jameson, “Non-iterative aberration correction of a multiple transmitter system,” Opt. Exp. 19(25), 25048–25056 (2011).OPEXFF1094-4087 http://dx.doi.org/10.1364/OE.19.025048 Google Scholar

17. B. G. Gunturk, D. J. Rabb and D. F. Jameson, “Multi-transmitter aperture synthesis with Zernike based aberration correction,” Opt. Exp. 20(24), 26448–26457 (2012).OPEXFF1094-4087 http://dx.doi.org/10.1364/OE.20.026448 Google Scholar

18. J. R. Kraczek, P. F. McManamon and E. A. Watson, “High resolution non-iterative aperture synthesis,” Opt. Exp. 24(6), 6229–6239 (2016).OPEXFF1094-4087 http://dx.doi.org/10.1364/OE.24.006229 Google Scholar

19. J. C. Marron et al., “Atmospheric turbulence correction using digital-holographic detection: experimental results,” Opt. Exp. 17(14), 11638–11651 (2009).OPEXFF1094-4087 http://dx.doi.org/10.1364/OE.17.011638 Google Scholar

20. J. C. Marron et al., “Extended-range digital holographic imaging,” Proc. SPIE 7684, 76841J (2010). http://dx.doi.org/10.1117/12.862559 Google Scholar

21. J. C. Marron and K. S. Schroeder, “Holographic laser radar,” Opt. Lett. 18(5), 385–387 (1993).OPLEDP0146-9592 http://dx.doi.org/10.1364/OL.18.000385 Google Scholar

22. A. E. Tippie, A. Kumar and J. R. Fienup, “High-resolution synthetic-aperture digital holography with digital phase and pupil correction,” Opt. Exp. 19(13), 12027–12038 (2011).OPEXFF1094-4087 http://dx.doi.org/10.1364/OE.19.012027 Google Scholar

23. W. Osten et al., “Recent advances in digital holography [invited],” Appl. Opt. 53(27), G44–G63 (2014). http://dx.doi.org/10.1364/AO.53.000G44 Google Scholar

24. F. Doval et al., “Propagation of the measurement uncertainty in Fourier transform digital holographic interferometry,” Opt. Eng. 55(12), 121709 (2016). http://dx.doi.org/10.1117/1.OE.55.12.121709 Google Scholar

25. J. D. Barchers, D. L. Fried and D. J. Link, “Evaluation of the performance of Hartmann sensors in strong scintillation,” Appl. Opt. 41(6), 1012–1021 (2002). http://dx.doi.org/10.1364/AO.41.001012 Google Scholar

26. D. L. Fried, “Branch point problem in adaptive optics,” J. Opt. Soc. Am. A 15(10), 2759–2768 (1998).JOAOD60740-3232 http://dx.doi.org/10.1364/JOSAA.15.002759 Google Scholar

27. D. C. Ghiglia and M. D. Pritt, Two-Dimensional Phase Unwrapping Theory, Algorithms, and Software, John Wiley and Sons, New York, New York (1998). Google Scholar

28. J. D. Gonglewski et al., “Coherent image synthesis from wave-front sensor measurements of a nonimaged laser speckle field: a laboratory demonstration,” Opt. Lett. 16(23), 1893–1895 (1991).OPLEDP0146-9592 http://dx.doi.org/10.1364/OL.16.001893 Google Scholar

29. W. W. Arrasmith, “Branch-point-tolerant least-squares phase reconstructor,” J. Opt. Soc. Am. A 16(7), 1864–1872 (1999).JOAOD60740-3232 http://dx.doi.org/10.1364/JOSAA.16.001864 Google Scholar

30. T. M. Venema and J. D. Schmidt, “Optical phase unwrapping in the presence of branch points,” Opt. Exp. 16(10), 6985–6998 (2008).OPEXFF1094-4087 http://dx.doi.org/10.1364/OE.16.006985 Google Scholar

31. M. J. Steinbock, M. W. Hyde and J. D. Schmidt, “LSPV+7, a branch-point-tolerant reconstructor for strong turbulence adaptive optics,” Appl. Opt. 53(18), 3821–3831 (2014). http://dx.doi.org/10.1364/AO.53.003821 Google Scholar

32. M. F. Spencer et al., “Deep-turbulence simulation in a scaled-laboratory environment using five phase-only spatial light modulators,” in Proc. 18th Coherent Laser Radar Conf. (2016). Google Scholar

33. P. E. Nielson, Effects of Directed Energy Weapons, Directed Energy Professional Society, Albuquerque, New Mexico (2009). Google Scholar

34. G. P. Perram et al., An Introduction to Laser Weapon Systems, Directed Energy Professional Society, Albuquerque, New Mexico (2010). Google Scholar

35. J. D. Barchers and T. A. Rhoadarmer, “Evaluation of phase-shifting approaches for a point-diffraction interferometer with the mutual coherence function,” Appl. Opt. 41(36), 7499–7509 (2002). http://dx.doi.org/10.1364/AO.41.007499 Google Scholar

36. T. A. Rhoadarmer, “Development of a self-referencing interferometer wavefront sensor,” Proc. SPIE 5553, 112 (2004).PSISDG0277-786X http://dx.doi.org/10.1117/12.559916 Google Scholar

37. M. F. Spencer et al., “Digital holography wave-front sensing in the presence of strong atmospheric turbulence and thermal blooming,” Proc. SPIE 9617, 961705 (2015).PSISDG0277-786X http://dx.doi.org/10.1117/12.2189943 Google Scholar

38. J. D. Schmidt, Numerical Simulation of Optical Wave Propagation, SPIE Press, Bellingham, Washington (2010). Google Scholar

39. D. G. Voelz, Computational Fourier Optics: a MATLAB Tutorial, SPIE Press, Bellingham, Washington (2010). Google Scholar

40. T. J. Brennan and P. H. Roberts, AOTools the Adaptive Optics Toolbox for Use with MATLAB User’s Guide Version 1.4, the Optical Sciences Company, Anaheim, California (2010). Google Scholar

41. T. J. Brennan, P. H. Roberts and D. C. Mann, WaveProp a Wave Optics Simulation System for Use with MATLAB User’s Guide Version 1.3, the Optical Sciences Company, Anaheim, California (2010). Google Scholar

42. J. D. Gaskill, Linear Systems, Fourier Transforms, and Optics, John Wiley and Sons, New York, New York (1978). Google Scholar

43. J. S. Tyo and A. S. Alenin, Field Guide to Linear Systems in Optics, SPIE Press, Bellingham, Washington (2015). Google Scholar

44. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics, 2nd ed., John Wiley and Sons, New York, New York (2007). Google Scholar

45. E. L. Dereniak and G. D. Boreman, Infrared Detectors and Systems, John Wiley and Sons, New York, New York (1996). Google Scholar

Biography

Mark F. Spencer is a research physicist at the Air Force Research Laboratory, Directed Energy Directorate. He is also an assistant adjunct professor of optical sciences and engineering (OSE) at the Air Force Institute of Technology (AFIT), Department of Engineering Physics. He is a senior member of SPIE and received his BS in physics from the University of Redlands in 2008 and his MS and PhD in OSE from AFIT in 2011 and 2014, respectively.

Robert A. Raynor received his master’s degree in applied physics from the AFIT. He currently works as a research physicist at the Air Force Research Laboratory, Directed Energy Directorate. His research efforts concentrate on developing models for wavefront sensors that use digital-holographic detection and tracking sensors that use partially coherent illumination of optically rough targets.

Matthias T. Banet is an undergraduate student at the New Mexico Institute of Mining and Technology in Socorro, New Mexico. He is currently a summer intern at the Air Force Research Laboratory, Directed Energy Directorate. This fall, he will receive his BS degree in physics and minors in materials science and mathematics. Upon the completion of his undergraduate studies, he plans to pursue a PhD in optical sciences and engineering.

Dan K. Marker is currently in his 27th year of employment with the Air Force Research Laboratory, Directed Energy Directorate. His research efforts concentrate on the development of phased array imaging, phased array beam projection, optical quality polymer films, and tiled-array laser systems. He received his master’s degrees in mechanical engineering from the University of New Mexico and an MBA in finance from Webster University. He is currently the vice president of the Directed Energy Professional Society.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Mark F. Spencer, Robert A. Raynor, Matthias T. Banet, Dan K. Marker, "Deep-turbulence wavefront sensing using digital-holographic detection in the off-axis image plane recording geometry," Optical Engineering 56(3), 031213 (31 October 2016). https://doi.org/10.1117/1.OE.56.3.031213