Deep-turbulence wavefront sensing using digital-holographic detection in the off-axis image plane recording geometry

Abstract. This paper develops wave-optics simulations which explore the estimation accuracy of digital-holographic detection for wavefront sensing in the presence of distributed-volume or “deep” turbulence and detection noise. Specifically, the analysis models spherical-wave propagation through varying deep-turbulence conditions along a horizontal propagation path and formulates the field-estimated Strehl ratio as a function of the diffraction-limited sampling quotient and signal-to-noise ratio. Such results will allow the reader to assess the number of pixels, pixel field of view, pixel-well depth, and read-noise standard deviation needed from a focal-plane array when using digital-holographic detection in the off-axis image plane recording geometry for deep-turbulence wavefront sensing.


Introduction
Digital-holographic detection shows distinct potential for applications that involve wavefront sensing in the presence of deep turbulence. As shown in Fig. 1, the use of digital-holographic detection in the off-axis image plane recording geometry (IPRG) provides access to an estimate of the amplitude and wrapped phase (i.e., the complex field) that exist in the exit-pupil plane of the imaging system. From the complex-field estimate, we can then pursue a multitude of applications such as atmospheric characterization,1 free-space laser communications,2 and adaptive-optics phase compensation.3 The published literature often makes use of digital-holographic detection in the off-axis pupil plane or on-axis phase shifting recording geometries;4 however, in terms of simplicity, the off-axis IPRG shown in Fig. 1 offers a nice combination of functionality.5 For instance, when considering digital-holographic detection for applications that involve deep-turbulence wavefront sensing, the off-axis IPRG allows for the following multifunction capabilities.
• Incoherent imaging through passive illumination of an object.
• Coherent imaging through active illumination of an object.

From a beam-control standpoint,6 the multifunction capabilities listed above allow for a robust user interface that is not limited to wavefront sensing in the presence of an unresolved cooperative object (cf. Fig. 1). In practice, digital-holographic detection allows for the estimation of the complex field in the presence of an extended noncooperative object via speckle averaging and image sharpening algorithms or the angular diversity created by using multiple transmitters and receivers.7-18 This versatility allows for long-range imaging,19 three-dimensional imaging,20 laser radar,21 and synthetic-aperture imaging.22 In general, the applications are abundant.23,24

With wavefront-sensing applications in mind, the presence of deep turbulence tends to be the "Achilles' heel" of modern-day solutions [e.g., the Shack-Hartmann wavefront sensor (WFS),25 which provides access to localized wavefront slope estimates]. This is because coherent-light propagation through deep turbulence causes scintillation, which manifests as time-varying constructive and destructive interference between the object and receiver planes. The log-amplitude variance, which is also referred to as the Rytov number, gives a measure of the strength of the scintillation experienced by the coherent light. As the log-amplitude variance grows above ∼0.25 (for a spherical wave), total-destructive interference gives rise to branch points in both the coherent light transmitted to the object and the coherent light received from the object. These branch points add a rotational component to the phase function that traditional least-squares phase reconstruction algorithms cannot account for within the analysis. As such, the rotational component is often referred to as the "hidden phase" due to the foundational work of Fried.26

In converting local wavefront slope estimates into unwrapped phase, the hidden phase gets mapped to the null space of traditional least-squares phase reconstruction calculations. In turn, the unwrapped phase (i.e., the irrotational component) does not contain the branch points and associated branch cuts, which are unavoidable 2π phase discontinuities within the phase function.27 Note that branch-point-tolerant phase reconstruction algorithms do exist within the published literature;28-31 however, the performance of these algorithms needs to be quantified in hardware.32

In addition to causing scintillation, the horizontal, low-altitude, and long-range propagation paths that are reminiscent of deep-turbulence conditions can also lead to increased extinction. This outcome results in reduced transmittance due to molecular and aerosol absorption and scattering all along the propagation path.33,34 In turn, we can concisely say that scintillation and extinction simply lead to low signal-to-noise ratios (SNRs) when performing deep-turbulence wavefront sensing. This is because scintillation and extinction result in total-destructive interference and light-efficiency losses, respectively, over the field of view (FOV) of the WFS.
Provided enough signal, there are interferometric wavefront-sensing techniques that perform well in the presence of deep turbulence (e.g., the point-diffraction and self-referencing interferometers,35,36 which create a reference by amplitude splitting and spatially filtering the received signal); however, in using these techniques, we cannot realistically approach a shot-noise-limited detection regime. In turn, digital-holographic detection offers a distinct way forward to combat the low SNRs caused by scintillation and extinction. In using digital-holographic detection, we can set the strength of the reference so that it boosts the signal above the read-noise floor of the focal-plane array (FPA).

This paper explores the estimation accuracy of digital-holographic detection in the off-axis IPRG for wavefront sensing in the presence of deep turbulence and detection noise. As shown in Fig. 1, the analysis uses an ideal point-source beacon in the object plane to represent the active illumination of an unresolved cooperative object. The resulting spherical wave propagates along a horizontal propagation path through the deep-turbulence conditions that are of interest in this paper. In what follows, Sec. 2 reviews the setup and exploration of the problem space described above in Fig. 1. Section 3 then provides results with discussion, and Sec. 4 concludes this paper. Before moving on to the next section, it is important to note that much of the simulation framework used in this paper originates from an earlier conference paper by Spencer et al.37 It is our belief that this paper greatly extends the work contained in Ref. 37 by including the deleterious effects of detection noise within the analysis.

Setup and Exploration
This section discusses the setup and exploration needed for a series of computational wave-optics experiments which identify the performance of digital-holographic detection in the off-axis IPRG for wavefront sensing in the presence of deep turbulence and detection noise. The analysis uses many of the principles taught by Schmidt and Voelz in relatively recent SPIE Press publications.38,39 In addition, the analysis uses MATLAB® with the help of AOTools and WaveProp.40,41 The Optical Sciences Company (tOSC) created these robust MATLAB® toolboxes specifically for wave-optics simulations of this nature.
As shown in Fig. 1, the goal for the following analysis is to model digital-holographic detection in the off-axis IPRG for the purposes of deep-turbulence wavefront sensing. With Fig. 1 in mind, we need to further define the experimental parameter space. To help orient the reader, Fig. 2 pictorially shows the various planes of interest within the analysis. Note that the entrance-pupil plane effectively collimates the propagated light from the object plane, whereas the exit-pupil plane effectively focuses the propagated light to form the image plane at focus.

Fig. 1 A description of digital-holographic detection in the off-axis IPRG. Here, a highly coherent master-oscillator (MO) laser is split into two optical trains. The first optical train actively illuminates an unresolved cooperative object. Analogously, the second optical train creates an off-axis local oscillator (LO), so that tilted-spherical-wave illumination is incident on an FPA. The spherical-wave reflections from an unresolved cooperative object then back propagate through deep-turbulence conditions and, upon being imaged onto the FPA, coherently interfere with the tilted-spherical-wave illumination from the off-axis LO. In turn, the recorded interference pattern on the FPA is known as a digital hologram, and upon taking a 2-D IFFT, we can obtain an estimate of the wrapped phase (and amplitude) that exists in the exit-pupil plane of the imaging system.

Model Setup and Exploration
Provided Fig. 2 and Appendix A, we can determine the 2-D Fourier transformation of the hologram photoelectron density D_H(x_2, y_2), given in Eq. (1), in units of photoelectrons (pe). This result is remarkably physical, as the sampling theorem dictates that a sampled function becomes periodic upon finding its spectrum.42,43 Through 2-D convolution with the separable comb functions and the convolution-sifting property of the impulse function, the terms contained within square brackets in Eq. (1) are repeated at intervals of λf/x_s and λf/y_s along the x and y axes, respectively. Thus, the final 2-D convolution with the separable narrow sinc functions serves to smooth out these repeated terms, whereas the amplitude modulation with the separable broadened sinc functions serves to dampen out these repeated terms.
To help simplify the analysis to a case that we can easily simulate using N × N computational grids, let us assume that the FPA has adjacent square pixels, so that x_s = y_s = w_x = w_y = w_p. In so doing, we can rewrite Eq. (1) in terms of the diffraction-limited sampling quotient Q_I, where

Q_I = λf / (w_p D_1).   (2)

Physically, there are multiple ways to think about the relationship given in Eq. (2). One way is to say that the diffraction-limited sampling quotient Q_I is a measure of the number of FPA pixels across the diffraction-limited half width of the incoherent point-spread function (PSF). Remember that for linear shift-invariant imaging systems, the incoherent PSF is the irradiance associated with an imaged point source [i.e., the squared magnitude of Eq. (25) in Appendix A].38 Another way to think about the diffraction-limited sampling quotient Q_I is to say that it is a measure of the number of diffraction angles, λ/D_1, per pixel FOV, w_p/f, assuming small angles. In turn, the relationship given in Eq. (2) allows us to vary the sampling with the FPA pixels.

Using Eq. (2), we can rewrite Eq. (1) in terms of the diffraction-limited sampling quotient Q_I, as given in Eq. (3). Provided the conditions of Eq. (37) in Appendix A, we can make use of the convolution-sifting property of the impulse function and neglect the final 2-D convolution in Eq. (3). Accordingly, for large N the smoothing becomes minimized; however, for small N the smoothing becomes more pronounced. Let us assume that x_R = y_R = Q_I D_1/4, so that the last two terms within the square brackets in Eq. (3) shift diagonally. When Q_I ≥ 4, the last two terms no longer overlap with the first two terms, which are centered on axis. Correspondingly, when 2 ≤ Q_I < 4, the last two terms are still resolvable within the side length of the N × N computational grid but overlap with the first term. Provided that N is constant, this latter case allows us to obtain more samples across the exit-pupil diameter D_1, which in turn minimizes the smoothing caused by the final 2-D convolution in Eq. (3). If the amplitude of the reference is set to be well above the amplitude of the signal (i.e., |A_R| ≫ |A_S|), then this functional overlap becomes negligible, a fundamental result obtained in Ref. 37.
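To make the bookkeeping in Eq. (2) concrete, the short sketch below (in Python rather than the paper's MATLAB toolboxes; the parameter values are illustrative assumptions, not the paper's exact hardware) computes Q_I both from the pixel pitch and from the diffraction-angles-per-pixel-FOV interpretation:

```python
import numpy as np

# Illustrative values (assumed): 1-um wavelength, 30-cm exit-pupil
# diameter, and a 10-um pixel pitch.
lam = 1e-6      # wavelength [m]
D1 = 0.30       # exit-pupil diameter [m]
w_p = 10e-6     # pixel pitch [m]

def sampling_quotient(lam, f, w_p, D1):
    """Diffraction-limited sampling quotient Q_I = lambda*f / (w_p * D1)."""
    return lam * f / (w_p * D1)

# Choose the focal length that yields Q_I = 4 (no functional overlap).
f = 4 * w_p * D1 / lam
Q_I = sampling_quotient(lam, f, w_p, D1)

# Equivalently, Q_I is the number of diffraction angles (lambda/D1)
# per pixel FOV (w_p/f), assuming small angles.
Q_I_alt = (lam / D1) / (w_p / f)
print(Q_I, Q_I_alt)  # both approximately 4
```

Both readings give the same number, which is why varying Q_I is equivalent to varying how finely the FPA pixels sample the incoherent PSF.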
Provided Eq. (3), we must use a window function w(x_1, y_1) to obtain an estimate Û_S(x_1, y_1) of the desired signal complex field U_S(x_1, y_1) [cf. Fig. 2 and Eq. (26) in Appendix A], as written in Eq. (4). In using Eq. (4), we must satisfy Nyquist sampling with the FPA pixels,42 so that the repeated terms within Eq. (3) do not overlap and cause significant aliasing. As such, the Nyquist rate is Q_I D_1 = λf/w_p and the Nyquist interval is 1/(Q_I D_1). Provided the window function w(x_1, y_1) in Eq. (5), Eq. (4) simplifies, such that

Û_S(x_1, y_1) ≈ U_S(x_1, y_1).   (6)

In turn, there is a distinct trade space found in using Eq. (3). We will explore this trade space in the presence of deep turbulence and detection noise in the analysis to come.

Before moving on to the simulation setup and exploration, it is informative to develop a closed-form expression for the analytical SNR. For this purpose, we can approximate the estimated signal power P̂_S as

P̂_S ≈ m̄_R m̄_S,   (7)

where m̄_R [Eq. (8)] is the mean number of reference photoelectrons detected per pixel and m̄_S [Eq. (9)] is the mean number of signal photoelectrons detected per pixel. Now we need to account for the estimated noise power P̂_N. Pixel to pixel, the FPA creates photoelectrons via statistically independent (i.e., delta-correlated) and zero-mean random processes, so that the variance σ² is equivalent to the noise power.
Here,

σ² = m̄_S + m̄_R + m̄_B + σ²_C,   (10)

where m̄_B is the mean number of photoelectrons associated with the background illumination (e.g., from passive illumination from the sun) and σ²_C is the variance associated with pixel read noise (i.e., the FPA circuitry). In writing Eq. (10), note that we assume the use of a Poisson-distributed random process for the various sources of illumination that are incident on the FPA. In so doing, the mean number of photoelectrons is equal to the variance of the photoelectrons.44,45 Also note that we assume the use of a Gaussian-distributed random process for the various sources of pixel read noise in the FPA.
Provided Eq. (10), the estimated noise power P̂_N follows from the noise variance σ² as given in Eq. (11), where the factor defined in Eq. (12) is the ratio of the area associated with the window function w(x_1, y_1) to the area associated with the side length Q_I D_1 = λf/w_p of the N × N computational grid in the Fourier plane. The analytical SNR then follows from Eqs. (7) and (11) as SNR = P̂_S/P̂_N [Eq. (13)]. We will validate the use of this closed-form expression in the simulation setup and exploration to follow.

Simulation Setup and Exploration
For all of the computational wave-optics experiments presented throughout this paper, we used N × N computational grids. For example, to simulate the propagation of an ideal point-source beacon through deep-turbulence conditions, we used 4096 × 4096 grid points and the split-step beam propagation method (BPM).38-41 WaveProp and AOTools made use of a very narrow sinc function with a raised-cosine envelope to simulate an ideal point-source beacon. The sampling of this function and the object-plane side length were automatically set, so that after propagation from the object plane to the entrance-pupil plane, the illuminated region of interest was half the user-defined entrance-pupil plane side length (cf. Fig. 2). Put another way, the simulations satisfied Fresnel scaling [i.e., N = S_1 S_2/(λZ), where S_1 = 16D_1 and S_2 are the object and entrance-pupil side lengths, respectively]. Altogether, this provided an entrance-pupil plane side length of D_1 after cropping out the center 256 × 256 grid points. As mentioned previously, using ideal thin lenses, the entrance-pupil plane effectively collimated the propagated light from the object plane, whereas the exit-pupil plane effectively focused the propagated light to form the image plane at focus (cf. Fig. 2).

As listed in Table 1, we used five different horizontal-path scenarios to create the deep-turbulence trade space of interest in this paper. Provided the index of refraction structure parameter C_n², we determined the log-amplitude variances for a plane wave, σ²_χ-pw, and a spherical wave, σ²_χ-sw, using the following equations:34

σ²_χ-pw = 0.307 C_n² k^(7/6) Z^(11/6)   (14)

and

σ²_χ-sw = 0.124 C_n² k^(7/6) Z^(11/6),   (15)

where k = 2π/λ is again the angular wavenumber, λ = 1 μm is the wavelength, and Z = 7.5 km is the propagation distance (cf. Fig. 2).
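For a constant C_n² along the path, these log-amplitude variances, together with the plane- and spherical-wave coherence diameters used below, can be computed as follows. This is a sketch assuming the standard constant-C_n² Rytov-theory expressions (the printed equations did not survive extraction, so the coefficients are the textbook values); the C_n² value is illustrative, not one of the Table 1 scenarios.

```python
import numpy as np

lam = 1e-6                 # wavelength [m] (from the paper)
Z = 7.5e3                  # propagation distance [m] (from the paper)
k = 2 * np.pi / lam        # angular wavenumber [rad/m]
Cn2 = 1e-15                # illustrative structure parameter [m^(-2/3)]

# Log-amplitude variances for constant Cn^2 (standard Rytov-theory results).
var_chi_pw = 0.307 * Cn2 * k**(7/6) * Z**(11/6)   # plane wave
var_chi_sw = 0.124 * Cn2 * k**(7/6) * Z**(11/6)   # spherical wave

# Coherence (Fried) diameters for constant Cn^2.
r0_pw = 0.185 * (lam**2 / (Cn2 * Z))**(3/5)       # plane wave
r0_sw = 0.33 * (lam**2 / (Cn2 * Z))**(3/5)        # spherical wave
```

Note that the spherical-wave scintillation is weaker (0.124 versus 0.307) and the spherical-wave coherence diameter is larger (0.33 versus 0.185) than their plane-wave counterparts, as expected for a point-source beacon.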
In addition, we determined the coherence diameters for a plane wave, r_0-pw, and a spherical wave, r_0-sw, using the following equations:34

r_0-pw = 0.185 [λ²/(C_n² Z)]^(3/5)   (16)

and

r_0-sw = 0.33 [λ²/(C_n² Z)]^(3/5).   (17)

Based on Eqs. (14)-(17), the computational wave-optics experiments used 10 phase screens with equal spacing to simulate the propagation of an ideal point-source beacon through deep-turbulence conditions using the BPM. This choice provided low percentage errors (less than 0.5%) between the continuous and discrete calculations using Eqs. (14)-(17).38

Propagation to the image plane from the exit-pupil plane occurred via a three-step process using WaveProp and AOTools: (1) doubling the number of N × N grid points in the exit-pupil plane with a side length of D_1 from 256 × 256 grid points to 512 × 512 grid points via zero padding; (2) numerically solving the convolution form of the Fresnel diffraction integral via 2-D FFTs; and (3) cropping out the center 256 × 256 grid points, so that f = Q_I D_1²/(256λ) (i.e., the image plane side length was equal to the exit-pupil plane side length). As shown in Fig. 3, by varying the diffraction-limited sampling quotient, Q_I, the number of FPA pixels across the diffraction-limited imaging bucket, D_2, also varied.

For all of the computational wave-optics experiments presented in this paper (including those contained in Fig. 3), we set the pixel read-noise standard deviation to 100 pe and the pixel well depth to 100 × 10³ pe. To simulate different SNRs [cf. Eq. (13)], we neglected to include background-illumination effects, and we set the amplitude of the reference |A_R| to produce a mean number of reference photoelectrons detected per pixel equal to 25% of the pixel well depth (i.e., m̄_B = 0 and m̄_R = 25 × 10³ pe) [cf. Eq. (8)].
We then scaled the amplitude of the signal |A_S| to have the appropriate mean number of signal photoelectrons m̄_S detected per pixel [cf. Eq. (9)]. As such, the standard deviation of the shot noise varied within the simulations and was the dominant source of detection noise.
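The detection-noise model described above (Poisson shot noise, Gaussian read noise, and a finite well depth) can be sketched as follows. This is an assumed minimal implementation in Python, not the paper's MATLAB code; the parameter values mirror those stated in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect(mean_pe, read_std=100.0, well_depth=100e3, rng=rng):
    """Simulate FPA detection per pixel: Poisson shot noise on the mean
    photoelectron count, additive Gaussian read noise, and clipping of
    the result to the pixel well depth (saturation)."""
    pe = rng.poisson(mean_pe).astype(float)          # shot noise
    pe += rng.normal(0.0, read_std, size=pe.shape)   # read noise
    return np.clip(pe, 0.0, well_depth)              # well-depth saturation

# Reference level set to 25% of the 100e3-pe well depth, as in the paper.
frame = detect(np.full((256, 256), 25e3))
```

At this reference level, the shot-noise standard deviation per pixel (sqrt(25e3) ≈ 158 pe) already exceeds the 100-pe read noise, which is the sense in which the detection approaches the shot-noise limit.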
Remember that in the IPRG (cf. Figs. 1 and 2), digital-holographic detection provides access to an estimate of the amplitude and wrapped phase (i.e., the complex field) that exist in the exit-pupil plane of the imaging system. We obtained access to this complex-field estimate by fixing the total FOV of the FPA, where FOV = 64λ/D_1. This choice ensured that we had the same number of pixels and effective detection noise across our complex-field estimates despite the fact that we varied the diffraction-limited sampling quotient Q_I within the computational wave-optics experiments. Here again, f = Q_I D_1²/(256λ) was the focal length and Q_I D_1 = λf/w_p was the side length.
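As a hedged illustration of this processing chain, the sketch below (pure NumPy rather than the paper's MATLAB toolboxes, with a random stand-in phase screen, a 10x reference, no Fresnel scaling, and no detection noise; all grid sizes are illustrative) forms an off-axis digital hologram in the image plane, takes a 2-D IFFT, and windows out the pupil-field estimate:

```python
import numpy as np

N = 256
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2) <= (N // 8)**2          # exit-pupil support (D1)

rng = np.random.default_rng(1)
phase = rng.standard_normal((N, N))           # stand-in aberration screen
U_pupil = pupil * np.exp(1j * phase)          # "truth" exit-pupil field

# Image-plane signal field: the focal-plane field is the 2-D FT of the
# exit-pupil field (centered-transform convention).
U_img = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U_pupil)))

# Off-axis LO: a linear phase ramp that shifts the estimate off axis.
tilt = np.exp(1j * 2 * np.pi * (N // 4) * (X + Y) / N)
A_R = 10.0 * np.abs(U_img).max()              # strong reference, |A_R| >> |A_S|
hologram = np.abs(U_img + A_R * tilt) ** 2    # recorded irradiance pattern

# Demodulate: remove the DC term, take a centered 2-D IFFT, then window
# the off-axis lobe and roll it back to the center of the grid.
H = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(hologram - hologram.mean())))
U_est = np.roll(H, (-(N // 4), -(N // 4)), axis=(0, 1)) * pupil
```

Up to the constant reference amplitude, `U_est` reproduces `U_pupil` inside the pupil support, which is exactly the complex-field estimate the off-axis IPRG provides.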
To generate results for the entire deep-turbulence trade space (cf. Table 1), we used the field-estimated Strehl ratio S_F, such that

S_F = |⟨Û_S(x_1, y_1) U_S*(x_1, y_1)⟩|² / [⟨|Û_S(x_1, y_1)|²⟩⟨|U_S(x_1, y_1)|²⟩],   (19)

where U_S(x_1, y_1) and Û_S(x_1, y_1) are the "truth" and "estimated" signal complex fields, respectively, and ⟨∘⟩ denotes mean. This performance metric bears some resemblance to a Strehl ratio, which in practice provides a normalized measure for performance.

Table 1 The deep-turbulence trade space of interest in this paper. Remember that the log-amplitude variance σ²_χ, which is also referred to as the Rytov number, gives a measure of the strength of the scintillation. As σ²_χ grows above ∼0.25 (for a spherical wave), scintillation gives rise to branch points in the phase function. Also remember that the coherence diameter r_0, which is also referred to as the Fried parameter, gives a measure of the achievable imaging resolution. As the ratio of exit-pupil diameter D_1 to r_0 grows above ∼4 (for a spherical wave), higher-order aberrations beyond tilt start to limit the achievable imaging resolution. Here, D_1 = 30 cm.
Here, we have made use of the fact that the mean of a pupil-plane quantity is equivalent to the on-axis DC term of the 2-D Fourier transformation of that pupil-plane quantity.
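The field-estimated Strehl ratio can be sketched as below. Since the printed form of Eq. (19) did not survive extraction, treat this normalization (squared magnitude of the mean field overlap divided by the mean powers of the two fields) as an assumption consistent with the surrounding text.

```python
import numpy as np

def field_estimated_strehl(U_true, U_est):
    """Assumed reading of Eq. (19):
    |<U_est * conj(U_true)>|^2 / (<|U_est|^2> <|U_true|^2>)."""
    num = np.abs(np.mean(U_est * np.conj(U_true))) ** 2
    den = np.mean(np.abs(U_est) ** 2) * np.mean(np.abs(U_true) ** 2)
    return num / den

rng = np.random.default_rng(2)
U = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(64, 64)))
S_perfect = field_estimated_strehl(U, U)        # = 1 for a perfect estimate
S_piston = field_estimated_strehl(U, 1j * U)    # a global phase does not matter
```

By the Cauchy-Schwarz inequality, this quantity is bounded by 1, with equality only when the estimate matches the truth field up to a complex constant, which is what makes it a normalized performance measure.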

Shown in Figs. 4(a) and 4(b) is the wrapped phase, and shown in Figs. 4(c) and 4(d) is the normalized amplitude, in the Fourier plane for one independent realization of scenario 5 in Table 1 and detection noise. In Fig. 4, one can identify the complex-field estimate within the white circles of diameter D_1. Specifically, as the diffraction-limited sampling quotient Q_I increases, so does the side length of the Fourier plane; however, the exit-pupil diameter D_1 remains constant. By windowing the data found within the white circles in Fig. 4, we obtained the results shown in Fig. 5. Here, we see that as the diffraction-limited sampling quotient, Q_I, increases, the field-estimated Strehl ratio, S_F, decreases.
To determine the numerical SNR presented in Figs. 4 and 5, we performed the following steps using the numerical data found in Figs. 4(b) and 4(d), corresponding to a diffraction-limited sampling quotient of Q_I = 4.
• Using the numerical data contained in the bottom-right circle, we computed the mean of the squared magnitude of the complex-field estimate to numerically determine the estimated signal power plus the noise power, P̂'_{S+N} [cf. Eqs. (7) and (11)].
• Next, using the numerical data contained in the bottom-left circle, we computed the mean of the squared magnitude of the detection noise to numerically determine the estimated noise power P̂'_N.
• Subtracting the second calculation from the first, we numerically determined the estimated signal power, so that P̂'_S = P̂'_{S+N} − P̂'_N.
• The numerically determined SNR then followed as SNR' = P̂'_S/P̂'_N.
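The steps above can be sketched as follows. The array and mask names are hypothetical; in practice the masks would select the signal-bearing and noise-only circles of the Fourier plane, and the toy data below stand in for the paper's numerical results.

```python
import numpy as np

def numerical_snr(fourier_plane, signal_mask, noise_mask):
    """Numerical-SNR bookkeeping: mean squared magnitude over a
    signal-plus-noise region and a noise-only region, then subtract."""
    P_sn = np.mean(np.abs(fourier_plane[signal_mask]) ** 2)  # P'_{S+N}
    P_n = np.mean(np.abs(fourier_plane[noise_mask]) ** 2)    # P'_N
    P_s = P_sn - P_n                                         # P'_S
    return P_s / P_n                                         # SNR'

# Toy check: complex Gaussian noise everywhere, plus a constant "signal"
# added in one quadrant; expected SNR' is roughly 9/2 = 4.5.
rng = np.random.default_rng(3)
F = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
sig = np.zeros((128, 128), bool); sig[:64, :64] = True
noi = np.zeros((128, 128), bool); noi[64:, 64:] = True
F[sig] += 3.0
snr_num = numerical_snr(F, sig, noi)
```

Subtracting the noise-only estimate before dividing is what makes SNR' comparable to the analytical SNR = P̂_S/P̂_N of Eq. (13).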
We also used these steps to validate the use of the closed-form expression contained in Eq. (13). For this purpose, Fig. 6 presents percentage-error results as a function of the analytical SNR. In Fig. 6, we averaged the results obtained from 20 independent realizations of scenarios 1 and 5 in Table 1 and 20 independent realizations of detection noise. Note that the error bars depict the width of the standard deviation. Also note that we only used numerical data corresponding to a diffraction-limited sampling quotient of Q_I = 4, so that there was no functional overlap contained within the results [cf. Eq. (3)].

Fig. 3 (a, b) The normalized signal and (c, d) the normalized digital hologram in the image plane for a constant SNR, where the analytical SNR is 20. As the diffraction-limited sampling quotient, Q_I, increases, the number of FPA pixels contained within the diffraction-limited imaging diameter, D_2 (white circles), increases proportionally. Note that the results presented here contain no aberrations.
The analysis used multiple image-processing tricks to obtain the results presented in Figs. 3-6. With that said, the first image-processing trick was to subtract the mean from the recorded digital hologram. This removed the on-axis DC term from the numerical data contained in the Fourier plane. Next, the analysis applied a raised-cosine window to the zero-mean digital hologram with eight-pixel-wide tapers at the edges of the FPA. This, combined with zero padding, helped to mitigate the effects of aliasing from using N × N computational grids and 2-D IFFTs.38-41 In practice, the analysis zero-padded the windowed zero-mean digital hologram to ensure that the complex-field estimate in the Fourier plane contained 256 × 256 grid points within the exit-pupil diameter D_1. This outcome provided the same number of grid points as the exit-pupil plane for the sake of computing the field-estimated Strehl ratio S_F with the "truth" complex field [cf. Eq. (19)]. Note that these image-processing tricks also apply to the results presented in the next section.

Figure 7 shows field-estimated Strehl ratio S_F results as a function of the diffraction-limited sampling quotient Q_I. Here, we averaged the results obtained from 20 independent realizations of scenarios 1 to 5 in Table 1 and 20 independent realizations of detection noise. In Fig. 7, the error bars depict the width of the standard deviation. With this in mind, the analytical SNR increases from 1 in Fig. 7(a) to 10, 20, and 100 in Figs. 7(b), 7(c), and 7(d), respectively [cf. Eq. (13)]. Note that as the analytical SNR increases, the performance trends flip-flop. This outcome is due to functional overlap introducing additional shot noise into the complex-field estimate when 2 ≤ Q_I < 4. As Q_I increases, this functional overlap decreases, and the additional shot noise plays less of a role depending on the amount of smoothing [cf. Eq. (3)].

Results
The results shown in Fig. 7 do not agree with the results presented in Ref. 37; the performance trends are opposite of those found in Ref. 37, particularly for high SNRs.

Fig. 4 (a, b) The wrapped phase and (c, d) the normalized amplitudes associated with the Fourier plane for a constant SNR, where the analytical SNR is 20 and the numerical SNR is 21.5. In general, the Fourier plane contains the complex-field estimate (i.e., an estimate of the amplitude and wrapped phase that exists in the exit-pupil plane of the imaging system). The results show that as the diffraction-limited sampling quotient, Q_I, increases, the complex-field estimates contained within an exit-pupil diameter, D_1 (white circles), take up less and less space within the Fourier plane because the side length of the Fourier plane, Q_I D_1, increases proportionally.

Optical Engineering
Regardless of the strength of the aberrations, Ref. 37 showed that for a constant number of pixels N across the FPA, the average S_F values are always greatest given Q_I = 2. In general, lower Q_I's provide more samples across the complex-field estimate, which in turn minimizes the smoothing caused by the final 2-D convolution in Eq. (3). The results presented in Ref. 37, however, did not include the deleterious effects of detection noise.
In the presence of detection noise, lower Q_I's also increase the detection-noise sampling, which in turn degrades the complex-field estimate. To combat this effect, we chose to vary the number of pixels N across the FPA to keep the total FOV constant [cf. Eq. (18)]. With respect to Fig. 7, this choice decreases the amount of detection-noise sampling for lower Q_I's but increases the amount of smoothing caused by the final 2-D convolution in Eq. (3).
Remember that if the amplitude of the reference is set to be well above the amplitude of the signal (i.e., |A_R| ≫ |A_S|), then the functional overlap in Eq. (3) becomes negligible when 2 ≤ Q_I < 4. With that said, Ref. 37 set the amplitude of the reference to be 10 times that of the signal (i.e., |A_R|² = 100 W/m² and |A_S|² = 1 W/m²) [cf. Eqs. (8) and (9)]. Radiometrically speaking, both of these values are impractical given the capabilities of modern-day, high-frame-rate, short-wave-infrared (SWIR) FPAs. As such, the results presented in Fig. 7 tell the true story, and the results presented in Ref. 37 tell the story given infinite SNR. Note that we would extend our results out to those obtained in Ref. 37; however, given the parameters of our FPA, we empirically determined that pixel saturation nominally occurs for analytical SNRs greater than 250 [cf. Eq. (13)]. This outcome occurs because of deep-turbulence scintillation (i.e., hot spots due to constructive interference).

Fig. 6 The average percentage error as a function of the analytical SNR for the deep-turbulence trade space presented in Table 1. Here, the results show that as the analytical SNR increases, the average percentage error between the numerical and analytical SNRs decreases. Note that the error bars depict the width of the standard deviation for 400 realizations.

Fig. 5 By windowing the data found within the white circles (cf. Fig. 4), we obtain the complex-field estimate (i.e., an estimate of the amplitude and wrapped phase that exists in the exit-pupil plane of the imaging system). The results contained in (a-d) show that as the diffraction-limited sampling quotient, Q_I, increases, the field-estimated Strehl ratio, S_F, decreases ever so slightly.
The results presented in Fig. 7 ultimately show less than 5% variation in the S_F values for the different Q_I values within each plot. In terms of efficiently using the FPA pixels, the reader might conclude that there are distinct benefits to operating at lower Q_I's despite the minor (∼5%) performance penalty at high SNRs. Before moving on to the next section, it is important to note that provided different FPA parameters, such as a larger pixel well depth, the results presented in Fig. 7 might change; however, the parameters chosen for our FPA are indicative of modern-day, high-frame-rate, SWIR FPAs.

Conclusion
The results presented in this paper serve two purposes. The first purpose is to validate the setup and exploration presented in Sec. 2. In turn, the second purpose is to allow the reader to assess the number of pixels, pixel FOV, pixel-well depth, and read-noise standard deviation needed from an FPA when using digital-holographic detection in the off-axis IPRG for deep-turbulence wavefront sensing.
Digital-holographic detection, in general, offers a distinct way forward to combat the low SNRs caused by scintillation and extinction, and it is our belief that the analysis presented throughout this paper shows that this statement is true. In using digital-holographic detection, we can set the strength of the reference so that it boosts the signal above the read-noise floor of the FPA. As such, we can approach a shot-noise-limited detection regime. This last statement is of course dependent on the parameters of the FPA, such as the pixel well depth. Nevertheless, given that scintillation and extinction lead to low SNRs, it is important that we reach the shot-noise limit in order to better perform deep-turbulence wavefront sensing. This outcome will allow future research efforts to better explore the associated branch-point problem.

Fig. 7 The average field-estimated Strehl ratio, S_F, as a function of the diffraction-limited sampling quotient, Q_I, for the deep-turbulence trade space presented in Table 1. Here, the analytical SNR increases from 1 in (a) to 10, 20, and 100 in (b-d), respectively. The results contained in (a) and (b) show that as the diffraction-limited sampling quotient, Q_I, increases, the average field-estimated Strehl ratio, S_F, decreases (i.e., for low SNRs, lower Q_I's perform better). In contrast, the results contained in (c) and (d) show that as the diffraction-limited sampling quotient, Q_I, increases, the average field-estimated Strehl ratio, S_F, increases (i.e., for high SNRs, higher Q_I's perform better). Note that the error bars depict the width of the standard deviation for 400 realizations.

Appendix A
Using the convolution form of the Fresnel diffraction integral (cf. Fig. 2), we can represent the signal complex field $U_S(x_2,y_2)$ incident on the FPA as

$$U_S(x_2,y_2)=\frac{e^{ikf}}{i\lambda f}\iint U_S^{+}(x_1,y_1)\exp\!\left\{\frac{ik}{2f}\left[(x_2-x_1)^2+(y_2-y_1)^2\right]\right\}dx_1\,dy_1, \quad (21)$$

where $U_S^{+}(x_1,y_1)$ is the signal complex field leaving the exit-pupil plane. Specifically,

$$U_S^{+}(x_1,y_1)=U_S^{-}(x_1,y_1)\,T_P(x_1,y_1), \quad (22)$$

where $U_S^{-}(x_1,y_1)$ is the signal complex field incident on the exit-pupil plane, and

$$T_P(x_1,y_1)=\mathrm{cyl}\!\left(\frac{\rho_1}{D_1}\right)\exp\!\left[-\frac{ik}{2f}\left(x_1^2+y_1^2\right)\right] \quad (23)$$

is the complex transmittance function of the exit-pupil plane (i.e., a circular aperture placed against a thin lens). In Eq. (23),

$$\mathrm{cyl}\!\left(\frac{\rho_1}{D_1}\right)=\begin{cases}1, & 0\le\rho_1<D_1/2\\[2pt] 1/2, & \rho_1=D_1/2\\[2pt] 0, & \text{otherwise}\end{cases} \quad (24)$$

is a cylinder function, where $\rho_1=\sqrt{x_1^2+y_1^2}$, $D_1$ is the exit-pupil diameter, $k=2\pi/\lambda$ is the angular wavenumber, $\lambda$ is the wavelength, and $f$ is the focal length. Substituting Eq. (22) into Eq. (21), the quadratic phase of the thin lens cancels the quadratic phase of the Fresnel kernel in $(x_1,y_1)$, and we arrive at the following result:

$$U_S(x_2,y_2)=\frac{e^{ikf}}{i\lambda f}\exp\!\left[\frac{ik}{2f}\left(x_2^2+y_2^2\right)\right]\mathcal{F}\{U_S(x_1,y_1)\}_{\nu_x=x_2/(\lambda f),\,\nu_y=y_2/(\lambda f)}, \quad (25)$$

where

$$U_S(x_1,y_1)=U_S^{-}(x_1,y_1)\,\mathrm{cyl}\!\left(\frac{\rho_1}{D_1}\right) \quad (26)$$

is the signal complex field that exists in the exit-pupil plane of the imaging system (cf. Fig. 2), and $\mathcal{F}\{\circ\}_{\nu_x,\nu_y}$ denotes a 2-D Fourier transformation, such that

$$\tilde{V}(\nu_x,\nu_y)=\mathcal{F}\{V(x,y)\}_{\nu_x,\nu_y}=\iint V(x,y)\,e^{-i2\pi(\nu_x x+\nu_y y)}\,dx\,dy. \quad (27)$$

A 2-D inverse Fourier transformation then follows as

$$V(x,y)=\mathcal{F}^{-1}\{\tilde{V}(\nu_x,\nu_y)\}_{x,y}=\iint \tilde{V}(\nu_x,\nu_y)\,e^{i2\pi(\nu_x x+\nu_y y)}\,d\nu_x\,d\nu_y. \quad (28)$$

With Fig. 2 in mind, we can also represent the reference complex field $U_R(x_2,y_2)$ incident on the FPA as resulting from the Fresnel approximation to a tilted spherical wave.
Here,

$$U_R(x_2,y_2)=A_R\,\frac{e^{ikf}}{i\lambda f}\exp\!\left\{\frac{ik}{2f}\left[(x_2-x_R)^2+(y_2-y_R)^2\right]\right\}, \quad (29)$$

where $A_R$ is a complex constant and $(x_R,y_R)$ are the coordinates of the off-axis local oscillator, which is located in the exit-pupil plane. Provided Eqs. (25)-(29), we can determine the hologram irradiance $I_H(x_2,y_2)$ incident on the FPA as

$$I_H(x_2,y_2)=\left|U_S(x_2,y_2)+U_R(x_2,y_2)\right|^2 \quad (30)$$

in units of watts per square meter (W/m²). For all intents and purposes, the FPA will convert the hologram irradiance $I_H(x_2,y_2)$, which is in an analog form, into a form that is suitable for digital image processing. Following the approach taken by Gaskill,42 let us assume that "digitization" is to take place at sampling intervals of $x_s$ and $y_s$, which are the x- and y-axes pixel pitches of the FPA (cf. Fig. 2). At any particular pixel, we can then estimate the hologram irradiance $I_H(x_2,y_2)$ by computing its average value over the active area of a pixel, which is centered at $x_2=nx_s$ and $y_2=my_s$, where $n=1$ to $N$ and $m=1$ to $M$. Specifically,

$$\hat{I}_H(nx_s,my_s)=\frac{1}{w_x w_y}\iint I_H(x_2,y_2)\,\mathrm{rect}\!\left(\frac{x_2-nx_s}{w_x}\right)\mathrm{rect}\!\left(\frac{y_2-my_s}{w_y}\right)dx_2\,dy_2, \quad (31)$$

where $w_x$ and $w_y$ are, respectively, the x- and y-axes pixel widths of the FPA, and

$$\hat{m}_H(nx_s,my_s)=\frac{\eta T}{h\nu}\,w_x w_y\,\hat{I}_H(nx_s,my_s) \quad (32)$$

$$\phantom{\hat{m}_H(nx_s,my_s)}=\frac{\eta T}{h\nu}\iint I_H(x_2,y_2)\,\mathrm{rect}\!\left(\frac{x_2-nx_s}{w_x}\right)\mathrm{rect}\!\left(\frac{y_2-my_s}{w_y}\right)dx_2\,dy_2 \quad (33)$$

is the mean number of photoelectrons collected at that pixel. Here, $\eta$ is the quantum efficiency of the FPA, $T$ is the integration time of the FPA, $h\nu$ is the quantized photon energy, and the quantity $w_x w_y$ is the active area of a pixel. Over the entire FPA, it then follows that the hologram photoelectron density $D_H(x_2,y_2)$, in units of photoelectrons per square meter (pe/m²), is simply a sampled version of the analog form of Eq. (33). This declaration leads to the following expressions:

$$D_H(x_2,y_2)=\tilde{m}_H(x_2,y_2)\,\frac{1}{x_s}\mathrm{comb}\!\left(\frac{x_2}{x_s}\right)\frac{1}{y_s}\mathrm{comb}\!\left(\frac{y_2}{y_s}\right), \quad (34)$$

where

$$\tilde{m}_H(x_2,y_2)=\frac{\eta T}{h\nu}\iint I_H(x_0,y_0)\,\mathrm{rect}\!\left(\frac{x_2-x_0}{w_x}\right)\mathrm{rect}\!\left(\frac{y_2-y_0}{w_y}\right)dx_0\,dy_0 \quad (35)$$

is the analog form of Eq.
(33),

$$\frac{1}{x_s}\mathrm{comb}\!\left(\frac{x_2}{x_s}\right)=\sum_{n=-\infty}^{\infty}\delta\!\left(x_2-nx_s\right) \quad (36)$$

is a scaled comb function, and

$$\delta(x-x_0)=\lim_{w\to 0}\frac{1}{w}\,\mathrm{rect}\!\left(\frac{x-x_0}{w}\right) \quad (37)$$

is an impulse function,43 with an analogous definition [Eq. (38)] for $\delta(y-y_0)$; here, $x_0$ and $y_0$ are dummy variables of integration. From Eqs. (30)-(38), we can gain access to an estimate of the signal complex field $U_S(x_1,y_1)$ that exists in the exit-pupil plane of the imaging system [cf. Fig. 2 and Eq. (26)]. First, we let $x_2=\lambda f\nu_x$ and $y_2=\lambda f\nu_y$ and apply a 2-D inverse Fourier transformation to Eq. (34), such that

$$\mathcal{F}^{-1}\{D_H(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}=\left[\frac{\eta T}{h\nu}\,w_x w_y\,\mathrm{sinc}\!\left(\frac{w_x x_1}{\lambda f}\right)\mathrm{sinc}\!\left(\frac{w_y y_1}{\lambda f}\right)\mathcal{F}^{-1}\{\tilde{I}_H(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}\right]**\frac{1}{(\lambda f)^2}\,\mathrm{comb}\!\left(\frac{x_s x_1}{\lambda f}\right)\mathrm{comb}\!\left(\frac{y_s y_1}{\lambda f}\right), \quad (39)$$

where $\mathrm{sinc}(x)=\sin(\pi x)/(\pi x)$ is a sinc function and $**$ denotes 2-D convolution. Taking a look at the remaining 2-D inverse Fourier transformation in Eq. (39), we obtain the following relationship:

$$\begin{aligned}\mathcal{F}^{-1}\{\tilde{I}_H(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}=\;&\mathcal{F}^{-1}\{|U_S(\lambda f\nu_x,\lambda f\nu_y)|^2\}_{x_1,y_1}+\mathcal{F}^{-1}\{|U_R(\lambda f\nu_x,\lambda f\nu_y)|^2\}_{x_1,y_1}\\ &+\mathcal{F}^{-1}\{U_S(\lambda f\nu_x,\lambda f\nu_y)U_R^{*}(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}\\ &+\mathcal{F}^{-1}\{U_R(\lambda f\nu_x,\lambda f\nu_y)U_S^{*}(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1},\end{aligned} \quad (40)$$

where the superscript * denotes complex conjugate. From Eqs. (25) and (29), it then follows that

$$\mathcal{F}^{-1}\{\tilde{I}_H(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}=\frac{1}{(\lambda f)^2}\Big[U_S(x_1,y_1)\star U_S(x_1,y_1)+|A_R|^2\,\delta(x_1)\,\delta(y_1)+A_R^{*}\,U_S(x_1+x_R,y_1+y_R)+A_R\,U_S^{*}(x_R-x_1,y_R-y_1)\Big], \quad (41)$$

where $\star$ denotes 2-D autocorrelation and a constant phase arising from the expansion of the tilted spherical wave in Eq. (29) has been absorbed into $A_R$. The first term in Eq. (41) is nothing more than a scaled 2-D autocorrelation of the desired signal complex field $U_S(x_1,y_1)$. This term is centered on axis, and its support is physically twice the exit-pupil diameter $D_1$. The second term in Eq. (41) is also centered on axis and contains separable impulse functions [cf. Eq. (37)]. These impulse functions are at the strength of the uniform irradiance associated with the reference (i.e., $|A_R|^2$). The last two terms in Eq. (41) form complex conjugate pairs and contain the desired signal complex field $U_S(x_1,y_1)$, both scaled and shifted off axis by the coordinates $(x_R,y_R)$. Substituting Eq. (41) into Eq. (39) then yields, after rearranging the special functions, the desired off-axis estimate of the signal complex field $U_S(x_1,y_1)$.

Robert A. Raynor received his master's degree in applied physics from the Air Force Institute of Technology (AFIT). He currently works as a research physicist at the Air Force Research Laboratory, Directed Energy Directorate. His research efforts concentrate on developing models for wavefront sensors that use digital-holographic detection and tracking sensors that use partially coherent illumination of optically rough targets.
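As a numerical supplement to Appendix A, the chain of Fourier-transform imaging [Eq. (25)], off-axis hologram recording [Eqs. (29) and (30)], and inverse-Fourier-transform demodulation [Eqs. (39)-(41)] can be sketched on a discrete grid. The grid size, the placeholder pupil aberration, the reference tilt, and the overlap metric below are all illustrative assumptions, not the paper's simulation parameters.

```python
import numpy as np

N = 256
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)

# Exit-pupil signal field [Eq. (26)]: circular aperture with a smooth,
# placeholder phase standing in for turbulence-aberrated light.
radius = N // 8
aperture = np.sqrt(X**2 + Y**2) <= radius
phase = np.sin(2 * np.pi * X / N) + np.cos(2 * np.pi * Y / N)
U_pupil = aperture * np.exp(1j * phase)

# Eq. (25): the FPA-plane field is (up to constant and quadratic-phase
# factors) the 2-D Fourier transform of the exit-pupil field.
U_fpa = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U_pupil)))

# Eqs. (29) and (30): tilted off-axis reference and hologram irradiance.
# The tilt (xR, yR), in grid samples, sets where the demodulated lobe lands.
xR, yR = 80, 80
A_R = 10.0 * np.sqrt(np.mean(np.abs(U_fpa) ** 2))   # strong real reference
U_ref = A_R * np.exp(-1j * 2 * np.pi * (xR * X + yR * Y) / N)
I_holo = np.abs(U_fpa + U_ref) ** 2

# Eqs. (39)-(41): inverse transform of the hologram. The four terms separate
# spatially; windowing the lobe at (-xR, -yR) and recentering it isolates a
# scaled copy of the exit-pupil field.
corr_plane = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(I_holo)))
win = np.sqrt((X + xR) ** 2 + (Y + yR) ** 2) <= radius
lobe = np.roll(corr_plane * win, (yR, xR), axis=(0, 1))

# Normalized field overlap with the true pupil field; a value near 1
# indicates faithful recovery of both amplitude and phase.
overlap = np.abs(np.vdot(U_pupil, lobe)) / (
    np.linalg.norm(U_pupil) * np.linalg.norm(lobe) + 1e-30)
```

Note that the reference tilt is chosen large enough that the windowed lobe clears the on-axis autocorrelation term, whose support is twice the pupil diameter; this is the discrete analog of the off-axis separation requirement discussed above.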