## 1. Introduction

Digital-holographic detection shows distinct potential for applications that involve wavefront sensing in the presence of deep turbulence. As shown in Fig. 1, the use of digital-holographic detection in the off-axis image plane recording geometry (IPRG) provides access to an estimate of the amplitude and wrapped phase (i.e., the complex field) that exist in the exit-pupil plane of the imaging system. From the complex-field estimate, we can then pursue a multitude of applications such as atmospheric characterization,^{1} free-space laser communications,^{2} and adaptive-optics phase compensation.^{3}

The published literature often makes use of digital-holographic detection in the off-axis pupil plane or on-axis phase shifting recording geometries;^{4} however, in terms of simplicity, the off-axis IPRG shown in Fig. 1 offers a nice combination of functionality.^{5} For instance, when considering digital-holographic detection for applications that involve deep-turbulence wavefront sensing, the off-axis IPRG allows for the following multifunction capabilities.

• Incoherent imaging through passive illumination of an object.

• Coherent imaging through active illumination of an object.

• Digital-holographic detection through the interference of a signal with a reference.

• Estimation of the amplitude and wrapped phase via a two-dimensional (2-D) inverse fast Fourier transform (IFFT) of the hologram irradiance recorded on the focal-plane array (FPA).
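The estimation step in the last bullet can be sketched in a few lines. The following is a minimal, hedged stand-in for the off-axis IPRG processing chain, not the paper's code: it assumes a noiseless $128\times 128$ hologram, a unit-amplitude signal with a smooth synthetic phase, and a strong reference tilted by a quarter of the sampling bandwidth; none of these values come from the paper.

```python
import numpy as np

# Grid of centered pixel coordinates (sizes are illustrative).
N = 128
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)

# "Truth" signal field in the image plane: unit amplitude with a
# smooth synthetic phase standing in for an aberration.
U_S = np.exp(1j * 0.8 * np.sin(2 * np.pi * X / 40) * np.cos(2 * np.pi * Y / 55))

# Strong tilted reference: the linear phase shifts the cross term to
# the quarter point of the Fourier plane (off-axis recording).
A_R = 10.0
U_R = A_R * np.exp(2j * np.pi * (X + Y) / 4)

# Hologram irradiance recorded on the FPA.
I_H = np.abs(U_S + U_R) ** 2

# 2-D IFFT of the mean-subtracted hologram: the autocorrelation and DC
# terms sit at the center, and the two conjugate cross terms sit off axis.
F = np.fft.fftshift(np.fft.ifft2(I_H - I_H.mean()))

# Window the off-axis cross term at (+N/4, +N/4); it is proportional to
# the spectrum of the signal field alone.
c, w = N // 2 + N // 4, 8
est = F[c - w:c + w, c - w:c + w]
```

Up to small leakage from the other terms, the windowed lobe matches the reference-scaled spectrum of the signal field, which is what makes the subsequent complex-field estimation possible.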

From a beam-control standpoint,^{6} the multifunction capabilities listed above allow for a robust user interface that is not limited to wavefront sensing in the presence of an unresolved cooperative object (cf. Fig. 1). In practice, digital-holographic detection allows for the estimation of the complex field in the presence of an extended noncooperative object via speckle averaging and image-sharpening algorithms or the angular diversity created by using multiple transmitters and receivers.^{7–18} This versatility allows for long-range imaging,^{19} three-dimensional imaging,^{20} laser radar,^{21} and synthetic-aperture imaging.^{22} In general, the applications are abundant.^{23,24}

With wavefront-sensing applications in mind, the presence of deep turbulence tends to be the “Achilles’ heel” to modern-day solutions [e.g., the Shack–Hartmann wavefront sensor (WFS),^{25} which provides access to localized wavefront slope estimates]. This is said because coherent-light propagation through deep turbulence causes scintillation, which manifests as time-varying constructive and destructive interference between the object and receiver planes. The log-amplitude variance, which is also referred to as the Rytov number, gives a measure for the strength of the scintillation experienced by the coherent light. As the log-amplitude variance grows above $\sim 0.25$ (for a spherical wave), total-destructive interference gives rise to branch points in both the coherent light transmitted to the object and the coherent light received from the object. These branch points add a rotational component to the phase function that traditional-least-squares phase reconstruction algorithms cannot account for within the analysis. As such, the rotational component is often referred to as the “hidden phase” due to the foundational work of Fried.^{26}

In converting local wavefront slope estimates into unwrapped phase, the hidden phase gets mapped to the null space of traditional-least-squares phase reconstruction calculations. In turn, the unwrapped phase (i.e., the irrotational component) does not contain the branch points and associated branch cuts, which are unavoidable $2\pi $ phase discontinuities within the phase function.^{27} Note that branch-point-tolerant phase reconstruction algorithms do exist within the published literature;^{28–31} however, the performance of these algorithms needs to be quantified in hardware.^{32}

In addition to causing scintillation, the horizontal, low-altitude, and long-range propagation paths that are characteristic of deep-turbulence conditions can also lead to increased extinction. This outcome results in reduced transmittance due to molecular and aerosol absorption and scattering all along the propagation path.^{33,34} In turn, we can concisely say that scintillation and extinction simply lead to low signal-to-noise ratios (SNRs) when performing deep-turbulence wavefront sensing. This is said because scintillation and extinction result in total-destructive interference and light-efficiency losses, respectively, over the field of view (FOV) of the WFS.

Provided enough signal, there are interferometric wavefront-sensing techniques that perform well in the presence of deep turbulence (e.g., the point-diffraction and self-referencing interferometers,^{35,36} which create a reference by amplitude splitting and spatially filtering the received signal); however, in using these techniques, we cannot realistically approach a shot-noise-limited detection regime. In turn, digital-holographic detection offers a distinct way forward to combat the low SNRs caused by scintillation and extinction. In using digital-holographic detection, we can set the strength of the reference so that it boosts the signal above the read-noise floor of the FPA.

This paper explores the estimation accuracy of digital-holographic detection in the off-axis IPRG for wavefront sensing in the presence of deep turbulence and detection noise. As shown in Fig. 1, the analysis uses an ideal point-source beacon in the object plane to represent the active illumination of an unresolved cooperative object. The resulting spherical wave propagates along a horizontal propagation path through the deep-turbulence conditions that are of interest in this paper. In what follows, Sec. 2 reviews the setup and exploration of the problem space described above in Fig. 1. Section 3 then provides results with discussion, and Sec. 4 concludes this paper. Before moving on to the next section, it is important to note that much of the simulation framework used in this paper originates from an earlier conference paper by Spencer et al.^{37} It is our belief that this paper greatly extends the work contained in Ref. 37 by including the deleterious effects of detection noise within the analysis.

## 2. Setup and Exploration

This section discusses the setup and exploration needed for a series of computational wave-optics experiments which identify the performance of digital-holographic detection in the off-axis IPRG for wavefront sensing in the presence of deep turbulence and detection noise. The analysis uses many of the principles taught by Schmidt and Voelz in relatively recent SPIE Press publications.^{38,39} In addition, the analysis uses MATLAB^{®} with the help of AOTools and WaveProp.^{40,41} The Optical Sciences Company (tOSC) created these robust MATLAB^{®} toolboxes specifically for wave-optics simulations of this nature.

As shown in Fig. 1, the goal for the following analysis is to model digital-holographic detection in the off-axis IPRG for the purposes of deep-turbulence wavefront sensing. With Fig. 1 in mind, we need to further define the experimental parameter space. To help orient the reader, Fig. 2 pictorially shows the various planes of interest within the analysis. Note that the entrance-pupil plane effectively collimates the propagated light from the object plane, whereas the exit-pupil plane effectively focuses the propagated light to form the image plane at focus.

### 2.1. Model Setup and Exploration

Provided Fig. 2 and Appendix A, we can determine the 2-D Fourier transformation of the hologram photoelectron density ${D}_{H}({x}_{2},{y}_{2})$ as

## (1)

$${\tilde{D}}_{H}(\frac{-{x}_{1}}{\lambda f},\frac{-{y}_{1}}{\lambda f})=\frac{\eta T}{h\nu}{w}_{x}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{{w}_{x}}{\lambda f}{x}_{1}\right){w}_{y}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{{w}_{y}}{\lambda f}{y}_{1}\right)\phantom{\rule{0ex}{0ex}}[\frac{1}{{\lambda}^{2}{f}^{2}}{U}_{S}({x}_{1},{y}_{1})*{U}_{S}^{*}(-{x}_{1},-{y}_{1})+{|{A}_{R}|}^{2}\delta ({x}_{1})\delta ({y}_{1})+\frac{{A}_{R}^{*}{\mathrm{e}}^{jkf}}{j\lambda f}{U}_{S}({x}_{1}-{x}_{R},{y}_{1}-{y}_{R})-\frac{{A}_{R}{\mathrm{e}}^{-jkf}}{j\lambda f}{U}_{S}^{*}({x}_{1}+{x}_{R},{y}_{1}+{y}_{R})]\phantom{\rule{0ex}{0ex}}*\mathrm{comb}\left(\frac{{x}_{s}}{\lambda f}{x}_{1}\right)\mathrm{comb}\left(\frac{{y}_{s}}{\lambda f}{y}_{1}\right)\phantom{\rule{0ex}{0ex}}*\frac{N{x}_{s}}{\lambda f}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{N{x}_{s}}{\lambda f}{x}_{1}\right)\frac{M{y}_{s}}{\lambda f}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{M{y}_{s}}{\lambda f}{y}_{1}\right),$$^{42,43}

Through 2-D convolution with the separable comb functions and the convolution-sifting property of the impulse function, the terms contained within square brackets in Eq. (1) are repeated at intervals of $\lambda f/{x}_{s}$ and $\lambda f/{y}_{s}$ along the $x$ and $y$ axes, respectively. Thus, the final 2-D convolution with the separable narrow sinc functions serves to smooth out these repeated terms, whereas the amplitude modulation with the separable broadened sinc functions serves to dampen these repeated terms.

To help simplify the analysis to a case that we can easily simulate using $N\times N$ computational grids, let us assume that the FPA has adjacent square pixels, so that ${x}_{s}={y}_{s}={w}_{x}={w}_{y}={w}_{p}$. In so doing, we can rewrite Eq. (1) in terms of the diffraction-limited sampling quotient ${Q}_{I}$, where

## (2)

$${Q}_{I}=\frac{\lambda f}{{w}_{p}{D}_{1}}.$$

Physically, there are multiple ways to think about the relationship given in Eq. (2). One way is to say that the diffraction-limited sampling quotient ${Q}_{I}$ is a measure of the number of FPA pixels across the diffraction-limited half width of the incoherent point-spread function (PSF). Remember that for linear shift-invariant imaging systems, the incoherent PSF is the irradiance associated with an imaged point source [i.e., the squared magnitude of Eq. (25) in Appendix A].^{38} Another way to think about the diffraction-limited sampling quotient ${Q}_{I}$ is to say that it is a measure of the number of diffraction angles, $\lambda /{D}_{1}$, per pixel FOV, ${w}_{p}/f$, assuming small angles. In turn, the relationship given in Eq. (2) allows us to vary the sampling with the FPA pixels.
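The two readings above can be checked with quick arithmetic built on the relation ${Q}_{I}{D}_{1}=\lambda f/{w}_{p}$ quoted later in the text. In the sketch below, $D_1 = 30\ \mathrm{cm}$ is from the paper, while the wavelength and pixel pitch are assumptions chosen only to make the numbers concrete.

```python
# Illustrative arithmetic for the two readings of Q_I described above.
lam = 1.0e-6              # wavelength (m), assumed
D1 = 0.30                 # exit-pupil diameter (m), from the paper
wp = 10.0e-6              # FPA pixel pitch (m), assumed
f = 2 * wp * D1 / lam     # focal length chosen so that Q_I = 2

# Diffraction-limited sampling quotient, Q_I = lam*f/(wp*D1).
Q_I = lam * f / (wp * D1)

# Reading 1: pixels across the diffraction-limited half width lam*f/D1.
pixels_per_half_width = (lam * f / D1) / wp

# Reading 2: diffraction angles lam/D1 per pixel FOV wp/f (small angles).
angles_per_pixel_fov = (lam / D1) / (wp / f)
```

Both readings reduce algebraically to the same quantity, which is why either can be used to set the FPA sampling.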

Using Eq. (2), we can rewrite Eq. (1) in terms of the diffraction-limited sampling quotient ${Q}_{I}$, such that

## (3)

$${\tilde{D}}_{H}(\frac{-{x}_{1}}{\lambda f},\frac{-{y}_{1}}{\lambda f})=\frac{\eta T}{h\nu}{w}_{p}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{{x}_{1}}{{Q}_{I}{D}_{1}}\right){w}_{p}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{{y}_{1}}{{Q}_{I}{D}_{1}}\right)\phantom{\rule{0ex}{0ex}}[\frac{1}{{\lambda}^{2}{f}^{2}}{U}_{S}({x}_{1},{y}_{1})*{U}_{S}^{*}(-{x}_{1},-{y}_{1})+{|{A}_{R}|}^{2}\delta ({x}_{1})\delta ({y}_{1})\phantom{\rule{0ex}{0ex}}+\frac{{A}_{R}^{*}{\mathrm{e}}^{jkf}}{j\lambda f}{U}_{S}({x}_{1}-{x}_{R},{y}_{1}-{y}_{R})-\frac{{A}_{R}{\mathrm{e}}^{-jkf}}{j\lambda f}{U}_{S}^{*}({x}_{1}+{x}_{R},{y}_{1}+{y}_{R})]\phantom{\rule{0ex}{0ex}}*\mathrm{comb}\left(\frac{{x}_{1}}{{Q}_{I}{D}_{1}}\right)\mathrm{comb}\left(\frac{{y}_{1}}{{Q}_{I}{D}_{1}}\right)\phantom{\rule{0ex}{0ex}}*\frac{N}{{Q}_{I}{D}_{1}}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{{x}_{1}}{{Q}_{I}{D}_{1}/N}\right)\frac{N}{{Q}_{I}{D}_{1}}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{{y}_{1}}{{Q}_{I}{D}_{1}/N}\right).$$

Provided Eq. (3), we must use a window function $w({x}_{1},{y}_{1})$ to obtain an estimate ${\widehat{U}}_{S}({x}_{1},{y}_{1})$ of the desired signal complex field ${U}_{S}({x}_{1},{y}_{1})$ [cf. Fig. 2 and Eq. (26) in Appendix A]. Specifically,

## (4)

$${\widehat{U}}_{S}({x}_{1},{y}_{1})=w({x}_{1},{y}_{1}){\tilde{D}}_{H}(\frac{-{x}_{1}}{\lambda f},\frac{-{y}_{1}}{\lambda f}).$$^{42}

The off-axis location of the reference must be chosen so that the repeated terms within Eq. (3) do not overlap and cause significant aliasing. As such, the Nyquist rate is ${Q}_{I}{D}_{1}=\lambda f/{w}_{p}$ and the Nyquist interval is $1/{Q}_{I}{D}_{1}={w}_{p}/\lambda f$ when ${x}_{R}={y}_{R}={Q}_{I}{D}_{1}/4$. Assuming that $N\to \infty $, ${Q}_{I}\ge 2$, $|{A}_{R}|\gg |{A}_{S}|$, and

## (5)

$$w({x}_{1},{y}_{1})=\frac{\mathrm{cyl}\left[\frac{\sqrt{{({x}_{1}-{x}_{R})}^{2}+{({y}_{1}-{y}_{R})}^{2}}}{{D}_{1}}\right]}{\frac{\eta T}{h\nu}\frac{{A}_{R}^{*}{\mathrm{e}}^{jkf}{w}_{p}^{2}}{j\lambda f}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{{x}_{1}}{{Q}_{I}{D}_{1}}\right)\mathrm{sinc}\left(\frac{{y}_{1}}{{Q}_{I}{D}_{1}}\right)},$$

Before moving on to the simulation setup and exploration, it is informative to develop a closed-form expression for the analytical SNR. For this purpose, we can approximate the estimated signal power ${\widehat{P}}_{S}$ as

where ${\overline{m}}_{R}$ is the mean number of reference photoelectrons detected per pixel and ${\overline{m}}_{S}$ is the mean number of signal photoelectrons detected per pixel. Now we need to account for the estimated noise power ${\widehat{P}}_{N}$. Pixel to pixel, the FPA creates photoelectrons via statistically independent (i.e., delta-correlated) and zero-mean random processes, so that the variance ${\sigma}^{2}$ is equivalent to the noise power. Here,

where ${\overline{m}}_{B}$ is the mean number of photoelectrons associated with the background illumination (e.g., from passive illumination from the sun) and ${\sigma}_{C}^{2}$ is the variance associated with pixel read noise (i.e., the FPA circuitry). In writing Eq. (10), note that we assume the use of a Poisson-distributed random process for the various sources of illumination that are incident on the FPA. In so doing, the mean number of photoelectrons is equal to the variance of the photoelectrons.^{44,45} Also note that we assume the use of a Gaussian-distributed random process for the various sources of pixel read noise in the FPA.
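The per-pixel noise model above can be sketched as follows. This is a hedged stand-in, not the paper's code: $\sigma_C = 100$ pe and $\overline{m}_R = 25\times 10^3$ pe match values quoted later in the paper, while the signal level and pixel count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 200_000
m_R, m_S, m_B = 25_000.0, 50.0, 0.0   # mean photoelectrons per pixel
sigma_C = 100.0                        # read-noise standard deviation (pe)

# Poisson-distributed photoelectrons for the summed illumination
# sources (for Poisson statistics the variance equals the mean), plus
# zero-mean Gaussian read noise from the FPA circuitry.
mean_pe = m_R + m_S + m_B
counts = rng.poisson(mean_pe, n_pixels).astype(float)
counts += rng.normal(0.0, sigma_C, n_pixels)

# Because the processes are independent, the variances add.
expected_var = mean_pe + sigma_C**2
```

Since the processes are statistically independent, the simulated per-pixel variance should match the sum of the Poisson mean and the read-noise variance.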

Provided Eq. (10), the estimated noise power ${\widehat{P}}_{N}$ follows from the noise variance ${\sigma}^{2}$ as

where the multiplying factor is the ratio of the area associated with the window function $w({x}_{1},{y}_{1})$ to the area associated with the side length ${Q}_{I}{D}_{1}=\lambda f/{w}_{p}$ of the $N\times N$ computational grid in the Fourier plane. The analytical SNR, Eq. (13), then follows from Eqs. (7)–(12). We will validate the use of this closed-form expression in the simulation setup and exploration to follow.

### 2.2. Simulation Setup and Exploration

For all of the computational wave-optics experiments presented throughout this paper, we used $N\times N$ computational grids. For example, to simulate the propagation of an ideal point-source beacon through deep-turbulence conditions, we used $4096\times 4096$ grid points and the split-step beam propagation method (BPM).^{38–41} WaveProp and AOTools made use of a very narrow sinc function with a raised-cosine envelope to simulate an ideal point-source beacon. The sampling of this function and the object-plane side length were automatically set, so that after propagation from the object plane to the entrance-pupil plane, the illuminated region of interest was half the user-defined, entrance-pupil plane side length (cf. Fig. 2). Put another way, the simulations satisfied Fresnel scaling [i.e., $N={S}_{1}{S}_{2}/(\lambda Z)$, where ${S}_{1}=16{D}_{1}$ and ${S}_{2}$ are the object and entrance-pupil side lengths, respectively]. Altogether, this provided an entrance-pupil plane side length of ${D}_{1}$ after cropping out the center $256\times 256$ grid points. As mentioned previously, using ideal thin lenses, the entrance-pupil plane effectively collimated the propagated light from the object plane, whereas the exit-pupil plane effectively focused the propagated light to form the image plane at focus (cf. Fig. 2).

As listed in Table 1, we used five different horizontal-path scenarios to create the deep-turbulence trade space of interest in this paper. Provided the index of refraction structure parameter ${C}_{n}^{2}$, we determined the log-amplitude variances for a plane wave, ${\sigma}_{\chi -pw}^{2}$, and a spherical wave, ${\sigma}_{\chi -sw}^{2}$, using the following equations:^{34}

Based on Eqs. (14)–(17), the computational wave-optics experiments used 10 phase screens with equal spacing to simulate the propagation of an ideal point-source beacon through deep-turbulence conditions using the BPM.^{38} This choice provided low percentage errors (less than 0.5%) between the continuous and discrete calculations using Eqs. (14)–(17).

## Table 1

The deep-turbulence trade space of interest in this paper. Remember that the log-amplitude variance ${\sigma}_{\chi}^{2}$, which is also referred to as the Rytov number, gives a measure for the strength of the scintillation. As ${\sigma}_{\chi}^{2}$ grows above $\sim 0.25$ (for a spherical wave), scintillation gives rise to branch points in the phase function. Also remember that the coherence diameter ${r}_{0}$, which is also referred to as the Fried parameter, gives a measure for the achievable imaging resolution. As the ratio of exit-pupil diameter ${D}_{1}$ to ${r}_{0}$ grows above $\sim 4$ (for a spherical wave), higher-order aberrations beyond tilt start to limit the achievable imaging resolution. Here, ${D}_{1}=30\text{\hspace{0.17em}}\mathrm{cm}$.

Scenario | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---
${C}_{n}^{2}\text{\hspace{0.17em}}(\mathrm{m}^{-2/3})$ | $1.00\times {10}^{-15}$ | $1.50\times {10}^{-15}$ | $2.00\times {10}^{-15}$ | $2.50\times {10}^{-15}$ | $3.00\times {10}^{-15}$
${\sigma}_{\chi -sw}^{2}$ | 0.135 | 0.202 | 0.270 | 0.337 | 0.404
${\sigma}_{\chi -pw}^{2}$ | 0.333 | 0.500 | 0.667 | 0.833 | 1.00
${r}_{0-sw}\text{\hspace{0.17em}}(\mathrm{cm})$ | 9.92 | 7.78 | 6.55 | 5.73 | 5.14
${r}_{0-pw}\text{\hspace{0.17em}}(\mathrm{cm})$ | 5.51 | 4.32 | 3.63 | 3.18 | 2.85
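The Table 1 entries can be checked against the standard constant-${C}_{n}^{2}$ plane- and spherical-wave expressions for the Rytov variance and coherence diameter. The sketch below is hedged: the 0.307, 0.124, and 0.423 constants are the standard results from the propagation literature, while the wavelength and path length are not stated in this excerpt, so $\lambda = 1\ \mu\mathrm{m}$ and $Z = 7.5\ \mathrm{km}$ are assumptions chosen because they reproduce the tabulated values.

```python
import numpy as np

# Standard constant-Cn2 expressions:
#   sigma_chi_pw^2 = 0.307 Cn2 k^(7/6) Z^(11/6)
#   sigma_chi_sw^2 = 0.124 Cn2 k^(7/6) Z^(11/6)
#   r0_pw = (0.423 k^2 Cn2 Z)^(-3/5)
#   r0_sw = ((3/8) * 0.423 k^2 Cn2 Z)^(-3/5)
lam = 1.0e-6      # wavelength (m), assumed
Z = 7.5e3         # path length (m), assumed
Cn2 = 1.00e-15    # Table 1, scenario 1
k = 2 * np.pi / lam

sigma2_pw = 0.307 * Cn2 * k**(7/6) * Z**(11/6)
sigma2_sw = 0.124 * Cn2 * k**(7/6) * Z**(11/6)
r0_pw = (0.423 * k**2 * Cn2 * Z) ** (-3/5)
r0_sw = ((3 / 8) * 0.423 * k**2 * Cn2 * Z) ** (-3/5)
```

With these assumed values, the computed quantities land on the scenario 1 column of Table 1 (0.333, 0.135, 5.51 cm, and 9.92 cm) to within rounding.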

Propagation to the image plane from the exit-pupil plane occurred via a three-step process using WaveProp and AOTools: (1) doubling the number of $N\times N$ grid points in the exit-pupil plane with a side length of ${D}_{1}$ from $256\times 256$ grid points to $512\times 512$ grid points via zero padding; (2) numerically solving the convolution form of the Fresnel diffraction integral via 2-D FFTs; and (3) cropping out the center $256\times 256$ grid points, so that $f={Q}_{I}{D}_{1}^{2}/(256\lambda )$ (i.e., the image plane side length was equal to the exit-pupil plane side length). As shown in Fig. 3, by varying the diffraction-limited sampling quotient, ${Q}_{I}$, the number of FPA pixels across the diffraction-limited imaging bucket, ${D}_{2}$, also varied. Here, ${D}_{2}=2.44\lambda f/{D}_{1}$ with ${D}_{1}=30\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{cm}$.
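The three-step process above can be sketched with the convolution (transfer-function) form of the Fresnel diffraction integral. This is a hedged stand-in rather than WaveProp/AOTools code: the $256\times 256$ pupil grid and $f={Q}_{I}{D}_{1}^{2}/(256\lambda )$ follow the paper, while the wavelength and the choice ${Q}_{I}=4$ are illustrative.

```python
import numpy as np

def fresnel_prop(U, dx, lam, dz):
    """Fresnel propagate field U (n x n, sample spacing dx) a distance dz."""
    n = U.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Paraxial Fresnel transfer function (constant phase e^{jk dz} dropped).
    H = np.exp(-1j * np.pi * lam * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(U) * H)

n = 256
lam = 1.0e-6                     # wavelength (m), assumed
D1 = 0.30                        # exit-pupil diameter (m), from the paper
dx = D1 / n                      # pupil-plane sample spacing
Q_I = 4.0
f = Q_I * D1**2 / (256 * lam)    # focal length, as in the paper

# Circular exit pupil with the thin-lens focusing phase applied.
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= (D1 / 2) ** 2).astype(complex)
pupil *= np.exp(-1j * np.pi / (lam * f) * (X**2 + Y**2))

# Step 1: zero pad 256x256 -> 512x512.
big = np.zeros((2 * n, 2 * n), complex)
big[n // 2:3 * n // 2, n // 2:3 * n // 2] = pupil
# Step 2: propagate the focal distance f via 2-D FFTs.
# Step 3: crop out the center 256x256 grid points.
img = fresnel_prop(big, dx, lam, f)[n // 2:3 * n // 2, n // 2:3 * n // 2]
```

With an unaberrated pupil, the cropped image-plane field concentrates into a diffraction-limited spot at the grid center, which is the expected sanity check for the propagation chain.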

For all of the computational wave-optics experiments presented in this paper (including those contained in Fig. 3), we set the pixel read-noise standard deviation to 100 pe and the pixel well depth to $100\times {10}^{3}$ pe. To simulate different SNRs [cf. Eq. (13)], we neglected to include background-illumination effects, and we set the amplitude of the reference $|{A}_{R}|$ to produce a mean number of reference photoelectrons detected per pixel equal to 25% of the pixel well depth (i.e., ${\overline{m}}_{B}=0$ and ${\overline{m}}_{R}=25\times {10}^{3}$ pe) [cf. Eq. (8)]. We then scaled the amplitude of the signal $|{A}_{S}|$ to have the appropriate mean number of signal photoelectrons ${\overline{m}}_{S}$ detected per pixel [cf. Eq. (9)]. As such, the standard deviation of the shot noise varied within the simulations and was the dominant source of detection noise.
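As a quick sanity check on the claim that shot noise dominates: with the reference set to 25% of the well depth, the reference alone contributes a shot-noise standard deviation of $\sqrt{25\times 10^{3}}\approx 158$ pe, which already exceeds the 100 pe read-noise standard deviation. A minimal sketch (the signal levels are illustrative assumptions):

```python
import math

well_depth = 100e3        # pixel well depth (pe), from the paper
m_R = 0.25 * well_depth   # 25e3 pe mean reference photoelectrons, from the paper
m_B = 0.0                 # background neglected, as in the paper
sigma_C = 100.0           # read-noise standard deviation (pe), from the paper

for m_S in (10.0, 100.0, 1000.0):       # illustrative signal levels
    shot_std = math.sqrt(m_R + m_S + m_B)  # Poisson: variance equals mean
    assert shot_std > sigma_C              # shot noise dominates read noise
```

Because the reference level is fixed by the well depth, this dominance holds regardless of how weak the signal is.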

Remember that in the IPRG (cf. Figs. 1 and 2), digital-holographic detection provides access to an estimate of the amplitude and wrapped phase (i.e., the complex field) that exist in the exit-pupil plane of the imaging system. We obtained access to this complex-field estimate using the following steps: (1) within the image plane, interfering the signal with the reference [cf. Eq. (29) in Appendix A]; (2) recording the hologram irradiance on the FPA to create a digital hologram with Poisson-distributed shot noise and Gaussian-distributed pixel read noise; (3) taking the 2-D IFFT of the digital hologram to go to the Fourier plane; and (4) within the Fourier plane, windowing the off-axis complex-field estimate. To perform an apples-to-apples comparison, we kept the total FOV constant and varied the number of pixels $N$ across the FPA, such that

## (18)

$$N=\mathrm{FOV}\frac{f}{{w}_{p}}=64{Q}_{I},$$

where $\mathrm{FOV}=64\lambda /{D}_{1}$. This choice ensured that we had the same number of pixels and effective detection noise across our complex-field estimates despite the fact that we varied the diffraction-limited sampling quotient ${Q}_{I}$ within the computational wave-optics experiments. Here again, $f={Q}_{I}{D}_{1}^{2}/(256\lambda )$ was the focal length and ${Q}_{I}{D}_{1}=\lambda f/{w}_{p}$ was the side length.

To generate results for the entire deep-turbulence trade space (cf. Table 1), we used the field-estimated Strehl ratio ${S}_{F}$, such that

## (19)

$${S}_{F}=\frac{{|\u27e8{U}_{S}({x}_{1},{y}_{1}){\widehat{U}}_{S}^{*}({x}_{1},{y}_{1})\u27e9|}^{2}}{\u27e8{|{U}_{S}({x}_{1},{y}_{1})|}^{2}\u27e9\u27e8{|{\widehat{U}}_{S}({x}_{1},{y}_{1})|}^{2}\u27e9},$$^{40}

^{,}

^{41}

## (20)

$$S=\frac{{|\u27e8{U}_{S}({x}_{1},{y}_{1})\u27e9|}^{2}}{\u27e8{|{U}_{S}({x}_{1},{y}_{1})|}^{2}\u27e9}.$$

Figures 4(a) and 4(b) show the wrapped phase, and Figs. 4(c) and 4(d) show the normalized amplitude, in the Fourier plane for one independent realization of scenario 5 in Table 1 and detection noise. In Fig. 4, one can identify the complex-field estimate within the white circles of diameter ${D}_{1}$. Specifically, as the diffraction-limited sampling quotient ${Q}_{I}$ increases, so does the side length of the Fourier plane; however, the exit-pupil diameter ${D}_{1}$ remains constant. By windowing the data found within the white circles in Fig. 4, we obtained the results shown in Fig. 5. Here, we see that as the diffraction-limited sampling quotient ${Q}_{I}$ increases, the field-estimated Strehl ratio ${S}_{F}$ decreases.
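The field-estimated Strehl ratio of Eq. (19) is straightforward to implement as a spatial average over the exit-pupil samples. The sketch below uses synthetic fields rather than the paper's simulation outputs; note that the metric is insensitive to a global piston phase, which makes it convenient for comparing a complex-field estimate against the truth field.

```python
import numpy as np

def field_estimated_strehl(U, U_hat):
    """Eq. (19): |<U U_hat*>|^2 / (<|U|^2> <|U_hat|^2>), with <.> a spatial mean."""
    inner = np.mean(U * np.conj(U_hat))
    return abs(inner) ** 2 / (np.mean(np.abs(U) ** 2) * np.mean(np.abs(U_hat) ** 2))

rng = np.random.default_rng(1)
# Synthetic unit-amplitude "truth" field with random phase.
U = np.exp(1j * rng.uniform(-np.pi, np.pi, (256, 256)))

S_perfect = field_estimated_strehl(U, U.copy())   # exact estimate -> 1
S_noisy = field_estimated_strehl(
    U, U + 0.5 * (rng.normal(size=U.shape) + 1j * rng.normal(size=U.shape))
)                                                 # corrupted estimate -> < 1
```

By the Cauchy–Schwarz inequality, $S_F$ is bounded by 1 and decreases as the estimate decorrelates from the truth field.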

To determine the numerical SNR presented in Figs. 4 and 5, we performed the following steps using the numerical data found in Figs. 4(b) and 4(d) corresponding to a diffraction-limited sampling quotient of ${Q}_{I}=4$.

• Using the numerical data contained in the bottom-right circle, we computed the mean of the squared magnitude of the complex-field estimate to numerically determine the estimated signal power plus the noise power ${\widehat{P}}_{S+N}^{\prime}$ [cf. Eqs. (7) and (11)].

• Next, using the numerical data contained in the bottom-left circle, we computed the mean of the squared magnitude of the detection noise to numerically determine the estimated noise power ${\widehat{P}}_{N}^{\prime}$.

• Subtracting the second calculation from the first, we numerically determined the estimated signal power, so that ${\widehat{P}}_{S}^{\prime}={\widehat{P}}_{S+N}^{\prime}-{\widehat{P}}_{N}^{\prime}$.

• The numerically determined SNR then followed as ${\mathrm{SNR}}^{\prime}={\widehat{P}}_{S}^{\prime}/{\widehat{P}}_{N}^{\prime}$.
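The four steps above can be sketched on synthetic Fourier-plane data. The window contents here are random stand-ins, a constant-magnitude "signal" lobe plus circular complex Gaussian noise and a same-sized noise-only region, not the paper's simulation outputs.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
noise_sigma = 1.0    # per-sample complex noise standard deviation
signal_amp = 3.0     # signal magnitude -> true SNR of 9

signal = signal_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, (n, n)))
noise1 = noise_sigma * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
noise2 = noise_sigma * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)

# Step 1: mean squared magnitude in the signal-bearing window.
P_sn = np.mean(np.abs(signal + noise1) ** 2)
# Step 2: mean squared magnitude in the noise-only window.
P_n = np.mean(np.abs(noise2) ** 2)
# Step 3: estimated signal power by subtraction.
P_s = P_sn - P_n
# Step 4: numerically determined SNR.
snr = P_s / P_n
```

With a true signal-to-noise power ratio of 9, the estimate lands near 9 to within the statistical fluctuation of the two window averages.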

We also used these steps to validate the use of the closed-form expression contained in Eq. (13). For this purpose, Fig. 6 presents percentage error results as a function of the analytical SNR. In Fig. 6, we averaged the results obtained from 20 independent realizations of scenarios 1 and 5 in Table 1 and 20 independent realizations of detection noise. Note that the error bars depict the width of the standard deviation. Also note that we only used numerical data corresponding to a diffraction-limited sampling quotient of ${Q}_{I}=4$, so that there was no functional overlap contained within the results [cf. Eq. (3)].

The analysis used multiple image-processing tricks to obtain the results presented in Figs. 3–6. With that said, the first image-processing trick was to subtract the mean from the recorded digital hologram. This removed the on-axis DC term from the numerical data contained in the Fourier plane. Next, the analysis applied a raised cosine window to the zero-mean digital hologram with eight-pixel-wide tapers at the edges of the FPA. This, combined with zero padding, helped to mitigate the effects of aliasing from using $N\times N$ computational grids and 2-D IFFTs.^{38–41} In practice, the analysis zero-padded the windowed zero-mean digital hologram to ensure that the complex-field estimate in the Fourier plane contained $256\times 256$ grid points within the exit-pupil diameter ${D}_{1}$. This outcome provided the same number of grid points as the exit-pupil plane for the sake of computing the field-estimated Strehl ratio ${S}_{F}$ with the "truth" complex field [cf. Eq. (19)]. Note that these image-processing tricks also apply to the results presented in the next section.
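The mean-subtraction, tapering, and zero-padding steps can be sketched as follows. The eight-pixel taper width matches the text, but the hologram data and grid sizes are illustrative stand-ins, and since the exact taper shape is not specified in this excerpt, the half-cosine ramp below is an assumption.

```python
import numpy as np

def raised_cosine_window(n, taper=8):
    """1-D window: unity in the middle, half-cosine ramps over `taper` samples."""
    w = np.ones(n)
    ramp = 0.5 * (1 - np.cos(np.pi * (np.arange(taper) + 0.5) / taper))
    w[:taper] = ramp
    w[-taper:] = ramp[::-1]
    return w

n = 128
rng = np.random.default_rng(3)
hologram = rng.poisson(1000.0, (n, n)).astype(float)  # stand-in digital hologram

# Trick 1: subtract the mean to suppress the on-axis DC term.
zero_mean = hologram - hologram.mean()

# Trick 2: raised-cosine window with eight-pixel tapers at the FPA edges.
w2d = np.outer(raised_cosine_window(n), raised_cosine_window(n))
tapered = zero_mean * w2d

# Trick 3: zero pad before the 2-D IFFT to refine Fourier-plane sampling
# and mitigate aliasing.
padded = np.zeros((2 * n, 2 * n))
padded[:n, :n] = tapered
fourier = np.fft.ifft2(padded)
```

Doubling the grid via zero padding halves the Fourier-plane sample spacing, which is how the analysis matches the complex-field estimate to the $256\times 256$ exit-pupil grid.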

## 3. Results

Figure 7 shows field-estimated Strehl ratio ${S}_{F}$ results as a function of the diffraction-limited sampling quotient ${Q}_{I}$. Here, we averaged the results obtained from 20 independent realizations of scenarios 1 to 5 in Table 1 and 20 independent realizations of detection noise. In Fig. 7, the error bars depict the width of the standard deviation. With this in mind, the analytical SNR increases from 1 in Fig. 7(a) to 10, 20, and 100 in Figs. 7(b), 7(c), and 7(d), respectively [cf. Eq. (13)]. Note that as the analytical SNR increases, the performance trends flip-flop. This outcome is due to functional overlap introducing additional shot noise into the complex-field estimate when $2\le {Q}_{I}<4$. As ${Q}_{I}$ increases, this functional overlap decreases and the additional shot noise plays less of a role depending on the amount of smoothing [cf. Eq. (3)].

The results shown in Fig. 7 do not agree with the results presented in Ref. 37. This is said because the performance trends are opposite of those found in Ref. 37, particularly for high SNRs. Regardless of the strength of the aberrations, Ref. 37 showed that for a constant number of pixels $N$ across the FPA, the average ${S}_{F}$ values are always greatest given ${Q}_{I}=2$. In general, lower ${Q}_{I}$’s provide more samples across the complex-field estimate, which in turn minimizes the smoothing caused by the final 2-D convolution in Eq. (3). The results presented in Ref. 37, however, did not include the deleterious effects of detection noise.

In the presence of detection noise, lower ${Q}_{I}$’s also increase the detection-noise sampling, which in turn degrades the complex-field estimate. To combat this effect, we chose to vary the number of pixels $N$ across the FPA to keep the total FOV constant [cf. Eq. (18)]. With respect to Fig. 7, this choice decreases the amount of detection-noise sampling for lower ${Q}_{I}$’s but increases the amount of smoothing caused by the final 2-D convolution in Eq. (3).

Remember that if the amplitude of the reference is set to be well above the amplitude of the signal (i.e., $|{A}_{R}|\gg |{A}_{S}|$), then the functional overlap in Eq. (3) becomes negligible when $2\le {Q}_{I}<4$. With that said, Ref. 37 set the amplitude of the reference to be 10 times that of the signal (i.e., ${|{A}_{R}|}^{2}=100\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{W}/{\mathrm{m}}^{2}$ and ${|{A}_{S}|}^{2}=1\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{W}/{\mathrm{m}}^{2}$) [cf. Eqs. (8) and (9)]. Radiometrically speaking, both of these values are impractical given the capabilities of modern-day, high-framerate, and short-wave-infrared (SWIR) FPAs. As such, the results presented in Fig. 7 tell the true story and the results presented in Ref. 37 tell the story given infinite SNR. Note that we would extend our results out to those obtained in Ref. 37; however, given the parameters of our FPA, we empirically determined that pixel saturation nominally occurs for analytical SNRs greater than 250 [cf. Eq. (13)]. This outcome occurs because of deep-turbulence scintillation (i.e., hotspots due to constructive interference).

The results presented in Fig. 7 ultimately show less than 5% variation in the ${S}_{F}$ values for the different ${Q}_{I}$ values within each plot. In terms of efficiently using the FPA pixels, the reader might conclude that there are distinct benefits to operating at lower ${Q}_{I}$’s despite the minor ($\sim 5\%$) performance penalty at high SNRs. Before moving on to the next section, it is important to note that provided different FPA parameters, such as a larger pixel well depth, the results presented in Fig. 7 might change; however, the parameters chosen for our FPA are indicative of modern-day, high-framerate, and SWIR FPAs.

## 4. Conclusion

The results presented in this paper serve two purposes. The first purpose is to validate the setup and exploration presented in Sec. 2. In turn, the second purpose is to allow the reader to assess the number of pixels, pixel FOV, pixel well depth, and read-noise standard deviation needed from an FPA when using digital-holographic detection in the off-axis IPRG for deep-turbulence wavefront sensing.

Digital-holographic detection, in general, offers a distinct way forward to combat the low SNRs caused by scintillation and extinction, and it is our belief that the analysis presented throughout this paper shows that this statement is true. In using digital-holographic detection, we can set the strength of the reference so that it boosts the signal above the read-noise floor of the FPA. As such, we can approach a shot-noise-limited detection regime. This last statement is of course dependent on the parameters of the FPA, such as the pixel well depth. Nevertheless, given that scintillation and extinction lead to low SNRs, it is important that we reach the shot-noise limit in order to better perform deep-turbulence wavefront sensing. This outcome will allow future research efforts to better explore the associated branch-point problem.

## Appendices

## Appendix A

Using the convolution form of the Fresnel diffraction integral (cf. Fig. 2), we can represent the signal complex field ${U}_{S}({x}_{2},{y}_{2})$ incident on the FPA as

## (21)

$${U}_{S}({x}_{2},{y}_{2})=\frac{{\mathrm{e}}^{jkf}}{j\lambda f}{\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}{U}_{S}^{+}({x}_{1},{y}_{1})\mathrm{exp}\{j\frac{k}{2f}[{({x}_{2}-{x}_{1})}^{2}+{({y}_{2}-{y}_{1})}^{2}]\}\mathrm{d}{x}_{1}\text{\hspace{0.17em}}\mathrm{d}{y}_{1},$$

## (23)

$${T}_{P}({x}_{1},{y}_{1})=\mathrm{cyl}\left(\frac{\sqrt{{x}_{1}^{2}+{y}_{1}^{2}}}{{D}_{1}}\right)\mathrm{exp}[-j\frac{k}{2f}({x}_{1}^{2}+{y}_{1}^{2})]$$

## (24)

$$\mathrm{cyl}({\rho}_{1})=\{\begin{array}{cc}\begin{array}{c}1\\ 0.5\\ 0\end{array}& \begin{array}{c}0\le {\rho}_{1}<0.5\\ {\rho}_{1}=0.5\\ {\rho}_{1}>0.5\end{array}\end{array}$$

## (25)

$${U}_{S}({x}_{2},{y}_{2})=\frac{{\mathrm{e}}^{jkf}}{j\lambda f}\text{\hspace{0.17em}}\mathrm{exp}[j\frac{k}{2f}({x}_{2}^{2}+{y}_{2}^{2})]\mathcal{F}{\{{U}_{S}({x}_{1},{y}_{1})\}}_{{\nu}_{x}=\frac{{x}_{2}}{\lambda f},{\nu}_{y}=\frac{{y}_{2}}{\lambda f}},$$

## (26)

$${U}_{S}({x}_{1},{y}_{1})={U}_{S}^{-}({x}_{1},{y}_{1})\mathrm{cyl}\left(\frac{\sqrt{{x}_{1}^{2}+{y}_{1}^{2}}}{{D}_{1}}\right)$$

## (27)

$$\tilde{V}({\nu}_{x},{\nu}_{y})=\mathcal{F}{\{V(x,y)\}}_{{\nu}_{x},{\nu}_{y}}={\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}V(x,y){\mathrm{e}}^{-j2\pi (x{\nu}_{x}+y{\nu}_{y})}\mathrm{d}x\text{\hspace{0.17em}}\mathrm{d}y.$$

## (28)

$$V(x,y)={\mathcal{F}}^{-1}{\{\tilde{V}({\nu}_{x},{\nu}_{y})\}}_{x,y}={\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}\tilde{V}({\nu}_{x},{\nu}_{y}){\mathrm{e}}^{j2\pi (x{\nu}_{x}+y{\nu}_{y})}\mathrm{d}{\nu}_{x}\text{\hspace{0.17em}}\mathrm{d}{\nu}_{y}.$$

## (29)

$${U}_{R}({x}_{2},{y}_{2})={A}_{R}\text{\hspace{0.17em}}\mathrm{exp}[j\frac{k}{2f}({x}_{2}^{2}+{y}_{2}^{2})]\mathrm{exp}(-j2\pi {x}_{R}\frac{{x}_{2}}{\lambda f})\mathrm{exp}(-j2\pi {y}_{R}\frac{{y}_{2}}{\lambda f}),$$

Provided Eqs. (25)–(29), we can determine the hologram irradiance ${I}_{H}({x}_{2},{y}_{2})$ incident on the FPA as

in units of watts per square meter $(\mathrm{W}/{\mathrm{m}}^{2})$. In practice, the FPA converts the hologram irradiance ${I}_{H}({x}_{2},{y}_{2})$, which is in an analog form, into a form that is suitable for digital image processing. Following the approach taken by Gaskill,^{42} let us assume that "digitization" takes place at sampling intervals of ${x}_{s}$ and ${y}_{s}$, which are the $x$- and $y$-axis pixel pitches of the FPA (cf. Fig. 2). At any particular pixel, we can then estimate the hologram irradiance ${I}_{H}({x}_{2},{y}_{2})$ by computing its average value over the active area of a pixel centered at ${x}_{2}=n{x}_{s}$ and ${y}_{2}=m{y}_{s}$, where $n=1$ to $N$ and $m=1$ to $M$.
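The pixel-area average that Eq. (31) formalizes is simply a box filter applied over each pixel's active area. A minimal one-dimensional numpy sketch follows; the pitch, active-area width, and fringe pattern are hypothetical values for illustration, not parameters from the paper.

```python
import numpy as np

# Hypothetical FPA geometry (illustrative only): pixel pitch and
# active-area width, both in meters.
x_s = 8.0e-6           # pixel pitch
w_x = 6.0e-6           # active-area width (fill factor < 1)
samples_per_pixel = 8  # fine-grid samples spanning one pitch

# Fine-grid hologram irradiance over a row of pixels (toy fringes, W/m^2).
n_pixels = 64
x = (np.arange(n_pixels * samples_per_pixel) - n_pixels * samples_per_pixel / 2) \
    * (x_s / samples_per_pixel)
I_H = 1.0 + np.cos(2 * np.pi * x / (4 * x_s))

def pixel_average(I, x, centers, w):
    """Average the irradiance over each pixel's active area [c - w/2, c + w/2]."""
    out = np.empty(len(centers))
    for i, c in enumerate(centers):
        mask = np.abs(x - c) <= w / 2
        out[i] = I[mask].mean()
    return out

centers = (np.arange(n_pixels) - n_pixels // 2) * x_s
I_hat = pixel_average(I_H, x, centers, w_x)
```

Note that the averaging attenuates the fringe contrast, which is exactly the sinc roll-off that reappears later in Eqs. (39) and (42).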

## (31)

$${\widehat{I}}_{H}(n{x}_{s},m{y}_{s})={\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}{I}_{H}({x}_{2}^{\prime},{y}_{2}^{\prime})\frac{1}{{w}_{x}}\mathrm{rect}\left(\frac{{x}_{2}^{\prime}-n{x}_{s}}{{w}_{x}}\right)\frac{1}{{w}_{y}}\mathrm{rect}\left(\frac{{y}_{2}^{\prime}-m{y}_{s}}{{w}_{y}}\right)\mathrm{d}{x}_{2}^{\prime}\text{\hspace{0.17em}}\mathrm{d}{y}_{2}^{\prime},$$

where ${w}_{x}$ and ${w}_{y}$ are the $x$- and $y$-axis widths of the active area of a pixel and, in analogy to Eq. (24), the rectangle function is defined as

## (32)

$$\mathrm{rect}(x)=\{\begin{array}{cc}\begin{array}{c}1\\ 0.5\\ 0\end{array}& \begin{array}{c}|x|<0.5\\ |x|=0.5\\ |x|>0.5\end{array}\end{array}$$

Neglecting the effects of pixel edge diffusion in the FPA,^{4} recall that the number of hologram photoelectrons ${m}_{H}(n{x}_{s},m{y}_{s})$, at any particular pixel and time interval, is a random process whose mean is given by Eq. (33).^{44}
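Because photodetection is shot-noise limited, the recorded count at each pixel fluctuates as a Poisson random variable about the mean of Eq. (33). A short numpy sketch follows; the quantum efficiency, integration time, wavelength, pixel widths, and irradiance level are hypothetical assumptions for illustration.

```python
import numpy as np

# Hypothetical detection parameters (illustrative only, not from the paper).
eta = 0.8              # quantum efficiency
T = 1.0e-3             # integration time, s
h = 6.626e-34          # Planck's constant, J*s
nu = 3.0e8 / 1.064e-6  # optical frequency for a 1.064-um wavelength, Hz
w_x = w_y = 6.0e-6     # pixel active-area widths, m

# Pixel-averaged hologram irradiance at each pixel, W/m^2 (toy values).
I_hat = np.full((32, 32), 5.0)

# Mean photoelectron count per pixel [cf. Eq. (33)].
m_bar = (eta * T / (h * nu)) * I_hat * w_x * w_y

# The recorded counts are Poisson distributed about this mean (shot noise).
rng = np.random.default_rng(0)
m_H = rng.poisson(m_bar)
```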

## (33)

$${\overline{m}}_{H}(n{x}_{s},m{y}_{s})=\frac{\eta T}{h\nu}{\widehat{I}}_{H}(n{x}_{s},m{y}_{s}){w}_{x}{w}_{y}.$$

Here, $\eta$ is the quantum efficiency of the FPA, $T$ is the integration time, $h$ is Planck's constant, and $\nu$ is the optical frequency. In turn, we can model the discretely sampled hologram data as

## (34)

$${D}_{H}({x}_{2},{y}_{2})={\overline{m}}_{H}({x}_{2},{y}_{2})\frac{1}{{x}_{s}}\mathrm{comb}\left(\frac{{x}_{2}}{{x}_{s}}\right)\frac{1}{{y}_{s}}\mathrm{comb}\left(\frac{{y}_{2}}{{y}_{s}}\right)\mathrm{rect}\left(\frac{{x}_{2}}{N{x}_{s}}\right)\mathrm{rect}\left(\frac{{y}_{2}}{M{y}_{s}}\right),$$

where the continuous-argument mean is given by

## (35)

$${\overline{m}}_{H}({x}_{2},{y}_{2})=\frac{\eta T}{h\nu}{\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}{I}_{H}({x}_{2}^{\prime},{y}_{2}^{\prime})\mathrm{rect}\left(\frac{{x}_{2}^{\prime}-{x}_{2}}{{w}_{x}}\right)\mathrm{rect}\left(\frac{{y}_{2}^{\prime}-{y}_{2}}{{w}_{y}}\right)\mathrm{d}{x}_{2}^{\prime}\text{\hspace{0.17em}}\mathrm{d}{y}_{2}^{\prime}\phantom{\rule{0ex}{0ex}}=\frac{\eta T}{h\nu}{\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}{I}_{H}({x}_{2}^{\prime},{y}_{2}^{\prime})\mathrm{rect}\left(\frac{{x}_{2}-{x}_{2}^{\prime}}{{w}_{x}}\right)\mathrm{rect}\left(\frac{{y}_{2}-{y}_{2}^{\prime}}{{w}_{y}}\right)\mathrm{d}{x}_{2}^{\prime}\text{\hspace{0.17em}}\mathrm{d}{y}_{2}^{\prime}\phantom{\rule{0ex}{0ex}}=\frac{\eta T}{h\nu}{I}_{H}({x}_{2},{y}_{2})*\mathrm{rect}\left(\frac{{x}_{2}}{{w}_{x}}\right)\mathrm{rect}\left(\frac{{y}_{2}}{{w}_{y}}\right)$$

## (37)

$$\delta (x-{x}^{\prime})=\underset{w\to 0}{\mathrm{lim}}\frac{1}{|w|}p\left(\frac{x-{x}^{\prime}}{w}\right)$$

where the limit holds for any pulse-like function $p(x)$ {e.g., the rectangle function [cf. Eq. (32)]}.^{43} Note that in Eq. (35), * denotes 2-D convolution, such that

## (38)

$$V(x,y)*W(x,y)={\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}V({x}^{\prime},{y}^{\prime})W(x-{x}^{\prime},y-{y}^{\prime})\mathrm{d}{x}^{\prime}\text{\hspace{0.17em}}\mathrm{d}{y}^{\prime}.$$

From Eqs. (30)–(38), we can gain access to an estimate of the signal complex field ${U}_{S}({x}_{1},{y}_{1})$ that exists in the exit-pupil plane of the imaging system [cf. Fig. 2 and Eq. (26)]. First, we let ${x}_{2}=\lambda f{\nu}_{x}$ and ${y}_{2}=\lambda f{\nu}_{y}$ and apply a 2-D inverse Fourier transformation to Eq. (34), which yields Eq. (39).
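The convolution theorem links the convolution of Eq. (38) to the transform pair of Eqs. (27) and (28), and it is what lets the sampled data of Eq. (34) factor into the products and convolutions that follow. A small numpy check on a toy grid, using the discrete (circular) analog of Eq. (38); the grid size and random data are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.random((8, 8))
W = rng.random((8, 8))

def conv2_direct(V, W):
    """Direct evaluation of the discrete circular analog of Eq. (38)."""
    rows, cols = V.shape
    out = np.zeros((rows, cols))
    for n in range(rows):
        for m in range(cols):
            s = 0.0
            for u in range(rows):
                for v in range(cols):
                    s += V[u, v] * W[(n - u) % rows, (m - v) % cols]
            out[n, m] = s
    return out

# Convolution theorem: a product of spectra transforms to a convolution.
conv_direct = conv2_direct(V, W)
conv_fft = np.fft.ifft2(np.fft.fft2(V) * np.fft.fft2(W)).real
```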

## (39)

$${\mathcal{F}}^{-1}{\{{D}_{H}(\lambda f{\nu}_{x},\lambda f{\nu}_{y})\}}_{{x}_{1},{y}_{1}}=\frac{1}{{\lambda}^{2}{f}^{2}}{\tilde{D}}_{H}(\frac{-{x}_{1}}{\lambda f},\frac{-{y}_{1}}{\lambda f})\phantom{\rule{0ex}{0ex}}=\frac{\eta T}{h\nu}{\mathcal{F}}^{-1}{\{{\tilde{I}}_{H}(\lambda f{\nu}_{x},\lambda f{\nu}_{y})\}}_{{x}_{1},{y}_{1}}\phantom{\rule{0ex}{0ex}}{w}_{x}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{{w}_{x}}{\lambda f}{x}_{1}\right){w}_{y}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{{w}_{y}}{\lambda f}{y}_{1}\right)\phantom{\rule{0ex}{0ex}}*\frac{1}{\lambda f}\mathrm{comb}\left(\frac{{x}_{s}}{\lambda f}{x}_{1}\right)\frac{1}{\lambda f}\mathrm{comb}\left(\frac{{y}_{s}}{\lambda f}{y}_{1}\right)\phantom{\rule{0ex}{0ex}}*\frac{N{x}_{s}}{\lambda f}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{N{x}_{s}}{\lambda f}{x}_{1}\right)\frac{M{y}_{s}}{\lambda f}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{M{y}_{s}}{\lambda f}{y}_{1}\right),$$

## (40)

$${\mathcal{F}}^{-1}{\{{\tilde{I}}_{H}(\lambda f{\nu}_{x},\lambda f{\nu}_{y})\}}_{{x}_{1},{y}_{1}}={\mathcal{F}}^{-1}{\{{|{U}_{S}(\lambda f{\nu}_{x},\lambda f{\nu}_{y})|}^{2}\}}_{{x}_{1},{y}_{1}}\phantom{\rule{0ex}{0ex}}+{\mathcal{F}}^{-1}{\{{|{U}_{R}(\lambda f{\nu}_{x},\lambda f{\nu}_{y})|}^{2}\}}_{{x}_{1},{y}_{1}}\phantom{\rule{0ex}{0ex}}+{\mathcal{F}}^{-1}{\{{U}_{S}(\lambda f{\nu}_{x},\lambda f{\nu}_{y}){U}_{R}^{*}(\lambda f{\nu}_{x},\lambda f{\nu}_{y})\}}_{{x}_{1},{y}_{1}}\phantom{\rule{0ex}{0ex}}+{\mathcal{F}}^{-1}{\{{U}_{R}(\lambda f{\nu}_{x},\lambda f{\nu}_{y}){U}_{S}^{*}(\lambda f{\nu}_{x},\lambda f{\nu}_{y})\}}_{{x}_{1},{y}_{1}},$$

## (41)

$${\mathcal{F}}^{-1}{\{{\tilde{I}}_{H}(\lambda f{\nu}_{x},\lambda f{\nu}_{y})\}}_{{x}_{1},{y}_{1}}=\frac{1}{{\lambda}^{2}{f}^{2}}{U}_{S}({x}_{1},{y}_{1})*{U}_{S}^{*}(-{x}_{1},-{y}_{1})\phantom{\rule{0ex}{0ex}}+{|{A}_{R}|}^{2}\delta ({x}_{1})\delta ({y}_{1})\phantom{\rule{0ex}{0ex}}+\frac{{A}_{R}^{*}{\mathrm{e}}^{jkf}}{j\lambda f}{U}_{S}({x}_{1}-{x}_{R},{y}_{1}-{y}_{R})\phantom{\rule{0ex}{0ex}}-\frac{{A}_{R}{\mathrm{e}}^{-jkf}}{j\lambda f}{U}_{S}^{*}({x}_{1}+{x}_{R},{y}_{1}+{y}_{R}).$$

Substituting Eq. (41) into Eq. (39), we obtain Eq. (42) after rearranging the special functions.
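The four terms of Eq. (41), and the demodulation they enable, can be exercised with a toy discrete model: form a speckled pupil field, propagate it to the FPA with an FFT, interfere it with a tilted reference, and inverse transform the hologram. The grid size, aperture radius, and carrier offset below are hypothetical choices that keep the four terms separated; this is a sketch of the technique, not the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256

# Exit-pupil signal field: circular aperture with a random (speckled) phase.
p, q = np.mgrid[0:N, 0:N]
aperture = (p - N // 2)**2 + (q - N // 2)**2 < (N // 10)**2
U_S = aperture * np.exp(2j * np.pi * rng.random((N, N)))

# Signal incident on the FPA: Fourier transform of the pupil field [cf. Eq. (25)].
S = np.fft.fft2(U_S)

# Off-axis reference: a tilted plane wave whose linear phase shifts the
# demodulated signal term to the pixel offset (x_R, y_R) [cf. Eq. (29)].
x_R = y_R = N // 4
A_R = np.abs(S).max()
R = A_R * np.exp(-2j * np.pi * (x_R * p + y_R * q) / N)

# Hologram irradiance and its 2-D inverse transform: an autocorrelation term
# at the origin, a delta-like reference term, and the signal term plus its
# conjugate twin at offsets of +/-(x_R, y_R) [cf. Eqs. (30) and (41)].
I_H = np.abs(S + R)**2
D = np.fft.ifft2(I_H)

# Re-center the signal term and window it with the known pupil support.
U_est = np.roll(D, (x_R, y_R), axis=(0, 1)) * aperture / A_R
```

With the carrier offset large enough relative to the aperture, the windowed estimate matches the true pupil field, which is the wavefront-sensing payoff of the off-axis IPRG.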

## (42)

$${\tilde{D}}_{H}(\frac{-{x}_{1}}{\lambda f},\frac{-{y}_{1}}{\lambda f})=\frac{\eta T}{h\nu}{w}_{x}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{{w}_{x}}{\lambda f}{x}_{1}\right){w}_{y}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{{w}_{y}}{\lambda f}{y}_{1}\right)\phantom{\rule{0ex}{0ex}}[\frac{1}{{\lambda}^{2}{f}^{2}}{U}_{S}({x}_{1},{y}_{1})*{U}_{S}^{*}(-{x}_{1},-{y}_{1})+{|{A}_{R}|}^{2}\delta ({x}_{1})\delta ({y}_{1})\phantom{\rule{0ex}{0ex}}+\frac{{A}_{R}^{*}{\mathrm{e}}^{jkf}}{j\lambda f}{U}_{S}({x}_{1}-{x}_{R},{y}_{1}-{y}_{R})-\frac{{A}_{R}{\mathrm{e}}^{-jkf}}{j\lambda f}{U}_{S}^{*}({x}_{1}+{x}_{R},{y}_{1}+{y}_{R})]\phantom{\rule{0ex}{0ex}}*\mathrm{comb}\left(\frac{{x}_{s}}{\lambda f}{x}_{1}\right)\mathrm{comb}\left(\frac{{y}_{s}}{\lambda f}{y}_{1}\right)\phantom{\rule{0ex}{0ex}}*\frac{N{x}_{s}}{\lambda f}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{N{x}_{s}}{\lambda f}{x}_{1}\right)\frac{M{y}_{s}}{\lambda f}\text{\hspace{0.17em}}\mathrm{sinc}\left(\frac{M{y}_{s}}{\lambda f}{y}_{1}\right),$$

## Acknowledgments

The authors would like to thank Samuel T. Thurman for his careful review of and insightful comments on a draft of this paper. In addition, the authors would like to thank Paul F. McManamon for his invitation to submit to a special section of *Optical Engineering*. This research was funded by the High Energy Laser Joint Technology Office. The views expressed in this document are those of the authors and do not necessarily reflect the official policy or position of the Air Force, the Department of Defense, or the U.S. government.

## References

## Biography

**Mark F. Spencer** is a research physicist at the Air Force Research Laboratory, Directed Energy Directorate. He is also an assistant adjunct professor of optical sciences and engineering (OSE) at the Air Force Institute of Technology (AFIT), Department of Engineering Physics. He is a senior member of SPIE and received his BS in physics from the University of Redlands in 2008 and his MS and PhD in OSE from AFIT in 2011 and 2014, respectively.

**Robert A. Raynor** received his master's degree in applied physics from AFIT. He currently works as a research physicist at the Air Force Research Laboratory, Directed Energy Directorate. His research efforts concentrate on developing models for wavefront sensors that use digital-holographic detection and tracking sensors that use partially coherent illumination of optically rough targets.

**Matthias T. Banet** is an undergraduate student at the New Mexico Institute of Mining and Technology in Socorro, New Mexico. He is currently a summer intern at the Air Force Research Laboratory, Directed Energy Directorate. This fall, he will receive his BS degree in physics and minors in materials science and mathematics. Upon the completion of his undergraduate studies, he plans to pursue a PhD in optical sciences and engineering.

**Dan K. Marker** is currently in his 27th year of employment with the Air Force Research Laboratory, Directed Energy Directorate. His research efforts concentrate on the development of phased array imaging, phased array beam projection, optical quality polymer films, and tiled-array laser systems. He received his master's degree in mechanical engineering from the University of New Mexico and an MBA in finance from Webster University. He is currently the vice president of the Directed Energy Professional Society.