An extension of the fusion of interpolated frames superresolution (FIF SR) method to perform SR in the presence of atmospheric optical turbulence is presented. The goal of such processing is to improve the performance of imaging systems impacted by turbulence. We provide an optical transfer function analysis that illustrates regimes where significant degradation from both aliasing and turbulence may be present in imaging systems. This analysis demonstrates the potential need for simultaneous SR and turbulence mitigation (TM). While the FIF SR method was not originally proposed to address this joint restoration problem, we believe it is well suited for this task. We propose a variation of the FIF SR method that has a fusion parameter that allows it to transition from traditional diffraction-limited SR to pure TM with no SR as well as a continuum in between. This fusion parameter balances subpixel resolution, needed for SR, with the amount of temporal averaging, needed for TM and noise reduction. In addition, we develop a model of the interpolation blurring that results from the fusion process, as a function of this tuning parameter. The blurring model is then incorporated into the overall degradation model that is addressed in the restoration step of the FIF SR method. This innovation benefits the FIF SR method in all applications. We present a number of experimental results to demonstrate the efficacy of the FIF SR method in different levels of turbulence. Simulated imagery with known ground truth is used for a detailed quantitative analysis. Three real infrared image sequences are also used. Two of these include bar targets that allow for a quantitative resolution enhancement assessment.
Designing imaging systems involves a complex trade space. The focal plane array detector pitch determines the spatial sampling frequency, and the f-number of the optics and the wavelength determine the diffraction-limited optical cutoff frequency. The Nyquist sampling criterion dictates that the sampling frequency must exceed twice the cutoff frequency in order to guarantee that there will be no aliasing in the acquired imagery. The desire for a wide field of view and high optical resolution calls for a low f-number and high optical cutoff frequency. However, practical limitations associated with a small detector size often lead to employing focal plane arrays that do not meet the Nyquist criterion under diffraction-limited imaging conditions.1 Such undersampling may prevent the imaging system from achieving the full resolution afforded by the optics. A wide variety of superresolution (SR) algorithms have been proposed to successfully address such undersampling using multiple unique frames with relative motion between the scene and camera.2,3
Another potential source of image degradation, especially for long-range imaging, is atmospheric optical turbulence. Random fluctuations in the index of refraction along the optical path result in spatially and temporally varying image blur and warping.4–6 A variety of turbulence mitigation (TM) methods have been developed to address this issue by processing multiple short exposure images.7–9 There is an important connection between optical turbulence and the undersampling problem. With heavy turbulence, the blurring acts as an anti-aliasing low-pass filter that lowers the effective cutoff frequency of the optical system and limits or prevents aliasing. However, in light to moderate turbulence, significant aliasing artifacts may be present simultaneously with the turbulence degradation. In Sec. 3 of this paper, we provide an optical transfer function (OTF) analysis to illustrate this point and demonstrate the potential need for simultaneous SR and TM. This joint restoration problem has received more limited attention in the literature than SR and TM alone.10–15
It is interesting to compare and contrast how SR and TM methods exploit multiple frames for restoration. In the case of SR, spatial sampling diversity is provided by the multiple input frames. For a static scene, video provides “excess” temporal resolution that can be traded for increased single-frame spatial resolution to combat the system’s native undersampling. In the case of spatial-domain TM, the use of multiple frames is important for two main reasons. The first is that averaging multiple globally registered frames can help to reveal the correct scene geometry that is otherwise distorted in each frame by the quasiperiodic and spatially-varying warping from turbulence. The second benefit of using multiple frames for TM is that temporal averaging of registered input frames tends to produce a prototype image with a more spatially invariant residual blur than the individual frames.9,16 A spatially invariant point spread function (PSF) blur can be addressed with a relatively simple restoration method, such as a Wiener filter. Thus, one can see that averaging in some form is key for spatial-domain TM. However, this averaging is at odds with how SR exploits the unique information in each frame for sampling diversity. Thus, any method that seeks to accomplish both SR and TM jointly needs to balance temporal averaging with preservation of spatial sampling diversity.
A framework that is well suited to balance the factors described above is the fusion of interpolated frames (FIF) SR method, recently proposed by two of the current authors.17 The FIF SR method is a multiframe SR algorithm that fuses interpolated versions of each low resolution (LR) input frame. The fusion weights are based on the subpixel alignment for each interpolated pixel, any color information that may be available, and estimated local scene motion activity. A Wiener filter is then applied to the fused image to address the modeled OTF blurring. Here, we extend the FIF SR approach in two important ways to allow it to effectively treat SR and TM simultaneously. First, we incorporate an atmospheric OTF model into the overall degradation model used for the restoration step. Specifically, we use the approach that incorporates an estimate of the level of image registration, or atmospheric tilt reduction, as described by Hardie et al.9 The second key aspect of our extended FIF SR method is that we employ a tunable subpixel fusion weighting parameter that may be set according to the level of turbulence. Under light turbulence, the parameter may be set so as to provide a large weight only to pixels that lie very close to the high resolution (HR) grid. This provides maximum SR and minimum interpolation blurring. However, it also effectively reduces the amount of temporal averaging. When the turbulence is stronger, the parameter may be set to be less selective in the weighting to increase the effective averaging of the frames. While this is beneficial for TM, there tends to be more interpolation blurring in the fused image in this case. To help address this, we introduce and model an interpolation blurring OTF component. Thus, the overall OTF model used here in the restoration step smartly incorporates knowledge of key aspects of the algorithm processing steps that precede the OTF restoration.
In particular, it incorporates the level of registration that impacts the residual atmospheric blurring and the level of subpixel weighting that impacts the interpolation blur. We believe this smart OTF model allows the Wiener filter to better restore the fused image and provide improved performance.
We would like to note that this paper represents a greatly expanded version of recent conference papers by the same authors.18,19 Major additions include the development of the interpolation OTF, all new and expanded simulation results, and new results with real image sequences. The remainder of this paper is organized as follows. In Sec. 2, we describe the FIF SR method and present the development that models the PSF and OTF of the interpolation and fusion steps. In Sec. 3, we introduce the overall OTF model used by the FIF SR method that incorporates diffraction, turbulence, and interpolation blurring. This section also includes an analysis of the impact of turbulence on undersampling and aliasing. Experimental results are presented in Sec. 4. These results include both simulated and real imagery. The simulated data are generated with a numerical wave propagation method and allow for a detailed quantitative analysis with ground truth.20 This analysis shows that the FIF SR method can effectively perform TM and SR simultaneously for a range of scenarios. Furthermore, one particularly interesting result we show is that the tilt variance from turbulence can actually improve SR results, compared with no turbulence, when no camera platform motion is present. In this case, the random wavefront tilts provide the critical relative motion between the scene and camera for SR sampling diversity, as described by Fishbain et al.10 and Yaroslavsky et al.11 We believe our simulation study is the first quantitative error analysis of its kind to demonstrate this phenomenon in the literature. Finally, we offer conclusions in Sec. 5.
Fusion of Interpolated Frames Superresolution
Figure 1 shows a block diagram summarizing the FIF SR method, originally from Karch and Hardie,17 and adapted here for joint SR and TM. In our implementation, short-exposure LR observed frames are registered using one of a variety of registration methods appropriate to the application. Next, single-frame interpolation is used to upsample the individual input images to the Nyquist sampling grid, based on the diffraction-limited optical cutoff frequency. The interpolated images are formed in alignment to a common reference frame or frame average.9,16 We shall consider different interpolation kernels here in our analysis. The next block is where the multiframe fusion takes place. Each interpolated pixel in each frame receives a weight based on its subpixel alignment. Finally, a Wiener filter is applied to the fused image to produce a single restored image. The Wiener filter makes use of a PSF model that incorporates atmospheric parameters, optical system parameters, level of tilt reduction in the registration step, and the interpolation blurring.
Note that the FIF SR algorithm may be viewed as a type of nonuniform interpolation SR method.2 The FIF SR method and other nonuniform interpolation SR methods simplify the SR problem by separating it into registration and nonuniform interpolation followed by OTF restoration. Applying the Wiener filter for OTF restoration after the nonuniform interpolation is justified by the assumption that the warping and blurring operators commute in the degradation observation model. In the case of translational motion, the assumption is fully valid.2 For some other types of motion, such as affine, the warping and blurring operators have been shown by Hardie et al.21 to approximately commute.
Let us now consider the heart of the FIF SR method, which is the fusion step. Assume there are K input frames and one fused output frame. Let the interpolated pixel at HR grid position n in frame k be denoted ŷ_k(n); these correspond to samples of the interpolated images in Fig. 1. Let the fused pixel on the Nyquist grid be denoted z(n), a sample of the fused image in Fig. 1. With this notation, the fused image pixels are given by the normalized weighted sum z(n) = Σ_k w_k(n) ŷ_k(n) / Σ_k w_k(n), where the weights w_k(n) are determined by the subpixel distances illustrated in Fig. 2.
Note that Eq. (2) gives a Gaussian weighting, with more weight given to frames that have a lower distance to an observed pixel. This is because larger distances tend to give larger interpolation errors.22 The weighting function is plotted in Figs. 3(a) and 3(b) for two values of the fusion parameter, the larger being 0.10. One can see that increasing this parameter makes the fusion less selective, giving increased weight to frames with larger interpolation distances. In the limiting case of a very large parameter value, equal weight is given to all frames and the fusion becomes a simple temporal average. Using a larger value provides more temporal averaging in the fusion process, which can help to reveal the proper scene geometry in moderate to heavy turbulence and make the atmospheric blurring more spatially invariant in the fused image.9,16 This also helps to attenuate temporal noise. On the other hand, a small value gives a high level of selectivity, where significant weight is only given to interpolated pixels that are close to observed samples in that frame. A small value would be expected to give the best SR results with minimal interpolation error, provided a sufficient diversity of samples is available with a high signal-to-noise ratio.17
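The fusion step above can be sketched in a few lines. The code below is a minimal illustration, assuming the Gaussian weighting in Eq. (2) has the form exp(−d²/(2σ²)), with d the subpixel distance and σ the tunable fusion parameter; the function names are ours, not from the reference implementation.

```python
import numpy as np

def gaussian_fusion_weights(distances, sigma):
    """Gaussian subpixel weighting (a sketch of the form of Eq. (2)).

    distances: shape (K,) array giving, for one HR pixel, the distance from
    that pixel to the nearest observed LR sample in each of K frames (in HR
    pixel spacings). sigma: the tunable fusion parameter.
    """
    w = np.exp(-distances**2 / (2.0 * sigma**2))
    return w / w.sum()  # normalize so the fused pixel is a weighted average

def fuse_pixel(interp_values, distances, sigma):
    """Fused output pixel: weighted average of the K interpolated values."""
    return np.dot(gaussian_fusion_weights(distances, sigma), interp_values)
```

With a large σ the weights approach 1/K (a plain temporal average), while a small σ concentrates weight on frames whose observed samples land near the HR pixel in question.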
The tunability provided by the fusion parameter sets it apart from most other nonuniform interpolation SR methods2 and makes it well suited to perform SR in the presence of turbulence. For example, it is interesting to compare the FIF SR approach of fusing interpolated frames with methods that use binning. Binning methods register the input LR frames and populate an HR grid by putting the LR pixels into discrete bins on the HR grid.2,21 LR pixels are assigned to the nearest HR bin in a simple quantization process or by interpolation to the nearest HR bin. With such binning methods, care must be taken to address the very common scenario of empty bins.21 By fusing interpolated frames and using Gaussian weighting, the FIF SR method does not have this issue and will never have any empty output pixels. Using a different weighting function, however, it is possible for the FIF SR framework to perform fusion that is equivalent to binning. Specifically, this binning operation is achieved with the weighting function given in Eq. (3) and shown in Fig. 3(c). While we do not recommend this weighting function for joint SR and TM because of its lack of tunability, we believe it is insightful to see the relationship between binning and Gaussian weighting of interpolated frames in Fig. 3.
Further insight may be gained with regard to the fusion parameter by considering what we term the "averaging power" of the fusion. By this, we mean the variance reduction factor for independent and identically distributed (i.i.d.) temporal samples, relative to a standard average. Consider the fusion of K i.i.d. temporal samples. The output variance depends on the fusion weights, and the resulting averaging power is plotted in Fig. 4 as a function of the fusion parameter. Note that smaller parameter values produce a smaller averaging power. This is a result of the greater spatial selectivity. Interestingly, we have observed good algorithm performance in many cases near the inflection point of the curve in Fig. 4. For comparison, the averaging power for the binning method given by Eq. (3), and shown in Fig. 3(c), is constant for all pixels; this result is based on the fraction of frames expected to fall into each bin under a uniform subpixel displacement assumption.
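To make the averaging-power notion concrete, the following Monte Carlo sketch estimates it under stated assumptions: Gaussian weights exp(−d²/(2σ²)), subpixel distances drawn uniformly, and averaging power defined as the variance of a plain K-frame mean divided by the variance of the weighted fusion. The exact formulation in the text may differ.

```python
import numpy as np

def averaging_power(sigma, K=200, trials=2000, rng=None):
    """Monte Carlo estimate of the fusion 'averaging power' (a sketch).

    Assumes weights w_k = exp(-d_k^2 / (2 sigma^2)) with d_k the subpixel
    distance in frame k drawn uniformly on [0, 0.5]. Averaging power is
    defined here as Var(standard K-frame mean) / Var(weighted fusion), so a
    value of 1.0 means the fusion averages as strongly as a full mean.
    """
    rng = np.random.default_rng(rng)
    powers = np.empty(trials)
    for t in range(trials):
        d = rng.uniform(0.0, 0.5, size=K)      # distance to nearest LR sample
        w = np.exp(-d**2 / (2.0 * sigma**2))
        w /= w.sum()
        powers[t] = 1.0 / (K * np.sum(w**2))   # variance reduction vs. mean
    return powers.mean()
```

Running this for a sweep of σ values reproduces the qualitative behavior described above: small σ yields a small averaging power, and the power climbs toward 1 as σ grows.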
It should be noted that the original FIF SR paper17 considers other weighting components in addition to that in Eq. (2). One additional weighting term is designed to exploit color correlation. Another term is included to address in-scene motion to minimize motion blur and distortion.17 However, since our focus here is on simultaneous SR and TM, we limit the scenarios under consideration to single-band imagery (i.e., no color) with no in-scene motion. Treating color and in-scene motion in the context of joint TM and SR are certainly important problems that we hope to address in future work.
Interpolation Impulse Response
In this subsection, we derive an OTF model for the fused interpolation blur. First, note that each sample of the fused image in Fig. 1 may be expressed as a weighted sum of observed pixels. The specific weighting will depend on the exact motion and interpolation kernel used, as well as on the weighting from Eq. (2). For anything other than rigid translational motion, the weights will vary spatially.21,23 However, if we assume a large number of frames with motion significantly greater than one LR pixel, we believe it is reasonable to assume a uniform subpixel distribution of motion. This allows us to model the interpolation blur with a spatially invariant impulse response. This is very similar to how the turbulence blur can be treated as spatially invariant after fusing a large number of frames.9,16 Computing a spatially invariant interpolation impulse response for the fused image is powerful in that it can be incorporated into the overall degradation model, as shown in Fig. 1. This allows us to mitigate some of the nonideal aspects of the interpolation step. It also adds to our overall understanding of the FIF SR method and further informs the selection of the fusion parameter.
The choice of interpolation kernel impacts the resulting interpolation blur model. Consider three common 1-D interpolation kernels24 that are illustrated in Fig. 5. The zero-order hold (ZOH), or nearest neighbor, interpolation kernel is a rectangular function equal to 1 for |x| ≤ 1/2 and 0 otherwise, with x in LR pixel spacings.
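For reference, the three kernels can be written compactly. Below is a sketch using the standard forms (ZOH, triangle, and Keys cubic convolution with a = −0.5), which we assume match the kernels plotted in Fig. 5.

```python
import numpy as np

def zoh(x):
    """Zero-order hold (nearest-neighbor) kernel: 1 for |x| <= 1/2."""
    return (np.abs(x) <= 0.5).astype(float)

def linear(x):
    """Linear (triangle) kernel with support |x| < 1."""
    return np.maximum(1.0 - np.abs(x), 0.0)

def cubic(x, a=-0.5):
    """Keys cubic convolution kernel (a = -0.5 is the common choice)."""
    x = np.abs(x)
    out = np.zeros_like(x, dtype=float)
    m1 = x <= 1
    m2 = (x > 1) & (x < 2)
    out[m1] = (a + 2) * x[m1]**3 - (a + 3) * x[m1]**2 + 1
    out[m2] = a * x[m2]**3 - 5*a * x[m2]**2 + 8*a * x[m2] - 4*a
    return out
```

All three interpolate exactly at the sample positions (value 1 at x = 0, value 0 at the other integers), which is the property exploited by the fusion weighting.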
Next, let the LR sampling function (i.e., an impulse train at the integers corresponding to LR sample positions) be expressed as s(x) = Σ_n δ(x − n).
Using a 1-D version of the weighting function from Eq. (2) and substituting it into Eq. (14) gives the frame-averaged interpolation impulse response in Eq. (15).25
Examples of the frame-averaged interpolation impulse response from Eq. (15) are plotted in Fig. 6 for ZOH, linear, and cubic interpolation kernels. One can see that, as the fusion parameter approaches zero, the impulse response reduces to the interpolation kernel itself, as shown in Eq. (16). As the parameter gets smaller, the width of the impulse response also gets smaller. The interpolation OTFs are shown in Fig. 7 for the same kernels and parameter values as those in Fig. 6. Note that in these plots, the folding frequency for SR with upsampling by a factor of M is M/2 cycles/(LR spacing). From Fig. 7, it is clear that the interpolation blur for all but very small parameter values can be quite significant. The good news is that much of the frequency content, while attenuated, is available for restoration, provided that the interpolation OTF is included in the overall degradation model. To allow for SR restoration of the higher frequencies, it is important to use the smallest fusion parameter possible while meeting the other algorithm requirements.
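The frame-averaged response can also be examined numerically. The sketch below averages shifted copies of a kernel over uniform subpixel offsets with Gaussian fusion weights and takes an FFT; it is our own plausible realization of the behavior described by Eq. (15), not the paper's exact expression.

```python
import numpy as np

def fused_interp_otf(kernel, sigma, dx=0.01, span=4.0):
    """Numerically average shifted interpolation kernels over subpixel
    offsets with Gaussian fusion weights, then FFT to obtain an OTF.
    A sketch of the behavior of Eq. (15); the exact formulation may differ.
    """
    x = np.arange(-span, span, dx)        # units of LR pixel spacing
    shifts = np.arange(-0.5, 0.5, dx)     # uniform subpixel offsets
    w = np.exp(-shifts**2 / (2.0 * sigma**2))
    h = sum(wi * kernel(x - s) for wi, s in zip(w, shifts)) / w.sum()
    H = np.abs(np.fft.rfft(h))
    return h, H / H[0]                    # impulse response and normalized MTF
```

Consistent with the discussion above, a small σ leaves the impulse response close to the bare kernel, while a large σ (approaching uniform weighting) widens it and pulls the OTF down at higher frequencies.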
Atmospheric Optical Transfer Function and Aliasing
Overall OTF Model
In this subsection, we present the overall OTF model used for the FIF SR restoration. The model is spatially invariant and is meant to capture the degradation as seen in the fused image in Fig. 1. The spatial invariance is justified because the fused image represents a weighted sum of many short-exposure images.9,16 The overall model, expressed in Eq. (19), is the product of the diffraction, atmospheric, and interpolation OTF components. For an optical system with a circular exit pupil, the diffraction-limited OTF is given by the standard expression in Eq. (20).26 Finally, the atmospheric turbulence model is based on that originally derived by Fried5 and is given in Eq. (21).
In Fried's derivation, Eq. (21) with the tilt reduction parameter set to 0 gives the long exposure OTF. The average tilt-corrected short-exposure OTF under near field conditions is given by Eq. (21) when the parameter is set to 1. The near field condition is defined by Fried in terms of the aperture diameter, the wavelength, and the optical path length. As shown by Tofsted's analysis,27,28 most sensors operate in regimes where Fried's near field condition should be invoked for the short exposure case (even for a far-field optical condition). Since we are applying this OTF to partially tilt-corrected imagery resulting from the potentially imperfect registration step in Fig. 1, we follow the approach in Hardie et al.9 and treat this parameter as a continuous tilt reduction factor. At the extremes, a value of 0 is used for no registration (i.e., the long exposure OTF), and a value of 1 would be used for ideal registration (i.e., the short-exposure OTF). This parameter may be estimated based on the type of registration used.9
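The two OTF components discussed above have well-known closed forms, sketched below: the standard circular-pupil diffraction OTF and the common parametric form of Fried's atmospheric OTF with a continuous tilt correction factor (0 = long exposure, 1 = near-field short exposure), following Hardie et al. Variable names and constant placement are ours and should be checked against Eqs. (20) and (21).

```python
import numpy as np

def diffraction_otf(nu, nu_c):
    """Diffraction-limited OTF for a circular exit pupil (standard form).
    nu: spatial frequency; nu_c: diffraction-limited cutoff frequency."""
    r = np.clip(np.abs(nu) / nu_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(r) - r * np.sqrt(1.0 - r**2))

def fried_atm_otf(nu, lam, f, r0, D, alpha=0.0):
    """Fried's parametric atmospheric OTF (a standard sketch of Eq. (21)).

    lam: wavelength, f: focal length, r0: Fried parameter, D: aperture
    diameter. alpha = 0 gives the long-exposure OTF, alpha = 1 the
    near-field tilt-corrected short-exposure OTF, and intermediate values
    model partial tilt correction.
    """
    s = lam * f * np.abs(nu)   # frequency scaled to a pupil-plane length
    return np.exp(-3.44 * (s / r0)**(5.0 / 3.0)
                  * (1.0 - alpha * (s / D)**(1.0 / 3.0)))
```

As expected, the tilt-corrected (alpha = 1) curve lies above the long-exposure (alpha = 0) curve at every nonzero frequency below the cutoff.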
Referring to the block diagram in Fig. 1, the ideal image relates to the fused image as shown in Eq. (22). As indicated in Fig. 1, a Wiener filter29 is employed in an effort to provide deconvolution of the overall blurring impulse response in Eq. (23). The Wiener frequency response is given in Eq. (24).
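The restoration step then reduces to a frequency-domain multiply. Below is a generic FFT-based Wiener sketch of the form commonly written as H*/(|H|² + NSR); we assume this matches the spirit of Eq. (24), and the function is illustrative rather than the authors' implementation.

```python
import numpy as np

def wiener_restore(image, otf, nsr):
    """Frequency-domain Wiener restoration of the fused image (sketch).

    otf: 2-D array, the overall OTF model (diffraction x atmosphere x
    interpolation), same shape as the image's FFT with the zero-frequency
    term at element [0, 0]; nsr: scalar noise-to-signal ratio.
    """
    H = otf
    W = np.conj(H) / (np.abs(H)**2 + nsr)  # standard Wiener frequency response
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))
```

With a unit OTF and zero NSR the filter is the identity; increasing the NSR trades deconvolution sharpness for noise suppression.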
An example OTF from Eq. (19), showing the various components, is provided in Fig. 8 for representative parameter values using cubic interpolation. The optical parameters for this figure are listed in Table 1. The parameters in Table 1 are also used for the simulated data presented in Sec. 4.1. Also shown in Fig. 8 is the native sensor folding frequency for this particular example. Note that any signal energy above the folding frequency will be aliased during sampling. It is interesting to note from Fig. 8 that the frame-averaged interpolation OTF (shown in red) is comparable to the isolated atmospheric OTF from Eq. (21) (shown in green). Thus, it is clear that with fusion parameter values on this order, mitigating the impact of the interpolation is important for restoration, especially for SR, where frequency content out to the diffraction-limited cutoff frequency is sought.
Optical parameters used for the OTF plot in Fig. 8 and for the simulation results in Sec. 4.1.
|Optical cut-off frequency||230.78 cycles/mm|
|Sampling frequency||153.85 cycles/mm|
|Folding frequency||76.92 cycles/mm|
|Nyquist pixel spacing (focal plane)|
|Nyquist pixel spacing (object plane)|
|Pixel pitch (focal plane)|
|Pixel pitch (object plane)|
As can be seen in Eqs. (19)–(21), the parametric form of the OTF used for FIF SR restoration has several parameters. In practice, it is generally reasonable to assume that the optical parameters, such as the wavelength, f-number, and focal length, would be known a priori. The remaining parameters are the Fried parameter, the tilt reduction factor, and the fusion parameter. We have observed generally good results with a fixed fusion parameter value for a wide range of datasets. The tilt reduction parameter typically ranges from 0 for no registration to about 0.5 for optical flow registration. Imperfect values of the tilt reduction factor can usually be reasonably well compensated for by altering the employed Fried parameter value. Thus, the bottom line is that OTF estimation and the resulting restoration performance are mainly sensitive to the Fried parameter. Fixing the other parameters and searching over the Fried parameter space can often lead to very useful results. Subjective evaluation or other no-reference metrics30 can be used for selection. Other scene-based methods for estimating the Fried parameter can be found in the literature.31,32
Aliasing and Turbulence
As noted earlier, the Nyquist criterion dictates that the sampling frequency be greater than two times the highest spatial frequency in the image to guarantee no aliasing. The sampling frequency is given by the reciprocal of the detector pitch, and the highest spatial frequency is limited by the diffraction-limited optical cutoff frequency from Eq. (20). Thus, the diffraction-limited sampling status of an imaging system can be characterized by the diffraction-limited sampling factor parameter defined in Eq. (25).33
While the metric in Eq. (25) is very useful, it does not take atmospheric turbulence into account. As can be seen from Eq. (21), atmospheric optical turbulence acts like a nonideal low-pass filter with no absolute cutoff frequency. Nevertheless, the OTF signal energy above the folding frequency in Eq. (21) diminishes as the Fried parameter goes down, reducing the potential aliasing for a given sampling frequency. In order to capture this effect, we propose a modified parameter, where we substitute the Fried parameter for the aperture diameter. This gives rise to what we term the turbulence-limited sampling factor.
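As a concrete sketch, assume the sampling factor is defined as the ratio of the sampling frequency to twice the relevant cutoff (so values below 1 indicate undersampling); the turbulence-limited version then follows by the r0-for-aperture substitution described above. These definitions are our reading of the Eq. (25) metric and its turbulence-limited analog, and should be checked against the text.

```python
def diffraction_sampling_factor(pitch, lam, fnum):
    """Diffraction-limited sampling factor (sketch): sampling frequency
    1/pitch divided by twice the diffraction cutoff 1/(lam * fnum).
    Values >= 1 indicate Nyquist sampling; values < 1, undersampling."""
    f_s = 1.0 / pitch              # sampling frequency
    f_c = 1.0 / (lam * fnum)       # diffraction-limited cutoff
    return f_s / (2.0 * f_c)

def turbulence_sampling_factor(pitch, lam, focal_length, r0):
    """Turbulence-limited analog: substitute the Fried parameter r0 for the
    aperture diameter D in the cutoff, i.e., f_c -> r0 / (lam * focal_length)."""
    f_s = 1.0 / pitch
    f_c_turb = r0 / (lam * focal_length)
    return f_s / (2.0 * f_c_turb)
```

Since the diffraction cutoff can equivalently be written D/(λf), the substitution r0 → D makes the two factors directly comparable: when r0 < D, turbulence lowers the effective cutoff and raises the sampling factor, reducing aliasing.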
Several overall OTFs are shown in Fig. 9 to illustrate the relationship between the focal length and the turbulence level for a fixed diffraction-limited sampling factor. For these plots, the optical parameters in Table 1 are used. Increasing focal lengths are shown in Figs. 9(a)–9(c), respectively. Note that with the short focal length (wide field of view), the overall OTF is less sensitive to the atmospheric effects represented by the Fried parameter. Each of the OTF curves in Fig. 9(a) has a significant amount of the OTF above the folding frequency. Furthermore, we see in this plot that the level of aliasing is more often limited by diffraction than turbulence. However, with increased focal length (reduced field of view), the effects of turbulence are magnified, as can be seen in Fig. 9(c). For the higher levels of turbulence in Fig. 9(c), the atmosphere acts as an anti-aliasing low-pass filter, essentially eliminating the possibility of aliasing. Most of the curves in this figure show that aliasing is limited more by turbulence than diffraction.
The plots in Fig. 9 also illustrate that many scenarios exist where both turbulence and aliasing may be present. With short focal lengths, light to moderate turbulence is less problematic and traditional SR may be appropriate. For long focal length systems, all but light turbulence tends to effectively eliminate aliasing, making traditional TM appropriate. However, both aliasing and turbulence degradations are significant factors in heavy turbulence with short focal lengths, light turbulence with long focal lengths, and moderate focal lengths with a wide range of turbulence. This demonstrates the importance and relevance of developing algorithms capable of performing SR in the presence of turbulence, such as the one presented here.
Experimental Results
In this section, we present a number of experimental results to demonstrate the efficacy of the FIF SR method with both undersampling and turbulence. The results in Sec. 4.1 use simulated data that allow for a quantitative performance analysis. The results in Sec. 4.2 use real data from three different sensors.
The simulated data have been generated using the anisoplanatic optical turbulence simulation tool recently developed by Hardie et al.20 The simulation tool uses numerical wave propagation and has performed well reproducing key image statistics in validation studies.20 The specific simulation presented here is novel in that we are emulating an undersampled imaging system. The turbulence degraded images are first simulated at the Nyquist rate for the diffraction-limited optical system.20 Next, however, we simulate detector integration and follow this with downsampling by a factor of 3. After downsampling, we introduce additive Gaussian noise with a standard deviation of two digital units to imagery having a 0-255 original dynamic range. The optical system parameters used in the simulation are listed in Table 1, and the simulation parameters are listed in Table 2. We have simulated seven levels of turbulence, each with 200 temporally independent frames.
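The degradation pipeline applied to the Nyquist-rate turbulence imagery can be sketched as follows; the box-average detector model and the factor of 3 mirror the steps described above, while the function name and frame interface are our own.

```python
import numpy as np

def degrade(nyquist_frames, M=3, noise_std=2.0, rng=None):
    """Emulate the undersampled sensor (a sketch of the steps in the text):
    detector integration (M x M box average), downsampling by M, then
    additive Gaussian noise applied to 0-255 imagery."""
    rng = np.random.default_rng(rng)
    out = []
    for img in nyquist_frames:
        H, W = img.shape
        # M x M detector integration followed by downsampling by M
        crop = img[:H - H % M, :W - W % M]
        binned = crop.reshape(H // M, M, W // M, M).mean(axis=(1, 3))
        out.append(binned + rng.normal(0.0, noise_std, size=binned.shape))
    return out
```

Applying this to each simulated Nyquist-rate frame yields the LR, noisy observation sequence that the FIF SR algorithm then takes as input.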
Simulation parameters used for the simulated imagery in Sec. 4.1.
|Cropped screen samples|
|Propagation screen width|
|Pupil plane point spread|
|Propagation sample spacing|
|Number of phase screens||(9 nonzero)|
|Phase screen type||Modified von Kármán with subharmonics|
|Image size (pixels)|
|Number of frames||200|
|Image size (object plane)|
|Dynamic range||0-255 digital units|
|Noise standard deviation||2.0 digital units|
Quantitative results for two different truth images are provided in Tables 3 and 4 for several algorithm variations. Table 3 is for the Kodak lighthouse image,17 and Table 4 is for the stream and bridge image.9 The metric we use to evaluate the simulated data results is peak signal-to-noise-ratio (PSNR). The PSNRs for the lighthouse image are also plotted in Fig. 10. For the results with parameter optimization, the parameters are found by a search to maximize the PSNR and these parameters are listed in Table 5. Note that results are shown with and without camera jitter. When camera jitter is on, we simulate camera platform motion by providing additional uniform random subpixel shifts between frames with no motion blur. The FIF SR results include the full OTF model, except where no restoration is indicated. The true lighthouse image and three levels of degradation are shown in Fig. 11. Various restored images are shown in Fig. 12 using the optimum parameters in Table 5. Global affine registration to the average frame is used for all of the FIF SR results reported on the simulated data.
Lighthouse image PSNR (dB) results using simulated K=200 frames with ση=2.0 and M=3.
|Method (camera jitter on)||Cn2×10−15 (m−2/3)|
|Average + bicubic||22.43||22.41||22.31||22.13||21.89||21.28||20.69|
|Affine + bicubic + average||22.74||22.72||22.62||22.46||22.18||21.61||21.01|
|FIF SR (, )||28.46||28.03||27.40||26.75||25.44||23.87||22.69|
|FIF SR (, , no rest.)||27.41||27.22||26.65||26.15||25.18||23.83||22.73|
|FIF SR (, optimum )||28.57||28.07||27.41||26.77||25.44||23.90||22.76|
|Method (camera jitter off)|
|Average + bicubic||22.70||22.68||22.56||22.37||22.04||21.37||20.75|
|Affine + bicubic + average||22.70||22.71||22.62||22.47||22.20||21.61||21.02|
|FIF SR (, )||23.46||24.06||24.71||25.15||25.10||23.87||22.74|
|FIF SR (, , no rest.)||23.42||24.11||24.72||25.05||24.91||23.80||22.77|
|FIF SR (, optimum )||23.46||25.07||25.68||25.63||25.17||23.89||22.79|
Stream bridge image PSNR (dB) results using 200 simulated frames with ση=2.0 and M=3.
|Method (camera jitter on)||Cn2×10−15 (m−2/3)|
|Average + bicubic||23.17||23.15||22.99||22.69||22.32||21.36||20.47|
|Affine + bicubic + average||23.68||23.68||23.51||23.26||22.81||21.89||20.96|
|FIF SR (, )||29.64||29.15||28.67||27.94||26.98||25.58||23.99|
|FIF SR (, , no rest.)||28.66||28.45||28.02||27.46||26.70||25.48||24.05|
|FIF SR (, optimum )||29.74||29.16||28.67||27.94||26.98||25.59||24.06|
|Method (camera jitter off)|
|Average + bicubic||23.60||23.57||23.37||23.07||22.55||21.50||20.57|
|Affine + bicubic + average||23.60||23.62||23.48||23.25||22.83||21.89||20.97|
|FIF SR (, )||24.87||25.47||26.03||26.54||26.71||25.52||24.06|
|FIF SR (, , no rest.)||24.75||25.37||25.94||26.38||26.47||25.44||24.11|
|FIF SR (, optimum )||24.98||26.17||26.80||26.98||26.76||25.52||24.12|
Optimum FIF SR parameters for lighthouse image results.
The quantitative results show that the FIF SR method provides a significant boost over simple methods, such as single-frame interpolation and simple averages. The boost is seen over a wide range of turbulence levels. Another very interesting phenomenon can be seen in Fig. 10. Note that when platform jitter is turned off, increasing the turbulence level from zero actually increases the PSNR of the SR output. This may seem counterintuitive, as turbulence is generally considered to be a degradation, and not a benefit. However, what we see is that the wavefront tilt variance provided by light turbulence acts to shift the image relative to the camera, allowing for the necessary sampling diversity for SR. This phenomenon was first described by Fishbain et al.10 and Yaroslavsky et al.11 As the turbulence level gets higher, we see in Fig. 10 that the degrading impact of the turbulence outweighs this sampling benefit, and the PSNR drops. Also, note that this effect is not seen when there is camera platform jitter. This is because the platform jitter provides sampling diversity more effectively than the turbulence and without the turbulence blurring. Platform jitter outperforms the corresponding no jitter scenario, and turbulence is never a benefit when platform jitter is present. As turbulence levels increase, we do see the benefit of jitter diminish as SR becomes increasingly difficult. At high turbulence levels, very little signal energy is available above the folding frequency to be recovered. Another point of interest in Tables 3 and 4, as well as Fig. 10, is that including the interpolation OTF in the Wiener filter is a clear benefit. When optimized restoration parameters are used, the restored output always outperforms the "no restoration" case. To the best of our knowledge, no simulation study and quantitative error analysis of this kind for joint TM and SR has been reported in the literature previously.
Subjective analysis of the images in Figs. 11 and 12 appears in line with the quantitative results. Figures 12(a) and 12(b) show the FIF SR output with no turbulence. Figure 12(a) has no camera jitter and Fig. 12(b) includes camera jitter. With no jitter and no turbulence in Fig. 12(a), there is a lack of sampling diversity and little is achieved in the way of true SR. Moiré patterns on the fence and jagged edges on the roof line are still quite visible here. On the other hand, Fig. 12(b) shows significant SR enhancement on the fence and roof edges. This result represents traditional multiframe SR without turbulence and is well understood.2
Perhaps, the most interesting result is that in Fig. 12(c). Here, we have light turbulence and no camera jitter. This result appears to be far better than the no-turbulence no-jitter case in Fig. 12(a). A significant amount of aliasing reduction is exhibited here, as a result of entirely turbulence-induced motion. This result is nearly as good as that with additional camera jitter in Fig. 12(d). Finally, at high turbulence levels, platform jitter makes little difference, as seen by comparing Figs. 12(e) and 12(f). It is also clear, at this high level of turbulence, that details in the fence and elsewhere are lost, but not aliased, as a result of the low-pass filtering from the turbulence.
In this subsection, we present results for real data using three different sensors. A summary of the data and processing parameters for each of the datasets is provided in Table 6. The Fried parameters and noise-to-signal ratios (NSRs) have been selected based on subjective evaluation of the results. In the case of the truck sequence, an edge target is used to estimate the Fried parameter.
Table 6. Real image data parameters for the results in Sec. 4.2.

| Parameter | Coaster | Truck | Airborne |
| --- | --- | --- | --- |
| Diffraction-limited sampling | 0.40 | 1.95 | 0.47 |
| Turbulence-limited sampling | 1.20 | 3.44 | 0.20 |
| Fried parameter (m) | 0.080 | 0.0325 | 0.050 |
| Tilt reduction | 0.25 | 0.25 (0.50) | 0.25 |
| FIF interpolation | 0.25 | 0.25 | |
| Number of frames | 200 | 500 | 100 |
The first dataset (coaster) is shown in Fig. 13. These data are from a midwave infrared (MWIR) sensor and show a portion of a wooden roller coaster with a significant amount of turbulence and aliasing (diffraction- and turbulence-limited sampling parameters of 0.40 and 1.20, respectively, as listed in Table 6). Single-frame bicubic interpolation images of two regions of interest (ROIs) are shown in Figs. 13(a) and 13(c). The corresponding FIF SR outputs are shown in Figs. 13(b) and 13(d), using the parameters listed in Table 6. The impact of turbulence is quite evident in the input frames, such as the warped handrail visible in Fig. 13(a) near the power line pole. At the same time, aliasing artifacts are also clearly visible in the form of Moiré patterns on the groups of suspended wires. In contrast, the handrail appears to have corrected geometry in Figs. 13(b) and 13(d), as a result of the FIF SR processing. Furthermore, the individual suspended wires are clearly distinguishable and free from aliasing artifacts in the FIF SR output. We believe these data provide an excellent demonstration of successful joint TM and SR.
The next dataset (truck) shows a truck and bar target in heavy turbulence imaged with a near IR sensor. In contrast with the previous example, this sensor is nearly Nyquist sampled under diffraction-limited conditions (diffraction-limited sampling parameter of 1.95). With turbulence added, no significant aliasing is expected (turbulence-limited sampling parameter of 3.44). However, we still apply upsampling in our restoration. Single-frame bicubic interpolation is shown in Fig. 14(a), and the FIF SR output with affine registration is shown in Fig. 14(b). The result using block matching algorithm (BMA) optical flow registration9 is shown in Fig. 14(c). Both restorations appear to be significantly better than the single-frame interpolation, but the bar target with BMA does appear noticeably better because of the high level of local warping. This finding is consistent with previous studies using these data.9 One noticeable artifact is the ringing on the truck cab. This is a result of the Wiener filter operating on solar glint. Ringing is also present, to a lesser degree, in the restored images near strong edges. We are currently exploring methods for reducing these artifacts. One possibility is to employ an adaptive Wiener filter with spatially varying NSR.34,35
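The subpixel BMA optical flow of Ref. 9 is beyond the scope of this discussion, but the core block-matching idea behind it can be sketched with a much simpler integer-shift matcher. The function below is our own illustrative construction, not the registration used in the paper.

```python
import numpy as np

def block_match_shift(ref, tgt, max_shift=3):
    """Exhaustive-search block matching: find the integer (dy, dx)
    shift of tgt that minimizes the sum of squared differences to ref.

    A subpixel BMA would refine this estimate; here we keep the
    search integer-valued for clarity.
    """
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(tgt, (dy, dx), axis=(0, 1))
            err = np.sum((ref - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```

Applying this per block over the frame, rather than once globally, is what lets block matching follow the local warping that a single affine transform cannot capture.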
The final dataset (airborne) is a MWIR dataset of a bar target acquired from an airborne platform.36 These data have been used for SR studies in prior work,17,21 but without considering turbulence. The bar target is a series of four bar patterns with a fixed design scaling factor between bar groups. In contrast to the truck sequence, aliasing is a much more significant problem than turbulence in the airborne data, with diffraction- and turbulence-limited sampling parameters of 0.47 and 0.20, respectively. Since the platform is moving between frames and the scene is approximately planar, we use global perspective registration.21 Single-frame bicubic interpolation is shown in Fig. 15(a). The corresponding FIF SR output is shown in Fig. 15(b). The Moiré patterns are quite evident within the bicubic image in Fig. 15(a). After FIF SR processing, there appears to be a significant resolution enhancement based on the resolvable bar patterns.
In this paper, we have provided a study of SR in the presence of optical turbulence. The OTF analysis presented in Sec. 3 demonstrates scenarios where significant levels of aliasing may be present simultaneously with turbulence degradation. This provides motivation for the development of restoration methods that can provide joint SR and TM, such as the FIF SR method presented here.
We have also introduced a turbulence-limited sampling parameter in Sec. 3.2 to complement the previously defined diffraction-limited sampling factor. We believe this parameter is helpful in describing how a given turbulence level impacts the potential for aliasing in an imaging system. A larger value indicates a lower pseudo cut-off frequency from turbulence, relative to the sampling frequency, and a reduced level of potential aliasing. Also, the maximum of the two parameters identifies the dominant factor in limiting aliasing (i.e., either diffraction or turbulence).
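Under the common convention that a sampling parameter is the ratio of the sampling frequency to the corresponding (pseudo) cut-off frequency, with a value of 2 or more satisfying Nyquist, the max-of-the-two rule can be expressed as a short sketch. The function names and the exact definition are our assumptions, not necessarily the paper's:

```python
def sampling_parameter(f_sampling, f_cutoff):
    """Ratio of sampling frequency to (pseudo) cut-off frequency.
    A value >= 2 satisfies Nyquist under this assumed convention."""
    return f_sampling / f_cutoff

def dominant_limit(q_diffraction, q_turbulence):
    """The larger sampling parameter corresponds to the lower
    cut-off, i.e., the factor that dominates in limiting aliasing."""
    return "turbulence" if q_turbulence > q_diffraction else "diffraction"

# Values from Table 6: the truck data are turbulence-limited,
# while the airborne data remain diffraction-limited.
assert dominant_limit(1.95, 3.44) == "turbulence"   # truck
assert dominant_limit(0.47, 0.20) == "diffraction"  # airborne
```

Under this convention, the truck sensor's diffraction-limited value of 1.95 is "nearly Nyquist," while its turbulence-limited value of 3.44 implies no significant aliasing, matching the discussion in Sec. 4.2.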
In addition, we have extended the FIF SR method with an OTF model to equip it to operate in the presence of turbulence. The atmospheric component has a parameter that accounts for the level of tilt reduction provided by the registration step. In Sec. 2.2, we have derived an OTF component that models the blurring from the FIFs as a function of the fusion parameter. This allows us to mitigate the blurring from the interpolation operation. This compensation is particularly important for the higher fusion parameter values that might be employed when addressing turbulence. Together, these parameters give the OTF model a high level of flexibility to effectively address a wide range of SR and TM scenarios.
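The tilt-reduction idea can be illustrated with Fried's classical parameterization of the atmospheric OTF, in which a tilt-correction factor raises the short-exposure OTF above the long-exposure one. Whether this matches the paper's exact atmospheric component is an assumption; the sketch below only shows the qualitative behavior of such a parameter.

```python
import numpy as np

def atmospheric_otf(rho, wavelength, focal_length, r0, D, alpha=0.0):
    """Fried-style atmospheric OTF with a tilt-reduction factor.

    rho : spatial frequency at the focal plane (cycles/m),
    r0 : Fried parameter (m), D : aperture diameter (m),
    alpha : tilt-reduction factor (0 = long exposure; larger values
    model wavefront tilt removed by the registration step).
    """
    x = wavelength * focal_length * rho
    return np.exp(-3.44 * (x / r0) ** (5.0 / 3.0)
                  * (1.0 - alpha * (x / D) ** (1.0 / 3.0)))
```

Increasing the tilt-reduction factor raises the OTF at mid frequencies, which is consistent with registration removing part of the turbulence degradation before restoration.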
Our experimental results in Sec. 4 include a ground-truth-based quantitative error analysis with simulated images generated with a numerical wave propagation method.20 The results demonstrate quantitatively that the FIF SR method proposed is able to effectively perform SR and TM in the scenarios considered. One particularly interesting result presented in Sec. 4.1 shows that turbulence-induced warping motion alone can provide the sampling diversity necessary for effective multiframe SR. However, our results also show that camera platform motion or jitter, when present, appears to be more effective at this task. The real data results in Sec. 4.2 show the versatility of the FIF SR method with three distinct scenarios. The truck sequence is dominated by turbulence. The airborne sequence is dominated by aliasing. Finally, the coaster data include a significant combination of both turbulence and aliasing.
This work has been supported in part under Air Force Research Labs (AFRL) Award No. FA8650-17-D-1801. Approved for public release under case number 88ABW-2019-2309.
Russell C. Hardie is a full professor in the Department of Electrical and Computer Engineering at the University of Dayton and holds a joint appointment in the Department of Electro-Optics and Photonics. He received the University of Dayton’s top university-wide teaching award, the 2006 Alumni Award in teaching. He also received the Rudolf Kingslake Medal and Prize from SPIE in 1998 for the work on superresolution. His research interests include a wide range of topics in digital signal and image processing, medical image processing, and machine learning.
Michael Rucci is a research engineer at the Air Force Research Laboratory, Wright-Patterson AFB, Ohio. His current research includes day/night passive imaging, turbulence modeling and simulation, and image processing. He received his MS and BS degrees in electrical engineering from the University of Dayton in 2014 and 2012, respectively.
Barry K. Karch is a principal research electronics engineer in the Multispectral Sensing & Detection Division, Sensors Directorate of the Air Force Research Laboratory, Wright-Patterson AFB, Ohio. He received his BS degree in electrical engineering (1987), MS degrees in electro-optics and electrical engineering (1992/1994), and PhD degree in electrical engineering (2015) from the University of Dayton, Dayton, Ohio. He has worked in the areas of EO/IR remote sensor system and processing development for 29 years.
Alexander J. Dapore is a senior image processing engineer at L3Harris Technologies. He received his BSEE and MSEE degrees from the University of Illinois at Urbana-Champaign in 2008 and 2010, respectively. He has worked on research and development projects in many areas of digital image processing. His specific areas of interest are image restoration, image enhancement, object/threat detection and tracking, multiview computer vision, and the real-time implementation of digital image processing algorithms on GPGPU platforms.
Douglas R. Droege is a director of advanced programs for the ISR Systems Segment of L3Harris Technologies. He has 19 years of experience designing military infrared systems and in his current role, he develops advanced system concepts involving both imaging and electronic warfare technologies. He holds a PhD degree in electrical engineering from the University of Dayton. He received the 2013 L3 Corporate Engineer of the Year award and the 2012 L3 Integrated Sensor Systems Leadership award.
Joseph C. French is a research engineer in electro/optics with Leidos and currently supports the Air Force Research Laboratory at Wright-Patterson AFB, Ohio. He received his BS and MS degrees in electrical engineering from Missouri University of Science and Technology, Rolla, Missouri, and a PhD degree in electrical engineering from the University of Dayton, Dayton, Ohio, in 2016. His work includes infrared signatures, optical system modeling, camera calibration and model generation, georeferencing uncertainty analysis, and orthorectification/image projection.