1. Introduction

When imaging at long ranges through the atmosphere, the acquired images are highly susceptible to degradations from atmospheric turbulence.1–3 Fluctuations in the index of refraction along the optical path, driven by temperature and pressure variations, give rise to spatially and temporally varying blur and warping. This is a well-researched area with respect to astronomical imaging.3 In astronomical imaging, with narrow fields of view, the degradation caused by the atmosphere can usually be modeled as isoplanatic. That is, the atmospheric effects are uniform across the image. This gives rise to warping that is a global image shift, and the blurring can be modeled with a spatially invariant point spread function (PSF). Wide field-of-view imaging, at long ranges through the atmosphere, generally leads to anisoplanatic imaging conditions. Here, the atmospheric PSF varies significantly across the field of view of the imaging sensor. Adaptive optics has proven to be effective in treating the isoplanatic problem.4 However, for the anisoplanatic problem, mitigation methods are generally based on acquiring and digitally processing a sequence of short-exposure (SE) images.5 A short integration time means that warping during integration is minimized, reducing a major source of blurring. However, under anisoplanatic imaging conditions, there is still temporally and spatially varying warping and blurring to contend with. Note that with long-exposure (LE) images, turbulence-induced image warping gets temporally integrated. While this has the advantage of “averaging out” the geometric warping, to reveal the correct scene geometry, it also leads to high levels of blurring that may be difficult to treat effectively with image restoration. One important class of turbulence mitigation algorithms is bispectral speckle imaging.6–16 This method seeks to recover the ideal image in the Fourier domain, by estimating the magnitude and phase spectrum separately.
The magnitude spectrum is obtained with an inverse filter, or pseudoinverse filter, based on the LE optical transfer function (OTF). The phase is estimated using properties of the bispectrum.7,9,10,16 Another class of turbulence mitigation algorithms uses some form of dewarping, fusion, and then blind deconvolution.8,15–21 Other related methods can also be found in the literature.22–28 With most of these methods, a motion-compensated temporal average of video frames is computed first. The motion compensation, prior to temporal averaging, reduces the motion blurring that might otherwise be seen in LE imaging. In the case of a static scene, the true geometry can be revealed with a prototype image obtained with a sufficiently long temporal average (or LE). Input frames can be registered to the prototype to provide turbulent motion compensation. As we shall show, even global frame registration can be of benefit. Performing such registration with a dynamic scene, containing moving objects, presents additional challenges.24,26,29 The current paper limits its scope to static scenes. Fusion is often done next by simple temporal averaging. This reduces noise and averages the spatially and temporally varying speckle PSFs in the individual frames. The result is an image that appears to be blurred with a spatially invariant PSF (with less blurring than an LE PSF). A blind image restoration process is then used to jointly estimate the spatially invariant PSF and true image. Note that using blind deconvolution has its challenges. First, it can be very computationally demanding. Also, unless a significant amount of a priori knowledge is incorporated, the recovered PSF and image may not be accurate.30 Here, we present a block-matching and Wiener filtering (BMWF) approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. 
We seek to leverage the rich theoretical work on atmospheric turbulence to aid in the design of a practical image restoration algorithm. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the current authors.31 These data provide objective truth and allow for a quantitative error analysis. The proposed turbulence mitigation method takes a sequence of SE frames of a static scene and outputs a single restored image. The images are globally registered to the temporal average and then reaveraged. This forms our prototype with the approximately correct geometry. A block-matching algorithm (BMA) is used to align the individual input frames to the prototype. We discuss how atmospheric statistics can help in setting the tuning parameters of the BMA. The BMA method here also uses a prefilter on the individual frames, so they better match the power spectrum of the prototype image for improved registration. The BMA registered frames are then averaged to generate a fused image. The final step is deconvolving the fused image using a Wiener filter. An important aspect of the proposed method lies in how we model the degradation PSF. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the same estimated motion vectors used for restoration. We provide a detailed performance analysis that illustrates how the key tuning parameters impact the BMWF system performance. 
The proposed BMWF method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study. The remainder of this paper is organized as follows. In Sec. 2, we present our observation model. This includes key statistics and the OTF models. The proposed BMWF turbulence mitigation approach is described in Sec. 3. The efficacy of the BMWF turbulence mitigation, in comparison with some benchmark methods, is demonstrated in Sec. 4. Finally, we offer conclusions in Sec. 5.

2. Optical Turbulence Modeling

2.1. Atmospheric Turbulence Statistics

One of the most important statistics that can be derived from the widely used Kolmogorov turbulence model is the atmospheric coherence diameter (or Fried parameter).3,32 This is given as

$r_0 = \left[ 0.423 k^2 \int_0^L C_n^2(z) \left( \frac{z}{L} \right)^{5/3} dz \right]^{-3/5}$,  (1)

where $\lambda$ is the wavelength, $k = 2\pi/\lambda$ is the wavenumber, and $C_n^2(z)$ is the refractive index structure parameter profile along the optical path of length $L$. Note that this expression is for spherical wave propagation, and $z$ is the distance from the source (i.e., $z = 0$ at the source and $z = L$ at the camera). As we will see, this parameter is central to the PSF model needed for deconvolution.

Another very salient statistic is the tilt variance for a point source. This is the angle of arrival variance of a point source due to turbulence. An expression for the one-axis tilt variance, for the spherical wave case, is given as33

$\sigma_{\rm tilt}^2 = \frac{3.44}{2\pi^2} \left( \frac{\lambda}{D} \right)^2 \left( \frac{D}{r_0} \right)^{5/3}$,  (2)

where $D$ is the aperture diameter and $\sigma_{\rm tilt}^2$ is measured in radians squared. Combining Eqs. (1) and (2) and converting the tilt variance into a spatial distance on the focal plane, we obtain the spatial-domain tilt standard deviation as

$\sigma_x = f \sigma_{\rm tilt} = \sqrt{\frac{3.44}{2\pi^2}} \, \frac{\lambda f}{D} \left( \frac{D}{r_0} \right)^{5/6}$,  (3)

where $f$ is the focal length and $\sigma_x$ is measured in units of distance.

2.2. Optical Transfer Functions

When imaging in atmospheric turbulence, the overall camera OTF can be modeled to include the atmospheric OTF and the diffraction OTF. This is given as

$H(\rho) = H_{\rm atm}(\rho) H_{\rm dif}(\rho)$,  (4)

where $\rho = \sqrt{u^2 + v^2}$, and $u$ and $v$ are the spatial frequencies in units of cycles per unit distance.
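As a concrete illustration, the focal-plane tilt standard deviation of Eq. (3), and its inversion used later in Sec. 3.2 to estimate $r_0$ from measured motion, can be sketched in Python. This is a minimal sketch, not the authors' implementation; the leading constant $\sqrt{3.44/(2\pi^2)} \approx 0.417$ and the sample optical values below are assumptions for illustration only.

```python
import math

# Constant consistent with the Gaussian tilt-blur development (an assumption here)
C_TILT = math.sqrt(3.44 / (2.0 * math.pi ** 2))

def tilt_std_focal_plane(lam, f, D, r0):
    """One-axis RMS tilt on the focal plane, Eq. (3), in units of distance.
    lam: wavelength, f: focal length, D: aperture diameter, r0: Fried parameter."""
    return C_TILT * (lam * f / D) * (D / r0) ** (5.0 / 6.0)

def r0_from_tilt_std(lam, f, D, sigma_x):
    """Invert Eq. (3) to estimate r0 from a measured RMS tilt (cf. Sec. 3.2)."""
    return D * (C_TILT * lam * f / (D * sigma_x)) ** (6.0 / 5.0)

# Hypothetical optics: lam = 0.6328 um, f = 1 m, D = 0.1 m, r0 = 0.02 m
sigma_x = tilt_std_focal_plane(0.6328e-6, 1.0, 0.1, 0.02)
r0_hat = r0_from_tilt_std(0.6328e-6, 1.0, 0.1, sigma_x)  # round-trips to 0.02 m
```

The round trip (r0 to tilt and back) is exact by construction, which is the property the estimator of Sec. 3.2 relies on once the residual tilt factor is accounted for.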
The atmospheric OTF model typically used is given as3

$H_{\rm atm}(\rho) = \exp\left\{ -3.44 \left( \frac{\lambda f \rho}{r_0} \right)^{5/3} \left[ 1 - \alpha \left( \frac{\lambda f \rho}{D} \right)^{1/3} \right] \right\}$.  (5)

The parameter $\alpha$ relates to the level of motion blur from tilt variance. More will be said about this shortly. The diffraction-limited OTF for a circular exit pupil is given as34

$H_{\rm dif}(\rho) = \frac{2}{\pi} \left[ \cos^{-1}\left( \frac{\rho}{\rho_c} \right) - \frac{\rho}{\rho_c} \sqrt{1 - \left( \frac{\rho}{\rho_c} \right)^2} \right]$ for $\rho \le \rho_c$, and 0 otherwise,  (6)

where $\rho_c = 1/(\lambda f/\#)$ is the optical cutoff frequency and the $f$-number is $f/\# = f/D$. Let us define the LE transfer function, which includes diffraction, as

$H_{\rm LE}(\rho) = H_{\rm atm}(\rho)\big|_{\alpha=0} \, H_{\rm dif}(\rho)$.  (7)

Similarly, the SE transfer function is given as

$H_{\rm SE}(\rho) = H_{\rm atm}(\rho)\big|_{\alpha=1} \, H_{\rm dif}(\rho)$.  (8)

The above equation is the fully tilt-compensated and time averaged transfer function.3 An alternative SE OTF is given by Charnotskii.35

With the two main transfer functions defined, we now highlight a very interesting and important relationship between them that comes from the original development of Eq. (5). That is, it can be shown that

$H_{\rm atm}(\rho) = H_{\rm atm}(\rho)\big|_{\alpha=1} \, G(\rho)$,  (9)

where $G(\rho)$ is a Gaussian, given as

$G(\rho) = \exp\left\{ -3.44 (1-\alpha) \left( \frac{\lambda f}{r_0} \right)^{5/3} \left( \frac{\lambda f}{D} \right)^{1/3} \rho^2 \right\}$.  (10)

This means that the atmospheric OTF from Eq. (5) can be expressed as the SE OTF, multiplied by the Gaussian, $G(\rho)$, yielding

$H(\rho) = H_{\rm SE}(\rho) G(\rho)$.  (11)

In the spatial domain, the functions are also circularly symmetric, so we have

$h(r) = h_{\rm SE}(r) * g(r)$,  (12)

where $r = \sqrt{x^2 + y^2}$ and * is the convolution operator. Based on the Fourier transform properties of a Gaussian, the spatial-domain function $g(r)$ resulting from the inverse Fourier transform of $G(\rho)$ is also Gaussian. This is given as

$g(r) = \frac{1}{2\pi\sigma_g^2} \exp\left\{ -\frac{r^2}{2\sigma_g^2} \right\}$,  (13)

where

$\sigma_g^2 = \frac{3.44}{2\pi^2} (1-\alpha) \left( \frac{\lambda f}{r_0} \right)^{5/3} \left( \frac{\lambda f}{D} \right)^{1/3}$.  (14)

Comparing the above equation with the theoretical tilt standard deviation in Eq. (3), we see that

$\sigma_g^2 = (1-\alpha)\sigma_x^2$.  (15)

Alternatively, the standard deviations can be expressed as

$\sigma_g = \beta \sigma_x$,  (16)

where $\beta = \sqrt{1-\alpha}$, or equivalently, $\alpha = 1 - \beta^2$.

Thus, Eq. (12) shows that the parametric atmospheric PSF, $h(r)$, is the SE PSF convolved with a Gaussian motion blur impulse response. When $\alpha = 0$ (or equivalently $\beta = 1$), the Gaussian motion blur standard deviation is the theoretical tilt standard deviation in Eq. (3). That is, $\sigma_g = \sigma_x$, and we get the LE PSF

$h_{\rm LE}(r) = h_{\rm SE}(r) * g(r)\big|_{\beta=1}$.  (17)

In the frequency domain, we have

$H_{\rm LE}(\rho) = H_{\rm SE}(\rho) G(\rho)\big|_{\beta=1}$.  (18)

When the motion blur standard deviation is zero (i.e., $\alpha = 1$ or equivalently $\beta = 0$), Eq. (12) gives the SE PSF. For $0 < \alpha < 1$, Eq. (12) gives us the SE PSF convolved with a Gaussian motion blur somewhere between full tilt compensation and no tilt compensation.
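The OTF models above are straightforward to evaluate numerically. The sketch below implements a Fried-style atmospheric OTF with the tilt parameter α and the circular-pupil diffraction OTF, and then numerically checks that the α-dependent term factors into a Gaussian in ρ, as described around Eqs. (9)–(11). The optical values are hypothetical, chosen only for illustration.

```python
import numpy as np

def atm_otf(rho, lam, f, D, r0, alpha):
    """Atmospheric OTF of Eq. (5): alpha=0 gives LE, alpha=1 gives SE."""
    x = lam * f * rho  # spatial frequency scaled to pupil-plane units
    return np.exp(-3.44 * (x / r0) ** (5.0 / 3.0)
                  * (1.0 - alpha * (x / D) ** (1.0 / 3.0)))

def diff_otf(rho, lam, f, D):
    """Diffraction-limited OTF for a circular exit pupil, Eq. (6)."""
    rho_c = 1.0 / (lam * (f / D))  # optical cutoff frequency
    r = np.clip(rho / rho_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(r) - r * np.sqrt(1.0 - r ** 2))

# Hypothetical optics
lam, f, D, r0, alpha = 0.6328e-6, 1.0, 0.1, 0.02, 0.5
rho = np.linspace(0.0, 1.0 / (lam * (f / D)), 256)

# Gaussian factor of Eq. (10)
gauss = np.exp(-3.44 * (1.0 - alpha)
               * (lam * f / r0) ** (5.0 / 3.0)
               * (lam * f / D) ** (1.0 / 3.0) * rho ** 2)

# Factorization check: H_atm(rho; alpha) == H_atm(rho; 1) * G(rho)
lhs = atm_otf(rho, lam, f, D, r0, alpha)
rhs = atm_otf(rho, lam, f, D, r0, 1.0) * gauss
assert np.allclose(lhs, rhs)
```

The identity holds because the tilt term in the exponent of Eq. (5) is proportional to $\rho^{5/3} \cdot \rho^{1/3} = \rho^2$, which is exactly a Gaussian in the frequency domain.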
In the literature, it is typically the LE OTF ($\alpha = 0$, or $\beta = 1$) that is used for image restoration. However, if some level of registration is applied to the SE images (even if only global image registration), prior to fusion and deconvolution, we show that better results can be achieved by tuning $\alpha$ to the level of tilt motion compensation. This gives us a powerful way to match the deconvolution step with the tilt correction processing step. Examples of atmospheric PSFs, $h(r)$, from Eq. (12) are shown in Fig. 1. The optical system parameters corresponding to these plots are from the simulated data used in Sec. 4. The specific parameters are listed in Table 1. Figures 1(a) and 1(b) correspond to two different levels of turbulence. Note how the choice of $\alpha$ has a very significant impact on the width of the PSF for both levels of turbulence. As $\alpha$ is reduced, the Gaussian blurring component smoothes out and widens the PSF. We seek to match the $\alpha$ (or equivalently $\beta$), used in our PSF model, to the level of tilt correction provided by the registration stage of processing.

Table 1. Optical parameters for simulated data.
2.3. Observation Model

Given the information in the preceding subsections, we are now ready to define the model we use to relate observed SE frames to a truth image. Note that the model is similar to that described earlier by Fraser et al.17 The observed frames are expressed in terms of a spatially varying blur operator and a spatially varying geometric warping operator. In particular, observed frame $k$ is given as

$y_k(x, y) = \mathcal{W}_k\{ \mathcal{B}_k\{ z(x, y) \} \} + n_k(x, y)$,  (19)

where $x, y$ are spatial coordinates, $k$ is the temporal frame index, $z(x, y)$ is the ideal image, and $n_k(x, y)$ is an additive noise term. The geometric warping operator, $\mathcal{W}_k\{\cdot\}$, is defined such that

$E\{ \mathcal{W}_k\{ z(x, y) \} \} = z(x, y)$,  (20)

where $E\{\cdot\}$ represents a temporal ensemble mean operator. The blurring operator, $\mathcal{B}_k\{\cdot\}$, is defined such that

$E\{ \mathcal{B}_k\{ z(x, y) \} \} = h_{\rm SE}(x, y) * z(x, y)$.  (21)

Using this model, note that the ensemble mean of the observed frames is given by

$E\{ y_k(x, y) \} = h_{\rm LE}(x, y) * z(x, y)$.  (22)

Now, consider the case where perfect tilt correction is applied to the SE frames. Let this tilt correction operator be expressed as $\mathcal{W}_k^{-1}\{\cdot\}$. Applying this to Eq. (19) and comparing this to Eq. (21), we get

$E\{ \mathcal{W}_k^{-1}\{ y_k(x, y) \} \} = h_{\rm SE}(x, y) * z(x, y)$.  (23)

However, in practice, ideal tilt correction may not be possible. One reason for this is that BMA registration requires a finite size block for matching, and the actual tilt warping varies continuously. Thus, any block-based estimate will tend to underestimate the true tilt for a given point, by virtue of the spatial averaging effect.36,37 Thus, we define a partial tilt correction operator as $\widehat{\mathcal{W}}_k^{-1}\{\cdot\}$. Applying this to the SE frames, and applying an ensemble mean, yields

$E\{ \widehat{\mathcal{W}}_k^{-1}\{ y_k(x, y) \} \} = h(x, y) * z(x, y)$.  (24)

This result gives the rationale for using $h(x, y)$ as the degradation blur model for fully or partially tilt corrected imagery. The value of $\beta$ can be selected based on the expected residual tilt variance after registration, $\beta^2 \sigma_x^2$. In this context, the variable $\alpha$, defined by Eq. (15), can be considered a registration tilt-variance reduction factor. Equivalently, the variable $\beta$, defined by Eq.
(16), can be considered a residual RMS tilt scaling factor.

3. Turbulence Mitigation Approach

3.1. Block-Matching and Wiener Filtering Turbulence Mitigation

A block diagram representing the proposed BMWF turbulence mitigation algorithm is provided in Fig. 2. The input is a set of SE frames $y_k(x, y)$, for $k = 1, 2, \ldots, K$. We assume that these frames are sampled such that they are free from aliasing. Treating turbulence and aliasing simultaneously has been explored in the literature,25,26,38,39 but it is not addressed here. The input frames are buffered and averaged. Next, robust global translational registration is used to align the frames to the average. A least-squares gradient-based registration algorithm is used. This method is based on Lucas and Kanade40 but includes the robust multiscale processing described by Hardie et al.41,42 The frames are reaveraged after this global alignment to produce the prototype image with the desired geometry. This step also gives us the opportunity to compensate for any camera platform motion. For ground-based systems, translations may be sufficient. For airborne applications, affine registration at this stage may be appropriate.41 Next, a BMA43 is used to estimate the local motion vectors for each pixel within each frame. The images are then interpolated, based on the motion vectors, to match the geometry of the prototype. Note that there is a mismatch between the level of blurring in the raw frames and the prototype being matched. One of the features of our method is that we prefilter the raw frames, using the Gaussian tilt blur, $g(r)$, from Eq. (13). Note that this is only done for the purposes of the BMA, and we revert to the raw frames for subsequent processing. As discussed in Sec. 2.3, the registration will not be ideal, and the accuracy of the registration is quantified by the parameter $\alpha$ (or equivalently $\beta$). The BMA registered frames are then passed to the fusion stage, as shown in Fig. 2.
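The Gaussian prefilter applied to the raw frames before block matching can be sketched as follows. This is a numpy-only illustration, not the authors' implementation; it assumes the tilt standard deviation has already been converted to pixels, and it uses a separable truncated Gaussian kernel with edge replication at the borders.

```python
import numpy as np

def gaussian_prefilter(frame, sigma_px):
    """Blur a frame with a Gaussian kernel (cf. Eq. (13)) so its spectrum
    better matches the tilt-smeared prototype before block matching.
    frame: 2-D array; sigma_px: Gaussian standard deviation in pixels."""
    rad = max(1, int(np.ceil(3.0 * sigma_px)))      # truncate at ~3 sigma
    x = np.arange(-rad, rad + 1)
    k1d = np.exp(-x ** 2 / (2.0 * sigma_px ** 2))
    k1d /= k1d.sum()                                # normalize to unit DC gain
    pad = np.pad(frame, rad, mode='edge')           # replicate borders
    # Separable convolution: filter rows, then columns.
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k1d, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k1d, mode='valid'), 0, tmp)
```

Because the kernel is normalized, flat regions pass through unchanged, and only the spatial frequency content is reshaped toward that of the prototype.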
Let us define the BMA block size as $B \times B$ pixels, and let the search window be $S \times S$ pixels in size (as defined by the position of the block centers). We use an exhaustive search within the search window, using the full rectangular blocks, and employ the mean absolute difference metric. We present results using a whole pixel search and a subpixel search. The subpixel results are obtained by upsampling the images with bicubic interpolation. Because of its widespread use in image compression, much work has been done regarding performance analysis, speed enhancements, and hardware implementations of the BMA.43 We leverage that work by incorporating the BMA here. The key parameters for the BMA are $B$ and $S$. If knowledge of the atmospheric coherence diameter, $r_0$, is available, we can predict the amount of motion using Eq. (3). Exploiting this, we employ a search window that spans a fixed number of tilt standard deviations, $\sigma_x$, converted to pixels using the pixel spacing measured on the focal plane. With regard to block size, the larger it is, the less sensitive the BMA is to noise and warping. However, with increased size, there tends to be an increased underestimation of the true local motion from atmospheric tilt.36,37 Thus, a balance is required. The exact amount of underestimation will depend on the block size, the particular $C_n^2(z)$ profile, and the optical parameters.36,37 Notwithstanding this, we have found that a fixed block size of $B = 15$ is effective for the range of turbulence conditions used in the simulated imagery. Furthermore, our results show that the corresponding residual RMS tilt factor, $\beta$, is approximately a constant in the simulated imagery. The next step of the BMWF method is to simply average the registered frames, as shown in Fig. 2. This gives rise to the result in Eq. (24). This step is important for two main reasons. First, it reduces noise and reduces the impact of any BMA errors.
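A whole-pixel version of the exhaustive-search BMA with the mean absolute difference metric can be sketched as shown below. This is a simplified per-block sketch, not the authors' implementation: it returns one motion vector per block rather than per pixel, and the subpixel refinement via bicubic upsampling is omitted.

```python
import numpy as np

def bma_motion(frame, proto, B=15, S=11):
    """Exhaustive block matching of `frame` against the prototype `proto`.
    B: block size in pixels; S: search window span (block-center positions).
    Returns a dict mapping block top-left (i, j) -> best (dy, dx) shift
    minimizing the mean absolute difference (MAD)."""
    H, W = proto.shape
    r = S // 2  # maximum shift in each direction
    vecs = {}
    for i in range(r, H - B - r, B):
        for j in range(r, W - B - r, B):
            block = proto[i:i + B, j:j + B]
            best, arg = np.inf, (0, 0)
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    cand = frame[i + dy:i + dy + B, j + dx:j + dx + B]
                    mad = np.mean(np.abs(cand - block))
                    if mad < best:
                        best, arg = mad, (dy, dx)
            vecs[(i, j)] = arg
    return vecs
```

For a frame that is a pure translation of the prototype, the recovered vector for each interior block equals that translation, which is the sanity check one would run before applying the search to turbulent data.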
Second, by averaging the spatially varying blurring, it allows us to accurately model the resulting blur as spatially invariant,17 as shown in Eq. (24). This justifies the use of a spatially invariant deconvolution step. The deconvolution step is implemented here using a Wiener filter. The frequency response of the Wiener44 filter is given as

$H_W(\rho) = \frac{H^*(\rho)}{|H(\rho)|^2 + \Gamma}$,  (25)

where $H(\rho)$ is the degradation OTF from Eq. (11) and $\Gamma$ represents a constant noise-to-signal (NSR) power spectral density ratio. The output, after applying the Wiener filter, can be expressed as

$\hat{z}(x, y) = \mathcal{F}^{-1}\{ \mathcal{F}\{ \bar{y}(x, y) \} H_W(\rho) \}$,  (26)

where $\bar{y}(x, y)$ is the fused image and is given by Eq. (24). Note that $\mathcal{F}\{\cdot\}$ and $\mathcal{F}^{-1}\{\cdot\}$ represent the Fourier and inverse Fourier transforms, respectively. In practice, we are using sampled images and the fast Fourier transform (FFT) for implementing Eq. (26). Since we are assuming Nyquist sampled images, the property of impulse invariance applies.45 The images are padded symmetrically to minimize ringing artifacts associated with the circular convolution that results from FFT products.

Examples of the atmospheric OTF, $H(\rho)$, from Eq. (11), are shown in Fig. 3. The optical system parameters corresponding to these plots are listed in Table 1. Figures 3(a) and 3(b) correspond to two different levels of turbulence. Also shown in Fig. 3 are the degradation OTFs multiplied by the Wiener filter transfer function in Eq. (25), using the fixed $\Gamma$ employed for the simulated and real data with 200 frames. Clearly, as $\alpha$ approaches 1 (equivalently, as $\beta$ approaches 0), the degradation OTF is more favorable to high spatial frequencies. The signal will be above the noise floor out to a higher spatial frequency. Consequently, the Wiener filter is able to provide gain out to a higher spatial frequency, without being overwhelmed with noise. This greatly extends the effective restored system OTF. When the degradation OTF value gets below the noise floor, governed by $\Gamma$, the Wiener filter in Eq. (25) succumbs, as shown in Fig. 3.
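The FFT-based Wiener deconvolution step, with symmetric padding to suppress circular-convolution ringing, can be sketched as follows. This is a minimal sketch rather than the authors' implementation; the choice of padding equal to half the image size, and the requirement that the OTF be sampled on the padded grid, are assumptions of this sketch.

```python
import numpy as np

def wiener_deconv(img, H, gamma):
    """Apply the Wiener filter of Eqs. (25)-(26) to a fused image.
    img: 2-D fused image; H: degradation OTF sampled on the FFT grid of the
    *padded* image; gamma: constant NSR power spectral density ratio."""
    P = img.shape[0] // 2
    padded = np.pad(img, P, mode='symmetric')   # reduce ringing at boundaries
    W = np.conj(H) / (np.abs(H) ** 2 + gamma)   # Wiener transfer function
    out = np.real(np.fft.ifft2(np.fft.fft2(padded) * W))
    return out[P:-P, P:-P]                      # crop back to original support
```

With an all-pass OTF and zero NSR, the filter reduces to the identity, which provides a quick correctness check before substituting the parametric degradation OTF of Eq. (11).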
Note that with the illustrated NSR, the effective bandwidth of the sensor is nearly doubled, going from the case of no registration or tilt correction to that of full tilt correction. Matching the degradation model to the level of registration is essential to exploiting the full benefits of the registration. Another important thing to note from Fig. 3 is that the degradation OTF has no zeros below the optical cutoff frequency. Thus, the blur degradation is theoretically invertible (barring noise and quantization). Given this, and the fact that we can often expect a low NSR because many frames are averaged, the computationally simple Wiener filter tends to give excellent performance. More complex deblurring methods may be warranted when additional ill-posed blurring functions and/or very high levels of noise are present. However, we observed negligible performance gains using more complex regularization-based image restoration methods in our experiments.

3.2. Estimating the Atmospheric Coherence Diameter

To define the degradation OTF in Eq. (11) and the corresponding Wiener filter in Eq. (25), we require the parameter $r_0$. This can be measured using a scintillometer. However, in most practical imaging scenarios, it will not be known. Estimating this parameter from observed imagery is an active area of research.30,36,37 In some applications, it may be possible to set this parameter manually, based on subjective evaluation of the restoration results. Here, we propose a method for estimating $r_0$ from the BMA motion vectors used in the BMWF algorithm. Based on Eq. (3), it is clear that $r_0$ is directly related to the warping motion in the observed imagery. The BMA motion vectors can give us an estimate of the warping motion. However, note that it is important that we exclude any camera platform motion or within-scene motion when doing this. If the residual RMS tilt after registration is $\beta \sigma_x$, from Eq. (16), the reduction in tilt due to registration is $(1 - \beta)\sigma_x$.
Let the BMA single-axis RMS motion, converted from pixels to distance on the focal plane, be denoted as $\hat{\sigma}_m$. Now we can estimate $\sigma_x$ as $\hat{\sigma}_x = \hat{\sigma}_m / (1 - \beta)$. Using this and Eq. (3), we obtain an estimate of the atmospheric coherence diameter as

$\hat{r}_0 = D \left( \sqrt{\frac{3.44}{2\pi^2}} \, \frac{\lambda f}{D \hat{\sigma}_x} \right)^{6/5}$.  (27)

4. Experimental Results

In this section, we present a number of experimental results. These include results for both simulated and real SE image sequences. We also provide a comparison with state-of-the-art benchmark methods. One benchmark is the bispectral speckle imaging method.7,9,16 Our implementation uses apodization with tiles9 and incorporates local registration alignment with each tile, as this gave the best bispectrum method performance. Another benchmark is the method of Zhu and Milanfar.21 Our results for this method come from publicly available MATLAB® code, provided courtesy of the authors. They use a B-spline nonrigid image registration (NRIR) of SE frames. This is followed by temporal regression to produce what the authors refer to as a near-diffraction-limited (NDL) image. Zhu and Milanfar21 suggest that blind deconvolution be applied to the NDL image. However, blind deconvolution code is not provided by those authors. Here, we have exact knowledge of the diffraction PSF, and, therefore, we apply a parameter-optimized Wiener filter to deconvolve diffraction-only blur from the NDL image. We also compute the temporal average of the NRIR frames as an alternative (bypassing the NDL regression operation), as an additional comparison.

4.1. Simulated Data

The simulated data are generated using an anisoplanatic simulation tool described by Hardie et al.31 The optical parameters used are listed in Table 1, and the simulation parameters are listed in Table 2. Five different levels of turbulence are simulated, and some statistical parameters for these scenarios are listed in Table 3. Additive independent Gaussian noise, with a standard deviation of $\sigma_\eta = 2.0$ digital units, is added to each simulated frame.
The metric we use to evaluate the simulated data results is peak signal-to-noise ratio (PSNR). Our first set of results is for 200 temporally independent frames; then, we show results for 30 frames.

Table 2. Simulation parameters used in generating simulated frames.31
Table 3. Theoretical statistical parameters for the different levels of simulated atmospheric turbulence and related restoration parameters.
4.1.1. Results using 200 simulated frames

The PSNR results using 200 temporally independent frames are reported in Table 4. For the BMA, the search window size, $S$, is set for each sequence based on the theoretical tilt standard deviation, as described in Sec. 3.1. The block size is a constant $B = 15$. We report results for both whole pixel BMA and subpixel BMA in Table 4. We use the theoretical $r_0$ for each sequence in our OTF model. For the Wiener filter, we also report two sets of results: one where the optimum $\Gamma$ and $\beta$ are searched for and used, and another where fixed parameters are employed. The fixed NSR, $\Gamma$, is set as a function of the number of frames averaged. The fixed residual tilt factors are: $\beta = 1$ ($\alpha = 0$) for the Wiener filter applied to the raw frame average (i.e., LE PSF), $\beta = 0.5$ ($\alpha = 0.75$) for the Wiener filter applied to the global registration average, and $\beta = 0.1$ ($\alpha = 0.99$) for the Wiener filter applied to the BMA registered average. It is interesting to see in Table 4 how the PSNR increases by incorporating different levels of registration before averaging. As might be expected, the highest PSNR values are obtained with the subpixel BMA registration. It is also clear that there is a big boost in performance by adding the Wiener filter. The best results in Table 4 are generally from the subpixel BMA + average + Wiener filter.

Table 4. PSNR (dB) results using 200 frames of simulated data with $\sigma_\eta = 2.0$.
An analysis of the Wiener filter performance, as a function of $\Gamma$ and $\beta$, is provided in Fig. 4 for one of the simulated turbulence levels with 200 input frames. Figure 4(a) shows the PSNR surface for the Wiener filter applied to the BMA registered frame average. Note that the optimum $\beta$ is near 0.1 and the optimum $\Gamma$ is near 0.0002. This suggests that the BMA registration is about 90% effective in eliminating the RMS tilt (and about 10% remains). A similar surface plot is provided in Fig. 4(b) for the globally registered frame average (i.e., no BMA). Here, the optimum $\beta$ is near 0.5, suggesting that the global registration is about 50% effective in eliminating the RMS tilt. Finally, Fig. 4(c) is for no registration at all. Here, the optimum $\beta$ approaches 1.0, as would be expected for an LE image. This analysis shows that the parameter $\beta$ should be matched to the level of tilt correction provided by the registration. An analysis of the BMA parameters is shown in Fig. 5 for the same turbulence level and number of frames. In particular, this plot shows the PSNR values as a function of $B$ and $S$. Here, one can see that the optimum block size is near $B = 15$, and the PSNR does not increase for search window sizes beyond $S = 11$. It is clear that small block sizes give much lower PSNRs. This is likely due to an insufficient amount of information for accurate matching, given the atmospheric degradations. Also, one can see that larger search windows generally do not hurt performance, but they do add to the computational load. Figure 6 shows the system PSNR as a function of the number of input frames. Performance increases dramatically for the first 30 frames or so and then becomes more incremental. However, additional frames continue to improve performance. Note that the curve is not monotonically increasing. The drops are likely due to the introduction of frames with large shifts, relative to the truth image. Estimation of the atmospheric coherence diameter is illustrated in Fig. 7. The continuous curve shows the relationship from Eq. (3). The five simulation turbulence levels are shown with blue circles.
The red squares show the estimated parameters from Eq. (27), using the BMA motion vectors with the parameters in Table 3. This result appears to show a promising level of agreement between the estimates and the true values. Let us now turn our attention to image results. The truth image is shown in Fig. 8. Several output images, formed using 200 input frames, are shown in Fig. 9. Figure 9(a) shows a single raw frame. Figure 9(b) shows the temporal frame average with no registration. The subpixel BMA registered frame average is shown in Fig. 9(c). Finally, the subpixel BMA + average + Wiener filter output is shown in Fig. 9(d). Here, the fixed-parameter Wiener filter is used. Note that the temporal average in Fig. 9(b) is rather blurry, as it is effectively equivalent to the true image corrupted with the LE PSF. The BMA registered average has corrected geometry and a blur level that is comparable to the observed SE frames. We see that by matching the PSF to the BMA registered average, excellent results are possible, as shown in Fig. 9(d). A similar set of results is shown in Fig. 10. These images are the same as in Fig. 9, except that the turbulence level is increased. The raw SE frame is noticeably more corrupted than in Fig. 9. Also, the blurring in Figs. 10(b) and 10(c) is far more pronounced than in the corresponding images in Fig. 9. However, despite the increased turbulence, the subpixel BMA + average + Wiener output in Fig. 10(d) maintains much of the original detail. This is a consequence of having a very high signal-to-noise ratio, by virtue of the large number of input frames, and of an effective match between the PSF model and the blur in the BMA registered average.

4.1.2. Results using 30 simulated frames

The next set of results is for the restoration methods using 30 input frames. The quantitative PSNR results are shown in Table 5. The results are similar to those in Table 4, but as expected, with fewer frames the PSNR values drop somewhat.
The best results in Table 5 are for the subpixel BMA + average + Wiener filter, and these results are significantly better than those of the benchmark methods.

Table 5. PSNR (dB) results using 30 frames of simulated data with $\sigma_\eta = 2.0$.
To allow for a subjective comparison of the proposed method and the benchmark methods, output images from several of the methods are shown in Fig. 11 for the 30-frame input. Figure 11(a) shows the temporal average, followed by the Wiener filter, using the LE PSF model. Figure 11(b) shows the NRIR + NDL image from Zhu and Milanfar,21 followed by the Wiener filter using the diffraction-only PSF model. Figure 11(c) shows the bispectral speckle imaging output.7,9,16 Finally, Fig. 11(d) shows the BMA + average + Wiener filter output, with subpixel BMA and the fixed-parameter Wiener filter. The result in Fig. 11(a) is limited because no tilt correction is used, and the LE PSF is used in the Wiener filter. The NRIR + NDL + Wiener filter image in Fig. 11(b) provides improved results, but some areas remain highly blurred and there appear to be some artifacts at the edges. The bispectrum output in Fig. 11(c) also looks better than the LE restoration in (a) but is fundamentally limited by its use of the LE PSF model in recovering the magnitude frequency spectrum. The bispectrum method also tends to suffer from tiling artifacts when treating high levels of turbulence, as can be seen in Fig. 11(c). Processing without tiles eliminates the tiling artifacts but leads to lower quantitative performance (hence the use of tiles here). The subpixel BMA + average + Wiener filter output in Fig. 11(d) appears to have the best overall detail, with no major artifacts. This is supported by the quantitative analysis in Table 5. The processing time for the various algorithms and their components is provided in Table 6. Note that the proposed method has a significantly shorter run time than the benchmark methods using our MATLAB® implementations. However, run times with other implementations may differ. For the bispectral imaging method, processing time can be reduced by reducing the number of tiles and eliminating the tile-based registration.
Furthermore, hardware acceleration can be employed to speed up the multidimensional FFTs used with this method.

Table 6. Algorithm run times for processing 30 simulated 257×257 pixel frames to produce a single output frame. Processing was done with MATLAB® 2016a using a PC with an Intel(R) Xeon(R) CPU E5-2620 v3 at 2.40 GHz, 16 GB RAM, running Windows 10. For the BMA method, S=11, B=15, and 3× subpixel BMA.

4.2. Real Data

Our final set of experimental results uses a real-image sequence acquired from a tower, viewing a truck and an engineering resolution target at a distance of 5 km. The resolution target is made up of a sequence of vertical and horizontal three-line groups. The five large groups on the right side have bars with the following widths: 7.00, 6.24, 5.56, 4.95, and 4.91 cm. The optical parameters for this sensor are listed in Table 7. The sensor sampling is very close to Nyquist, so the Wiener filter is evaluated and implemented at the pixel pitch of the sensor (i.e., no resampling of the imagery is performed). A scintillometer is used to provide an estimate of $r_0$, as shown in Table 7. This value has been confirmed by analysis of an edge target, imaged within the larger field of view of the camera. Assuming a constant $C_n^2$ profile, note that the isoplanatic angle, when converted to pixels, is only 0.25 pixels. This gives rise to warping that is highly uncorrelated at a small scale. This makes BMA registration somewhat less effective than we saw in the simulated data. For this reason, we have chosen to use a larger residual RMS tilt factor, $\beta$, than for the simulated data. An estimate of the atmospheric coherence diameter using Eq. (27) is shown in Table 7.

Table 7. Optical parameters for the real sensor data.
The image results using the real data are shown in Fig. 12. Figure 12(a) shows raw frame 1. The NRIR + NDL21 + Wiener filter output is shown in Fig. 12(b). The bispectrum output7,9,16 is shown in Fig. 12(c), also using the scintillometer $r_0$. The 30-frame average + Wiener filter, using the LE PSF with the scintillometer $r_0$, is shown in Fig. 12(d). The subpixel BMA + average + Wiener outputs are shown in Figs. 12(e) and 12(f). Here, we use the $r_0$ estimated from the BMA motion vectors. Note that the results obtained using the scintillometer $r_0$ are very similar. The BMWF results appear to provide the best overall subjective quality and recover the resolution target lines notably better than the benchmark methods. We attribute this to a reduction in tilt blurring, by means of the BMA registration, and to the proper matching of the PSF model to the residual tilt blurring.

5. Conclusions

We have presented a block-matching and Wiener filter-based approach to optical turbulence mitigation. In addition to the restoration method, we have also presented a method for estimating the atmospheric coherence diameter from the BMA motion vectors. We demonstrate the efficacy of this method quantitatively, using simulated data from a simulation tool developed by one of the authors. Results using real data are also provided for evaluation. The proposed restoration method utilizes a parametric OTF model for atmospheric turbulence and diffraction that incorporates the level of tilt correction provided by the registration step. By matching the PSF model to the level of registration, improved results are possible, as shown in Fig. 4. For the BMA component of our algorithm, we present a few innovations. For example, we use a search window size determined by the theoretical RMS tilt, when $r_0$ is available. The BMA also uses a prefilter on the raw frames, so they better match the prototype in spatial frequency content.
Compared with the benchmark methods, the proposed method provides the highest PSNR restorations in our study, as shown in Tables 4 and 5. We quantify the level of registration tilt correction by what we term the residual RMS tilt factor or, equivalently, the tilt variance reduction factor. The residual RMS tilt factor is defined such that the residual RMS tilt after registration equals that factor times the theoretical uncorrected RMS tilt given in Eq. (3). Given this factor and the optical system parameters, the degradation OTF model is given by Eq. (11), and the Wiener filter is given by Eq. (25). We have demonstrated that the residual tilt level can have a significant impact on the degradation OTF and the corresponding restored-image OTF, as shown in Figs. 1 and 3, respectively. In cases where C_n^2 is known, the residual tilt factor can be inferred from Eq. (27) using the BMA motion vectors. If one does not have knowledge of C_n^2, it can be estimated from Eq. (27) using an assumed residual tilt factor and the BMA motion vectors. Thus, a complete restoration can be achieved with the assumption of only one unknown parameter. Note that this parameter is linked to the BMA block size B, along with the camera parameters and the C_n^2 profile. We achieved excellent results using a constant value of this parameter, with different settings for the simulated and real data studied here. In practice, it may be possible to perform a search over this parameter and evaluate the results subjectively or by some other metric.

Acknowledgments

The authors would like to thank Dr. Doug Droege at L-3 Communications Cincinnati Electronics for providing support for this project. We would also like to thank Matthew D. Howard at AFRL for assisting with data collection. This work was supported in part by funding from L-3 Communications Cincinnati Electronics and by AFRL Award Nos. FA8650-10-2-7028 and FA9550-14-1-0244. It has been approved for public release (PA Approval # 88ABW-2016-4934).

References

R. E. Hufnagel and N. R. Stanley,
“Modulation transfer function associated with image transmission through turbulent media,” J. Opt. Soc. Am. 54, 52–61 (1964). http://dx.doi.org/10.1364/JOSA.54.000052

D. L. Fried, “Optical resolution through a randomly inhomogeneous medium for very long and very short exposures,” J. Opt. Soc. Am. 56, 1372 (1966). http://dx.doi.org/10.1364/JOSA.56.001372

M. Roggemann and B. Welsh, Imaging Through Turbulence, CRC Press, Boca Raton, Florida (1996).

R. Tyson, Principles of Adaptive Optics, Academic Press, Boston, Massachusetts (1998).

A. W. M. van Eekeren et al., “Turbulence compensation: an overview,” Proc. SPIE 8355, 83550Q (2012). http://dx.doi.org/10.1117/12.918544

T. W. Lawrence et al., “Experimental validation of extended image reconstruction using bispectral speckle interferometry,” Proc. SPIE 1237, 522–537 (1990). http://dx.doi.org/10.1117/12.19323

T. W. Lawrence et al., “Extended-image reconstruction through horizontal path turbulence using bispectral speckle interferometry,” Opt. Eng. 31(3), 627–636 (1992). http://dx.doi.org/10.1117/12.56083

C. L. Matson et al., “Multiframe blind deconvolution and bispectrum processing of atmospherically degraded data: a comparison,” Proc. SPIE 4792, 55–66 (2002). http://dx.doi.org/10.1117/12.451796

C. J. Carrano, “Speckle imaging over horizontal paths,” Proc. SPIE 4825, 109–120 (2002). http://dx.doi.org/10.1117/12.453519

C. J. Carrano, “Anisoplanatic performance of horizontal-path speckle imaging,” Proc. SPIE 5162, 14–27 (2003). http://dx.doi.org/10.1117/12.508082

J. P. Bos and M. C. Roggemann, “Mean squared error performance of speckle-imaging using the bispectrum in horizontal imaging applications,” Proc. SPIE 8056, 805603 (2011). http://dx.doi.org/10.1117/12.884093

J. P. Bos and M. C. Roggemann, “The effect of free parameter estimates on the reconstruction of turbulence corrupted images using the bispectrum,” Proc. SPIE 8161, 816105 (2011). http://dx.doi.org/10.1117/12.893859

J. P. Bos and M. C. Roggemann, “Robustness of speckle-imaging techniques applied to horizontal imaging scenarios,” Opt. Eng. 51(8), 083201 (2012). http://dx.doi.org/10.1117/1.OE.51.8.083201

J. P. Bos and M. C. Roggemann, “Blind image quality metrics for optimal speckle image reconstruction in horizontal imaging scenarios,” Opt. Eng. 51(10), 107003 (2012). http://dx.doi.org/10.1117/1.OE.51.10.107003

G. E. Archer, J. P. Bos and M. C. Roggemann, “Reconstruction of long horizontal-path images under anisoplanatic conditions using multiframe blind deconvolution,” Opt. Eng. 52(8), 083108 (2013). http://dx.doi.org/10.1117/1.OE.52.8.083108

G. E. Archer, J. P. Bos and M. C. Roggemann, “Comparison of bispectrum, multiframe blind deconvolution and hybrid bispectrum-multiframe blind deconvolution image reconstruction techniques for anisoplanatic, long horizontal-path imaging,” Opt. Eng. 53(4), 043109 (2014). http://dx.doi.org/10.1117/1.OE.53.4.043109

D. Fraser, G. Thorpe and A. Lambert, “Atmospheric turbulence visualization with wide-area motion-blur restoration,” J. Opt. Soc. Am. A 16, 1751–1758 (1999). http://dx.doi.org/10.1364/JOSAA.16.001751

D. H. Frakes, J. W. Monaco and M. J. T. Smith, “Suppression of atmospheric turbulence in video using an adaptive control grid interpolation approach,” in IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP 2001), 1881–1884 (2001).

D. Li, R. M. Mersereau and S. Simske, “Atmospheric turbulence-degraded image restoration using principal components analysis,” IEEE Geosci. Remote Sens. Lett. 4, 340–344 (2007). http://dx.doi.org/10.1109/LGRS.2007.895691

X. Zhu and P. Milanfar, “Image reconstruction from videos distorted by atmospheric turbulence,” Proc. SPIE 7543, 75430S (2010). http://dx.doi.org/10.1117/12.840127

X. Zhu and P. Milanfar, “Removing atmospheric turbulence via space-invariant deconvolution,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 157–170 (2013). http://dx.doi.org/10.1109/TPAMI.2012.82

R. C. Hardie and D. R. Droege, “Atmospheric turbulence correction for infrared video,” in Proc. of the Military Sensing Symposia (MSS), Passive Sensors (2009).

R. C. Hardie, D. R. Droege and K. M. Hardin, “Real-time atmospheric turbulence correction for complex imaging conditions,” in Proc. of the Military Sensing Symposia (MSS), Passive Sensors (2010).

R. C. Hardie, D. R. Droege and K. M. Hardin, “Real-time atmospheric turbulence with moving objects,” in Proc. of the Military Sensing Symposia (MSS), Passive Sensors (2011).

R. C. Hardie et al., “Real-time video processing for simultaneous atmospheric turbulence mitigation and super-resolution and its application to terrestrial and airborne infrared imaging,” in Proc. of the Military Sensing Symposia (MSS), Passive Sensors (2012).

D. R. Droege et al., “A real-time atmospheric turbulence mitigation and super-resolution solution for infrared imaging systems,” Proc. SPIE 8355, 83550R (2012). http://dx.doi.org/10.1117/12.920323

A. W. M. van Eekeren et al., “Patch-based local turbulence compensation in anisoplanatic conditions,” Proc. SPIE 8355, 83550T (2012). http://dx.doi.org/10.1117/12.918545

A. W. M. van Eekeren, J. Dijk and K. Schutte, “Turbulence mitigation methods and their evaluation,” Proc. SPIE 9249, 92490O (2014). http://dx.doi.org/10.1117/12.2067327

K. K. Halder, M. Tahtali and S. G. Anavatti, “Geometric correction of atmospheric turbulence-degraded video containing moving objects,” Opt. Express 23, 5091–5101 (2015). http://dx.doi.org/10.1364/OE.23.005091

F. Molina-Martel, R. Baena-Gallé and S. Gladysz, “Fast PSF estimation under anisoplanatic conditions,” Proc. SPIE 9641, 96410I (2015). http://dx.doi.org/10.1117/12.2194570

R. C. Hardie et al., “Simulation of anisoplanatic imaging through optical turbulence using numerical wave propagation,” Opt. Eng. (2017).

J. D. Schmidt, Numerical Simulation of Optical Wave Propagation with Examples in MATLAB, SPIE Press, Bellingham, Washington (2010).

F. G. Smith, The Infrared and Electro-Optical Systems Handbook: Volume 2, Atmospheric Propagation of Radiation, SPIE Press, Bellingham, Washington (1993).

J. W. Goodman, Introduction to Fourier Optics, 3rd ed., Roberts and Company Publishers, Greenwood Village, Colorado (2004).

M. I. Charnotskii, “Anisoplanatic short-exposure imaging in turbulence,” J. Opt. Soc. Am. A 10, 492–501 (1993). http://dx.doi.org/10.1364/JOSAA.10.000492

S. Basu, J. E. McCrae and S. T. Fiorino, “Estimation of the path-averaged atmospheric refractive index structure constant from time-lapse imagery,” Proc. SPIE 9465, 94650T (2015). http://dx.doi.org/10.1117/12.2177330

J. E. McCrae, S. Basu and S. T. Fiorino, “Estimation of atmospheric parameters from time-lapse imagery,” Proc. SPIE 9833, 983303 (2016). http://dx.doi.org/10.1117/12.2223986

L. Yaroslavsky et al., “Superresolution in turbulent videos: making profit from damage,” Opt. Lett. 32, 3038–3040 (2007). http://dx.doi.org/10.1364/OL.32.003038

L. P. Yaroslavsky et al., “Super-resolution of turbulent video: potentials and limitations,” Proc. SPIE 6812, 681205 (2008). http://dx.doi.org/10.1117/12.765580

B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Int. Joint Conf. on Artificial Intelligence (1981).

R. C. Hardie, K. J. Barnard and R. Ordonez, “Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging,” Opt. Express 19, 26208–26231 (2011). http://dx.doi.org/10.1364/OE.19.026208

R. C. Hardie and K. J. Barnard, “Fast super-resolution using an adaptive Wiener filter with robustness to local motion,” Opt. Express 20, 21053–21073 (2012). http://dx.doi.org/10.1364/OE.20.021053

Y.-W. Huang et al., “Survey on block matching motion estimation algorithms and architectures with new results,” J. VLSI Signal Process. Syst. 42, 297–320 (2006). http://dx.doi.org/10.1007/s11265-006-4190-4

R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., Prentice-Hall, Inc., Upper Saddle River, New Jersey (2006).

A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, 3rd ed., Prentice-Hall, Inc., Upper Saddle River, New Jersey (2010).
Biography

Russell C. Hardie is a full professor in the Department of Electrical and Computer Engineering at the University of Dayton, with a joint appointment in the Department of Electro-Optics. He received the University of Dayton's top university-wide teaching award, the 2006 Alumni Award in Teaching. He received the Rudolf Kingslake Medal and Prize from SPIE in 1998 for super-resolution research. He received the School of Engineering Award of Excellence in Teaching in 1999 and the first annual Professor of the Year Award from the student chapter of the IEEE in 2002.

Michael A. Rucci is a research engineer at the Air Force Research Laboratory, Wright-Patterson AFB, Ohio. His current research includes day/night passive imaging, turbulence modeling and simulation, and image processing. He received his BS and MS degrees in electrical engineering from the University of Dayton in 2012 and 2014, respectively.

Alexander J. Dapore is a senior image processing engineer at L-3 Cincinnati Electronics. He received his BSEE and MSEE degrees from the University of Illinois at Urbana–Champaign in 2008 and 2010, respectively. He has worked on research and development projects in many areas of digital image processing. His specific areas of interest are image restoration, image enhancement, object/threat detection and tracking, multiview computer vision, and the real-time implementation of digital image processing algorithms using general-purpose computing on graphics processing units.

Barry K. Karch is a principal research electronics engineer in the Multispectral Sensing and Detection Division, Sensors Directorate of the Air Force Research Laboratory, Wright-Patterson AFB, Ohio. He received his BS degree in electrical engineering in 1987, his MS degrees in electro-optics and electrical engineering in 1992 and 1994, respectively, and his PhD in electrical engineering in 2015, all from the University of Dayton, Dayton, Ohio. He has worked in the areas of EO/IR remote sensor system and processing development for 29 years.