Open Access
6 February 2017 Block matching and Wiener filtering approach to optical turbulence mitigation and its application to simulated and real imagery with quantitative error analysis
Russell C. Hardie, Michael A. Rucci, Alexander J. Dapore, Barry K. Karch
Abstract
We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.

1.

Introduction

When imaging at long ranges through the atmosphere, the acquired images are highly susceptible to degradations from atmospheric turbulence.1–3 Fluctuations in the index of refraction along the optical path, driven by temperature and pressure variations, give rise to spatially and temporally varying blur and warping. This is a well-researched area with respect to astronomical imaging.3 In astronomical imaging, with narrow fields of view, the degradation caused by the atmosphere can usually be modeled as isoplanatic. That is, the atmospheric effects are uniform across the image. This gives rise to warping that is a global image shift, and the blurring can be modeled with a spatially invariant point spread function (PSF). Wide field-of-view imaging, at long ranges through the atmosphere, generally leads to anisoplanatic imaging conditions. Here, the atmospheric PSF varies significantly across the field of view of the imaging sensor.

Adaptive optics has proven to be effective in treating the isoplanatic problem.4 However, for the anisoplanatic problem, mitigation methods are generally based on acquiring and digitally processing a sequence of short-exposure (SE) images.5 Short integration time means that warping during integration is minimized, reducing a major source of blurring. However, under anisoplanatic imaging conditions, there is still temporally and spatially varying warping and blurring to contend with. Note that with long-exposure (LE) images, turbulence-induced image warping gets temporally integrated. While this has the advantage of “averaging out” the geometric warping, to reveal the correct scene geometry, it also leads to high levels of blurring that may be difficult to treat effectively with image restoration.

One important class of turbulence mitigation algorithms is bispectral speckle imaging.6–16 This method seeks to recover the ideal image in the Fourier domain, by estimating the magnitude and phase spectrum separately. The magnitude spectrum is obtained with an inverse filter, or pseudoinverse filter, based on the LE optical transfer function (OTF). The phase is estimated using properties of the bispectrum.7,9,10,16

Another class of turbulence mitigation algorithms uses some form of dewarping, fusion, and then blind deconvolution.8,15–21 Other related methods can also be found in the literature.22–28 With most of these methods, a motion-compensated temporal average of video frames is computed first. The motion compensation, prior to temporal averaging, reduces the motion blurring that might otherwise be seen in LE imaging. In the case of a static scene, the true geometry can be revealed with a prototype image obtained with a sufficiently long temporal average (or LE). Input frames can be registered to the prototype to provide turbulent motion compensation. As we shall show, even global frame registration can be of benefit. Performing such registration with a dynamic scene, containing moving objects, presents additional challenges.24,26,29 The current paper limits its scope to static scenes. Fusion is often done next by simple temporal averaging. This reduces noise and averages the spatially and temporally varying speckle PSFs in the individual frames. The result is an image that appears to be blurred with a spatially invariant PSF (with less blurring than an LE PSF). A blind image restoration process is then used to jointly estimate the spatially invariant PSF and true image. Note that using blind deconvolution has its challenges. First, it can be very computationally demanding. Also, unless a significant amount of a priori knowledge is incorporated, the recovered PSF and image may not be accurate.30

Here, we present a block-matching and Wiener filtering (BMWF) approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We seek to leverage the rich theoretical work on atmospheric turbulence to aid in the design of a practical image restoration algorithm. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the current authors.31 These data provide objective truth and allow for a quantitative error analysis. The proposed turbulence mitigation method takes a sequence of SE frames of a static scene and outputs a single restored image. The images are globally registered to the temporal average and then reaveraged. This forms our prototype with the approximately correct geometry. A block-matching algorithm (BMA) is used to align the individual input frames to the prototype. We discuss how atmospheric statistics can help in setting the tuning parameters of the BMA. The BMA method here also uses a prefilter on the individual frames, so they better match the power spectrum of the prototype image for improved registration. The BMA registered frames are then averaged to generate a fused image. The final step is deconvolving the fused image using a Wiener filter.

An important aspect of the proposed method lies in how we model the degradation PSF. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the same estimated motion vectors used for restoration. We provide a detailed performance analysis that illustrates how the key tuning parameters impact the BMWF system performance. The proposed BMWF method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.

The remainder of this paper is organized as follows. In Sec. 2, we present our observation model. This includes key statistics and the OTF models. The proposed BMWF turbulence mitigation approach is described in Sec. 3. The efficacy of the BMWF turbulence mitigation, in comparison with some benchmark methods, is demonstrated in Sec. 4. Finally, we offer conclusions in Sec. 5.

2.

Optical Turbulence Modeling

2.1.

Atmospheric Turbulence Statistics

One of the most important statistics that can be derived from the widely used Kolmogorov turbulence model is the atmospheric coherence diameter (or Fried parameter).3,32 This is given as

Eq. (1)

r_0 = \left[ 0.423 \left( \frac{2\pi}{\lambda} \right)^2 \int_{z=0}^{z=L} C_n^2(z) \left( \frac{z}{L} \right)^{5/3} dz \right]^{-3/5},
where λ is the wavelength and C_n^2(z) is the refractive index structure parameter profile along the optical path. Note that this expression is for spherical wave propagation, and z is the distance from the source (i.e., z=0 at the source and z=L at the camera). As we will see, this parameter is central to the PSF model needed for deconvolution.
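As a numerical illustration of Eq. (1), the following Python sketch (our own code, not part of the paper) evaluates the integral with the trapezoidal rule under the simplifying assumption of a constant C_n^2 profile:

```python
import numpy as np

def fried_parameter(cn2, z, L, wavelength):
    """Spherical-wave Fried parameter r0 (m), Eq. (1), from a sampled Cn^2(z)
    profile with z = 0 at the source and z = L at the camera."""
    w = np.asarray(cn2) * (np.asarray(z) / L) ** (5.0 / 3.0)
    integral = np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(z))   # trapezoidal rule
    return (0.423 * (2.0 * np.pi / wavelength) ** 2 * integral) ** (-3.0 / 5.0)

# Constant-Cn^2 path matching the simulation geometry (L = 7 km, lambda = 0.525 um).
z = np.linspace(0.0, 7000.0, 1001)
r0 = fried_parameter(np.full_like(z, 1.00e-15), z, 7000.0, 0.525e-6)
```

For this constant 1.00×10^{−15} m^{−2/3} profile, the result is approximately 0.048 m, consistent with the corresponding r_0 value listed in Table 3.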

Another very salient statistic is the tilt variance for a point source. This is the angle of arrival variance of a point source due to turbulence. An expression for the one-axis tilt variance, for the spherical wave case, is given as33

Eq. (2)

\sigma_\varphi^2 = 2.91\, D^{-1/3} \int_{z=0}^{L} C_n^2(z) \left( \frac{z}{L} \right)^{5/3} dz,
where D is the aperture diameter and σ_φ^2 is measured in radians squared. Combining Eqs. (1) and (2) and converting the tilt variance into a spatial distance on the focal plane, we obtain the spatial-domain tilt standard deviation as

Eq. (3)

\sigma_r = 0.4175\, \lambda\, l\, r_0^{-5/6}\, D^{-1/6},
where l is the focal length and σ_r is measured in units of distance.
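Equation (3) can be checked against the simulation parameters used later in the paper (a sketch in Python; the optics and r_0 values below are taken from Tables 1 and 3, and the function name is ours):

```python
import numpy as np

def tilt_std_focal_plane(r0, wavelength, focal_length, D):
    """One-axis tilt standard deviation sigma_r on the focal plane (Eq. 3), in meters."""
    return 0.4175 * wavelength * focal_length * r0 ** (-5.0 / 6.0) * D ** (-1.0 / 6.0)

# Table 1 optics with r0 = 0.0478 m (the Cn^2 = 1.00e-15 m^(-2/3) case in Table 3).
sigma_r = tilt_std_focal_plane(0.0478, 0.525e-6, 1.2, 0.2034)
sigma_r_pixels = sigma_r / 1.5488e-6   # Nyquist pixel spacing delta_N (Table 1)
```

For this case the result is about 2.79 pixels of one-axis RMS tilt, in agreement with the theoretical value listed in Table 3.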

2.2.

Optical Transfer Functions

When imaging in atmospheric turbulence, the overall camera OTF can be modeled to include the atmospheric OTF and the diffraction OTF. This is given as

Eq. (4)

H_\alpha(\rho) = H_{atm,\alpha}(\rho)\, H_{dif}(\rho),
where \rho = \sqrt{u^2 + v^2} and u and v are the spatial frequencies in units of cycles per unit distance. The atmospheric OTF model typically used is given as3

Eq. (5)

H_{atm,\alpha}(\rho) = \exp\left\{ -3.44 \left( \frac{\lambda l \rho}{r_0} \right)^{5/3} \left[ 1 - \alpha \left( \frac{\lambda l \rho}{D} \right)^{1/3} \right] \right\}.
The parameter α relates to the level of motion blur from tilt variance. More will be said about this shortly. The diffraction-limited OTF for a circular exit pupil is given as34

Eq. (6)

H_{dif}(\rho) = \begin{cases} \frac{2}{\pi} \left[ \cos^{-1}\left( \frac{\rho}{\rho_c} \right) - \frac{\rho}{\rho_c} \sqrt{1 - \left( \frac{\rho}{\rho_c} \right)^2} \right] & \rho \le \rho_c \\ 0 & \text{otherwise}, \end{cases}
where \rho_c = 1/(\lambda f/\#) is the optical cutoff frequency and the f-number is f/\# = l/D. Let us define the LE transfer function, which includes diffraction, as

Eq. (7)

H_{LE}(\rho) = H_0(\rho) = H_{atm,0}(\rho)\, H_{dif}(\rho).
Similarly, the SE transfer function is given as

Eq. (8)

H_{SE}(\rho) = H_1(\rho) = H_{atm,1}(\rho)\, H_{dif}(\rho).
The above equation is the fully tilt-compensated and time-averaged transfer function.3 An alternative SE OTF is given by Charnotskii.35
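To make the OTF models concrete, here is a small Python sketch (our own illustration, not the authors' code) that evaluates Eqs. (5)–(8) for the Table 1 optics; the function names and the choice of a single r_0 value are our assumptions:

```python
import numpy as np

def H_dif(rho, rho_c):
    """Diffraction-limited OTF for a circular exit pupil (Eq. 6)."""
    x = np.clip(np.asarray(rho, dtype=float) / rho_c, 0.0, 1.0)
    H = (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x ** 2))
    return np.where(np.asarray(rho) <= rho_c, H, 0.0)

def H_atm(rho, r0, wavelength, focal_length, D, alpha):
    """Parametric atmospheric OTF (Eq. 5); alpha = 0 gives LE, alpha = 1 gives SE."""
    a = wavelength * focal_length * np.asarray(rho, dtype=float)
    return np.exp(-3.44 * (a / r0) ** (5.0 / 3.0)
                  * (1.0 - alpha * (a / D) ** (1.0 / 3.0)))

# Table 1 optics; rho is in cycles per meter on the focal plane.
D, l, lam, r0 = 0.2034, 1.2, 0.525e-6, 0.0478
rho_c = 1.0 / (lam * (l / D))              # optical cutoff, ~322.8 cyc/mm
rho = np.linspace(0.0, rho_c, 512)
H_LE = H_atm(rho, r0, lam, l, D, 0.0) * H_dif(rho, rho_c)   # Eq. (7)
H_SE = H_atm(rho, r0, lam, l, D, 1.0) * H_dif(rho, rho_c)   # Eq. (8)
```

As expected from the model, the SE transfer function dominates the LE transfer function at every spatial frequency up to the cutoff.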

With the two main transfer functions defined, we now highlight a very interesting and important relationship between them that comes from the original development of Eq. (5). That is, it can be shown that

Eq. (9)

H_{atm,\alpha}(\rho) = H_{atm,1}(\rho)\, G_\alpha(\rho),
where Gα(ρ) is a Gaussian, given as

Eq. (10)

G_\alpha(\rho) = \exp\left\{ -(1-\alpha)\, 3.44\, (\lambda l)^2\, r_0^{-5/3}\, D^{-1/3}\, \rho^2 \right\}.
This means that the atmospheric OTF from Eq. (5) can be expressed as the SE OTF, multiplied by the Gaussian, Gα(ρ), yielding

Eq. (11)

H_\alpha(\rho) = H_{atm,1}(\rho)\, G_\alpha(\rho)\, H_{dif}(\rho) = H_{SE}(\rho)\, G_\alpha(\rho).
In the spatial domain, the functions are also circularly symmetric, so we have

Eq. (12)

h_\alpha(r) = h_{SE}(r) * g_\alpha(r),
where r = \sqrt{x^2 + y^2} and * is the convolution operator.

Based on the Fourier transform properties of a Gaussian, the spatial-domain function resulting from the inverse Fourier transform g_\alpha(r) = FT^{-1}[G_\alpha(\rho)] is also Gaussian. This is given as

Eq. (13)

g_\alpha(r) = \frac{1}{2\pi \sigma_g^2(\alpha)} \exp\left\{ -\frac{r^2}{2\sigma_g^2(\alpha)} \right\},
where

Eq. (14)

\sigma_g^2(\alpha) = (1-\alpha) \left( 0.4175\, \lambda\, l\, r_0^{-5/6}\, D^{-1/6} \right)^2.
Comparing the above equation with the theoretical tilt standard deviation in Eq. (3), we see that

Eq. (15)

\sigma_g^2(\alpha) = (1-\alpha)\, \sigma_r^2.
Alternatively, the standard deviations can be expressed as

Eq. (16)

\sigma_g(\alpha) = \sqrt{1-\alpha}\, \sigma_r = \beta\, \sigma_r,
where \beta = \sqrt{1-\alpha}, or equivalently, \alpha = 1 - \beta^2.

Thus, Eq. (12) shows that the parametric atmospheric PSF, hα(r), is the SE PSF convolved by a Gaussian motion blur impulse response. When α=0 (or equivalently β=1), the Gaussian motion blur standard deviation is the theoretical tilt standard deviation in Eq. (3). That is, σg(α)=σr, and we get the LE PSF

Eq. (17)

h_{LE}(r) = h_{SE}(r) * g_0(r).
In the frequency domain, we have

Eq. (18)

H_{LE}(\rho) = H_{SE}(\rho)\, G_0(\rho).
When the motion blur standard deviation is zero (i.e., α=1 or equivalently β=0), Eq. (12) gives the SE PSF. For 0<α<1, Eq. (12) gives us the SE PSF convolved with a Gaussian motion blur somewhere between full tilt compensation and no tilt compensation. In the literature, it is typically the LE OTF (α=0 or β=1) that is used for image restoration. However, if some level of registration is applied to the SE images (even if only global image registration), prior to fusion and deconvolution, we show that better results can be achieved by tuning α to the level of tilt motion compensation. This gives us a powerful way to match the deconvolution step with the tilt correction processing step.
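The factorization in Eqs. (9)–(11) and the α/β relationship in Eq. (16) are easy to verify numerically. The following sketch (our own check, with an arbitrarily chosen α = 0.75) confirms that the directly evaluated H_atm,α matches the product H_atm,1·G_α to machine precision:

```python
import numpy as np

D, l, lam, r0, alpha = 0.2034, 1.2, 0.525e-6, 0.0478, 0.75
rho = np.linspace(1.0, 3.0e5, 256)        # cycles/m; avoid rho = 0

a = lam * l * rho
H_alpha = np.exp(-3.44 * (a / r0) ** (5 / 3) * (1 - alpha * (a / D) ** (1 / 3)))  # Eq. (5)
H_se    = np.exp(-3.44 * (a / r0) ** (5 / 3) * (1 - (a / D) ** (1 / 3)))          # alpha = 1
G_alpha = np.exp(-(1 - alpha) * 3.44 * (lam * l) ** 2 * r0 ** (-5 / 3)
                 * D ** (-1 / 3) * rho ** 2)                                      # Eq. (10)

factorization_error = np.max(np.abs(H_alpha - H_se * G_alpha))  # Eq. (9): ~0
beta = np.sqrt(1 - alpha)                                       # Eq. (16): 0.5 for alpha = 0.75
```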

Examples of atmospheric PSFs, hα(r), from Eq. (12) are shown in Fig. 1. The optical system parameters corresponding to these plots are from the simulated data used in Sec. 4. The specific parameters are listed in Table 1. Figure 1(a) is for Cn² = 0.25×10^{−15} m^{−2/3} (r0 = 0.1097 m) and Fig. 1(b) is for Cn² = 1.00×10^{−15} m^{−2/3} (r0 = 0.0478 m). Note how the choice of α has a very significant impact on the width of the PSF for both levels of turbulence. As α is reduced, the Gaussian blurring component smoothes out and widens the PSF. We seek to match the α (or equivalently β), used in our PSF model, to the level of tilt correction provided by the registration stage of processing.

Fig. 1

Overall system PSFs with different α for the optical system parameters in Table 1 with (a) Cn² = 0.25×10^{−15} m^{−2/3} (r0 = 0.1097 m) and (b) Cn² = 1.00×10^{−15} m^{−2/3} (r0 = 0.0478 m). Note that as α increases, the PSF narrows.


Table 1

Optical parameters for simulated data.

Parameter | Value
Aperture | D = 0.2034 m
Focal length | l = 1.2 m
f-number | f/# = 5.9
Wavelength | λ = 0.525 μm
Spatial cutoff frequency | ρc = 322.8410 cyc/mm
Object distance | L = 7 km
Nyquist pixel spacing (focal plane) | δN = 1.5488 μm

2.3.

Observation Model

Given the information in the preceding subsections, we are now ready to define the model we use to relate observed SE frames to a truth image. Note that the model is similar to that described earlier by Fraser et al.17 The observed frames are expressed in terms of a spatially varying blur operator and a spatially varying geometric warping operator. In particular, observed frame k is given as

Eq. (19)

f_k(x,y) = \tilde{s}_k(x,y)\{ \tilde{h}_k(x,y)[z(x,y)] \} + \eta_k(x,y),
where x, y are spatial coordinates, k is the temporal frame index, z(x,y) is the ideal image, and ηk(x,y) is an additive noise term. The geometric warping operator is defined such that

Eq. (20)

\langle \tilde{s}_k(x,y)[z(x,y)] \rangle = g_0(x,y) * z(x,y),
where \langle \cdot \rangle represents a temporal ensemble mean operator. The blurring operator is defined such that

Eq. (21)

\langle \tilde{h}_k(x,y)[z(x,y)] \rangle = h_{SE}(x,y) * z(x,y).
Using this model, note that the ensemble mean of the observed frames is given by

Eq. (22)

\langle f_k(x,y) \rangle = g_0(x,y) * h_{SE}(x,y) * z(x,y) = h_{LE}(x,y) * z(x,y).

Now, consider the case where perfect tilt correction is applied to the SE frames. Let this tilt correction operator be expressed as s˜k1(x,y)[·]. Applying this to Eq. (19) and comparing this to Eq. (21), we get

Eq. (23)

\langle \tilde{s}_k^{-1}(x,y)[f_k(x,y)] \rangle = \langle \tilde{h}_k(x,y)[z(x,y)] \rangle = h_{SE}(x,y) * z(x,y).
However, in practice, ideal tilt correction may not be possible. One reason for this is that BMA registration requires a finite size block for matching and the actual tilt warping varies continuously. Thus, any block-based estimate will tend to underestimate the true tilt for a given point, by virtue of the spatial averaging effect.36,37 Thus, we define a partial tilt correction operator as s˜k,α1(x,y)[·]. Applying this to the SE frames, and applying an ensemble mean, yields

Eq. (24)

\bar{f}(x,y) = \langle \tilde{s}_{k,\alpha}^{-1}(x,y)[f_k(x,y)] \rangle = g_\alpha(x,y) * h_{SE}(x,y) * z(x,y) = h_\alpha(x,y) * z(x,y).
This result gives the rationale for using hα(x,y) as the degradation blur model for fully or partially tilt-corrected imagery. The value of α can be selected based on the expected residual tilt variance after registration, σg²(α). In this context, the variable α, defined by Eq. (15), can be considered a registration tilt-variance reduction factor. Equivalently, the variable β, defined by Eq. (16), can be considered a residual RMS tilt scaling factor.

3.

Turbulence Mitigation Approach

3.1.

Block-Matching and Wiener Filtering Turbulence Mitigation

A block diagram representing the proposed BMWF turbulence mitigation algorithm is provided in Fig. 2. The input is a set of N SE frames fk(x,y), for k=1,2,,N. We assume that these frames are sampled such that they are free from aliasing. Treating turbulence and aliasing simultaneously has been explored in the literature25,26,38,39 but it is not addressed here.

Fig. 2

Proposed BMWF turbulence mitigation system block diagram.

OE_56_7_071503_f002.png

The input frames are buffered and averaged. Next, robust global translational registration is used to align the N frames to the average. A least-squares gradient-based registration algorithm is used. This method is based on Lucas and Kanade40 but includes the robust multiscale processing described by Hardie et al.41,42 The frames are reaveraged after this global alignment to produce the prototype image with the desired geometry. This step also gives us the opportunity to compensate for any camera platform motion. For ground-based systems, translations may be sufficient. For airborne applications, affine registration at this stage may be appropriate.41
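For illustration, the gradient-based translational step can be sketched as a single-level least-squares estimator (our own minimal version; the actual method of Lucas and Kanade40 with the robust multiscale processing of Hardie et al.41,42 adds pyramid levels and iteration, and the function name and synthetic test below are our assumptions):

```python
import numpy as np

def estimate_global_shift(frame, prototype):
    """Least-squares gradient-based estimate of the (dy, dx) translation that
    maps `prototype` onto `frame` (valid for small, subpixel-scale shifts)."""
    gy, gx = np.gradient(prototype.astype(float))
    gt = frame.astype(float) - prototype.astype(float)
    # Normal equations for minimizing || gx*dx + gy*dy + gt ||^2
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * gt), np.sum(gy * gt)])
    dx, dy = np.linalg.solve(A, b)
    return dy, dx

# Synthetic check: a smooth bump shifted by (dy, dx) = (0.2, 0.3) pixels.
y, x = np.mgrid[0:65, 0:65].astype(float)
prototype = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 50.0)
frame = np.exp(-((x - 32.3) ** 2 + (y - 32.2) ** 2) / 50.0)
dy, dx = estimate_global_shift(frame, prototype)
```

Because the estimator linearizes the image about the prototype, it recovers small subpixel shifts accurately; larger shifts require the multiscale extension.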

Next, the BMA43 is used to estimate the local motion vectors for each pixel within each frame. The images are then interpolated, based on the motion vectors, to match the geometry of the prototype. Note that there is a mismatch between the level of blurring in the raw frames and the prototype being matched. One of the features of our method is that we prefilter the raw frames, using the Gaussian tilt blur, g0.5(x,y) from Eq. (13). Note that this is only done for the purposes of the BMA, and we revert to the raw frames for subsequent processing.

As discussed in Sec. 2.3, the registration will not be ideal, and the accuracy of the registration is quantified by the parameter α (or equivalently β). The BMA registered frames are then expressed as s̃_{k,α}^{-1}(x,y)[f_k(x,y)], as shown in Fig. 2. Let us define the BMA block size as B×B pixels, and let the search window be S×S pixels in size (as defined by the position of the block centers). We use an exhaustive search within the search window, using the full rectangular blocks, and employ the mean absolute difference metric. We present results using a whole-pixel search and a subpixel search. The subpixel results are obtained by upsampling the images with bicubic interpolation. Because of its widespread use in image compression, much work has been done regarding performance analysis, speed enhancements, and hardware implementations of BMA.43 We leverage that work by incorporating the BMA here. The key parameters for BMA are B and S. If knowledge of the atmospheric coherence diameter, r0, is available, we can predict the amount of motion using Eq. (3). Exploiting this, we employ a search window that includes ±2 standard deviations, giving S = 2⌊2σr/δN⌋ + 1 pixels, where δN is the pixel spacing measured on the focal plane.
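The core BMA step can be sketched as follows (a minimal whole-pixel search with the MAD metric; the paper's g_0.5 prefiltering and bicubic subpixel upsampling are omitted, and the function name and synthetic check are our own):

```python
import numpy as np

def block_match(frame, prototype, cy, cx, B=15, S=11):
    """Exhaustive whole-pixel block-matching search. Finds the (dy, dx) offset
    whose BxB block in `frame` best matches (by mean absolute difference) the
    BxB block of `prototype` centered at (cy, cx); S is the search window size."""
    r = B // 2
    ref = prototype[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
    best_mad, best_mv = np.inf, (0, 0)
    for dy in range(-(S // 2), S // 2 + 1):
        for dx in range(-(S // 2), S // 2 + 1):
            cand = frame[cy + dy - r:cy + dy + r + 1,
                         cx + dx - r:cx + dx + r + 1].astype(float)
            mad = np.mean(np.abs(cand - ref))
            if mad < best_mad:
                best_mad, best_mv = mad, (dy, dx)
    return best_mv

# Synthetic check: a frame that is the prototype shifted by 2 rows, -1 column.
rng = np.random.default_rng(0)
prototype = rng.random((64, 64))
frame = np.roll(prototype, (2, -1), axis=(0, 1))
mv = block_match(frame, prototype, 32, 32)
```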

With regard to block size, larger blocks make the BMA less sensitive to noise and warping. However, with increased size, there tends to be increased underestimation of the true local motion from atmospheric tilt.36,37 Thus, a balance is required. The exact amount of underestimation will depend on the block size, the particular Cn²(z) profile, and the optical parameters.36,37 Notwithstanding this, we have found that a fixed block size of B=15 pixels is effective for the range of turbulence conditions used in the simulated imagery. Furthermore, our results show that the corresponding residual RMS tilt factor is approximately constant at β=0.1 in the simulated imagery.

The next step of the BMWF method is to simply average the registered frames, as shown in Fig. 2. This gives rise to the result in Eq. (24). This step is important for two main reasons. First, it reduces noise and reduces the impact of any BMA errors. Second, by averaging the spatially varying blurring, it allows us to accurately model the resulting blur as spatially invariant,17 as shown in Eq. (24). This justifies the use of a spatially invariant deconvolution step. The deconvolution step is implemented here using a Wiener filter. The frequency response of the Wiener filter44 is given as

Eq. (25)

H_W(u,v) = \frac{H_\alpha^*(u,v)}{|H_\alpha(u,v)|^2 + \Gamma},
where Γ represents a constant noise-to-signal (NSR) power spectral density ratio. The output, after applying the Wiener filter, can be expressed as

Eq. (26)

\hat{z}(x,y) = FT^{-1}[ H_W(u,v)\, \bar{F}(u,v) ],
where \bar{F}(u,v) = FT[\bar{f}(x,y)] and \bar{f}(x,y) is given by Eq. (24). Note that FT(·) and FT^{-1}(·) represent the Fourier and inverse Fourier transforms, respectively. In practice, we are using sampled images and the fast Fourier transform (FFT) for implementing Eq. (26). Since we are assuming Nyquist sampled images, the property of impulse invariance applies.45 The images are padded symmetrically to minimize ringing artifacts associated with the circular convolution that results from FFT products.
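A minimal sketch of this Wiener filtering step is given below (our own illustration; the pad size and function name are assumptions, and in practice the PSF would be h_α sampled at the pixel spacing δ_N):

```python
import numpy as np

def wiener_deconvolve(f_bar, psf, gamma, pad=32):
    """Wiener-filter deconvolution of the fused image `f_bar` (Eqs. 25-26).
    `psf` has its peak at index [0, 0] (wrap-around convention); `gamma` is the
    constant NSR. The image is padded symmetrically before the FFT to suppress
    circular-convolution ringing."""
    f_pad = np.pad(f_bar, pad, mode='symmetric')
    H = np.fft.fft2(psf, s=f_pad.shape)
    H_W = np.conj(H) / (np.abs(H) ** 2 + gamma)      # Eq. (25)
    z_hat = np.real(np.fft.ifft2(H_W * np.fft.fft2(f_pad)))
    return z_hat[pad:-pad, pad:-pad]

# Sanity check: with an impulse PSF and a small NSR, the filter is near-identity.
img = np.outer(np.hanning(64), np.hanning(64))
delta = np.zeros((8, 8)); delta[0, 0] = 1.0
out = wiener_deconvolve(img, delta, 1e-8)
```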

Examples of the atmospheric OTF, Hα(ρ), from Eq. (11), are shown in Fig. 3. The optical system parameters corresponding to these plots are listed in Table 1. Figure 3(a) is for Cn² = 0.25×10^{−15} m^{−2/3} (r0 = 0.1097 m), and Fig. 3(b) is for Cn² = 1.00×10^{−15} m^{−2/3} (r0 = 0.0478 m). Also shown in Fig. 3 are the degradation OTFs multiplied by the Wiener filter transfer function in Eq. (25) for Γ=0.0002 (the value used for the simulated and real data with 200 frames). Clearly, as α approaches 1 (equivalently β approaches 0), the degradation OTF is more favorable to high spatial frequencies. The signal will be above the noise floor out to a higher spatial frequency. Consequently, the Wiener filter is able to provide gain out to a higher spatial frequency, without being overwhelmed with noise. This greatly extends the effective restored system OTF. When the degradation OTF value gets below the noise floor, governed by Γ, the Wiener filter in Eq. (25) succumbs, as shown in Fig. 3. Note that with the illustrated NSR, the effective bandwidth of the sensor is nearly doubled, going from α=0 (no registration or tilt correction) to α=1 (full tilt correction). Matching the degradation model to the level of registration is essential to exploiting the full benefits of the registration.

Fig. 3

Overall system OTFs with different α for the optical system parameters in Table 1 with (a) Cn² = 0.25×10^{−15} m^{−2/3} (r0 = 0.1097 m) and (b) Cn² = 1.00×10^{−15} m^{−2/3} (r0 = 0.0478 m). The system OTFs multiplied by the Wiener filter frequency responses with Γ=0.0002 are also shown to illustrate the effective OTF with restoration. Note that as α increases, the OTF widens.


Another important thing to note from Fig. 3 is that the degradation OTF has no zeros up to the optical cutoff frequency. Thus, the blur degradation is theoretically invertible (barring noise and quantization). Given this, and the fact that we often can expect a low NSR because many frames are averaged, the computationally simple Wiener filter tends to give excellent performance. More complex deblurring methods may be warranted when additional ill-posed blurring functions and/or very high levels of noise are present. However, we observed negligible performance gains using more complex regularization-based image restoration methods in our experiments.

3.2.

Estimating the Atmospheric Coherence Diameter

To define the degradation OTF in Eq. (11) and the corresponding Wiener filter in Eq. (25), we require the parameter r0. This can be measured using a scintillometer. However, in most practical imaging scenarios, it will not be known. Estimating this parameter from observed imagery is an active area of research.30,36,37 In some applications, it may be possible to set this parameter manually, based on subjective evaluation of the restoration results.

Here, we propose a method for estimating r0 from the BMA motion vectors used in the BMWF algorithm. Based on Eq. (3), it is clear that r0 is directly related to the warping motion in the observed imagery. The BMA motion vectors can give us an estimate of the warping motion. However, note that it is important that we exclude any camera platform motion or within-scene motion when doing this. If the residual RMS tilt is βσr from Eq. (16), the reduction in tilt due to registration is (1−β)σr. Let the BMA single-axis RMS motion, converted from pixels to distance on the focal plane, be denoted as σBMA. Now we can estimate σr as σ̂r = σBMA/(1−β). Using this and Eq. (3), we obtain an estimate of the atmospheric coherence diameter as

Eq. (27)

\hat{r}_0 = \left[ \frac{0.4175\,(1-\beta)\,\lambda\, l}{\sigma_{BMA}\, D^{1/6}} \right]^{6/5}.
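Equation (27) can be sketched as a simple round-trip check (our own code; the values r_0 = 0.0478 m and β = 0.1 are taken from Tables 1 and 3, and the simulated σ_BMA is constructed so that the measured motion is 90% of the true tilt):

```python
import numpy as np

def estimate_r0(sigma_bma, beta, wavelength, focal_length, D):
    """Estimate the Fried parameter from the single-axis RMS of the BMA motion
    vectors (sigma_bma, in meters on the focal plane), per Eq. (27)."""
    return (0.4175 * (1.0 - beta) * wavelength * focal_length
            / (sigma_bma * D ** (1.0 / 6.0))) ** (6.0 / 5.0)

# Round trip: generate the theoretical tilt for r0 = 0.0478 m via Eq. (3),
# scale it by (1 - beta) to mimic BMA underestimation, then invert.
lam, l, D, r0_true, beta = 0.525e-6, 1.2, 0.2034, 0.0478, 0.1
sigma_r = 0.4175 * lam * l * r0_true ** (-5.0 / 6.0) * D ** (-1.0 / 6.0)  # Eq. (3)
r0_hat = estimate_r0((1.0 - beta) * sigma_r, beta, lam, l, D)
```

By construction, the estimate inverts Eq. (3) exactly, so r0_hat recovers the assumed r_0 value.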

4.

Experimental Results

In this section, we present a number of experimental results. These include results for both simulated and real SE image sequences. We also provide a comparison with state-of-the-art benchmark methods. One benchmark is the bispectral speckle imaging method.7,9,16 Our implementation uses apodization with 16×16 pixel tiles9 and incorporates local registration alignment with each tile, as this gave the best bispectrum method performance. Another benchmark is the method of Zhu and Milanfar.21 Our results for this method come from publicly available MATLAB® code, provided courtesy of the authors. They use a B-spline nonrigid image registration (NRIR) of SE frames. This is followed by temporal regression to produce what the authors refer to as a near-diffraction-limited (NDL) image. Zhu and Milanfar21 suggest that blind deconvolution be applied to the NDL image. However, blind deconvolution code is not provided by those authors. Here, we have exact knowledge of the diffraction PSF, and, therefore, we apply a parameter-optimized Wiener filter to deconvolve diffraction-only blur from the NDL image. We also compute the temporal average of the NRIR frames as an alternative (bypassing the NDL regression operation), as an additional comparison.

4.1.

Simulated Data

The simulated data are generated using an anisoplanatic simulation tool described by Hardie et al.31 The optical parameters used are listed in Table 1, and the simulation parameters are listed in Table 2. Five different levels of turbulence are simulated, and some statistical parameters for these scenarios are listed in Table 3. Additive independent Gaussian noise, with a standard deviation of ση=2 digital units, is added to each simulated frame. The metric we use to evaluate the simulated data results is peak signal-to-noise ratio (PSNR). Our first set of results is for N=200 temporally independent frames; then, we show results for N=30 frames.

Table 2

Simulation parameters used in generating simulated frames.31

Parameter | Value
Path length | L = 7 km
Propagation step | Δz = 700 m
Cropped screen samples | N = 256
Propagation screen width | X = 0.9699 m
Propagation sample spacing | Δx = 0.0038 m
Number of phase screens | N = 10 (9 nonzero)
Phase screen type | Modified von Kármán with subharmonics
Inner scale | l0 = 0.01 m
Outer scale | L0 = 300 m
Image size (object plane) | 2.3218 × 2.3218 m
Image size (pixels) | 257 × 257 pixels
Pixel skip | 4 pixels (65 × 65 PSF array)
Input image dynamic range | 256 digital units
Gaussian noise variance | 4 digital units

Table 3

Theoretical statistical parameters for the different levels of simulated atmospheric turbulence and related restoration parameters.

Parameter | Cn² = 0.10 | 0.25 | 0.50 | 1.00 | 1.50 (×10^{−15} m^{−2/3})
Isoplanatic angle θ0 (pixels) | 6.62 | 3.82 | 2.52 | 1.66 | 1.30
BMA search size S (pixels) | 3.00 | 5.00 | 7.00 | 11.00 | 13.00
BMA block size B (pixels) | 15.00 | 15.00 | 15.00 | 15.00 | 15.00
Theoretical RMS G-tilt σr (pixels) | 0.8831 | 1.3962 | 1.9746 | 2.7925 | 3.4201
Estimated RMS G-tilt σ̂r (pixels) | 0.8269 | 1.3781 | 1.9601 | 2.9022 | 3.5321
Theoretical r0 (m) | 0.1902 | 0.1097 | 0.0724 | 0.0478 | 0.0375
Estimated r̂0 (m) | 0.2058 | 0.1115 | 0.0730 | 0.0456 | 0.0360

4.1.1.

Results using 200 simulated frames

The PSNR results using 200 temporally independent frames are reported in Table 4. For the BMA, the search window size is set to S = 2⌊2σr/δN⌋ + 1 pixels. The block size is a constant B=15 pixels. We report results for both whole-pixel BMA and subpixel BMA in Table 4. We use the theoretical r0 for each sequence for our OTF model. For the Wiener filter, we also report two sets of results: one where the optimum Γ and β are found by search, and another where fixed parameters are employed. The fixed NSR is Γ = ση²/(100N), where N is the number of frames. The fixed residual tilt factors are: β=1.0 (α=0.00) for the Wiener filter applied to the raw frame average (i.e., LE PSF), β=0.5 (α=0.75) for the Wiener filter applied to the global registration average, and β=0.1 (α=0.99) for the Wiener filter applied to the BMA registered average. It is interesting to see in Table 4 how the PSNR increases by incorporating different levels of registration before averaging. As might be expected, the highest PSNR values are obtained with the subpixel BMA registration. It is also clear that there is a big boost in performance by adding the Wiener filter. The best results in Table 4 are generally from the subpixel BMA + average + Wiener filter.

Table 4

PSNR (dB) results using 200 frames of simulated data with ση=2.0.

Method | Cn² = 0.10 | 0.25 | 0.50 | 1.00 | 1.50 (×10^{−15} m^{−2/3})
Raw frames | 22.94 | 21.24 | 20.09 | 19.11 | 18.60
Average | 24.10 | 22.44 | 21.18 | 20.04 | 19.44
Global registration + average | 25.02 | 23.53 | 22.17 | 20.83 | 20.12
BMA + average | 25.46 | 24.36 | 23.06 | 21.57 | 20.74
BMA (sub) + average | 25.57 | 24.45 | 23.13 | 21.61 | 20.77
NDL image21 | 25.25 | 24.26 | 23.14 | 21.72 | 20.73
NRIR21 + average | 25.24 | 24.11 | 22.76 | 21.13 | 20.22
Average + Wiener (optimized) | 31.21 | 26.87 | 24.51 | 22.61 | 21.52
Average + Wiener (Γ=0.0002, β=1.0) | 30.49 | 26.60 | 24.27 | 22.44 | 21.31
Global registration + average + Wiener (optimized) | 33.94 | 30.76 | 27.68 | 24.94 | 23.55
Global registration + average + Wiener (Γ=0.0002, β=0.5) | 33.82 | 30.76 | 27.66 | 24.92 | 23.53
BMA + average + Wiener (optimized) | 34.25* | 32.96 | 31.33 | 28.41 | 25.98
BMA + average + Wiener (Γ=0.0002, β=0.1) | 34.19 | 32.91 | 31.31 | 28.41 | 25.97
BMA (sub) + average + Wiener (optimized) | 34.12 | 33.22* | 31.69* | 28.74* | 26.16*
BMA (sub) + average + Wiener (Γ=0.0002, β=0.1) | 34.12 | 33.22* | 31.69* | 28.72 | 26.12
NDL21 + Wiener-diffraction (optimized) | 29.51 | 26.88 | 24.51 | 22.26 | 20.99
NRIR21 + average + Wiener-diffraction (optimized) | 30.15 | 27.12 | 24.30 | 21.72 | 20.54
Bispectral speckle imaging7,9,16 | 30.34 | 27.26 | 24.89 | 22.24 | 20.81

The entries marked with an asterisk (*) represent the highest PSNR for that level of turbulence.

An analysis of the Wiener filter performance, as a function of Γ and β, is provided in Fig. 4 for N=200 and Cn² = 1.00×10^{−15} m^{−2/3}. Figure 4(a) shows the PSNR surface for the Wiener filter applied to the BMA registered frame average. Note that the optimum β is near 0.1 and the optimum Γ is near 0.0002. This suggests that the BMA registration is about 90% effective in eliminating the RMS tilt (and 10% remains). A similar surface plot is provided in Fig. 4(b) for the globally registered frame average (i.e., no BMA). Here, the optimum β is near 0.5, suggesting that the global registration is about 50% effective in eliminating the RMS tilt. Finally, Fig. 4(c) is for no registration at all. Here, the optimum β is approaching 1.0, as would be expected for an LE image. This analysis shows that the β parameter should be matched to the level of tilt correction provided by the registration.

Fig. 4

Wiener filter parameter optimization using 200 simulated frames with Cn² = 1.00×10^{−15} m^{−2/3}. Wiener filter operating on: (a) the BMA registered frame average, (b) globally registered frame average, and (c) unregistered frame average. The highest PSNR is marked with a red “x.”


An analysis of the BMA parameters is shown in Fig. 5 for N=200 and Cn² = 1.00×10^{−15} m^{−2/3}. In particular, this plot shows the PSNR values as a function of B and S. Here, one can see that the optimum block size is near B=15, and the optimum search window size does not increase after S=11. It is clear that small block sizes give much lower PSNRs. This is likely due to an insufficient amount of information for accurate matching, given the atmospheric degradations. Also, one can see that larger search windows generally do not hurt performance, but they do add to the computational load.

Fig. 5

BMA parameter optimization using 200 simulated frames with Cn2=1.00×10−15 m−2/3.


Figure 6 shows the system PSNR as a function of the number of input frames for Cn2=1.00×10−15 m−2/3. Performance increases dramatically for the first 30 frames or so and then becomes more incremental. However, additional frames continue to improve performance. Note that the curve is not monotonically increasing. The drops are likely due to the introduction of frames with large shifts, relative to the truth image.
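The diminishing returns with N are consistent with simple temporal noise averaging: with ideal registration, averaging N frames cuts the noise variance by a factor of N, so the noise-limited PSNR grows only as 10 log10 N. A toy check (illustrative only; no turbulence, pure additive noise):

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    return 10.0 * np.log10(peak ** 2 / np.mean((ref - img) ** 2))

rng = np.random.default_rng(2)
truth = np.full((128, 128), 0.5)
frames = truth + 0.05 * rng.standard_normal((100, 128, 128))

p1 = psnr(truth, frames[0])            # single frame
p100 = psnr(truth, frames.mean(axis=0))  # 100-frame average
print(p1, p100)  # gain close to 10*log10(100) = 20 dB
```

The logarithmic growth explains the steep early gains and the incremental improvement beyond about 30 frames; in the real system, residual registration error also caps the achievable gain.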

Fig. 6

Restoration performance versus the number of input frames for Cn2=1.00×10−15 m−2/3.


Estimation of the atmospheric coherence diameter is illustrated in Fig. 7. The continuous curve shows the relationship from Eq. (3). The five simulation turbulence levels are shown with blue circles. The red squares show the estimated parameters from Eq. (27), using the BMA motion vectors with the parameters in Table 3, and β=0.1. This result appears to show a promising level of agreement between the estimates and true r0 values.
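Eqs. (3) and (27) are not reproduced in this excerpt. As a rough illustration of the idea, the sketch below assumes the two-axis Z-tilt variance model σr² = 0.182 (D/r0)^(5/3) (λ/D)² and assumes registration removes the fraction α = 1 − β² of the tilt variance; under these assumptions it reproduces the Table 7 values for the real sensor (r̂0 ≈ 26 mm from σ̂r = 1.61 pixels at β = 0.4). The function name and details are hypothetical.

```python
import numpy as np

def estimate_r0(motion_px, beta, D, lam, focal_len, pitch):
    """Sketch of r0 estimation from BMA motion vectors: convert the
    measured RMS motion to an angle, back out the full (uncorrected)
    RMS tilt using the assumed residual-tilt factor beta, then invert
    an assumed tilt-variance model."""
    m = np.asarray(motion_px, dtype=float)
    # Two-axis RMS motion in pixels, converted to radians on the sky.
    sigma_meas = np.sqrt(np.mean(np.sum(m ** 2, axis=1))) * pitch / focal_len
    # Registration removes the fraction alpha = 1 - beta^2 of the tilt
    # variance, so the measured RMS is sqrt(1 - beta^2) times the full tilt.
    sigma_r = sigma_meas / np.sqrt(1.0 - beta ** 2)
    # Invert the assumed model sigma_r^2 = 0.182 (D/r0)^(5/3) (lam/D)^2.
    return D * (0.182 * (lam / D) ** 2 / sigma_r ** 2) ** (3.0 / 5.0)

# Demo with the Table 7 sensor: a stand-in "motion vector" whose RMS equals
# sqrt(1 - 0.4^2) * 1.61 px, the measured residual implied by sigma_hat_r.
D, lam, l, pitch = 57.15e-3, 0.785e-6, 0.926, 6.5e-6
mv = [(np.sqrt(1 - 0.4 ** 2) * 1.61, 0.0)]
print(estimate_r0(mv, 0.4, D, lam, l, pitch))  # ≈ 0.026 m, matching Table 7
```

In practice the RMS would be taken over all BMA vectors from all frames; the key point is that one assumed parameter (β) turns measured motion statistics into an r0 estimate.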

Fig. 7

Simulation results for the atmospheric coherence diameter (Fried parameter) estimation from the BMA motion vectors using Eq. (27). Here, the N=200 simulated frames are used at the five Cn2 levels, with the BMA parameters of Table 3 and β=0.1.


Let us now turn our attention to image results. The truth image is shown in Fig. 8. Several output images, formed using N=200 and Cn2=0.25×10−15 m−2/3, are shown in Fig. 9. Figure 9(a) shows a single raw frame. Figure 9(b) shows the temporal frame average with no registration. The subpixel BMA registered frame average is shown in Fig. 9(c). Finally, the subpixel BMA + average + Wiener filter output is shown in Fig. 9(d). Here, the fixed-parameter Wiener filter is used. Note that the temporal average in Fig. 9(b) is rather blurry, as it is effectively equivalent to the true image corrupted with the LE PSF. The BMA registered average has corrected geometry and a blur level that is comparable to that of the observed SE frames. We see that, by matching the PSF to the BMA registered average, excellent results are possible, as shown in Fig. 9(d).

Fig. 8

Truth image used for simulated image sequence.


Fig. 9

Restoration results using N=200 frames with Cn2=0.25×10−15 m−2/3. (a) Raw frame 1, (b) raw frame average (no registration), (c) BMA registered frame average, and (d) BMA (sub) + average + Wiener filter output. Video 1 (MOV, 824 KB [URL: http://dx.doi.org/10.1117/1.OE.56.7.071503.1]) shows the raw frames on the left and BMA on the right.


A similar set of results is shown in Fig. 10. These images are the same as in Fig. 9, except that the turbulence is increased to Cn2=1.00×10−15 m−2/3. The raw SE frame is noticeably more corrupted than in Fig. 9. Also, the blurring in Figs. 10(b) and 10(c) is far more pronounced than in the corresponding images in Fig. 9. However, despite the increased turbulence, the subpixel BMA + average + Wiener output in Fig. 10(d) maintains much of the original detail. This is a consequence of the very high signal-to-noise ratio, by virtue of the large number of input frames, and of an effective match between the PSF model and the blur in the BMA registered average.

Fig. 10

Restoration results using N=200 frames with Cn2=1.00×10−15 m−2/3. (a) Raw frame 1, (b) raw frame average (no registration), (c) BMA registered frame average, and (d) BMA (sub) + average + Wiener filter output. Video 2 (MOV, 816 KB [URL: http://dx.doi.org/10.1117/1.OE.56.7.071503.2]) shows the raw frames on the left and BMA on the right.


4.1.2.

Results using 30 simulated frames

The next set of results is for the restoration methods using N=30 input frames. The quantitative PSNR results are shown in Table 5. The results are similar to those in Table 4, but as expected, with fewer frames the PSNR values drop somewhat. The best results in Table 5 are for the subpixel BMA + average + Wiener filter, and these results are significantly better than those of the benchmark methods.

Table 5

PSNR (dB) results using 30 frames of simulated data with ση=2.0.

Cn2 (×10−15 m−2/3)

Method                                                       0.10    0.25    0.50    1.00    1.50
Raw frames                                                   22.87   21.17   20.04   19.07   18.56
Average                                                      24.03   22.37   21.14   20.01   19.41
Global registration + average                                24.99   23.50   22.14   20.80   20.09
BMA + average                                                25.43   24.34   23.03   21.54   20.71
BMA (sub) + average                                          25.55   24.43   23.09   21.58   20.73
NDL image21                                                  25.23   24.24   23.13   21.60   20.62
NRIR21 + average                                             25.22   24.07   22.72   21.10   20.18
Average + Wiener (optimized)                                 29.07   25.52   23.59   22.03   21.14
Average + Wiener (Γ=0.0013, β=1.0)                           28.91   25.36   23.45   21.99   21.12
Global registration + average + Wiener (optimized)           31.84   28.61   25.98   23.76   22.63
Global registration + average + Wiener (Γ=0.0013, β=0.5)     31.75   28.61   25.98   23.76   22.62
BMA + average + Wiener (optimized)                           32.28   30.76   28.94   26.23   24.31
BMA + average + Wiener (Γ=0.0013, β=0.1)                     32.28   30.75   28.94   26.22   24.29
BMA (sub) + average + Wiener (optimized)                     32.55   31.10   29.24   26.44   24.42
BMA (sub) + average + Wiener (Γ=0.0013, β=0.1)               32.54   31.10   29.24   26.41   24.39
NDL21 + Wiener-diffraction (optimized)                       29.25   26.69   24.44   22.09   20.86
NRIR21 + average + Wiener-diffraction (optimized)            29.72   26.82   24.14   21.64   20.48
Bispectral speckle imaging7,9,16                             29.50   26.63   24.29   21.87   20.55

The bold entries represent the highest PSNR for that level of turbulence.

To allow for a subjective comparison of the proposed method and the benchmark methods, output images from several methods are shown in Fig. 11 for the N=30 frame input, with Cn2=1.00×10−15 m−2/3. Figure 11(a) shows the temporal average, followed by the Wiener filter, using the LE PSF model. Figure 11(b) shows the NRIR + NDL image from Zhu and Milanfar,21 followed by the Wiener filter using the diffraction-only PSF model. Figure 11(c) shows the bispectral speckle image output.7,9,16 Finally, Fig. 11(d) shows the BMA + average + Wiener filter output, with subpixel BMA and the fixed-parameter Wiener filter. The result in Fig. 11(a) is limited because no tilt correction is used and the LE PSF is used in the Wiener filter. The NRIR + NDL + Wiener filter image in Fig. 11(b) provides improved results, but some areas remain highly blurred and there appear to be some artifacts at the edges. The bispectrum output in Fig. 11(c) also looks better than the LE restoration in (a) but is fundamentally limited by its use of the LE PSF model in recovering the magnitude frequency spectrum. The bispectrum method also tends to suffer from tiling artifacts at high levels of turbulence, as can be seen in Fig. 11(c). Processing without tiles eliminates these artifacts but lowers the quantitative performance (hence the use of tiles here). The subpixel BMA + average + Wiener filter output in Fig. 11(d) appears to have the best overall detail, with no major artifacts. This is supported by the quantitative analysis in Table 5.

Fig. 11

Restoration results using 30 frames with Cn2=1.00×10−15 m−2/3. (a) Raw frame average (no registration) + Wiener filter, (b) NRIR + NDL21 + Wiener (diffraction), (c) bispectral speckle imaging method,7,9,16 and (d) BMA (sub) + average + Wiener filter output.


The processing time for the various algorithms and their components is provided in Table 6. Note that the proposed method has a significantly shorter run time than the benchmark methods using our MATLAB® implementations. However, run times with other implementations may differ. For the bispectral imaging method, processing time can be reduced by reducing the number of tiles and eliminating tile-based registration. Furthermore, hardware acceleration can be employed to speed up the multidimensional FFTs used with this method.

Table 6

Algorithm run times for processing 30 simulated 257×257 pixel frames to produce a single output frame. Processing was done with MATLAB® 2016a using a PC with an Intel(R) Xeon(R) CPU E5-2620 v3 at 2.40 GHz, 16 GB RAM, running Windows 10. For the BMA method, S=11, B=15, and 3× subpixel BMA are used.

Algorithm/component             Run time (s)
BMA + average + Wiener          9.60
BMA (sub) + average + Wiener    203.29
NRIR + NDL21 + Wiener           3125.26
Bispectrum7,9,16                613.65
Global registration             1.03
Whole pixel BMA                 8.56
Subpixel BMA                    202.25
NRIR21                          3090.25
Average image fusion            0.0012
NDL image fusion21              35.00
Wiener filter                   0.0125

4.2.

Real Data

Our final set of experimental results uses a real-image sequence acquired from a tower, viewing a truck and an engineering resolution target at a range of 5 km. The resolution target is made up of a sequence of vertical and horizontal three-line groups. The five large groups on the right side have bars with the following widths: 7.00, 6.24, 5.56, 4.95, and 4.91 cm. The optical parameters for this sensor are listed in Table 7. The sensor sampling is very close to Nyquist, so the Wiener filter is evaluated and implemented at the pixel pitch of the sensor (i.e., no resampling of the imagery is performed). A scintillometer is used to provide an estimate of r0, as shown in Table 7. This value has been confirmed by analysis of an edge target, imaged within the larger field of view of the camera. Assuming a constant Cn2(z) profile, the isoplanatic angle, when converted to pixels, is only 0.26 pixels. This gives rise to warping that is highly uncorrelated at a small scale, which makes BMA registration somewhat less effective than it was for the simulated data. For this reason, we have chosen to use a residual RMS tilt factor of β=0.4 (compared with β=0.1 for the simulated data). An estimate of the atmospheric coherence diameter using Eq. (27), for β=0.4, is shown in Table 7.

Table 7

Optical parameters for the real sensor data.

Parameter                                         Value
Aperture                                          D = 57.150 mm
Focal length                                      l = 926 mm
f-number                                          f/# = 16.203
Wavelength                                        λ = 0.785 μm
Spatial cutoff frequency                          ρc = 78.620 cyc/mm
Object distance                                   L = 5 km
Nyquist pixel spacing (focal plane)               δN = 6.36 μm
Detector pitch (focal plane)                      δs = 6.50 μm
Undersampling                                     1.022
Scintillometer path-averaged Cn2(z)               Cn2 = 7.44×10−15 m−2/3
Isoplanatic angle (assuming constant Cn2(z))      θ0 = 0.26 pixels
Scintillometer Fried parameter                    r0 = 28.4 mm
Estimated Fried parameter (for β=0.4)             r̂0 = 26.0 mm
Theoretical RMS tilt for scintillometer r0        σr = 1.46 pixels
BMA estimated RMS tilt (for β=0.4)                σ̂r = 1.61 pixels
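The derived entries in Table 7 follow from the primary optical parameters via standard incoherent-diffraction and Nyquist-sampling relations; a quick consistency check (values taken from the table):

```python
# Sanity check of the derived quantities in Table 7 from the primary optics.
D = 57.150e-3        # aperture (m)
l = 926e-3           # focal length (m)
lam = 0.785e-6       # wavelength (m)
pitch = 6.50e-6      # detector pitch (m)

f_num = l / D                    # f-number: ≈ 16.203
rho_c = 1.0 / (lam * f_num)      # incoherent diffraction cutoff: ≈ 78.62 cyc/mm
delta_N = 1.0 / (2.0 * rho_c)    # Nyquist pixel spacing: ≈ 6.36 um
undersampling = pitch / delta_N  # ≈ 1.022

print(f_num, rho_c * 1e-3, delta_N * 1e6, undersampling)
```

The undersampling factor of 1.022 is what justifies running the Wiener filter directly at the native pixel pitch with no resampling.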

The image results using the real data are shown in Fig. 12. Figure 12(a) shows raw frame 1. The NRIR + NDL21 + Wiener filter output using N=30 input frames is shown in Fig. 12(b). The bispectrum output7,9,16 is shown in Fig. 12(c), also for N=30 and using the scintillometer r0. The 30 frame average + Wiener filter using the LE PSF with scintillometer r0 is shown in Fig. 12(d). The subpixel BMA + average + Wiener output is shown for N=30 and N=200 in Figs. 12(e) and 12(f), respectively. Here, we use the r^0 estimated from the BMA, with β=0.4 and Γ=0.0002. Note that the results obtained using the scintillometer r0 are very similar. The BMWF results appear to provide the best overall subjective quality and recover the resolution target lines notably better than the benchmark methods. We attribute this to a reduction in tilt blurring, by means of the BMA registration, and by the proper matching of the PSF model to the residual tilt blurring.

Fig. 12

Restoration results using real-image sequence. (a) First raw frame, (b) 30 frame NRIR + NDL21 + Wiener (diffraction) output, (c) 30 frame bispectral speckle imaging output,7,9,16 (d) 30 frame average + Wiener filter output, (e) 30 frame BMA (sub) + average + Wiener filter output, and (f) 200 frame BMA (sub) + average + Wiener filter output. Video 3 (MOV, 507 KB [URL: http://dx.doi.org/10.1117/1.OE.56.7.071503.3]) shows the raw frames on the top and BMA on the bottom.


5.

Conclusions

We have presented a block-matching and Wiener filter-based approach to optical turbulence mitigation. In addition to the restoration method, we have also presented a method for estimating the atmospheric coherence diameter from the BMA motion vectors. We demonstrate the efficacy of this method quantitatively, using simulated data from a simulation tool developed by one of the authors. Results using real data are also provided for evaluation. The proposed restoration method utilizes a parametric OTF model for atmospheric turbulence and diffraction that incorporates the level of tilt correction provided by the registration step. By matching the PSF model to the level of registration, improved results are possible, as shown in Fig. 4. For the BMA component of our algorithm, we present a few innovations. For example, we use a search window size determined by the theoretical RMS tilt, when r0 is available. The BMA also uses a prefilter on the raw frames, so they better match the prototype in spatial frequency content. Compared with benchmark methods, the proposed method provides the highest PSNR restorations in our study, as shown in Tables 4 and 5.

We quantify the level of registration tilt correction by what we term the residual RMS tilt factor, β, or equivalently, the tilt variance reduction factor, α = 1 − β². Recall that β is defined such that the residual RMS tilt, after registration, is βσr, where σr is the theoretical uncorrected RMS tilt given in Eq. (3). Given α, r0, and the optical system parameters, the degradation OTF model is given by Eq. (11) and the Wiener filter by Eq. (25). We have demonstrated that α can have a significant impact on the degradation OTF and the corresponding restored image OTF, as shown in Figs. 1 and 3, respectively.

In cases where r0 is known, it is possible to infer β from Eq. (27) using the BMA motion vectors. If r0 is not known, it can be estimated from Eq. (27) using an assumed β and the BMA motion vectors. Thus, a complete restoration can be achieved with only one assumed parameter, β (or α). Note that this parameter is linked to the BMA block size B, the camera parameters, and the Cn2(z) profile. We achieved excellent results using a constant block size of B=15 pixels, with β=0.1 for the simulated data and β=0.4 for the real data studied here. In practice, it may be possible to perform a search over β and evaluate the results subjectively or by some other metric.

Acknowledgments

The authors would like to thank Dr. Doug Droege at L-3 Communications Cincinnati Electronics for providing support for this project. We would like to also thank Matthew D. Howard at AFRL for assisting with data collection. This work has been supported in part by funding from L-3 Communications Cincinnati Electronics and under AFRL Award Nos. FA8650-10-2-7028 and FA9550-14-1-0244. It has been approved for public release (PA Approval # 88ABW-2016-4934).

References

1. R. E. Hufnagel and N. R. Stanley, “Modulation transfer function associated with image transmission through turbulent media,” J. Opt. Soc. Am. 54, 52–61 (1964). http://dx.doi.org/10.1364/JOSA.54.000052

2. D. L. Fried, “Optical resolution through a randomly inhomogeneous medium for very long and very short exposures,” J. Opt. Soc. Am. 56, 1372 (1966). http://dx.doi.org/10.1364/JOSA.56.001372

3. M. Roggemann and B. Welsh, Imaging Through Turbulence, CRC Press, Boca Raton, Florida (1996).

4. R. Tyson, Principles of Adaptive Optics, Academic Press, Boston, Massachusetts (1998).

5. A. W. M. van Eekeren et al., “Turbulence compensation: an overview,” Proc. SPIE 8355, 83550Q (2012). http://dx.doi.org/10.1117/12.918544

6. T. W. Lawrence et al., “Experimental validation of extended image reconstruction using bispectral speckle interferometry,” Proc. SPIE 1237, 522–537 (1990). http://dx.doi.org/10.1117/12.19323

7. T. W. Lawrence et al., “Extended-image reconstruction through horizontal path turbulence using bispectral speckle interferometry,” Opt. Eng. 31(3), 627–636 (1992). http://dx.doi.org/10.1117/12.56083

8. C. L. Matson et al., “Multiframe blind deconvolution and bispectrum processing of atmospherically degraded data: a comparison,” Proc. SPIE 4792, 55–66 (2002). http://dx.doi.org/10.1117/12.451796

9. C. J. Carrano, “Speckle imaging over horizontal paths,” Proc. SPIE 4825, 109–120 (2002). http://dx.doi.org/10.1117/12.453519

10. C. J. Carrano, “Anisoplanatic performance of horizontal-path speckle imaging,” Proc. SPIE 5162, 14–27 (2003). http://dx.doi.org/10.1117/12.508082

11. J. P. Bos and M. C. Roggemann, “Mean squared error performance of speckle-imaging using the bispectrum in horizontal imaging applications,” Proc. SPIE 8056, 805603 (2011). http://dx.doi.org/10.1117/12.884093

12. J. P. Bos and M. C. Roggemann, “The effect of free parameter estimates on the reconstruction of turbulence corrupted images using the bispectrum,” Proc. SPIE 8161, 816105 (2011). http://dx.doi.org/10.1117/12.893859

13. J. P. Bos and M. C. Roggemann, “Robustness of speckle-imaging techniques applied to horizontal imaging scenarios,” Opt. Eng. 51(8), 083201 (2012). http://dx.doi.org/10.1117/1.OE.51.8.083201

14. J. P. Bos and M. C. Roggemann, “Blind image quality metrics for optimal speckle image reconstruction in horizontal imaging scenarios,” Opt. Eng. 51(10), 107003 (2012). http://dx.doi.org/10.1117/1.OE.51.10.107003

15. G. E. Archer, J. P. Bos and M. C. Roggemann, “Reconstruction of long horizontal-path images under anisoplanatic conditions using multiframe blind deconvolution,” Opt. Eng. 52(8), 083108 (2013). http://dx.doi.org/10.1117/1.OE.52.8.083108

16. G. E. Archer, J. P. Bos and M. C. Roggemann, “Comparison of bispectrum, multiframe blind deconvolution and hybrid bispectrum-multiframe blind deconvolution image reconstruction techniques for anisoplanatic, long horizontal-path imaging,” Opt. Eng. 53(4), 043109 (2014). http://dx.doi.org/10.1117/1.OE.53.4.043109

17. D. Fraser, G. Thorpe and A. Lambert, “Atmospheric turbulence visualization with wide-area motion-blur restoration,” J. Opt. Soc. Am. A 16, 1751–1758 (1999). http://dx.doi.org/10.1364/JOSAA.16.001751

18. D. H. Frakes, J. W. Monaco and M. J. T. Smith, “Suppression of atmospheric turbulence in video using an adaptive control grid interpolation approach,” in IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP 2001), 1881–1884 (2001).

19. D. Li, R. M. Mersereau and S. Simske, “Atmospheric turbulence-degraded image restoration using principal components analysis,” IEEE Geosci. Remote Sens. Lett. 4, 340–344 (2007). http://dx.doi.org/10.1109/LGRS.2007.895691

20. X. Zhu and P. Milanfar, “Image reconstruction from videos distorted by atmospheric turbulence,” Proc. SPIE 7543, 75430S (2010). http://dx.doi.org/10.1117/12.840127

21. X. Zhu and P. Milanfar, “Removing atmospheric turbulence via space-invariant deconvolution,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 157–170 (2013). http://dx.doi.org/10.1109/TPAMI.2012.82

22. R. C. Hardie and D. R. Droege, “Atmospheric turbulence correction for infrared video,” in Proc. of the Military Sensing Symposia (MSS), Passive Sensors (2009).

23. R. C. Hardie, D. R. Droege and K. M. Hardin, “Real-time atmospheric turbulence correction for complex imaging conditions,” in Proc. of the Military Sensing Symposia (MSS), Passive Sensors (2010).

24. R. C. Hardie, D. R. Droege and K. M. Hardin, “Real-time atmospheric turbulence with moving objects,” in Proc. of the Military Sensing Symposia (MSS), Passive Sensors (2011).

25. R. C. Hardie et al., “Real-time video processing for simultaneous atmospheric turbulence mitigation and super-resolution and its application to terrestrial and airborne infrared imaging,” in Proc. of the Military Sensing Symposia (MSS), Passive Sensors (2012).

26. D. R. Droege et al., “A real-time atmospheric turbulence mitigation and super-resolution solution for infrared imaging systems,” Proc. SPIE 8355, 83550R (2012). http://dx.doi.org/10.1117/12.920323

27. A. W. M. van Eekeren et al., “Patch-based local turbulence compensation in anisoplanatic conditions,” Proc. SPIE 8355, 83550T (2012). http://dx.doi.org/10.1117/12.918545

28. A. W. M. van Eekeren, J. Dijk and K. Schutte, “Turbulence mitigation methods and their evaluation,” Proc. SPIE 9249, 92490O (2014). http://dx.doi.org/10.1117/12.2067327

29. K. K. Halder, M. Tahtali and S. G. Anavatti, “Geometric correction of atmospheric turbulence-degraded video containing moving objects,” Opt. Express 23, 5091–5101 (2015). http://dx.doi.org/10.1364/OE.23.005091

30. F. Molina-Martel, R. Baena-Gallé and S. Gladysz, “Fast PSF estimation under anisoplanatic conditions,” Proc. SPIE 9641, 96410I (2015). http://dx.doi.org/10.1117/12.2194570

31. R. C. Hardie et al., “Simulation of anisoplanatic imaging through optical turbulence using numerical wave propagation,” Opt. Eng. (2017).

32. J. D. Schmidt, Numerical Simulation of Optical Wave Propagation with Examples in MATLAB, SPIE Press, Bellingham, Washington (2010).

33. F. G. Smith, The Infrared and Electro-Optical Systems Handbook, Volume 2: Atmospheric Propagation of Radiation, SPIE Press, Bellingham, Washington (1993).

34. J. W. Goodman, Introduction to Fourier Optics, 3rd ed., Roberts and Company Publishers, Greenwood Village, Colorado (2004).

35. M. I. Charnotskii, “Anisoplanatic short-exposure imaging in turbulence,” J. Opt. Soc. Am. A 10, 492–501 (1993). http://dx.doi.org/10.1364/JOSAA.10.000492

36. S. Basu, J. E. McCrae and S. T. Fiorino, “Estimation of the path-averaged atmospheric refractive index structure constant from time-lapse imagery,” Proc. SPIE 9465, 94650T (2015). http://dx.doi.org/10.1117/12.2177330

37. J. E. McCrae, S. Basu and S. T. Fiorino, “Estimation of atmospheric parameters from time-lapse imagery,” Proc. SPIE 9833, 983303 (2016). http://dx.doi.org/10.1117/12.2223986

38. L. Yaroslavsky et al., “Superresolution in turbulent videos: making profit from damage,” Opt. Lett. 32, 3038–3040 (2007). http://dx.doi.org/10.1364/OL.32.003038

39. L. P. Yaroslavsky et al., “Super-resolution of turbulent video: potentials and limitations,” Proc. SPIE 6812, 681205 (2008). http://dx.doi.org/10.1117/12.765580

40. B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Int. Joint Conf. on Artificial Intelligence (1981).

41. R. C. Hardie, K. J. Barnard and R. Ordonez, “Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging,” Opt. Express 19, 26208–26231 (2011). http://dx.doi.org/10.1364/OE.19.026208

42. R. C. Hardie and K. J. Barnard, “Fast super-resolution using an adaptive Wiener filter with robustness to local motion,” Opt. Express 20, 21053–21073 (2012). http://dx.doi.org/10.1364/OE.20.021053

43. Y.-W. Huang et al., “Survey on block matching motion estimation algorithms and architectures with new results,” J. VLSI Signal Process. Syst. 42, 297–320 (2006). http://dx.doi.org/10.1007/s11265-006-4190-4

44. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., Prentice-Hall, Upper Saddle River, New Jersey (2006).

45. A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, 3rd ed., Prentice-Hall, Upper Saddle River, New Jersey (2010).

Biography

Russell C. Hardie is a full professor in the Department of Electrical and Computer Engineering, the University of Dayton, with a joint appointment in the Department of Electro-Optics. He received the University of Dayton’s top university-wide teaching award, the 2006 Alumni Award in Teaching. He received the Rudolf Kingslake Medal and Prize from SPIE in 1998 for super-resolution research. He also received the School of Engineering Award of Excellence in Teaching in 1999 and the first annual Professor of the Year Award from the student chapter of the IEEE in 2002.

Michael A. Rucci is a research engineer at the Air Force Research Laboratory, Wright-Patterson AFB, Ohio. His current research includes day/night passive imaging, turbulence modeling and simulation, and image processing. He received his MS and BS degrees in electrical engineering from the University of Dayton in 2014 and 2012, respectively.

Alexander J. Dapore is a senior image processing engineer at L-3 Cincinnati Electronics. He received his BSEE and MSEE degrees from the University of Illinois at Urbana–Champaign in 2008 and 2010, respectively. He has worked on research and development projects in many areas of digital image processing. His specific areas of interest are image restoration, image enhancement, object/threat detection and tracking, multiview computer vision, and the real-time implementation of digital image processing algorithms using general-purpose computing on graphics processing units.

Barry K. Karch is a principal research electronics engineer at the Multispectral Sensing and Detection Division, Sensors Directorate of the Air Force Research Laboratory, Wright-Patterson AFB, Ohio. He received his BS degree in electrical engineering in 1987, his MS degrees in electro-optics and electrical engineering in 1992 and 1994, respectively, and his PhD in electrical engineering in 2015, all from the University of Dayton, Dayton, Ohio. He has worked in the areas of EO/IR remote sensor system and processing development for 29 years.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Russell C. Hardie, Michael A. Rucci, Alexander J. Dapore, and Barry K. Karch "Block matching and Wiener filtering approach to optical turbulence mitigation and its application to simulated and real imagery with quantitative error analysis," Optical Engineering 56(7), 071503 (6 February 2017). https://doi.org/10.1117/1.OE.56.7.071503
Received: 7 October 2016; Accepted: 29 December 2016; Published: 6 February 2017
KEYWORDS
Filtering (signal processing), Point spread functions, Image registration, Electronic filtering, Optical transfer functions, Turbulence, Optical filters
