Open Access
Comparison of bispectrum, multiframe blind deconvolution and hybrid bispectrum-multiframe blind deconvolution image reconstruction techniques for anisoplanatic, long horizontal-path imaging
Abstract
The potential benefits of real-time, or near-real-time, image processing hardware to correct for turbulence-induced image defects for long-range surveillance and weapons targeting are sufficient to motivate significant resource commitment to their development. Quantitative comparisons between potential candidates are necessary to decide on a preferred processing algorithm. We begin by comparing the mean-square-error (MSE) performance of speckle imaging (SI) methods and multiframe blind deconvolution (MFBD), applied to long-path horizontal imaging of a static scene under anisoplanatic seeing conditions. Both methods are used to reconstruct a scene from three sets of 1000 simulated images featuring low, moderate, and severe turbulence-induced aberrations. The comparison shows that SI techniques can reduce the MSE by up to 47%, using 15 input frames under daytime conditions. The MFBD method provides up to 40% improvement in MSE under the same conditions. The performance comparison is repeated under three diminished-light conditions of 30, 15, and 8 photons per pixel on average, where improvements of up to 39% can be achieved using SI methods with 25 input frames, and up to 38% for the MFBD method using 150 input frames. The MFBD estimator is applied to three sets of field data, and representative results are presented. Finally, the performance of a hybrid bispectrum-MFBD estimator that uses a rapid bispectrum estimate as the starting point for the MFBD image reconstruction algorithm is examined.

1. Introduction

There is interest in the development of human-portable surveillance systems capable of observing a wide field of view over long horizontal or near-horizontal paths. In addition, remotely piloted surveillance systems must transmit images to remote users over wireless channels that have limited bandwidth. As a result, currently fielded surveillance systems compress images prior to transmission.1 An image processing system that could reconstruct images corrupted by turbulence prior to compression and transmission would allow such systems to make more efficient use of available bandwidth. Thus, effort must be directed toward the examination and selection of computationally efficient and robust image reconstruction techniques. Under all but the most benign conditions, temperature inhomogeneities in the atmosphere result in turbulence and variations in the index of refraction along the imaging path. Even small variations in the index of refraction cause changes in the optical path length that result in phase aberrations at the aperture.2 As the optical path length increases, or the turbulence strength increases, the aberrations become stronger, and the isoplanatic angle decreases. Terrestrial surveillance scenarios often involve imaging scenes that subtend angles much larger than the isoplanatic angle, a situation referred to as anisoplanatic imaging. Under these conditions, increasing the size of the aperture will not improve the quality of the image. Unless the seeing conditions are very favorable, anisoplanatism will dominate most practical horizontal imaging situations. The resulting distortion will limit the performance of any optical system operating in such a turbulent atmosphere, frequently causing the observed scene to be blurred beyond usefulness.3

Many strategies have been proposed to estimate the true scene from turbulence-corrupted images, not all of which are explored in this work. Each represents a compromise between how quickly the technique produces an estimate of the scene, the accuracy of the estimate, and the robustness of the technique. Desirable characteristics include quick and accurate convergence under a wide range of atmospheric seeing conditions, regardless of the scene illumination, in a package that remains transportable by a single operator. Power and portability constraints may eliminate, using currently available technology, the traditional adaptive optics approaches that have been applied to similar problems in the imaging of space objects.4 An additional limitation of adaptive optics systems is that they only correct well over a few multiples of the isoplanatic angle, while common horizontal surveillance scenarios observe scenes that extend over many multiples of the isoplanatic angle.

Ayers and Dainty,5 among others, and Lohmann et al.6 determined that near diffraction-limited phase information could be recovered from multiple speckled images through the use of Knox-Thompson and bispectrum techniques.7 The recovered phase and intensity information are used to reconstruct nearly diffraction-limited images of astronomical objects. Carrano8 proposed the use of speckle imaging (SI) for horizontal imaging, and EM Photonics (Newark, Delaware) has developed an embedded system using the technique to reconstruct turbulent images over horizontal paths in near real time.9 Bos and Roggemann have extended this technique for use over wide fields of view and also explored the performance limits of SI over a range of turbulence levels. Both Carrano and Bos ignore anisoplanatism in their processing but nevertheless produce reconstructions with reduced error.3

Other blind deconvolution techniques have also been applied to reconstructing turbulence-corrupted images. Richardson10 and Lucy11 pioneered the application of an iterative blind deconvolution technique to a single image degraded by atmospheric turbulence. Their method relied on the enforcement of positivity and finite support constraints on the object estimate. Schulz12 extended that method to include multiple input images and developed a penalized-maximum-likelihood algorithm to avoid the trivial solution that incorrectly concludes that the optical system's point spread function (PSF) is a Dirac delta function and that the most likely object estimate is the observed turbulent image. Little research has been done to compare the performance characteristics of the multiframe blind deconvolution (MFBD) and SI techniques under anisoplanatic conditions; that comparison is the subject of this work.

This article compares the performance of SI techniques and MFBD at estimating object intensities from simulated images exhibiting extreme levels of anisoplanatism, using the number of input frames and scene illumination levels as the operational parameters. To enable this comparison, the authors developed a horizontal imaging simulation model and used it to create three sets of 1000 simulated, turbulent images based on the Lena image, common in the image processing literature. In order to explore the techniques' performance under multiple illumination conditions, each set of simulated turbulent images was used to generate a set of speckle images with a mean photon count of 30, 15, and 8 photons per pixel. The effectiveness of SI and MFBD is assessed by calculating the mean-square-error (MSE) between the resulting object estimate and the diffraction-limited image.
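As an illustration of how such photon-limited frames can be generated, the sketch below scales a simulated turbulent frame so its mean matches the desired photon count and then draws independent Poisson counts per pixel. This is a minimal sketch using numpy only; the function name and scaling convention are illustrative assumptions, not the authors' published code.

```python
import numpy as np

def to_photon_limited(image, mean_photons_per_pixel, rng=None):
    """Simulate a photon-limited exposure of a nonnegative intensity image.

    The frame is scaled so its mean equals the requested photon count,
    then each pixel is replaced by an independent Poisson draw.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = mean_photons_per_pixel / image.mean()
    return rng.poisson(image * scale).astype(float)

# e.g., frame_30 = to_photon_limited(turbulent_frame, 30.0)
```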

We find that both the SI- and MFBD-reconstructed objects show significant improvement in MSE compared with the average MSE over the entire data set. SI reduces the MSE by 46%, 42%, and 47% on average for the low, moderate, and severe turbulence cases, respectively, using 15 input frames under daytime illumination. Similarly, the MFBD method provides 40%, 25%, and 36% improvements in MSE on average under the same conditions. Under low-light conditions (fewer than 30 photons per pixel), improvements of 39%, 29%, and 27% are available using SI with 25 input frames in the reconstruction, and 38%, 34%, and 33%, respectively, for the MFBD method using 150 input frames.

The remainder of this article is organized as follows. In Sec. 2, we discuss horizontal imaging as it applies to the reconstruction of turbulent images under anisoplanatic conditions and develop the foundations of the bispectrum and the objective functions applied to the optimization routine. In Sec. 3, the object recovery methods used for SI and MFBD are described for both Gaussian and Poisson noise models. In Sec. 4, the comparative results of SI and MFBD processing are presented for all three turbulence cases and, in the case of the Poisson noise model, for three values of mean photon count. Section 5 presents the results of MFBD performance using field data, followed by the results of a processing strategy that uses the bispectrum estimate to jump-start the MFBD estimator in Sec. 6. Summary conclusions are offered in Sec. 7.

2. Background

2.1. Horizontal Imaging

We begin our discussion by examining some assumptions regarding the horizontal-imaging problem. The atmospheric coherence radius, r0, is commonly used to define the effective aperture radius of an imaging system. Increasing the size of the aperture beyond r0 will not improve the resolution of the system when imaging in the presence of anisoplanatism. For spherical wave propagation, the atmospheric coherence radius, r0, is defined by Fried13 and Roggemann and Welsh2 as

Eq. (1)

r_0 = 0.185\left[\frac{4\pi^2}{k^2 \int_0^L \left(\frac{L-z}{L}\right)^{5/3} C_n^2(z)\,dz}\right]^{3/5},
where λ is the mean wavelength and k = 2π/λ is the wave number. The limits of the integral run from z = 0 at the pupil plane to the scene at z = L. For purposes of this study, it is assumed in Eq. (1) that the refractive-index structure constant Cn2(z) does not vary with location, for paths that are horizontal or at shallow angles from the aperture.14 This assumption simplifies the expression of the atmospheric coherence radius given in Eq. (1) to

Eq. (2)

r_0 = \left(0.16\, k^2 C_n^2 L\right)^{-3/5}.

It is further assumed that the strength of the turbulence is such that scintillation effects can be neglected. Evaluating these expressions for the simulated propagation path L = 1 km: for the low-turbulence condition, Cn2 = 2.25×10^-14 m^(-2/3) and r0 = 3.33 cm; for the moderate condition, Cn2 = 3.75×10^-14 m^(-2/3) yields r0 = 2.45 cm; and for the severe turbulence condition, Cn2 = 5.25×10^-14 m^(-2/3) gives r0 = 2.01 cm. A complete description of the simulator used to create these data sets is available in Ref. 17.

Using Fried's16 definition for the isoplanatic angle, θ0,

Eq. (3)

\theta_0 = \left[1.09\, k^2 C_n^2\, (\Delta z)^{8/3}\right]^{-3/5},
and evaluating for the conditions under consideration (Δz = L = 1 km), we find the isoplanatic angle to be between 10.8 and 6.63 μrad. For the conditions simulated here, a single pixel in the simulated imaging system subtends 2.79 μrad. Expressing the θ0 values for the low, moderate, and severe turbulence conditions in pixels, we see that the isoplanatic patch covers 4, 3, and 2 pixels, respectively, in the simulated imaging system, and that anisoplanatic effects are present in the simulated images. These results are summarized in Table 1, and samples of the simulated images are shown in Fig. 1.
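The tabulated values can be checked directly from Eqs. (2) and (3). The operating wavelength is not stated in this section, so the sketch below assumes a mean wavelength near 0.7 μm, which approximately reproduces the r0 and θ0 values in Table 1; all variable names are illustrative.

```python
import numpy as np

lam = 0.7e-6          # assumed mean wavelength (m); not stated in the text
k = 2 * np.pi / lam   # wave number
L = 1.0e3             # propagation path (m)
pixel_ifov = 2.79e-6  # single-pixel field of view (rad)

for cn2 in (2.25e-14, 3.75e-14, 5.25e-14):
    r0 = (0.16 * k**2 * cn2 * L) ** (-3.0 / 5.0)                     # Eq. (2)
    theta0 = (1.09 * k**2 * cn2 * L ** (8.0 / 3.0)) ** (-3.0 / 5.0)  # Eq. (3)
    print(f"Cn2={cn2:.2e}: r0={100 * r0:.2f} cm, "
          f"theta0={1e6 * theta0:.2f} urad (~{theta0 / pixel_ifov:.1f} pixels)")
```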

Table 1

Atmospheric simulation turbulence conditions. This table presents parameters associated with the three turbulence conditions used in the study.

Atmospheric turbulence parameters

Turbulence condition | Cn2 (m^(-2/3)) | r0, spherical case (cm) | θ0 (μrad) | θ0 (pixels)
Low                  | 2.25×10^-14    | 3.33                    | 10.8      | 4
Moderate             | 3.75×10^-14    | 2.45                    | 7.75      | 3
Severe               | 5.25×10^-14    | 2.01                    | 6.63      | 2

Fig. 1

Horizontal imaging simulator output. Single-image representative samples of the horizontal image simulator output: (a) diffraction-limited image, (b) image for Cn2 = 2.25×10^-14 m^(-2/3), (c) image for Cn2 = 3.75×10^-14 m^(-2/3), and (d) image for Cn2 = 5.25×10^-14 m^(-2/3).


2.2. Speckle Imaging

Atmospheric turbulence can also be described by a correlation time constant τc. The phase perturbation at every multiple of τc can be considered independent of the previous and next interval. One strategy to mitigate the effects of atmospheric turbulence is the use of short exposure images, where the detector is exposed to light for a period less than τc to freeze the atmospheric turbulence in a single, independent realization of the phase perturbation. The term “speckle imaging” arises from astronomical imaging, where short exposure images of stellar objects exhibit high-frequency intensity fluctuations, or “speckles.”

Noise-free image formation in the spatial domain is the convolution of the atmospheric PSF and the object intensity distribution,

Eq. (4)

i(\mathbf{x}) = o(\mathbf{x}) * h(\mathbf{x}),
where x is a two-dimensional (2-D) coordinate system in the image plane and xo is a 2-D coordinate system in the object plane. A set of images that have been corrupted by additive Gaussian noise can be described as

Eq. (5)

d_k(\mathbf{x}) = i_k(\mathbf{x}) + n_k(\mathbf{x}) = o(\mathbf{x}) * h_k(\mathbf{x}) + n_k(\mathbf{x}),
where n_k(x) represents an additive noise term characterized by an independent, identically distributed Gaussian random variable with zero mean and variance σ². Taking the Fourier transform of both sides of Eq. (5) converts the convolution into a multiplication in the Fourier spatial-frequency domain: the product of the complex object spectrum O(f) with the atmospheric optical transfer function (OTF) H_k(f), plus the noise spectrum N_k(f), results in the received complex image spectrum D_k(f).

Eq. (6)

D_k(f) = O(f)\, H_k(f) + N_k(f).

Two independent steps are required to recover the amplitude and phase information associated with the received complex object spectrum. First, inverse pseudo-Wiener filtering the ensemble average power spectral density (PSD) yields an estimate of the power spectrum of the object in the scene.

Eq. (7)

\langle |D(f)|^2 \rangle = |H_{LE}(f)|^2\, |O(f)|^2,
where H_LE(f) represents the long-exposure OTF of the imaging system and turbulent atmosphere, and ⟨·⟩ represents an ensemble average.

Eq. (8)

|\hat{O}(f)|^2 = \frac{\langle |D(f)|^2 \rangle}{|H_{LE}(f)|^2 + \alpha}.

For our processing, the theoretical long-exposure atmospheric OTF of the simulated imaging system is applied to estimate the amplitude of the complex object spectrum.13,17 The additive term α is a user-adjusted constant that compensates for the fact that at high spatial frequencies the squared magnitude of the OTF may go to zero.
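A minimal sketch of this amplitude-recovery step is shown below, assuming numpy and a precomputed long-exposure OTF on the same frequency grid; the default α is illustrative only and would be tuned by the user as described above.

```python
import numpy as np

def object_psd_estimate(frames, H_le, alpha=1e-3):
    """Pseudo-Wiener estimate of the object power spectrum, Eq. (8).

    frames : stack of short-exposure images, shape (K, N, N)
    H_le   : theoretical long-exposure OTF, shape (N, N)
    alpha  : user-tuned constant guarding against |H_le|^2 -> 0
    """
    D = np.fft.fft2(frames, axes=(-2, -1))
    psd = np.mean(np.abs(D) ** 2, axis=0)     # ensemble average <|D(f)|^2>
    return psd / (np.abs(H_le) ** 2 + alpha)  # estimate of |O(f)|^2
```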

The amplitude of the PSD is insufficient to form an image. A separate process is required to recover the phase information in the speckle images. The object phase information is recovered by computing the bispectrum,7 defined as

Eq. (9)

B(f_1, \Delta f) = D(f_1)\, D(\Delta f)\, D^{*}(f_1 + \Delta f),
where D*(f) represents the complex conjugate of D(f), and Δf represents a constant spatial frequency offset.

It can be shown2 that the object phase information can be recovered from the bispectrum by substituting the image spectrum D(f) defined in Eq. (6) into the definition in Eq. (9). The estimate of the object phase can be determined at any arbitrary spatial frequency f′ = f + Δf in terms of the bispectrum estimate, assuming the phases at f and Δf are known.

Eq. (10)

\hat{\phi}_o(f + \Delta f) = \hat{\phi}_o(f) + \hat{\phi}_o(\Delta f) - \hat{\phi}_B(f, \Delta f).

Because images are real valued, the phase at f = 0 is zero. The adjacent frequency points (0, Δf), (0, −Δf), (Δf, 0), and (−Δf, 0) can also arbitrarily be set to zero without loss of image quality. This assumption results in the loss of image registration information, and reconstructed images are often not centered within the image frame. Further processing can recenter the reconstructed images.
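To make the recursion of Eq. (10) concrete, the sketch below unwraps the phase along a single frequency axis for the unit offset Δf = 1, with the phases at f = 0 and f = Δf set to zero as described above. A full reconstructor accumulates many such recursions over several bispectrum subplanes; this one-dimensional version is our illustration, not the authors' implementation.

```python
import numpy as np

def recover_phase_1d(bispec_phase, n):
    """Recursive phase recovery along one frequency axis, Eq. (10).

    bispec_phase[f] holds the phase of B(f, df) for the offset df = 1.
    phi[0] and phi[1] (the phase at df) are zero; this discards only
    image-registration information.
    """
    phi = np.zeros(n)
    for f in range(1, n - 1):
        # phi(f + df) = phi(f) + phi(df) - phi_B(f, df), with phi(df) = 0
        phi[f + 1] = phi[f] - bispec_phase[f]
    return phi
```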

2.3. MFBD for the Gaussian Noise Model

MFBD is another common image reconstruction method5 that uses an iterative process to recover object amplitude and phase information. The term i(x) is the noise-free image characterized in Eq. (4) as the convolution of a deterministic object and the imaging system’s incoherent PSF. The incoherent PSF h(x) is the modulus squared of the coherent PSF.

Eq. (11)

h(\mathbf{x}) = |g(\mathbf{x})|^2,
where the coherent PSF is given by

Eq. (12)

g(\mathbf{x}) = \sum_{\mathbf{x}_p} A(\mathbf{x}_p)\, e^{j\theta(\mathbf{x}_p)}\, e^{j\frac{2\pi}{N}(\mathbf{x} \cdot \mathbf{x}_p)},
where A is a binary aperture function whose value is zero outside the aperture, xp is a 2-D coordinate system in the aperture plane, the • operator represents the inner product, and

Eq. (13)

\theta(\mathbf{x}_p) = \sum_k \phi_k(\mathbf{x}_p)\, \alpha_k.

The phase term θ(x_p) represents the accumulated phase perturbation in the aperture of the imaging system, decomposed into a linear sum of orthonormal basis functions ϕ_k(x_p) with weighting coefficients α_k for each term and image. Zernike polynomials are a convenient set of basis functions commonly used in describing classical aberrations in optical systems.
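Equations (11)-(13) translate directly into a discrete computation: form the pupil phase from the weighted Zernike basis, apply it to the binary aperture, and take the squared modulus of the FFT. The sketch below assumes precomputed Zernike basis arrays; function and variable names are illustrative.

```python
import numpy as np

def incoherent_psf(A, zernike_basis, alpha):
    """Incoherent PSF from a Zernike-expanded pupil phase, Eqs. (11)-(13).

    A             : binary aperture function, shape (N, N)
    zernike_basis : basis functions phi_k(x_p), shape (J, N, N)
    alpha         : length-J weight vector for one frame
    """
    theta = np.tensordot(alpha, zernike_basis, axes=1)  # Eq. (13)
    g = np.fft.fft2(A * np.exp(1j * theta))             # coherent PSF, Eq. (12)
    return np.abs(g) ** 2                               # Eq. (11)
```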

Using a Gaussian noise model, each point x in each image dk(x) is a random variable with a Gaussian probability density function (PDF). The PDF of dk(x) is parameterized by the object intensities o(x) and the vector of aberration weighting coefficients αk.4

Eq. (14)

p[d_k(\mathbf{x}); i(\mathbf{x}, \boldsymbol{\alpha}_k)] = \frac{1}{(2\pi\sigma_n^2)^{1/2}} \exp\left\{ -\frac{[d_k(\mathbf{x}) - i_k(\mathbf{x}, \boldsymbol{\alpha}_k)]^2}{2\sigma_n^2} \right\}
and the likelihood of the complete data set consisting of all the pixel intensities in all the corrupted images is given by

Eq. (15)

p[\{d_k(\mathbf{x})\}; i(\mathbf{x}, \boldsymbol{\alpha})] = \prod_{k=1}^{K} \prod_{\mathbf{x} \in \chi} \frac{1}{(2\pi\sigma_n^2)^{1/2}} \exp\left\{ -\frac{[d_k(\mathbf{x}) - i_k(\mathbf{x}, \boldsymbol{\alpha}_k)]^2}{2\sigma_n^2} \right\}.

Taking the natural log of Eq. (15) makes the analysis more tractable. The resulting summation, neglecting a constant term, yields the log-likelihood function

Eq. (16)

L[i(\mathbf{x}, \boldsymbol{\alpha})] = -\sum_{k=1}^{K} \sum_{\mathbf{x} \in \chi} [d_k(\mathbf{x}) - i_k(\mathbf{x}, \boldsymbol{\alpha}_k)]^2.

The processing objective is to maximize this cost function, parameterized in terms of the true object o(x) and the vectors of Zernike coefficients α_k that expand the phase terms θ(x_p). This is a complex, nonconvex optimization problem, and the routine may converge to a local rather than a global extremum; further discussion is provided in Sec. 3.1. To maximize this cost function, the optimization routine needs at least an estimate of the gradient, ∇L[i(x, α_k)], and Hessian, ∇²L[i(x, α_k)], of the likelihood function. Although an analytic form of the Hessian is not required, optimization is much more efficient if an analytic form of the gradient is provided. The expression for the gradient is derived in Appendix A.
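For the object portion of that gradient, the convolution structure of Eq. (4) makes both the cost and ∇_o L cheap to evaluate with FFTs, since the adjoint of convolution with h_k is correlation with h_k. A minimal sketch follows, assuming numpy and precomputed per-frame OTFs; the sign is flipped so the quantity can be handed to a minimizer.

```python
import numpy as np

def gaussian_cost_and_grad_object(o, d_stack, H_stack):
    """Negative of the Gaussian log-likelihood, Eq. (16), and its gradient
    with respect to the object, evaluated with FFT-based convolutions.

    o       : current object estimate, shape (N, N)
    d_stack : observed frames, shape (K, N, N)
    H_stack : per-frame OTFs (FFTs of the incoherent PSFs), shape (K, N, N)
    """
    O = np.fft.fft2(o)
    i_stack = np.fft.ifft2(O[None] * H_stack).real   # i_k = o * h_k
    resid = d_stack - i_stack
    cost = np.sum(resid ** 2)
    # Adjoint of convolution: correlate each residual with its PSF.
    grad = -2.0 * np.fft.ifft2(np.conj(H_stack)
                               * np.fft.fft2(resid)).real.sum(axis=0)
    return cost, grad
```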

2.4. Poisson Noise Model MFBD

Not all images are taken in full daylight. At low light levels, photon noise may dominate image frames. This is often characterized by a grainy quality of the images due to the random arrival times of discrete photons. Photon noise is described by modeling the number of photons detected at each pixel in an image frame as a Poisson random variable whose mean count rate is proportional to the average pixel intensity. For this simulation, the number of photons detected at each detector pixel is assumed to be an independent, Poisson-distributed random variable with a mean rate given by the noise-free image i_k(x, α_k); the random nature of the PSF is neglected. The probability of detecting d_k(x) photons at a specific pixel location is given by

Eq. (17)

p[d_k(\mathbf{x})] = \frac{i_k(\mathbf{x}, \boldsymbol{\alpha}_k)^{d_k(\mathbf{x})}\, e^{-i_k(\mathbf{x}, \boldsymbol{\alpha}_k)}}{d_k(\mathbf{x})!}.

The distribution over the entire set of pixel locations dk is given by4

Eq. (18)

p[\{d_k\}] = \prod_{k=1}^{K} \prod_{\mathbf{x} \in \chi} \frac{i_k(\mathbf{x}, \boldsymbol{\alpha}_k)^{d_k(\mathbf{x})}\, e^{-i_k(\mathbf{x}, \boldsymbol{\alpha}_k)}}{d_k(\mathbf{x})!}.

As before, taking the natural log yields a modified log-likelihood function

Eq. (19)

L_{\mathrm{Poisson}}[i(\mathbf{x}, \boldsymbol{\alpha})] = \sum_{k=1}^{K} \sum_{\mathbf{x} \in \chi} \left\{ d_k(\mathbf{x}) \ln[i_k(\mathbf{x}, \boldsymbol{\alpha}_k)] - i_k(\mathbf{x}, \boldsymbol{\alpha}_k) \right\} - \sum_{k=1}^{K} \sum_{\mathbf{x} \in \chi} \ln[d_k(\mathbf{x})!],
where the last term is a constant and can be neglected. The expression for the gradient of the Poisson noise model log-likelihood function is derived in Appendix B.
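Analogously to the Gaussian case, the Poisson cost of Eq. (19) and the object gradient of Eq. (31) can be evaluated with FFT convolutions. A hedged sketch follows; the small eps floor, which keeps the logarithm defined where the model intensity underflows, is our addition rather than part of the published method.

```python
import numpy as np

def poisson_cost_and_grad_object(o, d_stack, H_stack, eps=1e-9):
    """Negative Poisson log-likelihood, Eq. (19) (constant term dropped),
    and its object gradient from Eq. (31)."""
    O = np.fft.fft2(o)
    i_stack = np.fft.ifft2(O[None] * H_stack).real
    i_stack = np.maximum(i_stack, eps)          # keep log() defined
    cost = -np.sum(d_stack * np.log(i_stack) - i_stack)
    ratio = d_stack / i_stack - 1.0             # bracketed term in Eq. (31)
    grad = -np.fft.ifft2(np.conj(H_stack)
                         * np.fft.fft2(ratio)).real.sum(axis=0)
    return cost, grad
```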

3. Methods

Our simulations assume that the propagation occurs over horizontally homogeneous conditions, with both the object and the imaging system immersed in a turbulent atmosphere. Furthermore, we assume that the height above ground does not vary significantly between the imaging system and a static scene. In addition, Cn2 is assumed to be constant over the propagation path.14 We assume that the simulated data have effectively frozen the turbulence at the moment each turbulent image is created. Reconstruction processing begins by selecting images from the complete data set in groups of Nf = 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, and 23 frames, and then Nf = 25, 50, 75, 100, 125, 150, 175, and 200 frames. Once the number of frames to process, Nf, is determined, a stack of Nf images is selected for processing. The number of reconstructions, N, used to determine the average MSE declines as the number of frames used in the reconstruction increases: for example, for Nf = 2 through 25, N = 40; N = 20 for Nf = 50; and N = 10 for Nf = 75 and 100. When 200 frames are applied to the reconstruction algorithm, only five independent trials are available.

Prior to applying the simulated turbulent images to the reconstruction algorithms, they must be recentered, as tilt is not estimated in the MFBD algorithm. This was accomplished by using a correlation filter to compare each image in the stack to an ensemble average image and then shifting the turbulent image to recenter it. When the optimization routine exits, the recovered object is compared with the diffraction-limited image and the MSE is averaged over all pixels after both images are normalized to a maximum value of 255, as follows

Eq. (20)

\mathrm{MSE} = \frac{1}{N^2} \sum_{\mathbf{x}} \left[ f(\mathbf{x}) - \hat{f}(\mathbf{x}) \right]^2,
where f(x) is the normalized diffraction-limited image, f̂(x) is the current normalized estimate of the object, and N² is the total number of pixels in the image. The image stack is incremented to begin processing the next group of Nf turbulent images, and the process is repeated. When the entire data set has been processed, the average of the vector of MSEs for images processed Nf at a time is calculated. To facilitate direct comparison between the reconstruction methods, exactly the same simulated turbulent images were applied to each estimator in every trial for both methods, and the MSE was calculated from the recovered images.
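The two bookkeeping steps described above, correlation-based recentering and the normalized MSE of Eq. (20), might look as follows. This is a sketch: the FFT cross-correlation and peak-shift convention are standard, but the function names and normalization details are ours.

```python
import numpy as np

def recenter(frame, reference):
    """Shift a frame to align with a reference via FFT cross-correlation."""
    xc = np.fft.ifft2(np.fft.fft2(reference)
                      * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    return np.roll(frame, (dy, dx), axis=(0, 1))

def normalized_mse(f, f_hat):
    """Eq. (20): per-pixel MSE after normalizing both images to a 255 peak."""
    f = 255.0 * f / f.max()
    f_hat = 255.0 * f_hat / f_hat.max()
    return np.mean((f - f_hat) ** 2)
```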

3.1. Multiframe Blind Deconvolution

In order to reduce edge effects in the reconstructed object, each image in the data set was preprocessed to pad the recentered image by replicating the edges of the image outward and then adding a border of zeros. The images applied to the estimator were padded with 8 replicated pixels followed by 5 zero pixels at the margins of each image, bringing the total size of each image to 256×256 pixels.
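With the stated margins, a 230×230 input grows to 256×256: 8 edge-replicated pixels plus 5 zero pixels on each side. A short sketch using numpy's padding modes (the function name is ours):

```python
import numpy as np

def pad_for_mfbd(img, n_replicate=8, n_zero=5):
    """Replicate the image edges outward, then add a zero border
    (230x230 -> 256x256 for the margins stated above)."""
    img = np.pad(img, n_replicate, mode="edge")
    return np.pad(img, n_zero, mode="constant", constant_values=0)
```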

We jointly process all images and all Zernike coefficients; thus, for a data set of K N×N images, using J Zernike polynomial terms, there are N² + J×K parameters that must be jointly estimated. To make the optimization tractable and reduce run times, the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS)15 method is used to optimize the cost functions in Eqs. (16) and (19) to find the object and aberration coefficients most likely to have produced the simulated images in the data set. One characteristic of line-search methods like L-BFGS is the need for an estimate of the Hessian, ∇²f(x). The limited-memory form of BFGS does not require an explicit expression for ∇²f(x); it builds an approximation to the Hessian from the last few updates of x and ∇f(x). L-BFGS is a quasi-Newton, "hill-climbing" technique that begins with an initial guess at a solution x0 and then proceeds along a line search in a direction determined from the gradient of the objective function evaluated at each iterate. For MFBD processing, the spatial average of all the frames used in the trial was used as the starting point for the estimator. In Sec. 6, we propose a hybrid bispectrum-MFBD (B-MFBD) method, in which a bispectrum object estimate is used to provide the starting point for the MFBD estimator. The initial-guess Zernike coefficients applied to the estimator for both techniques are random Gaussian numbers with a mean of 0.5 and unity variance. Based on previous work,18 for the image reconstruction processing in this article, the number of iterations was limited to 25 and the number of Zernike terms was fixed at 30.
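A sketch of the joint-estimation driver is shown below: the N² object pixels and the J×K Zernike weights are packed into a single parameter vector and handed to SciPy's L-BFGS implementation. The packing scheme and the cost_and_grad interface are our assumptions; the authors' implementation is not published.

```python
import numpy as np
from scipy.optimize import minimize

def run_mfbd(d_stack, cost_and_grad, n_zernike=30, n_iter=25, o_init=None):
    """Jointly estimate the object and per-frame Zernike weights with L-BFGS.

    cost_and_grad(o, alphas, d_stack) must return (cost, grad_o, grad_alpha)
    for the chosen noise model (Eq. (16) or Eq. (19)).
    """
    K, N, _ = d_stack.shape
    o0 = d_stack.mean(axis=0) if o_init is None else o_init  # spatial average
    a0 = 0.5 + np.random.randn(K, n_zernike)   # mean-0.5, unit-variance init
    x0 = np.concatenate([o0.ravel(), a0.ravel()])

    def fun(x):
        o = x[:N * N].reshape(N, N)
        alphas = x[N * N:].reshape(K, n_zernike)
        cost, g_o, g_a = cost_and_grad(o, alphas, d_stack)
        return cost, np.concatenate([g_o.ravel(), g_a.ravel()])

    res = minimize(fun, x0, jac=True, method="L-BFGS-B",
                   options={"maxiter": n_iter})
    return res.x[:N * N].reshape(N, N)          # recovered object estimate
```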

4. Results

4.1. Gaussian Noise Model

Figure 2 shows the plots of MSE for the Gaussian-case MFBD and bispectrum estimators as the number of images Nf is varied for the low, moderate, and severe turbulence conditions. For comparison, the average MSE for each data set is also provided. We see that at Nf = 2 and thereafter, both estimators can be expected to perform better than the average MSE for all three simulated turbulence conditions. For the low turbulence case, Cn2 = 2.25×10^-14 m^(-2/3), the marginal improvement in MSE declines at Nf = 12, reaching a maximum improvement of 40% over the average MSE of the entire data set. For the moderate turbulence case, Cn2 = 3.75×10^-14 m^(-2/3), the improvement in MSE available by including additional input frames reaches a maximum of 25% of full scale at Nf = 14. For the severe turbulence case, Cn2 = 5.25×10^-14 m^(-2/3), the improvement in MSE reaches a maximum of 36% of full scale at Nf = 14, and neither the MSE nor its standard deviation improves significantly as additional input images are added to the processing stack. However, for the low and moderate turbulence cases, if processing time is not of consequence, the MSE and its standard deviation continue to improve as additional images are included or the number of iterations is increased.18 A sample of the simulated turbulent image data set is compared with sample reconstructed objects for MFBD and SI in Figs. 3–5.

Fig. 2

Gaussian noise model, bispectrum, and multiframe blind deconvolution (MFBD) mean-square-error (MSE) versus number of frames. (a) Case 1: low turbulence Cn2 = 2.25×10^-14 m^(-2/3), (b) Case 2: moderate condition Cn2 = 3.75×10^-14 m^(-2/3), and (c) Case 3: severe turbulence Cn2 = 5.25×10^-14 m^(-2/3).


Fig. 3

Gaussian noise model Case 1 sample images. Compares (a) a sample turbulent image, (b) a sample MFBD-reconstructed object, and (c) a sample bispectrum-reconstructed object.


Fig. 4

Gaussian noise model, Case 2 sample images. Compares (a) a sample turbulent image, (b) a sample MFBD-reconstructed object, and (c) a sample bispectrum-reconstructed object.


Fig. 5

Gaussian noise model, Case 3 sample images. Compares (a) a sample turbulent image, (b) a sample MFBD-reconstructed object, and (c) a sample bispectrum-reconstructed object.


4.2. Poisson Noise Model, Photon Rate 2×10^6, 30 Photons Per Pixel

Each set of 1000 turbulent images, representing the three turbulence cases, was used to generate a set of speckle images with a mean photon count per image of 2×10^6. Each set of images was processed using the MFBD and SI methods described above. The MFBD method used the cost function and gradients described in Eqs. (19), (31), and (32).

Figure 6 shows the plots of MSE for the MFBD and SI estimators as the number of images Nf is varied for (a) Case 1: low turbulence Cn2 = 2.25×10^-14 m^(-2/3), (b) Case 2: moderate condition Cn2 = 3.75×10^-14 m^(-2/3), and (c) Case 3: severe turbulence Cn2 = 5.25×10^-14 m^(-2/3). For comparison, the average MSE for each data set is also provided. For the low turbulence case, MFBD performance is worse than the data set average until 50 input frames are used in each reconstruction. At Nf = 50 and thereafter, the estimator can be expected to produce an estimate with an MSE lower than 2095, the average MSE of the images in the simulated data set. Marginal improvement in MSE continues as additional frames are added to the image stack, reaching a maximum of about 38% improvement over the average MSE across the data set. The bispectrum achieves similar results with far fewer input frames. For the low turbulence case, MFBD performance approximates that of the bispectrum technique after 125 images are processed in each image stack. For the moderate and severe turbulence cases, the use of 150 input frames produces a slightly lower MSE than the SI method. The reduced number of trials for large frame groups is conjectured to be the reason the MSE plots are not consistently monotonic. Processing times for SI can be as short as 1 min; MFBD processing times can be as short as 30 min when run to convergence. A sample simulated turbulence image is compared with MFBD- and SI-reconstructed objects in Figs. 7–9. The MFBD sample was processed using 175 frames; the SI method used 200 frames in the reconstruction.

Fig. 6

Bispectrum and MFBD MSE versus number of frames, Poisson case, photon rate 2×10^6. (a) Case 1: low turbulence Cn2 = 2.25×10^-14 m^(-2/3), (b) Case 2: moderate condition Cn2 = 3.75×10^-14 m^(-2/3), and (c) Case 3: severe turbulence Cn2 = 5.25×10^-14 m^(-2/3).


Fig. 7

Case 1 low condition sample images, photon rate 2×10^6, Cn2 = 2.25×10^-14 m^(-2/3). Compares (a) a sample turbulent image, (b) a sample MFBD-reconstructed object (Nf = 175), and (c) a sample bispectrum-reconstructed object (Nf = 200).


Fig. 8

Case 2 moderate condition sample images, mean photon rate 2×10^6, Cn2 = 3.75×10^-14 m^(-2/3). Compares (a) a sample turbulent image, (b) a sample MFBD-reconstructed object (Nf = 175), and (c) a sample bispectrum-reconstructed object (Nf = 200).


Fig. 9

Case 3 severe condition sample images, mean photon rate 2×10^6, Cn2 = 5.25×10^-14 m^(-2/3). Compares (a) a sample simulated turbulent image, (b) a sample MFBD-reconstructed object (Nf = 175), and (c) a sample bispectrum-reconstructed object (Nf = 200).


4.3. Poisson Noise Model, Photon Rate 1×10^6, 15 Photons Per Pixel

Figure 10 shows the plots of MSE for the MFBD and bispectrum estimators as the number of images Nf is varied for (a) Case 1: low turbulence Cn2 = 2.25×10^-14 m^(-2/3), (b) Case 2: moderate condition Cn2 = 3.75×10^-14 m^(-2/3), and (c) Case 3: severe turbulence Cn2 = 5.25×10^-14 m^(-2/3). For comparison, the average MSE for each turbulence condition is also provided. For the low-turbulence case, MFBD performance is worse than the data set average until 50 input frames are used in each reconstruction. The MFBD estimator produces a lower MSE than the bispectrum estimator when the number of frames processed Nf > 50. The MFBD error is reduced as additional frames are added to the image stack, reaching a maximum of about 38% improvement over the average MSE across the data set.

Fig. 10

Bispectrum and MFBD MSE versus number of frames, Poisson case, photon rate 1×10^6. (a) Case 1: low turbulence Cn2 = 2.25×10^-14 m^(-2/3), (b) Case 2: moderate condition Cn2 = 3.75×10^-14 m^(-2/3), and (c) Case 3: severe turbulence Cn2 = 5.25×10^-14 m^(-2/3).


A sample simulated turbulent image is compared with reconstructed objects in Figs. 11–13. The MFBD sample was processed using 175 frames; the bispectrum method used 200 frames in the reconstruction. Processing times for the bispectrum are significantly shorter than for MFBD.

Fig. 11

Case 1: low condition Cn2 = 2.25×10^-14 m^(-2/3), photon rate 1×10^6. Compares (a) a sample simulated turbulent image, (b) a sample MFBD-reconstructed object (Nf = 175), and (c) a sample bispectrum-reconstructed object (Nf = 200).


Fig. 12

Case 2: moderate turbulence Cn2 = 3.75×10^-14 m^(-2/3), mean photon rate 1×10^6. Compares (a) a sample turbulent image, (b) a sample MFBD-reconstructed object, and (c) a sample bispectrum-reconstructed object.


Fig. 13

Case 3: severe turbulence Cn2 = 5.25×10^-14 m^(-2/3), mean photon rate 1×10^6. Compares (a) a sample turbulent image, (b) a sample MFBD-reconstructed object, and (c) a sample bispectrum-reconstructed object.


4.4. Poisson Noise Model, Mean Photon Rate 5×10^5, 8 Photons Per Pixel

Figure 14 shows the plots of MSE for the MFBD and bispectrum estimators as the number of images Nf is varied. For comparison, the average MSE for the entire data set is also provided. On average, MFBD performance is worse than the data set average until 25 input frames are used in each reconstruction. At Nf = 25 and thereafter, the estimator can be expected to produce an estimate with a lower MSE than the average MSE of the images in the simulated data set. One possible reason that the SI estimate performance has diminished is that the estimator was optimized for high light levels. Only five bispectrum subplanes were used to reconstruct the phase estimate, which is sufficient at high light levels, but the inclusion of higher-frequency offsets may increase the noise in the final estimate, thereby degrading the MSE performance. The MSE produced by the MFBD estimator will, on average, be less than that of the SI techniques when the number of frames processed Nf > 100. After Nf > 125, the MSE is not further reduced by MFBD processing. For the moderate and severe turbulence cases shown in Figs. 14(b) and 14(c), the MFBD estimator produces a lower MSE than SI after Nf > 100.

Fig. 14

Bispectrum and MFBD MSE versus number of frames, Poisson case, mean photon rate 5×10^5. (a) Case 1: low turbulence Cn2 = 2.25×10^-14 m^(-2/3), (b) Case 2: moderate condition Cn2 = 3.75×10^-14 m^(-2/3), and (c) Case 3: severe turbulence Cn2 = 5.25×10^-14 m^(-2/3).


Sample simulated turbulent images are compared with reconstructed objects in Figs. 15–17. The MFBD sample was processed using 175 frames; the bispectrum method used 200 frames in the reconstruction.

Fig. 15

Case 1: low turbulence Cn2 = 2.25×10^-14 m^(-2/3), mean photon rate 5×10^5. Compares (a) a sample turbulent image, (b) a sample MFBD-reconstructed object (Nf = 175), and (c) a sample bispectrum-reconstructed object (Nf = 200).


Fig. 16

Case 2: moderate turbulence Cn2 = 3.75×10^-14 m^(-2/3), mean photon rate 5×10^5. Compares (a) a sample turbulent image, (b) a sample MFBD-reconstructed object, and (c) a sample bispectrum-reconstructed object.


Fig. 17

Case 3: severe turbulence Cn2 = 5.25×10^-14 m^(-2/3), mean photon rate 5×10^5. Compares (a) a sample turbulent image, (b) a sample MFBD-reconstructed object, and (c) a sample bispectrum-reconstructed object.


5. MFBD Estimator Performance Using Field Data

MFBD estimator performance is also evaluated with field images that were gathered as an ancillary part of a free-space laser communications experiment.19 The transmitter side of the experiment consisted of an 808-nm laser transmitter and a pinwheel target. The receiver end of the experiment was a 300-mm Celestron telescope located 3.046 km downrange. The imaging path extended horizontally over both land and water at 250 m above sea level. As part of the communications experiment, simultaneous images and turbulence measurements were taken using a Shack–Hartmann wave front sensor and a Point Grey CCD camera. The images used in the MFBD reconstruction were recorded in the absence of continuous wave front sensor (WFS) measurements because the active laser present in the images recorded for the communications experiment regularly saturated the image sensor. Turbulence measurements were taken for three 27-s intervals pre- and postimage collection; the WFS data were not concurrent with the images. Our assumption is that the atmospheric turbulence did not change more rapidly than the data could be recorded. The Fried parameter estimates, r0, are shown in Table 2.

Table 2

Date, time, and wave front sensor turbulence measurement range averaged over 27 seconds pre- and postimage collection as the images in data set 1, 2, and 3 were recorded.

Field data turbulence parameters

Data set | Date          | Time      | r0 (cm)
Set 1    | July 1, 2009  | 14:00 EDT | 4.6 to 5.2
Set 2    | July 14, 2009 | 14:00 EDT | 2.3 to 2.4
Set 3    | July 24, 2009 | 19:00 EDT | 1.78 to 2.8

Images from the July 1, 2009, data set were processed using the MFBD estimator. A 250×250pixel region of interest was extracted from the recorded images and padded up to 256×256 by replicating the pixels at the margin of the image outward. Fifteen images were processed using a spatially averaged initial estimate. The images were applied to the estimator for 20 iterations, and the result is shown in Figs. 18 and 19.

Fig. 18

Field data reconstruction from July 1, 2009, 1400 EDT data set. A sample input image is shown in (a) and a sample reconstructed image in (b).


Fig. 19

Field data reconstruction from the July 14, 2009, 1400 EDT data set. A sample input image is shown in (a) and a sample reconstructed image in (b).


Images from the July 24, 2009, 1900 EDT data set were processed using the MFBD estimator. A 250×250 pixel region of interest was extracted from the recorded images and padded up to 256×256 by replicating the pixels at the margin of the image outward. r0 for this data set ranged from 1.78 to 2.8 cm, substantially smaller than for data set 1. To provide more information for the estimator to work with, 25 images were processed using 35 Zernike coefficients and 20 iterations of the estimator. A spatially averaged image was used as a starting point for the optimization routine, and the result is shown in Fig. 20.

Fig. 20

Field data reconstruction from July 24, 2009, 1900 EDT data set. A sample input image is shown in (a) and a sample reconstructed image in (b).


6. Using Bispectrum as the MFBD Initial Estimate

Both the bispectrum and MFBD have strengths to recommend them. The bispectrum produces rapid results but requires an estimate of Cn2 and α to get started. MFBD converges more slowly but has the potential to produce a higher-quality image reconstruction; however, MFBD requires an initial estimate of the object to get started and is subject to trapping in a local minimum of the cost function. The reduced run time of the bispectrum estimator suggests that it may offer a quick way to provide a better starting point for the MFBD estimator. Reconstruction processing in Sec. 3.1 used the spatial average of the input images as the starting point for the L-BFGS optimization routine. A hybrid strategy, sketched below, was attempted to combine the best of both techniques.
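In outline, the hybrid amounts to one line of glue: run the fast bispectrum estimator and pass its output as the MFBD starting point. Here speckle_reconstruct stands in for the SI pipeline of Sec. 2.2 and run_mfbd for the driver sketched in Sec. 3.1; both names are placeholders, not the authors' code.

```python
def hybrid_b_mfbd(d_stack, cost_and_grad, n_iter=40):
    """B-MFBD: seed the MFBD optimizer with a rapid bispectrum estimate."""
    o_bispec = speckle_reconstruct(d_stack)   # seconds-scale SI estimate
    return run_mfbd(d_stack, cost_and_grad,
                    n_iter=n_iter, o_init=o_bispec)
```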

In the first case, a bispectrum estimate of the object intensities was used to replace the ensemble average of the input images as the starting point for the MFBD estimator. The MSE was calculated as a function of the number of input frames used in the estimate. Setting the number of processing iterations at Np = 40 and limiting the number of input frames to Nf = 15, the hybrid method, B-MFBD, reduced the MSE by 16.9%, 18.6%, and 14.9% over MFBD processing that used the ensemble average as the starting point when processing the low, moderate, and severe turbulence data sets, respectively. When compared with the SI processing, reductions of 9.4%, 8.4%, and 6.6% in MSE were observed. Reconstruction times for each frame averaged 1 min for the bispectrum, 10 min for MFBD, and 35 min for the B-MFBD hybrid. These results are summarized in Table 3. The hybrid B-MFBD improves on the performance of the standard MFBD estimator and approximates the performance of the SI estimator. The significant characteristic of the B-MFBD estimator is a 4% to 6% improvement in the variance of the estimate over SI and MFBD applied by themselves. The plots of MSE versus the number of frames are shown in Fig. 21 for (a) Case 1: low turbulence Cn2 = 2.25×10^-14 m^(-2/3), (b) Case 2: moderate turbulence Cn2 = 3.75×10^-14 m^(-2/3), and (c) Case 3: severe turbulence Cn2 = 5.25×10^-14 m^(-2/3). Further efforts to explore the performance characteristics of this technique used a threshold on the gradient of the cost function as the exit criterion for the L-BFGS optimization routine. A value of g = 10,000 was set, and the B-MFBD estimator was allowed to run until convergence, producing the object estimates seen in Figs. 22(c), 23(c), and 24(c). Because processing times are both long and unpredictable, this technique is unlikely to see application in an embedded system.

Table 3

Mean and variance for multiframe blind deconvolution (MFBD), bispectrum, and the hybrid B-MFBD methods for low turbulence Cn2 = 2.25×10^-14 m^(-2/3), moderate condition Cn2 = 3.75×10^-14 m^(-2/3), and severe turbulence Cn2 = 5.25×10^-14 m^(-2/3). Processing was accomplished with Nf = 15, Nz = 15, and the exit criterion for the optimization routine set at 40 iterations.

Turbulence | MFBD μ | MFBD σ | Bispectrum μ | Bispectrum σ | B-MFBD μ | B-MFBD σ
Low        | 431.0  | 32.9   | 395.0        | 21.0         | 358.0    | 32.9
Moderate   | 617.0  | 94.5   | 548.5        | 28.5         | 502.6    | 32.9
Severe     | 779.0  | 103.5  | 710.8        | 34.6         | 663.5    | 34.4

Fig. 21

MSE versus Nf for MFBD, bispectrum, and B-MFBD method using bispectrum to provide an initial estimate for processing with the MFBD estimator.


Fig. 22

Using speckle imaging (SI) to jump start MFBD. (a) Low turbulence condition input image sample, (b) initial estimate from SI as MFBD starting point, and (c) B-MFBD output when run to convergence.


Fig. 23

Using SI to jump start MFBD. (a) Moderate turbulence condition input image sample, (b) initial estimate from SI as MFBD starting point, and (c) B-MFBD output when run to convergence.


Fig. 24

Using SI as the initial MFBD estimate. (a) Severe turbulence condition input image sample, (b) initial estimate from SI as MFBD starting point, and (c) B-MFBD output when run to convergence.


7. Conclusions

The performances of an SI estimator and an unconstrained optimization-based MFBD estimator were compared in terms of the MSE between the object reconstructed using each method and a diffraction-limited image. Three 1000-image data sets of a single scene distorted by low, moderate, and severe turbulence, including anisoplanatic effects, were applied to both methods, and their MSE performance was evaluated. At normal illumination levels, a wide range of turbulence cases would be well served by either SI or MFBD. Point performance estimates, using a data set of 1000 simulated turbulence-corrupted images, indicate that speckle-imaging techniques reduce the MSE by 46%, 42%, and 47% on average for the low, moderate, and severe turbulence conditions, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40%, 29%, and 36% improvements in MSE on average under the same conditions, but the speckle techniques are significantly faster. The comparison was repeated under decreasing illumination conditions (fewer than 100 photons per pixel), where improvements of 39%, 29%, and 27% are available using SI methods and 25 input frames, and 38%, 34%, and 33%, respectively, for the MFBD method and 150 input frames, under the assumption that the phase errors can be characterized by a Gaussian distribution. For all simulated turbulence cases, significant reductions were observed with as few as two input images. For the Poisson case, significant results were achieved with as few as 50 frames for MFBD and 2 frames for speckle imaging. Processing field data with the MFBD estimator produced qualitatively similar results over a modest range of atmospheric turbulence. The B-MFBD estimator, which uses the SI estimator to provide the MFBD processing algorithm with an initial starting point, produces a modest MSE performance improvement over SI alone using 15 input frames, 15 Zernike terms, and 40 iterations of the MFBD algorithm. The B-MFBD method produced more substantial improvement when compared with MFBD processing that used a spatial average of the input frames as the optimizer's starting point with similar inputs and exit criteria. Qualitative improvements were achieved by letting the optimization run until convergence, but this technique may not be suitable for embedded system implementation because of its long, unpredictable processing time.

Appendices

Appendix A: Gradient of the Gaussian Noise Model Log-Likelihood Cost Function

With respect to the pixel intensities, the gradient of the Gaussian log-likelihood function20 can be represented as

Eq. (21)

\nabla_o L[i(\mathbf{x}, \boldsymbol{\alpha})] = 2 \sum_{k=1}^{K} \sum_{\mathbf{x} \in \chi} \{ d_k(\mathbf{x}) - i_k(\mathbf{x}) \}\, \nabla_o i_k(\mathbf{x})
and the derivative of ik(x) with respect to the object is given by

Eq. (22)

\nabla_o i_k(\mathbf{x}) = \frac{\partial}{\partial o(\mathbf{x}_o)} \sum_{\mathbf{x}_o} h(\mathbf{x} - \mathbf{x}_o)\, o(\mathbf{x}_o) = h(\mathbf{x} - \mathbf{x}_o),
where h(x − x_o) is the incoherent PSF.

The gradient with respect to the Zernike coefficients is

Eq. (23)

\nabla_\alpha L[i(\mathbf{x}, \boldsymbol{\alpha})] = 2 \sum_{k=1}^{K} \sum_{\mathbf{x} \in \chi} \{ d_k(\mathbf{x}) - i_k(\mathbf{x}) \}\, \nabla_\alpha i_k(\mathbf{x}, \boldsymbol{\alpha}_k).

Eq. (24)

\nabla_\alpha i_k(\mathbf{x}, \boldsymbol{\alpha}_k) = \nabla_\alpha \left[ \sum_{\mathbf{x}_o} h(\mathbf{x} - \mathbf{x}_o)\, o(\mathbf{x}_o) \right] = \sum_{\mathbf{x}_o} \nabla_\alpha [ h(\mathbf{x} - \mathbf{x}_o)\, o(\mathbf{x}_o) ].

Since the object o(xo) is constant with respect to the Zernike coefficients, Eq. (24) reduces to

Eq. (25)

\sum_{\mathbf{x}_o} \nabla_\alpha [ h(\mathbf{x} - \mathbf{x}_o)\, o(\mathbf{x}_o) ] = \sum_{\mathbf{x}_o} \{ \nabla_\alpha [ h(\mathbf{x} - \mathbf{x}_o) ] \}\, o(\mathbf{x}_o).

From Eqs. (11)–(13), we can express the derivative of the incoherent PSF with respect to the αks in terms of the coherent PSF,

Eq. (26)

\nabla_\alpha [ h(\mathbf{x} - \mathbf{x}_o) ] = \nabla_\alpha |g(\mathbf{x})|^2 = \nabla_\alpha [ g(\mathbf{x})\, g^{*}(\mathbf{x}) ] = [ \nabla_\alpha g(\mathbf{x}) ]\, g^{*}(\mathbf{x}) + g(\mathbf{x})\, [ \nabla_\alpha g^{*}(\mathbf{x}) ],
where (·)* represents the complex conjugate of the function in parentheses. Equation (26) can be represented as

Eq. (27)

2\,\Re\{ [ \nabla_\alpha g(\mathbf{x}) ]\, g^{*}(\mathbf{x}) \},
where ℜ(·) takes the real part of a complex value.

Eq. (28)

\nabla_\alpha g(\mathbf{x}) = \nabla_\alpha \left[ \sum_{\mathbf{x}_p} A\, e^{j\theta(\mathbf{x}_p)}\, e^{j\frac{2\pi}{N}(\mathbf{x} \cdot \mathbf{x}_p)} \right] = \sum_{\mathbf{x}_p} \left[ j\, \frac{\partial \theta(\mathbf{x}_p)}{\partial \boldsymbol{\alpha}}\, A\, e^{j\theta(\mathbf{x}_p)}\, e^{j\frac{2\pi}{N}(\mathbf{x} \cdot \mathbf{x}_p)} \right],

Eq. (29)

\frac{\partial \theta(\mathbf{x}_p)}{\partial \alpha_k} = \frac{\partial}{\partial \alpha_k} \sum_k \phi_k(\mathbf{x}_p)\, \alpha_k = \phi_k(\mathbf{x}_p).

Combining the results of Eqs. (27)–(29) yields the final result

Eq. (30)

\nabla_\alpha i_k(\mathbf{x}, \boldsymbol{\alpha}_k) = \sum_{\mathbf{x}_o} 2\,\Re\left( \left\{ \sum_{\mathbf{x}_p} \left[ j\, \phi_k(\mathbf{x}_p)\, A\, e^{j\theta(\mathbf{x}_p)}\, e^{j\frac{2\pi}{N}(\mathbf{x} \cdot \mathbf{x}_p)} \right] \right\} g^{*}(\mathbf{x}) \right) o(\mathbf{x}_o).

Appendix B: Gradient of the Poisson Noise Model Log-Likelihood Cost Function

Taking the derivative with respect to the pixel intensities, the gradient of the Poisson log-likelihood function20 described in Eq. (19) can be represented as

Eq. (31)

\nabla_o L_{\mathrm{Poisson}}[i(\mathbf{x}, \boldsymbol{\alpha})] = \sum_{k=1}^{K} \sum_{\mathbf{x} \in \chi} \left[ \frac{d_k(\mathbf{x})}{i_k(\mathbf{x}, \boldsymbol{\alpha}_k)} - 1 \right] \nabla_o i_k(\mathbf{x}, \boldsymbol{\alpha}_k).

With respect to the Zernike coefficients the gradient of the Poisson log-likelihood function can be represented as

Eq. (32)

\nabla_\alpha L_{\mathrm{Poisson}}[i(\mathbf{x}, \boldsymbol{\alpha})] = \sum_{k=1}^{K} \sum_{\mathbf{x} \in \chi} \left[ \frac{d_k(\mathbf{x})}{i_k(\mathbf{x}, \boldsymbol{\alpha}_k)} - 1 \right] \nabla_\alpha i_k(\mathbf{x}, \boldsymbol{\alpha}_k),
where \nabla_\alpha i_k(\mathbf{x}, \boldsymbol{\alpha}_k) was given in Eq. (30).

References

1. 

S. Baiotti et al., "Advances in UAV data links: analysis of requirement evolution and implications on future equipment," in Proceedings of the RTO SCI Symposium on Warfare Automation: Procedures and Techniques for Unmanned Vehicles, B10-1–B10-14 (2000).

2. 

M. C. Roggemann and B. M. Welsh, Imaging Through Turbulence, CRC Press, Boca Raton, Florida (1996).

3. 

J. P. Bos and M. C. Roggemann, "Mean squared error performance of speckle-imaging using the bispectrum in horizontal imaging applications," Proc. SPIE, 8056, 805603 (2011). http://dx.doi.org/10.1117/12.884093

4. 

R. G. Paxman, T. J. Schulz, and J. R. Fienup, "Joint estimation of object and aberrations by using phase diversity," J. Opt. Soc. Am. A, 9(7), 1072–1085 (1992). http://dx.doi.org/10.1364/JOSAA.9.001072

5. 

G. R. Ayers and J. C. Dainty, "Iterative blind deconvolution method and its applications," Opt. Lett., 13(7), 547–549 (1988). http://dx.doi.org/10.1364/OL.13.000547

6. 

A. W. Lohmann, G. Weigelt, and B. Wirnitzer, "Speckle masking in astronomy: triple correlation theory and applications," Appl. Opt., 22(24), 4028–4037 (1983). http://dx.doi.org/10.1364/AO.22.004028

7. 

G. R. Ayers, M. J. Northcott, and J. C. Dainty, "Knox-Thompson and triple-correlation imaging through atmospheric turbulence," J. Opt. Soc. Am. A, 5(7), 963–985 (1988). http://dx.doi.org/10.1364/JOSAA.5.000963

8. 

C. J. Carrano, "Speckle imaging over horizontal paths," Proc. SPIE, 4825, 109–120 (2002). http://dx.doi.org/10.1117/12.453519

9. 

P. E. Curt et al., "Real-time embedded atmospheric compensation for long-range imaging using the average bispectrum speckle method," Proc. SPIE, 7244, 724404 (2009).

10. 

W. H. Richardson, "Bayesian-based iterative method of image restoration," J. Opt. Soc. Am., 62(1), 55–59 (1972). http://dx.doi.org/10.1364/JOSA.62.000055

11. 

L. B. Lucy, "An iterative technique for the rectification of observed distributions," Astron. J., 79(6), 745–754 (1974). http://dx.doi.org/10.1086/111605

12. 

T. J. Schulz, "Multiframe blind deconvolution of astronomical images," J. Opt. Soc. Am. A, 10(5), 1064–1073 (1993). http://dx.doi.org/10.1364/JOSAA.10.001064

13. 

D. L. Fried, "Limiting resolution looking down through the atmosphere," J. Opt. Soc. Am., 56(10), 1380–1384 (1966). http://dx.doi.org/10.1364/JOSA.56.001380

14. 

J. C. Wyngaard, Y. Izumi, and S. A. Collins, "Behavior of the refractive-index-structure parameter near the ground," J. Opt. Soc. Am., 61(12), 1646–1650 (1971). http://dx.doi.org/10.1364/JOSA.61.001646

15. 

J. E. Dennis and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations (Classics in Applied Mathematics 16), Society for Industrial and Applied Mathematics, Philadelphia (1996).

16. 

D. L. Fried, "Anisoplanatism in adaptive optics," J. Opt. Soc. Am., 72(1), 52–61 (1982). http://dx.doi.org/10.1364/JOSA.72.000052

17. 

J. P. Bos and M. C. Roggemann, "Technique for simulating anisoplanatic image formation over long horizontal paths," Opt. Eng., 51(10), 101704 (2012). http://dx.doi.org/10.1117/1.OE.51.10.101704

18. 

G. E. Archer, J. P. Bos, and M. C. Roggemann, "Mean squared error performance of MFBD nonlinear scene reconstruction using speckle imaging in horizontal imaging applications," Proc. SPIE, 8399, 83990Q (2012). http://dx.doi.org/10.1117/12.920801

19. 

A. V. Sergeyev, M. C. Roggemann, and C. Demars, "Near the ground laser communication system: anisoplantic studies based on the PSF measurements," (2011). http://dx.doi.org/10.1117/12.882576

20. 

P. A. Billings, M. R. Stribling, and B. Stribling, "Mitigating turbulence-induced image blur using multi-frame blind deconvolution," in AMOS Technical Conf., 506–512 (2001).

Biography

Glen Archer received his PhD in electrical engineering from Michigan Technological University in December of 2013. He received his BS in electrical engineering from Texas Tech University in 1986 and received a commission as a second lieutenant in the United States Air Force. In 2001, he retired from the Air Force to take a position in the Electrical and Computer Engineering Department at Michigan Tech, where he currently serves as a principal lecturer and the associate chair. His research interests include image processing and engineering education.

Jeremy P. Bos received his PhD in electrical and computer engineering from Michigan Technological University in August of 2012. Before returning to pursue his doctorate at Michigan Tech, he worked in the automotive and defense industries for nearly 10 years. During this time, he became a professional engineer (2006) and earned his MS in electrical engineering from Villanova University (2003). He received his BS in electrical engineering from Michigan Technological University in 2000. He is currently a postdoctoral fellow at the Air Force Research Laboratory under the National Research Council’s Research Associateship Program. His research interests are in the areas of atmospheric optics, image processing, and machine intelligence.

Michael C. Roggemann is a professor of electrical engineering at Michigan Tech. He is coauthor of the book "Imaging Through Turbulence" and has authored or coauthored over 60 journal articles and over 50 conference papers. Roggemann is a member of the IEEE and a fellow of both the Optical Society of America and SPIE. He was briefly affiliated with Boeing Corporation, where he served as a senior research scientist from 2002 to 2005 and was a technical fellow. Roggemann was an associate professor of engineering physics at the Air Force Institute of Technology, Wright-Patterson AFB in Ohio, from 1992 to 1997. He is an honorably retired Air Force officer at the rank of major.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Glen E. Archer, Jeremy P. Bos, and Michael C. Roggemann, "Comparison of bispectrum, multiframe blind deconvolution and hybrid bispectrum-multiframe blind deconvolution image reconstruction techniques for anisoplanatic, long horizontal-path imaging," Optical Engineering 53(4), 043109 (29 April 2014). https://doi.org/10.1117/1.OE.53.4.043109
Published: 29 April 2014
KEYWORDS: Turbulence, Image processing, Photons, Deconvolution, Image restoration, Optical engineering, Imaging systems
