The goal of this article is to use a parameterized, multiframe blind deconvolution (MFBD) technique to reconstruct an object estimate from a set of simulated anisoplanatic images, and to examine the mean square error (MSE) performance of the estimator as the parameters are varied. This article examines estimator performance as the number of frames used in the estimation is varied, and as the number of Zernike polynomial coefficients used to characterize the phase term of the point spread function is varied, under the assumption of both Gaussian and Poisson noise distributions.
Every optical system using light that has propagated any appreciable distance through the atmosphere will suffer, to some degree, from turbulence-induced phase aberrations. In addition to phase errors at the aperture, light propagating over longer distances or through stronger turbulence will cause images to suffer from anisoplanatic effects, and possibly scintillation as well. Often the image blur induced by these phase aberrations is the limiting factor in the ability to recognize details of objects in the scene.
Under isoplanatic conditions, the light coming from all points in the scene can be assumed to experience similar turbulence-induced aberrations. The isoplanatic angle is the angular separation between point sources at which the phase perturbations at the aperture can be considered significantly decorrelated. However, in many near-surface surveillance imaging scenarios, it is reasonable to assume that the field of view of the imaging system will subtend an angle wide enough that this assumption is not valid. In this case, we describe the viewing as anisoplanatic. The longer the optical path and the stronger the turbulence, the more severe these aberrations become, and the smaller the isoplanatic angle. Increasing the size of the aperture will not improve the quality of the image under anisoplanatic conditions. Unless the seeing conditions are very favorable, anisoplanatism will play a role in most practical horizontal imaging situations, so a technique for reducing its effects is desired.
A variety of techniques have been devised to correct the turbulence-induced phase aberrations in vertical imaging applications. Adaptive optics strategies using wave front sensors to control deformable mirrors have been used in celestial observation systems for many years.1 Techniques that exploit deliberately imposed, known phase diversity2–4 have also been used with some success. Paxman and Schulz explored this problem by creating phase diversity across multiple speckle images. This technique uses two simultaneous measurements: an in-focus image and a second image with a known degree of defocus applied.5 The technique is limited to fields of view that do not appreciably exceed the isoplanatic angle existing at the moment the image is captured, requires substantial hardware, and divides photons between two detectors. Post-detection processing of wide field-of-view images captured with short exposure times is another alternative. Fraser et al. described a technique for point-by-point registration of anisoplanatic speckle images to reduce motion blur and prepare the images for other deconvolution strategies.6 Ayers and Dainty pioneered the application of an iterative blind deconvolution technique to a single image degraded by atmospheric turbulence.7 Their method relied on the enforcement of positivity and finite-support constraints on the object estimate.
Schulz extended that method to include multiple input images and developed a penalized-maximum-likelihood algorithm to avoid the trivial solution that incorrectly concludes that the optical system's point spread function (PSF) is a Dirac delta function and that the most likely object estimate is the observed turbulent image.8 Hybrid hardware-software strategies offer the potential to produce on-the-fly estimates of scenes, but require substantial investment in both hardware and software to produce results.9 Bos and Roggemann10 have reported the use of software reconstruction techniques using the bi-spectrum method in nearly real time. The application of these strategies to surveillance imaging is largely unexplored.
This article describes a method of jointly estimating object intensities and imaging system PSFs from simulated anisoplanatic images that have been corrupted by atmospheric turbulence. The image model that forms the foundation of this estimator is that of a linear shift-invariant PSF and a deterministic object. It is conjectured that the anisoplanatic effects of the turbulent atmosphere are compensated for by the estimator through the reconstruction of a spatially averaged PSF. Bos's11 work using cross-spectrum and bi-spectrum phase reconstructions points to this potential solution. Carrano12 has also published work in this area that neglects anisoplanatic effects. This investigation will be the subject of another article. The method developed here is applied to three sets of images with varying levels of turbulence, and its effectiveness is assessed by calculating the MSE between the recovered object and the diffraction-limited image.
We find that the MFBD-reconstructed objects show significant improvement in MSE compared to the average MSE between all the images in a data set and the associated diffraction-limited image. The improvement in MSE was 40% for the low turbulence case, 25% for the moderate turbulence case, and 36% for the severe turbulence case. We also provide an estimate of the optimum number of images and Zernike coefficients to use in future work with MFBD reconstructions.
The remainder of this article is organized as follows. In Sec. 2, we discuss the horizontal imaging problem and briefly describe the simulation that produced the data sets used in the study. In Sec. 3, the object recovery method for the Gaussian case is described, followed by the Poisson case. In Sec. 4, the results of both the Gaussian and Poisson case reconstructions are presented. Finally, some thoughts on processing and conclusions regarding the technique are provided in Sec. 5.
We now describe the MFBD algorithm for the Gaussian and Poisson noise models. In MFBD, the input is a set of measured noisy and turbulence-corrupted images. In a stack of turbulence-corrupted but measurement-noise-free images, the $k$'th image $i_k(x)$ can be described as the convolution of an unchanging object $o(x)$ with the PSF of the optical system $h_k(x)$. Mathematically, this can be expressed as13
$$ i_k(x) = o(x) * h_k(x). $$
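As a minimal sketch of this image-formation model (assuming circular FFT-based convolution; the function and variable names are illustrative, not from the article):

```python
import numpy as np

def form_frame(obj, psf):
    # Noise-free frame k: circular convolution of the fixed object with
    # that frame's PSF, computed in the Fourier domain.
    return np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)).real

rng = np.random.default_rng(0)
obj = rng.random((32, 32))
delta = np.zeros((32, 32))
delta[0, 0] = 1.0            # identity PSF under circular convolution
frame = form_frame(obj, delta)
```

Convolving with a unit impulse returns the object unchanged, which is a convenient sanity check for any implementation of this model.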
The PSF $h_k(x)$ is the modulus squared of the coherent impulse response $a_k(x)$, which is in turn the inverse Fourier transform of the generalized pupil function. Mathematically, these relationships are given by14
$$ h_k(x) = |a_k(x)|^2, \qquad a_k(x) = \mathcal{F}^{-1}\{P_k(u)\}. $$
We will assume that the simulated images are a series of short-exposure images in which the object in the scene remains constant, but the phases associated with each PSF are random in each image frame in the stack. This lets us express the generalized pupil function as a function of both the spatial frequency $u$ and the vector of Zernike coefficients $\boldsymbol{\alpha}_k$, i.e., $P_k(u) = P(u, \boldsymbol{\alpha}_k)$.
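The pupil-to-PSF relationship above can be sketched numerically as follows (a hedged illustration: the circular aperture, grid size, and the quadratic defocus-like phase standing in for a Zernike term are all assumptions, not the article's parameters):

```python
import numpy as np

def psf_from_pupil(aperture, phase):
    # Generalized pupil function P(u) = A(u) exp(j*phi(u)).
    pupil = aperture * np.exp(1j * phase)
    # Coherent impulse response: inverse Fourier transform of the pupil.
    a = np.fft.ifft2(np.fft.ifftshift(pupil))
    # Incoherent PSF: squared modulus, normalized to unit volume.
    psf = np.abs(a) ** 2
    return psf / psf.sum()

n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r = np.hypot(x, y) / (n // 4)
aperture = (r <= 1.0).astype(float)
# Illustrative aberration: Zernike defocus (Z4 ~ 2r^2 - 1) with a 2-rad coefficient.
phase = 2.0 * (2 * r ** 2 - 1) * aperture
psf = psf_from_pupil(aperture, phase)
```

The normalization step makes each PSF a valid blur kernel regardless of the aberration strength.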
In nonblind deconvolution problems, the collected data $i_k$ are used with a known PSF to determine $o$. In blind deconvolution, we are given only $i_k$ and use that information to estimate both the object and the PSF jointly. There is no closed-form solution to the problem of jointly estimating an object and the aberration parameters for each image frame. Hence, an iterative approach is needed to find the object pixel intensities and Zernike coefficients that are most likely to have produced the simulated data for each image. In the next section, we describe two such approaches, one based on a Gaussian noise model and the other based on a Poisson noise model.
It is common to simulate the effects of the turbulent atmosphere by placing layers of uniform turbulence between the object and the imaging system. The data set of 1000 simulated turbulent images used in this article was created using an image common in the literature. Five Kolmogorov phase screens were generated, and the light was propagated over a distance of 1000 m. Light from each object pixel was projected through the phase screens, in turn, at 200-m separations using a geometric-optics approach to account for the effects of anisoplanatism. Phase errors accumulated from each screen are combined at the pupil to create a turbulence-degraded PSF. Each of the PSFs is then scaled by the object pixel intensities to create a turbulence-corrupted image for the low, moderate, and severe turbulence conditions. Parameters for the simulated imaging system include a 10-cm aperture and a detector with a 0.7-mm pixel pitch. A fuller description of the simulator used to create this data set is available in Ref. 11.
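A common way to generate such Kolmogorov phase screens is FFT-based filtering of white noise by the Kolmogorov spectrum. The sketch below is a minimal version of that standard method, not the article's simulator; the grid size, spacing, and Fried parameter are illustrative assumptions:

```python
import numpy as np

def kolmogorov_screen(n, delta, r0, rng):
    # FFT-based Kolmogorov phase screen (radians).
    # n: grid size, delta: grid spacing (m), r0: Fried parameter (m).
    fx = np.fft.fftfreq(n, d=delta)
    f = np.hypot(*np.meshgrid(fx, fx))
    f[0, 0] = np.inf                     # suppress the undefined piston term
    # Kolmogorov phase power spectral density.
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    df = 1.0 / (n * delta)
    # Filter complex white noise by the square root of the PSD.
    c = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    screen = np.fft.ifft2(c * np.sqrt(psd) * df) * n * n
    return screen.real

rng = np.random.default_rng(0)
phz = kolmogorov_screen(256, 0.01, 0.05, rng)   # 5-cm r0, 1-cm grid spacing
```

Note that this basic FFT method under-represents low-order (tilt) content; practical simulators such as that of Ref. 11 add subharmonic corrections.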
For the conditions simulated here, a single pixel in the simulated imaging system subtends 2.79 μrad. Comparing the isoplanatic angle values for the low, moderate, and severe turbulence conditions of the simulation, we see in Table 1 that the isoplanatic patch covers 4, 3, and 2 pixels, respectively, in the simulated imaging system (Fig. 1).
Table 1. Atmospheric simulation turbulence conditions.

|Severity condition|Cn² (m^(−2/3))|r₀, spherical case (cm)|θ₀ (μrad)|θ₀ (pixels)|
Gaussian Noise Model MFBD
Taking the natural log of Eq. (9) makes the analysis more tractable, producing a summation rather than a product; neglecting a constant term yields the log-likelihood function
Although an analytic form of the Hessian is not required, the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) optimization used to maximize the likelihood function in Eq. (10) is more efficient if an analytic form of the gradient is provided. With respect to the pixel intensities, the gradient of the Gaussian log-likelihood function can be represented as
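A numerical sketch of this Gaussian cost and its object-pixel gradient is given below (a hedged illustration assuming circular FFT convolution and a sum-of-squared-residuals form; names are not from the article). The gradient of the squared residual with respect to the object is the correlation of each PSF with that frame's residual:

```python
import numpy as np

def gaussian_cost_grad(o, psfs, data):
    # Negative Gaussian log-likelihood sum_k ||d_k - h_k * o||^2 and its
    # gradient with respect to the object pixels (circular convolution).
    O = np.fft.fft2(o)
    cost = 0.0
    grad = np.zeros_like(o)
    for h, d in zip(psfs, data):
        H = np.fft.fft2(h)
        r = d - np.fft.ifft2(H * O).real                     # frame residual
        cost += np.sum(r ** 2)
        # Correlation of the PSF with the residual, via conj(H) in frequency.
        grad += -2.0 * np.fft.ifft2(np.conj(H) * np.fft.fft2(r)).real
    return cost, grad
```

A finite-difference check against the analytic gradient is a useful way to validate any hand-derived gradient before handing it to an optimizer.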
Poisson Noise Model MFBD
Not all images are taken in full daylight. At low light levels, photon noise may dominate the image frames; this is often characterized by a speckled quality in the images. Photon noise is described by modeling the number of photons detected at each pixel in an image frame as a Poisson random variable with a mean photon count rate proportional to the average pixel intensity. For this simulation, the number of photons detected at each detector pixel is assumed to be an independent, Poisson-distributed random variable with a mean rate given by a noiseless diffraction-limited image. The random nature of the PSF is neglected. The probability of detecting a given number of photons at a specific pixel location is given by
The distribution over the entire set of pixel locations is given by
As before, taking the natural log yields a modified log-likelihood function
With respect to the Zernike coefficients, the gradient of the Poisson log-likelihood function can be represented as
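The article derives the gradient with respect to the Zernike coefficients; as a hedged illustration of the same machinery, the sketch below implements the Poisson negative log-likelihood and its gradient with respect to the object pixels (circular convolution assumed; function names are not from the article):

```python
import numpy as np

def poisson_cost_grad(o, psfs, data, eps=1e-12):
    # Negative Poisson log-likelihood sum_k sum_p [mu_k - d_k ln(mu_k)],
    # with mu_k = h_k * o, and its gradient with respect to the object.
    O = np.fft.fft2(o)
    cost = 0.0
    grad = np.zeros_like(o)
    for h, d in zip(psfs, data):
        H = np.fft.fft2(h)
        mu = np.fft.ifft2(H * O).real.clip(eps)   # mean rate, kept positive
        cost += np.sum(mu - d * np.log(mu))
        # d(cost)/d(mu) = 1 - d/mu, correlated back through the PSF.
        grad += np.fft.ifft2(np.conj(H) * np.fft.fft2(1.0 - d / mu)).real
    return cost, grad
```

The `clip` guard is a practical detail: the log-likelihood is undefined where the modeled mean rate reaches zero.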
Our simulations assume that propagation occurs over horizontally homogeneous conditions, with both the object and the imaging system immersed in a turbulent atmosphere. Furthermore, we assume that the height above ground does not vary significantly and is constant over the propagation path.15 We assume that the simulated data have effectively frozen the turbulence at the moment each turbulent image is created. Before the simulated turbulent images are applied to the reconstruction algorithm, they must be recentered, as tilt is not estimated in the MFBD algorithm. This was accomplished by using a correlation filter to compare each image in the stack to the ensemble average image and then shifting the turbulent image to recenter it. To reduce the aliasing associated with the finite support, each frame of the data set was preprocessed to pad the centered image by replicating the edges of the image outward and then adding a border of zeros. The abrupt transitions artificially introduced by the padding process can produce high spatial frequency components, which are sometimes mitigated by the application of spatial filters. Using 15 frames in the reconstruction, the image stack was padded and a Tukey16 tapered filter was then applied to the images. Both the tapered and untapered images were applied to the estimator, and the elapsed processing time and MSE of the reconstructed object were determined with the estimator limited to 20 iterations. The amount of padding for subsequent processing was determined by examining the effect on the processing time and the MSE as the amount of padding was varied. All subsequent processing was accomplished by padding each recentered turbulent image without tapering it. The images are applied to the estimator with eight replicated pixels followed by five zero-valued pixels at the margins of each image, bringing each image to the total size seen in Fig. 2. These results are summarized in Table 2.
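The recentering and padding steps above can be sketched as follows (a hedged illustration: integer-pixel registration via the FFT cross-correlation peak, with the eight-replicated plus five-zero border described above; the article's correlation filter may differ in detail):

```python
import numpy as np

def recenter(frame, reference):
    # Align a frame to a reference using the peak of the FFT-based
    # cross-correlation (integer-pixel shift only; tilt removal).
    xc = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    return np.roll(frame, (dy, dx), axis=(0, 1))

def pad_frame(img, n_rep=8, n_zero=5):
    # Replicate the image edges outward by n_rep pixels, then add a
    # border of n_zero zeros, per the padding choice summarized in Table 2.
    out = np.pad(img, n_rep, mode="edge")
    return np.pad(out, n_zero, mode="constant")

rng = np.random.default_rng(3)
ref = rng.random((32, 32))
shifted = np.roll(ref, (3, -2), axis=(0, 1))   # a purely shifted copy
aligned = recenter(shifted, ref)
padded = pad_frame(ref)
```

In practice the reference would be the ensemble average image rather than a single frame.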
Table 2. Selection of image padding.

|Padding (pixels: repeat + zeros)|MSE (w/o filter)|Time (s) (w/o filter)|MSE (w/ filter)|Time (s) (w/ filter)|Total image size N×N|
The cost functions in Eqs. (10) and (15) are parameterized by the object pixel intensities and aberration coefficients and are passed to a nonlinear optimization MATLAB routine to find the object and aberration coefficients most likely to have produced the images simulated in the data set. The intensities at each pixel location in each image are vectorized, and the vectorized initial guesses for each image's Zernike polynomial coefficients are appended to the end of the vector of image intensities, formatted as shown in Table 3. We jointly process all images and all Zernike coefficients; thus, for a data set of $K$ images of $N \times N$ pixels, using $Z$ Zernike polynomial terms, there will be $N^2 + KZ$ parameters that must be jointly estimated. The optimization routine returns a vector of the reconstructed object's intensities followed by the estimate of the Zernike coefficients for each frame of the input stack, as shown in Table 4. Optimization over such a large parameter space is impractical using conventional optimization techniques. To make the optimization tractable, we use the L-BFGS method to process the images. L-BFGS is a quasi-Newtonian "hill-climbing" technique that begins with an initial guess at a solution and then proceeds along a line in the direction indicated by the gradient of the objective function evaluated at each pixel location. One of the drawbacks to searching along the gradient is the need for the Hessian to prevent the estimate from hopping back and forth across the sides of the valley. The limited-memory form of BFGS does not require an explicit expression for the Hessian; it estimates the Hessian matrix by maintaining the last few updates of the solution and gradient vectors. A quasi-Newtonian line-search optimization can quickly converge to a local minimum of the cost function, but there is no guarantee that the minimum is a global minimum. As the initial object estimate, the average of all the frames used in the trial was applied to the estimator.
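The parameter packing of Tables 3 and 4 and the L-BFGS call can be sketched as below. This is a hedged stand-in: the article uses a MATLAB routine, whereas this sketch uses SciPy's L-BFGS-B with a toy quadratic cost in place of Eqs. (10) and (15); grid and frame counts are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def pack(obj, zern):
    # Table-3-style parameter vector: object pixels first, then the
    # Zernike coefficients for every frame appended at the end.
    return np.concatenate([obj.ravel(), zern.ravel()])

def unpack(theta, shape, n_frames, n_zern):
    # Table-4-style split of the returned estimate.
    n_pix = shape[0] * shape[1]
    return theta[:n_pix].reshape(shape), theta[n_pix:].reshape(n_frames, n_zern)

shape, k_frames, n_zern = (4, 4), 3, 5

def cost(theta):
    # Stand-in quadratic cost with minimum at zero, returning (value, gradient)
    # so the optimizer gets the analytic gradient, as in the article.
    return 0.5 * np.dot(theta, theta), theta

theta0 = pack(np.ones(shape), np.zeros((k_frames, n_zern)))
res = minimize(cost, theta0, method="L-BFGS-B", jac=True,
               options={"maxiter": 25})        # iteration cap, as in Sec. 3
obj_est, zern_est = unpack(res.x, shape, k_frames, n_zern)
```

In the real problem, `cost` would be the MFBD likelihood over all frames, and `theta0` the frame average plus the random initial Zernike guesses.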
Table 3. Input object and Zernike coefficients. The current stack of images being processed is spatially averaged and stripped into a vector, with the intensities at the beginning and the initial guess at each image's Zernike coefficients at the end.
|Estimator input parameters|
Table 4. Vectorized reconstructed object and Zernike coefficients. The estimate is returned as a vector with the estimated object pixel intensities at the beginning and the estimate of each input image's Zernike coefficients at the end.
|Output object estimate and Zernike coefficient vector|
It is necessary to provide the estimator with a stopping criterion. Using the low-condition data set with the number of frames set to 5, 15, and 35, the log-likelihood function value was monitored during reconstruction. The results are shown in Fig. 3. Regardless of the number of frames used in the reconstruction, and assuming one call to the likelihood function per iteration, the reconstruction is essentially complete after 10 iterations. For the image reconstruction processing in this article, the number of iterations was limited to 25.
Reconstruction processing begins by selecting images from the complete data set in groups of , 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 23, and 25 frames and then incrementing that group size through the entire data set. At each increment an initial guess at the object and phase parameters is provided to the optimization routine. This initial guess is always the average of the frames being used in the estimate. The Zernike coefficients provided as an initial guess are random Gaussian numbers with a mean of 0.5 and unity variance.
The recovered image was compared to the diffraction-limited image and the MSE was determined. The MSE is averaged over all pixels and computed as follows:
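This pixel-averaged MSE metric is straightforward to implement; a minimal sketch (names are illustrative):

```python
import numpy as np

def mse(estimate, reference):
    # Pixel-averaged mean square error between the reconstructed object
    # and the diffraction-limited reference image.
    e = np.asarray(estimate, dtype=float)
    r = np.asarray(reference, dtype=float)
    return np.mean((e - r) ** 2)
```

The same function serves to score both raw turbulent frames and reconstructed objects against the diffraction-limited image, which is how the baseline "average image MSE" figures quoted below are obtained.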
Number of Zernike Terms Needed in the Optimization Process
Recovering a common object estimate from the stack of degraded images is computationally intense regardless of the method used. Using more Zernike polynomial terms requires more variables to be estimated, and longer processing results. Figure 4(a) shows the processing time required for a fixed number of frames as the number of Zernike polynomial terms is varied. Figure 4(b) shows how the processing time varies for a fixed number of Zernike coefficients as the number of input images is varied. Of greater impact on processing time is the number of images used to recover the object: for a set of $K$ images, the number of variables to be estimated grows linearly with $K$. Previous work indicated that 15 images and 35 Zernike terms would provide a good estimate of the object.17 Further exploration over a larger data set yielded similar results and additional insight into the estimator's performance. With the number of images set to 50 in order to reduce the influence of the number of images on the outcome, the number of Zernike coefficients was varied from 10 to 100 terms for all three turbulence conditions, as shown in Fig. 5. For all three turbulence cases, additional terms beyond 60 do not significantly improve the MSE.
Gaussian Noise Model
Case 1 low condition
Examining Fig. 6, we see that once a small number of frames is included, the estimator can be expected to perform better than the average MSE for the simulated images, which was 673. The marginal improvement in MSE then declines, reaching a maximum improvement of approximately 40% over the average MSE of the entire data set. However, if processing time is not of consequence, the MSE and its standard deviation continue to improve as additional images are added. Based on the results of the Zernike term evaluation shown in Fig. 5, 60 Zernike coefficients were used to characterize the PSFs, and the results are compared to the estimator's performance using 30 Zernike terms in Fig. 6. The use of additional Zernike terms does not add as much processing time as using additional frames, but each reconstruction takes longer as additional Zernike terms are used to characterize the PSFs. The incremental improvement in MSE is not worth the additional time consumed. The diffraction-limited image is compared to a sample from the simulated turbulent image data set and a sample reconstructed object in Fig. 7.
Case 2 moderate condition
Figure 8 shows (a) the diffraction-limited image, (b) a sample recorded image, and (c) a sample reconstructed object. We see in Fig. 9 that the MFBD estimator will consistently perform, on average, better than the average image's error as soon as the size of the processing window reaches two frames; from that point on, the estimator can be expected to perform better than the average MSE of the simulated images. The improvement in MSE available by including additional input frames reaches a maximum of approximately 25% of full scale, and neither the MSE nor the standard deviation improves significantly as additional input images are added to the stack. As a consequence of the results of the Zernike term sweeps discussed above, the estimator was run using 60 Zernike coefficients to characterize the PSFs. The results are compared in Fig. 9. The use of additional Zernike terms does not incur as large a computational penalty as that associated with adding frames, but each reconstruction takes longer. The incremental improvement in MSE is not worth the additional processing time.
Case 3 severe condition
Again we see in Fig. 10 that the estimator requires at least two input frames to reliably produce an estimate of the object with a lower MSE than 1165, the average simulated image's MSE; from that point on, the estimator can be expected to perform better than the average MSE of the recorded images. The improvement in MSE available by including additional input frames reaches a maximum of approximately 36% of full scale, and neither the MSE nor the standard deviation improves significantly as additional input images are added to the processing stack. Figure 11 shows (a) the diffraction-limited image, (b) a sample recorded image, and (c) a sample reconstructed object. Based on the results of the Zernike term sweeps discussed above, the estimator was run using 60 Zernike coefficients to characterize the PSFs. The results are compared in Fig. 10. The use of additional Zernike terms does not incur as large a computational penalty as that associated with adding frames, but each reconstruction takes longer. As shown, the incremental improvement in MSE is not worth the additional processing time.
Poisson Noise Model Mean Photon Rate
Each set of 1000 turbulent images representing the three turbulence cases was used to generate a set of speckle images with a given mean photon count per image. Each set was processed using the MFBD methods described above, with the cost function and gradient given in Eqs. (15)–(17).
Case 1 low condition
Examining Fig. 12, we see that on average, MFBD performance is worse than that of the input images until 50 input frames are used in each reconstruction. From 50 frames on, the estimator can be expected to produce an estimate with an MSE lower than 2095, the average MSE of the images in the simulated data set. Marginal improvement in MSE continues as additional frames are added to the image stack, reaching a maximum of about 38% improvement over the average MSE across the data set. However, if processing time is not of consequence, the MSE and its standard deviation continue to improve as additional images are added to the stack presented to the estimator, so additional improvement in image quality is available. The diffraction-limited image is compared to a sample simulated turbulent image and a sample reconstructed object in Fig. 13.
Case 2 moderate condition
Figure 14 shows (a) the diffraction-limited image, (b) a sample recorded image, and (c) a sample reconstructed object using 175 frames to estimate the object. Again we see in Fig. 15 that the MFBD estimator will not, on average, perform any better than 2285, the average simulated turbulent image error, until the number of images processed reaches 50 frames. From that point on, the estimator can be expected to perform better than the average MSE of the simulated images, reaching a maximum of about 34% improvement. The marginal improvement in MSE available by including additional input frames eventually declines, and neither the MSE nor the standard deviation improves significantly as additional input images are processed.
Case 3 severe condition
Again we see in Fig. 16 that the MFBD estimator will not, on average, perform any better than the average image error until the number of images offered to the estimator reaches 50 frames. From that point on, the estimator can be expected to perform better than the average MSE (2456) of the simulated images, reaching a maximum improvement of about 33%. The marginal improvement in MSE available by including additional input frames eventually declines, and neither the MSE nor the standard deviation improves significantly as additional input images are added to the stack. Figure 17 shows (a) the diffraction-limited image, (b) a sample recorded image, and (c) a sample reconstructed object.
The performance of an unconstrained optimization-based MFBD estimator was evaluated in terms of the MSE between the reconstructed object and a diffraction-limited image. Three 1000-image data sets of a single image distorted by low, moderate, and severe turbulence were generated using a horizontal imaging simulator that includes anisoplanatic effects. The data sets were then applied to the estimator, and its MSE performance was evaluated. If a hardware implementation were to be produced with a fixed or limited set of operator options, a wide variety of turbulence cases would be well served by a selection of 14 images and 30 polynomial terms for use with the estimator. Point performance estimates, using a data set of 1000 simulated turbulence-corrupted images, indicate that the algorithm is capable of producing 40%, 25%, and 36% improvements in MSE for the low, moderate, and severe anisoplanatic turbulence cases, respectively, under the assumption that the noise can be characterized by a Gaussian distribution. For all simulated turbulence cases, significant reductions were observed with as few as two input images. For the Poisson case, significant results were achieved with as few as 50 frames, but 175 frames would be a reasonable design point for a system intended to cope with a variety of atmospheric turbulence conditions and light levels. For further research, it may be possible to speed up the reconstruction by providing a better initial guess at the object. Simulated annealing techniques could be used to perturb the estimate away from a local minimum and may prove to be an effective answer to local-minimum trapping.
B. J. Thelen et al., "Maximum a posteriori estimation of fixed aberrations, dynamic aberrations, and the object from phase-diverse speckle data," J. Opt. Soc. Am. A 16(5), 1016–1025 (1999). http://dx.doi.org/10.1364/JOSAA.16.001016
B. J. Thelen et al., "Overcoming turbulence-induced space-variant blur by using phase-diverse speckle," J. Opt. Soc. Am. A 26(1), 206–218 (2009). http://dx.doi.org/10.1364/JOSAA.26.000206
R. G. Paxman, T. J. Schulz, and J. R. Fienup, "Joint estimation of object and aberrations by using phase diversity," J. Opt. Soc. Am. A 9, 1072–1085 (1992). http://dx.doi.org/10.1364/JOSAA.9.001072
D. Fraser, G. Thorpe, and A. Lambert, "Atmospheric turbulence visualization with wide-area motion-blur restoration," J. Opt. Soc. Am. A 16(7), 1751–1758 (1999). http://dx.doi.org/10.1364/JOSAA.16.001751
M. A. Vorontsov and G. W. Carhart, "Anisoplanatic imaging through turbulent media: image recovery by local information fusion from a set of short-exposure images," J. Opt. Soc. Am. A 18(6), 1312–1324 (2001). http://dx.doi.org/10.1364/JOSAA.18.001312
J. P. Bos and M. C. Roggemann, "Mean squared error performance of speckle-imaging using the bispectrum in horizontal imaging applications," Proc. SPIE 8056, 805603 (2011). http://dx.doi.org/10.1117/12.884093
J. P. Bos and M. C. Roggemann, "Technique for simulating anisoplanatic image formation over long horizontal paths," Opt. Eng. 51(10), 101704 (2012). http://dx.doi.org/10.1117/1.OE.51.10.101704
J. W. Goodman, Introduction to Fourier Optics, McGraw Hill, Boston, Massachusetts (1996).
J. C. Wyngaard et al., "Behavior of the refractive-index-structure parameter near the ground," J. Opt. Soc. Am. 61(12), 1646–1650 (1971). http://dx.doi.org/10.1364/JOSA.61.001646
Glen E. Archer is pursuing a PhD in electrical engineering from Michigan Technological University. He received his BS in electrical engineering from Texas Tech University in 1986 and received a commission in the United States Air Force. In 2001, he retired from the Air Force to take a position in the Electrical and Computer Engineering Department at Michigan Tech, where he currently serves as a principal lecturer and the associate chair. His research interests include image processing and engineering education.
Jeremy P. Bos received his PhD in electrical and computer engineering from Michigan Technological University in August 2012. Before returning to pursue his doctorate at Michigan Tech, he worked in the automotive and defense industries for nearly 10 years. During this time, he became a professional engineer (2006) and earned his MS in electrical engineering from Villanova University (2003). He received his BS in electrical engineering from Michigan Technological University in 2000. He is currently a postdoctoral fellow at the Air Force Research Laboratory under the National Research Council’s Research Associateship Program. His research interests are in the areas of atmospheric optics, image processing and machine intelligence.
Michael C. Roggemann is a professor of electrical engineering at Michigan Tech. He is coauthor of the book “Imaging Through Turbulence” and has authored or coauthored over 60 journal articles and over 50 conference papers. He is a member of the IEEE, and is a fellow of both the Optical Society of America and SPIE. He was also briefly affiliated with the Boeing Corporation, where he served as a senior research scientist from 2002 to 2005; he was also a technical fellow at Boeing. He was an associate professor of engineering physics at the Air Force Institute of Technology, Wright-Patterson AFB, in Ohio from 1992 to 1997. He is an honorably retired Air Force Officer at the rank of major.