Open Access
20 September 2013
Image deblurring and near-real-time atmospheric seeing estimation through the employment of convergence of variance
Brian J. Neff, Quentin D. MacManus, Stephen C. Cain, Richard K. Martin
Abstract
A new image reconstruction algorithm is presented that removes the effect of atmospheric turbulence from motion compensated frame average images. The primary focus of this research was to develop a blind deconvolution technique that could be employed in a tactical military environment where both time and computational power are limited. Additionally, the technique can be used to measure atmospheric seeing conditions. In a blind deconvolution fashion, the algorithm simultaneously computes a high resolution image and an average model for the atmospheric blur parameterized by Fried's seeing parameter. Unlike previous approaches, it does not assume a prior distribution for the seeing parameter; rather, it uses the convergence of the image's variance both as the stopping criterion and as the means of identifying the proper seeing parameter from a range of candidate values. Experimental results show that the convergence of variance technique allows the seeing parameter to be estimated to within 0.5 cm, and often better, depending on the signal to noise ratio.

1. Introduction

Both military and civilian applications have driven a significant amount of research on enhancing the quality of images received from optical sensors degraded by the effects of diffraction. When imaging through atmospheric turbulence, the representation of the remote scene is degraded by the diffraction of light from neighboring areas. Various techniques to illuminate and capture the remote scene can be employed by a sensor, and the chosen technique is often driven by the intended application of the sensor as shown by the following examples. In this article, we focus on applications associated with military targeting sensors.

Current state-of-the-art military targeting sensors typically detect light in the infrared spectrum. This allows for numerous advantages, such as easier separation of man-made versus natural targets as well as the ability to obtain high quality images in low light conditions. Another class of sensor known as a LAser Detection And Ranging (LADAR) system captures a target’s reflection from a laser illumination source. This class of sensor is receiving significant attention as improvements to the technology are made. In addition to the ability to image in low light environments, some classes of LADAR sensors also allow for the collection of three dimensional (3-D) images. Regardless of the sensor type, there are substantial benefits to image deblurring in military applications. Benefits include minimizing the risk of misidentifying a target and perhaps accidentally inflicting damage to nonhostile targets. Ultimately, the algorithm employed to remove atmospheric turbulence effects may depend on the type of sensor since algorithm design is commonly based on statistical models of the received data in addition to processing time limitations.

Due to various physical phenomena, images collected by each class of sensor may have differing noise distributions. For instance, many traditional imaging sensors collect images from incoherent illumination sources where the intensity closely follows a Poisson distribution due to the random photon arrival rate at the detector. The images received by a LADAR sensor often have more of a negative binomial distribution due to the constructive and destructive interference associated with the highly coherent illumination source.1 In some cases, the electronics used to capture an image may drive the received images to have more of a normal distribution. Regardless of the class of sensor and associated noise distribution, a common technique for improving the signal to noise ratio (SNR) of the received image is the process of multiple frame averaging.2

Many imaging systems currently employed or in development have high enough frame rates to capture and produce multiple frame image ensembles consisting of 10 to perhaps hundreds of images over a short period of time. With proper registration, the summation of these images will tend to average out the effects of noise and thus improve the overall SNR. This technique is commonly required in situations where the number of photons received from a target is low, as in the case of imaging stellar targets or remote scenes at long slant ranges through the atmosphere. Assuming proper registration, the imaging sensor can commonly be described as shift invariant. This allows us to model the received image as the convolution between the remote scene and the point spread function (PSF). The frequency representation of the PSF in an average sense is the product of the diffraction limited optical transfer function (OTF) and the atmospheric blur.2 Finally, when using motion compensated frame average (MCFA) images comprised of the summation of numerous short-exposure frames, the atmospheric blur can accurately be described using the short-exposure OTF.3 A key advantage associated with the employment of the average short-exposure model is that the entire blurring function is reduced to a single unknown parameter (Fried’s seeing parameter—r0) which this research will focus on accurately identifying.
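To make this image-formation model concrete, the following sketch in Python/NumPy (our own illustration, not code from the paper; simulate_mcfa and its arguments are hypothetical names) forms an MCFA as the sum of registered short-exposure frames, each produced by applying a total OTF in frequency space and drawing Poisson-distributed photon counts.

```python
import numpy as np

def simulate_mcfa(scene, otf_total, n_frames, seed=0):
    """Simulate a motion compensated frame average (MCFA): each registered
    short-exposure frame is the scene blurred by the total OTF with an
    independent realization of Poisson noise, and the frames are summed."""
    rng = np.random.default_rng(seed)
    # Blur: convolution of scene and PSF, i.e., a product in frequency space
    mean_image = np.fft.ifft2(np.fft.fft2(scene) * otf_total).real
    mean_image = np.clip(mean_image, 0.0, None)   # photon rates are non-negative
    frames = rng.poisson(mean_image, size=(n_frames,) + scene.shape)
    return frames.sum(axis=0)
```

Under proper registration, summing the frames raises the SNR while the average blur remains described by the single-parameter short-exposure model.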

Current military targeting sensors employed in a tactical environment do not account for the effects of atmospheric blur. Reasons for this include the fact that blind deconvolution algorithms are often computationally expensive and time intensive. Additionally, adaptive optics are often too complex to employ in a highly dynamic environment. Deconvolution algorithms such as the Richardson–Lucy (RL) deconvolution algorithm where the PSF is known are generally accepted to perform better than blind-deconvolution algorithms where the PSF is unknown. However, in a tactical environment where the targeted scene and imaging sensor position are continually changing, it is practically impossible to have prior knowledge of the PSF.

Considerable work has been accomplished in blind deconvolution for incoherently illuminated remote scenes.4–11 The sheer volume of this research supports its importance for a variety of image processing disciplines. The problem we are attempting to solve was previously addressed by MacDonald.2,12 This work by MacDonald focused on undoing the distorting effects of atmospheric turbulence in short exposure, partially coherent imaging of remote scenes at long slant ranges. Key contributions of the algorithm originally presented by MacDonald are as follows. A method was developed for estimating both an enhanced remote scene as well as an average OTF parameterized by the atmospheric seeing parameter where the noise was dominated by the constructive and destructive interference caused by the partially coherent illumination source. Although the effort was originally focused on providing an enhanced image, an equally important contribution was the ability to estimate Fried's seeing parameter where scintillometry or other measurement techniques were unavailable or impractical. Finally, the algorithm was tailored toward sensors employed in a tactical environment where both processing capability and available time are limited. For these reasons, we will compare our results to MacDonald's work, as his work represents the only prior attempt to solve the problem of parameterized blind deconvolution to recover atmospheric seeing.

The organization of this article is as follows. A brief description of the system model, including noise and atmospheric turbulence effects on the received image, is provided in Sec. 2. In Sec. 3, we discuss various techniques for removing atmospheric blur, including the maximum a posteriori (MAP) estimator developed for blind estimation of r0.12 Additionally, a novel technique is presented which improves upon the ability to blindly estimate r0. A performance comparison of the two blind algorithms on both fully and partially illuminated simulated scenes distorted by various noise distributions is provided in Sec. 4. Finally, the results are validated with experimental data as shown in Sec. 5.

2. Background

Due to numerous physical phenomena, the images captured by a sensor intended to represent a remote scene have imperfections. First, we know that optical imperfections with the system can cause the diffraction of light to neighboring areas. When operating a sensor within the atmosphere, optical imperfections can commonly be binned into those directly related to the manufacturing of the sensor and those that result from atmospheric turbulence. Second, the received image is generally further degraded by the effects of noise. This noise can also result from numerous sources such as read out noise, thermal noise, and noise associated with the illumination source.

The total PSF, or spatial impulse response, of an optical sensor accounts for the diffraction effects directly attributed to the sensor optics, hopt, and those attributed to atmospheric turbulence, hatm. This total PSF, htot, is the convolution of the two primary components, as shown in Eq. (1) in one dimension (1-D), where x is the spatial position or image pixel location.

Eq. (1)

$h_{tot}(x) = h_{opt}(x) * h_{atm}(x)$.

The impulse response of the optics can be computed by conducting a propagation experiment using known sensor parameters as shown in Ref. 1. In frequency space, the total OTF, Htot, the optical OTF, Hopt, and atmospheric OTF, Hatm, are simply the Fourier transforms of their respective PSFs, and the convolution operator is replaced by the multiplication operator

Eq. (2)

$\mathcal{F}[h_{tot}(x)] = H_{tot}(\nu) = H_{opt}(\nu)\, H_{atm}(\nu)$,
where ν is the spatial frequency.

As a general rule of thumb, when collecting images with exposure times of less than 1/100 of a second, we can assume that the atmosphere through which the remote scene is viewed remains constant.3 Due to the ill-posed nature of blind deconvolution with two-dimensional (2-D) images, it is mathematically impossible to directly solve for the PSF impacting collected images when noise is present. Despite this hurdle, numerous algorithms have been developed to circumvent these mathematical challenges with considerable success by making various assumptions or approximations.4–11 One technique for simplifying this problem considers a parameterization of the OTF. When using MCFA compilations of images where each individual image has an exposure time of less than 1/100 of a second, the average short-exposure OTF, $\bar{H}_{SE}$, is reduced to a function of a single unknown parameter, Fried's seeing parameter, r0, as shown in Eq. (3).

Eq. (3)

$\bar{H}_{SE}(\nu) = \exp\left\{-3.44\left(\frac{\bar{\lambda} f \nu}{r_0}\right)^{5/3}\left[1 - \left(\frac{\bar{\lambda} f \nu}{D}\right)^{1/3}\right]\right\}$.

In the mathematical model for $\bar{H}_{SE}$ as a function of spatial frequency, the mean wavelength of light detected is $\bar{\lambda}$, f is the focal length of the lens, and D is the aperture diameter of the sensor. For purposes of this research, we will substitute $\bar{H}_{SE}$ as our model for the atmospheric OTF, Hatm. However, the technique developed in Sec. 3.3 could also be demonstrated to work with the simpler long-exposure case where the image frames are averaged without motion compensation. The average long-exposure OTF, $\bar{H}_{LE}$, is shown in Eq. (4).3

Eq. (4)

$\bar{H}_{LE}(\nu) = \exp\left[-3.44\left(\frac{\bar{\lambda} f \nu}{r_0}\right)^{5/3}\right]$.

Imaging devices exhibit a cutoff frequency dictated by their optical specifications according to Eq. (5). In this diffraction-limited case, the maximum spatial frequency, νmax, can be computed by

Eq. (5)

$\nu_{max} = \frac{D}{\lambda f}$,
where λ is the wavelength of the light of interest. While there are benefits to using long-exposure imaging in certain scenarios such as astrophotography, the loss of frequency content is often an undesired side effect. In a tactical military scenario or any other dynamic environment, it would be impractical if not impossible to point a sensor at a target long enough to warrant the use of long-exposure imaging. Figure 1 compares the frequency response for a diffraction limited sensor to the frequency response considering a long- and short-exposure OTF for two levels of atmospheric seeing. It is clearly evident that, provided the scenario allows, there are inherent benefits with respect to frequency content when using properly registered short-exposure images. The parameters for this demonstration were chosen to match those employed to obtain the experimental results in Sec. 5 and are listed in Table 1.
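As a concrete illustration of Eqs. (3) through (5), the sketch below (our own Python/NumPy example, not code from the paper) evaluates the short- and long-exposure atmospheric OTF models on a frequency grid up to the diffraction-limited cutoff, using the Table 1 parameters.

```python
import numpy as np

def short_exposure_otf(nu, r0, wavelength, f, D):
    """Average short-exposure atmospheric OTF, Eq. (3)."""
    s = wavelength * f * nu
    return np.exp(-3.44 * (s / r0) ** (5.0 / 3.0) * (1.0 - (s / D) ** (1.0 / 3.0)))

def long_exposure_otf(nu, r0, wavelength, f):
    """Average long-exposure atmospheric OTF, Eq. (4)."""
    return np.exp(-3.44 * (wavelength * f * nu / r0) ** (5.0 / 3.0))

# Table 1 parameters: 600 nm mean wavelength, 1.5 m focal length, 5 cm aperture
lam, f, D = 600e-9, 1.5, 0.05
nu_max = D / (lam * f)                 # diffraction-limited cutoff, Eq. (5)
nu = np.linspace(0.0, nu_max, 512)     # spatial frequencies up to the cutoff
H_se = short_exposure_otf(nu, r0=0.02, wavelength=lam, f=f, D=D)   # r0 = 2 cm
H_le = long_exposure_otf(nu, r0=0.02, wavelength=lam, f=f)
```

Multiplying either atmospheric curve by the diffraction-limited OTF of the optics gives the total transfer function of Eq. (2), which is what Fig. 1 plots.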

Fig. 1

(a) Comparison of the frequency response for this sensor given an r0 of 5 cm. As expected, the short-exposure optical transfer function (OTF) is very close to the diffraction limited OTF since r0 is equal to the aperture diameter. However, the long-exposure OTF reveals a significant attenuation in high frequency content. (b) Comparison of the frequency response for this sensor given an r0 of 2 cm. Higher levels of turbulence yield a higher loss in frequency content for the long-exposure scenario and significant attenuation of high frequency content for the short-exposure scenario.


Table 1

Optical system specifications.

Parameter name            Defined value
Mean wavelength (λ̄)       600 nm
Detector array size       582 × 582
Pixel size                8.3 × 8.6 μm²
Sensor focal length (f)   1.5 m
Aperture diameter (D)     5 cm

3. Image Deconvolution

The process of deconvolution is commonly performed on collected images, i, that are degraded by a PSF, h, and additive noise, n. The goal of the process is to recover the true representation of the remote scene, o, shown in Eq. (6).

Eq. (6)

$(o * h) + n = i$.

As previously mentioned, there are a plethora of algorithms designed to aid in this problem. Perhaps the most widely accepted or recognized algorithm for image deconvolution when the collected images follow a Poisson distribution is the RL algorithm.

3.1. Richardson–Lucy Deconvolution Algorithm

One of the key benefits of the RL algorithm

Eq. (7)

$\hat{o}(x,y)_{new} = \hat{o}(x,y)_{old} \sum_{u}\sum_{v} \frac{i(u,v)}{I(u,v|r_0)}\, h(u-x, v-y|r_0)$,
where

Eq. (8)

$I(u,v|r_0) = E[i(u,v)|r_0] = \sum_{x}\sum_{y} \hat{o}(x,y)\, h(u-x, v-y|r_0)$,
for a wide variety of imaging applications in the presence of Poisson statistics, is that if the algorithm converges, it will converge to the maximum likelihood estimate (MLE).13–15 In Eq. (7), (x,y) are the coordinates for the remote scene, (u,v) are the coordinates in the detector plane, $\hat{o}$ is an estimate for o, and I is the expected value of the intensity received at each detector pixel given a specific PSF. However, a commonly cited drawback to this technique is the noise amplification that occurs as the number of iterations increases.9,16 If allowed to iterate indefinitely, the estimate for o would be only noise. Therefore, we must stop iterating before noise amplification occurs. Clearly, this could be accomplished interactively by the user; however, for an automated or blind routine, any required user interaction would be undesirable. The method proposed in this work relies on the convergence of the estimate of the noise power toward the predicted variance of the collected data to cease iterations. Convergence of variance as a criterion for ceasing iterations has been used previously with success;12,17 however, the novelty in this work is that we also use the convergence of variance to identify the best model for the PSF parameterized by r0.
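A minimal sketch of one RL update follows, assuming an FFT-based circular convolution (consistent with the Matlab implementation noted in Sec. 5.1; the helper names are ours, not the authors'):

```python
import numpy as np

def rl_iteration(o_hat, i_img, psf):
    """One Richardson-Lucy update, Eqs. (7) and (8). The PSF is assumed
    centered; FFT-based convolution wraps at the image edges."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    # Eq. (8): expected image given the current object estimate and the PSF
    I_exp = np.fft.ifft2(np.fft.fft2(o_hat) * H).real
    ratio = i_img / np.maximum(I_exp, 1e-12)          # guard against division by zero
    # Eq. (7): correlate the ratio with the PSF (conjugate OTF) and update
    correction = np.fft.ifft2(np.fft.fft2(ratio) * np.conj(H)).real
    return o_hat * correction
```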

3.2. Blind Estimate of Seeing Via MAP Technique

The idea of using the RL algorithm in a blind fashion to deblur an image was previously presented by Fish et al.4 However, their work did not employ the theoretical models for the long- and short-exposure transfer functions parameterized by r0.3 Perhaps the most similar work to this research was accomplished by MacDonald. He developed a blind technique that was iterative in nature like the RL algorithm, yet he introduced a priori information for images distorted by speckle noise following more of a negative binomial distribution.12 MacDonald introduced a priori information for the distribution of r0 in hopes of maximizing the likelihood at the appropriate level of seeing.

If we assume independence of the measurements for every pixel in the detector array, we can state the joint probability of the observed noisy image, i, as

Eq. (9)

$p[I = i(u,v);\, \forall(u,v)] = \prod_{u}\prod_{v} \frac{I(u,v|r_0)^{i(u,v)}\, e^{-I(u,v|r_0)}}{i(u,v)!}$.

Ideally, we would like to maximize the likelihood or log likelihood

Eq. (10)

$L(r_0) = \sum_{u}\sum_{v} \left[ i(u,v) \ln I(u,v|r_0) - I(u,v|r_0) - \ln i(u,v)! \right]$,
over a range of r0 values to identify the appropriate PSF. Unfortunately, likelihood continually increases with r0, as illustrated by the example in Fig. 2. Although the RL algorithm maximizes likelihood for a given PSF, the maximum likelihood (ML) solution over r0 does not necessarily occur when the correct PSF parameterized by r0 is chosen. This is the problem that MacDonald sought to solve by introducing a priori information for the distribution of r0.
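For reference, a sketch of the log likelihood of Eq. (10) that one could evaluate over candidate r0 values (our illustration; the constant ln i! term is dropped since it does not depend on r0 and does not move the maximum):

```python
import numpy as np

def poisson_log_likelihood(i_img, I_exp):
    """Poisson log likelihood of Eq. (10), up to the ln(i!) constant."""
    I_exp = np.maximum(I_exp, 1e-12)   # avoid log of zero
    return float(np.sum(i_img * np.log(I_exp) - I_exp))
```

Scanning this quantity over r0 after running RL to convergence for each candidate reproduces the monotone behavior shown in Fig. 2(c).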

Fig. 2

(a) The original image without the added effects of atmospheric blurring. (b) The image that would be received by a sensor with the specifications listed in Table 1 given an r0 of 2.5 cm. (c) Demonstration that shows likelihood is not maximized for the correct value of r0.


MacDonald hypothesized that a distribution could be applied to the value of r0 based on the intuitive observation that the seeing is seldom much better than the average and can often be worse. The form of the probability density function for the random parameter r0 was assumed to be

Eq. (11)

$f_{R_0}(r_0) = \left[\frac{e^{-r_0/r_{avg}}}{r_{avg}}\right]^{N^2}$,
where ravg is the average atmospheric seeing and N² is the number of pixels in the detector array. When applying this technique to the example illustrated in Fig. 2, we see that likelihood is maximized near the correct value of r0, as shown in Fig. 3. Although the technique is successful in this scenario, two mathematical challenges remain. First, the choice of an exponential distribution for r0 is probably inaccurate; extremely low values of r0 are expected to be nearly as unlikely as high values. Second, the effect of scaling the exponential density function by the number of pixels in the detector array is difficult to justify for partially illuminated scenes such as astronomical images. The convergence of variance technique removes the requirement for a prior distribution on r0 and allows us to converge directly to the correct value.
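In log space, the prior of Eq. (11) simply adds a term that is linear in r0 and scaled by the pixel count, as in this sketch (an assumption-laden illustration of how the MAP objective could be assembled, not the authors' code):

```python
import numpy as np

def map_objective(log_likelihood, r0, r_avg, n_pixels):
    """Log likelihood plus the log of the exponential prior of Eq. (11);
    the N^2 exponent becomes a multiplicative factor on the log density."""
    log_prior = n_pixels * (-r0 / r_avg - np.log(r_avg))
    return log_likelihood + log_prior
```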

Fig. 3

(a) Log likelihood of the exponential prior as a function of r0. (b) Overall log likelihood with the addition of the prior. With the addition of the prior, likelihood is maximized for the correct value of r0.


3.3. Blind Estimate of Seeing Via Convergence of Variance Technique

The convergence of variance technique works on the premise of searching for the best possible PSF parameterized by r0 in the amount of time available for processing. Given more time, the technique will provide a more refined estimate for r0. We will first explain this technique in more detail using the assumption that the images collected follow Poisson statistics. Later, we will demonstrate the technique using images that follow a negative binomial noise model. Ultimately, the technique should work regardless of the noise distribution assuming the correct iterative deblurring algorithm and convergence criteria are used.

This technique relies on a comparison between the residual error of the image estimate, $\hat{o}$, and the predicted noise variance of the collected image, i. Iterations cease once the mean square error between the collected image and the noise-free estimate of the image falls below the total predicted variance; at that point, we can assume that any further iterations would only serve to fit the estimate to the noise. In other words, iterations would cease when

Eq. (12)

$\sum_{u}\sum_{v} \left[ i(u,v) - I(u,v) \right]^2 < \sum_{u}\sum_{v} V(u,v)$,
where V is the actual image variance. Assuming the collected MCFA follows Poisson statistics, the deblurring algorithm employed would be the Richardson–Lucy algorithm in Eq. (7), and

Eq. (13)

$V(u,v) = i(u,v)$.

The relationship in Eq. (13) is allowed since the assumption can be made that the intensity captured by each detector is independent of other detectors, and each intensity can essentially be thought of as an independent Poisson random variable. The summation of these random variables is, therefore, a good approximation for the total image variance.
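The stopping test of Eqs. (12) and (13) reduces to a few lines; a sketch under the Poisson assumption (function names are ours):

```python
import numpy as np

def poisson_converged(i_img, I_exp):
    """Convergence-of-variance test: Eq. (12) with the Poisson variance
    model V(u,v) = i(u,v) of Eq. (13)."""
    mse = np.sum((i_img - I_exp) ** 2)   # left side of Eq. (12)
    total_var = np.sum(i_img)            # right side: summed predicted variance
    return bool(mse < total_var)
```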

Due to the photon counting nature of many imaging applications, the Poisson distribution is often employed as a statistical model for the detected images. However, due to the highly coherent nature of laser light, images detected by a LADAR sensor often follow more of a negative binomial distribution. Fortunately, the robustness of the convergence of variance technique allows for its employment in this scenario as well. MacDonald derived an iterative MLE for the case where the noise is dominated by laser speckle as

Eq. (14)

$\hat{o}(x,y)_{new} = \hat{o}(x,y)_{old}\, \frac{\sum_{u}\sum_{v} \left[\frac{i(u,v)}{I(u,v|r_0)}\, h(u-x, v-y|r_0)\right]}{\sum_{u}\sum_{v} \left[\frac{i(u,v)+M}{M+I(u,v|r_0)}\, h(u-x, v-y|r_0)\right]}$,
where M is the coherence parameter of the light.12 Using the deblurring algorithm in Eq. (14), we would again iterate until the relationship in Eq. (12) is satisfied where

Eq. (15)

$V(u,v) = i(u,v)\left[1 + \frac{i(u,v)}{M}\right]$.

Equation (15) represents the variance for a negative binomial random variable, which should well approximate the variance in laser illuminated imagery.
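A sketch of the corresponding update and variance model for the speckle-dominated case, again assuming FFT-based convolution (our illustration of Eqs. (14) and (15); the names are hypothetical):

```python
import numpy as np

def nb_iteration(o_hat, i_img, psf, M):
    """One iteration of the negative binomial estimator of Eq. (14)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    I_exp = np.maximum(np.fft.ifft2(np.fft.fft2(o_hat) * H).real, 1e-12)
    num = np.fft.ifft2(np.fft.fft2(i_img / I_exp) * np.conj(H)).real
    den = np.fft.ifft2(np.fft.fft2((i_img + M) / (M + I_exp)) * np.conj(H)).real
    return o_hat * num / np.maximum(den, 1e-12)

def nb_variance(i_img, M):
    """Negative binomial variance model of Eq. (15), used on the right side
    of the convergence test in Eq. (12)."""
    return i_img * (1.0 + i_img / M)
```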

The diagram in Fig. 4 demonstrates how this technique could be employed in an operational scenario where processing time and computational power are limited. In this scenario, images are collected and fed into the r0 estimation process. At any point in time, the best possible estimate for r0 can be drawn upon for deblurring an image. However, in parallel, the outer loop will continue to work on characterizing the current atmospheric seeing conditions. One of the key advantages to this algorithm is that it is easy to parallelize. Even with a common home computer that has a multicore processor, it is easy to simultaneously test multiple values of r0 for convergence. The r0 to employ in the deblurring algorithm would be the lowest value of r0 for which Eq. (12) is satisfied. Additionally, we can further enhance the process by first conducting a coarse estimate of r0 followed by a more refined estimate as indicated in Fig. 5.
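A sketch of the search loop of Figs. 4 and 5 follows (coarse pass shown; a fine pass repeats it on a 0.1 cm grid around the coarse answer; all names are ours, and the noise model supplies the three callbacks):

```python
import numpy as np

def lowest_convergent_r0(i_img, r0_grid, make_psf, iterate, converged, max_iter=5000):
    """Return the lowest candidate r0 for which the variance test of Eq. (12)
    is satisfied within the iteration cap, along with the image estimate.
    make_psf(r0) -> PSF; iterate(o, i, psf) -> updated o;
    converged(i, o, psf) -> bool, per the noise model in use."""
    for r0 in sorted(r0_grid):                      # ascending: lowest convergent r0 wins
        psf = make_psf(r0)
        o_hat = np.full(i_img.shape, i_img.mean())  # flat initial object estimate
        for _ in range(max_iter):
            o_hat = iterate(o_hat, i_img, psf)
            if converged(i_img, o_hat, psf):
                return r0, o_hat
    return None, None                               # nothing converged within the cap
```

Because each candidate r0 is tested independently, the outer loop parallelizes trivially across processor cores, which is the property the employment scenario of Fig. 4 exploits.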

Fig. 4

Potential employment scenario for the convergence of variance technique, where the outer loop is allowed to continually execute on recently collected images. The most recent estimate for r0 can then be fed to an iterative deblurring algorithm to provide rapid results to the user.


Fig. 5

In this demonstration, the true value for r0 is 4.3 cm. However, the algorithm first converges at 5 cm using a 1 cm/step coarse search. It then performs a 0.1 cm/step fine search to converge to the true value at 4.3 cm.


MacDonald references the employment of a convergence test for ceasing iterations in his algorithm; however, it is apparent that he did not utilize this test as a sufficient criterion for identifying the correct value of atmospheric seeing. As previously mentioned, if allowed sufficient time, the relationship in Eq. (12) will be satisfied for the correct r0 and all higher values. However, the criterion will never be attained for low estimates of r0. The following sections will demonstrate the employment of this technique on images with Poisson and negative binomial noise to show that a priori information is not required to achieve accurate estimates of r0.

4. Simulation Results

The following results will demonstrate the utility of the convergence of variance technique and compare the results to the algorithm developed by MacDonald for images with Poisson and negative binomial noise. The optical specifications listed in Table 2 and used for the simulations were not limited to what could readily be obtained for experimentation. Rather, the specifications were chosen to mimic what could potentially be incorporated into a targeting pod design based on size limitations. The specifications will allow for properly sampled images according to Eq. (5). We will first consider simulation results using a fully illuminated scene. We will then simulate conditions for astrophotography where the scenes are only partially illuminated.

Table 2

Simulated system specifications.

Parameter name            Defined value
Mean wavelength (λ̄)       600 nm
Pixel size                5 × 5 μm²
Sensor focal length (f)   3 m
Aperture diameter (D)     15 cm
Coherence parameter (M)   30

4.1. Fully Illuminated Scenes

In this section, we will present results from images that are fully illuminated. By fully illuminated, we mean that the overwhelming majority of the scene is not dark and contains varying levels of contrast. Recall that MacDonald’s algorithm defined a prior for r0 of exponential form scaled by the total number of pixels in the detector as shown in Eq. (11).

In the following examples, the cameraman photo built into Matlab® is blurred using a total OTF that is the product of the diffraction-limited OTF and an average short-exposure OTF with various levels of r0. Multiple trials will be conducted with MCFA images composed of 1, 10, 20, 30, 50 and 100 individual frames with independent realizations of Poisson noise to demonstrate the effects of SNR on each algorithm. The original, blurred and recovered images are shown in Fig. 6. In order to implement MacDonald’s algorithm, we either need an initial estimate on the average value for atmospheric seeing, ravg, or we can initialize it to the aperture diameter if no estimate can be made. For purposes of fair comparison, we will assume that no prior estimates are known for atmospheric seeing, and ravg will be initialized to the aperture diameter.

Fig. 6

In this demonstration, the true image (a) is blurred with an average short-exposure OTF with an r0 of 2.6 cm. The blurred/noisy image (b) is the summation of 30 individual frames with independent realizations of Poisson noise. The image estimate (c) was obtained using the best estimate of r0 = 2.6 cm with the cap on the number of iterations set to 5000.


Table 3 shows that the convergence of variance technique and MacDonald's algorithm produce nearly identical results, differing in only a few cases. MCFA images consisting of more frames take longer to converge due to the higher intensities at each pixel associated with the summation of individual frames. While the algorithm does not always converge to the true value of r0, the estimated value was always within 0.5 cm of the true value. The estimates for r0 often appear to be lower than the true value, likely due to the algorithm's attempt to remove minor focus errors that were not accounted for when assigning the image as truth data. The PSF arising from minor focus error has a shape similar to the PSF arising from atmospheric turbulence.18 Therefore, minor focus error could contribute to the low estimates for r0. This assessment is supported by the fact that ideally simulated scenes using patterns and point sources, as demonstrated in Sec. 4.2, never converge below the true value regardless of the number of iterations the algorithm is allowed to perform, provided we have an adequate SNR.

Table 3

Results for fully illuminated image simulation with Poisson noise (true r0 = 2.6 cm).

                        MacDonald's algorithm     Convergence of variance
Frames    SNR (dB)      Result (cm)    Error      Result (cm)    Error
Maximum iterations allowed—1000
1         41.48         2.4            0.2        2.4            0.2
10        51.47         2.7            0.1        2.7            0.1
20        54.49         2.8            0.2        2.8            0.2
30        56.24         2.9            0.3        2.9            0.3
50        58.44         3.0            0.4        3.0            0.4
100       61.47         3.2            0.6        3.1            0.5
Maximum iterations allowed—5000
1         41.48         2.1            0.5        2.2            0.4
10        51.47         2.4            0.2        2.4            0.2
20        54.49         2.5            0.1        2.5            0.1
30        56.24         2.5            0.1        2.6            0
50        58.44         2.6            0          2.6            0
100       61.47         2.7            0.1        2.7            0.1
Maximum iterations allowed—10,000
1         41.48         2.1            0.5        2.1            0.5
10        51.47         2.3            0.3        2.3            0.3
20        54.49         2.4            0.2        2.4            0.2
30        56.24         2.4            0.2        2.4            0.2
50        58.44         2.5            0.1        2.5            0.1
100       61.47         2.6            0          2.6            0
Maximum iterations allowed—20,000
1         41.48         2.1            0.5        2.1            0.5
10        51.47         2.3            0.3        2.3            0.3
20        54.49         2.3            0.3        2.3            0.3
30        56.24         2.3            0.3        2.4            0.2
50        58.44         2.4            0.2        2.4            0.2
100       61.47         2.5            0.1

The performance of the convergence of variance technique depends on the quality of the blurred image and the amount of time available for processing. Provided enough time is allowed, Eq. (12) will be satisfied for the best estimate of r0. However, allowing too much time does not present a problem for this algorithm. By observing the difference between the left side and right side of Eq. (12) at each iteration, we can update an estimate for the number of remaining iterations required for convergence, EI, using

Eq. (16)

$IV(n) = \sum_{u,v=1}^{N} \left[ i(u,v) - I(u,v) \right]^2, \quad BV = \sum_{u,v=1}^{N} V(u,v), \quad EI = \frac{IV(n-1) - BV}{IV(n-1) - IV(n)}$,
where n represents the iteration number, BV is the variance of the collected image, and IV is the mean square error between the collected image and the non-noisy estimate. Essentially, the relationships in Eq. (16) are used to predict how long the algorithm will have to iterate based on the current rate of convergence. Figure 7 demonstrates that when the value of r0 is too low, the estimated number of remaining iterations diverges.
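A sketch of the remaining-iterations estimate of Eq. (16) (our illustration, with hypothetical names):

```python
def iterations_remaining(iv_prev, iv_curr, bv):
    """Estimated iterations to convergence, EI in Eq. (16). iv_prev and
    iv_curr are the MSE at successive iterations; bv is the summed predicted
    variance. A vanishing or negative per-iteration improvement signals an
    r0 candidate that is too low and will never satisfy Eq. (12)."""
    improvement = iv_prev - iv_curr
    if improvement <= 0.0:
        return float("inf")        # no longer converging toward bv
    return (iv_prev - bv) / improvement
```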

Fig. 7

Estimated iterations remaining for a motion compensated frame average (MCFA) image composed of 30 independent frames. (a) Coarse estimation shows convergence for r0 values greater than 3 cm, but divergence for values of 2 cm or less when the true r0 = 2.6 cm. (b) Fine estimation with a cap of 5000 iterations shows convergence for r0 values of 2.6 cm or greater. Based on experience with this algorithm, it is expected that convergence will occur for an r0 value of 2.4 cm due to the concave down nature of the curve, as supported by the results in Table 3.


We now repeat this experiment in the presence of negative binomial noise to simulate the expected results from laser illuminated imagery. Figure 8 again demonstrates the results using an MCFA consisting of 30 frames and a cap on the maximum iterations set to 5000. A close inspection of the blurred/noisy image reveals a significant and visible increase in overall noise variance. As expected, this does impact the final results. Table 4 summarizes the results obtained for the convergence of variance technique, as well as MacDonald’s algorithm using the introduction of a prior for the distribution of r0. However, in the case of negative binomial noise, the differences in the results are more significant. By introducing the prior, the tendency to underestimate r0 is more pronounced. This trend of underestimation of r0 was also noticed by MacDonald.19 Again, it is expected that focus error in the original image is a contributing factor in the underestimation of r0. This presents an interesting topic for potential future research.

Fig. 8

In this demonstration, the true image (a) is blurred with an average short-exposure OTF with an r0 of 2.6 cm. The blurred/noisy image (b) is the summation of 30 individual frames with independent realizations of negative binomial noise. The image estimate (c) was obtained using the best estimate of r0 = 2.6 cm with the cap on the number of iterations set to 5000 for the convergence of variance technique.


Table 4

Results for fully illuminated image simulation with negative binomial noise (true r0 = 2.6 cm).

                        MacDonald's algorithm     Convergence of variance
Frames    SNR (dB)      Result (cm)    Error      Result (cm)    Error
Maximum iterations allowed—1000
1         14.7          0.9            1.7        1.5            1.1
10        24.7          2.1            0.5        2.5            0.1
20        27.7          2.4            0.2        2.8            0.2
30        29.4          2.6            0          3.0            0.4
50        31.6          3.0            0.4        3.1            0.5
100       34.7          3.2            0.6        3.2            0.6
Maximum iterations allowed—5000
1         14.7          0.9            1.7        1.4            1.2
10        24.7          1.8            0.8        2.0            0.6
20        27.7          2.0            0.6        2.2            0.4
30        29.4          2.2            0.4        2.4            0.2
50        31.6          2.4            0.2        2.4            0.2
100       34.7          2.5            0.1        2.5            0.1
Maximum iterations allowed—10,000
1         14.7          0.9            1.7        1.3            1.3
10        24.7          1.8            0.8        1.9            0.7
20        27.7          2.0            0.6        2.1            0.5
30        29.4          2.1            0.5        2.2            0.4
50        31.6          2.2            0.4        2.3            0.3
100       34.7          2.3            0.3        2.4            0.2
Maximum iterations allowed—20,000
1         14.7          0.8            1.8        1.3            1.3
10        24.7          1.8            0.8        1.9            0.7
20        27.7          1.9            0.7        2.0            0.6
30        29.4          2.0            0.6        2.0            0.6
50        31.6          2.1            0.5        2.2            0.4
100       34.7          2.2            0.4        2.2            0.4

At this point, the functionality of the convergence of variance technique has been demonstrated for fully illuminated scenes. Further experimentation was conducted on simulated stellar targets to identify the potential for measurement of atmospheric seeing on scenes where the majority of the image consists only of background illumination and noise. Since these targets are fully simulated, and thus inherently perfectly focused, underestimation of r0 was not expected to be a problem for the convergence of variance technique. Provided the algorithm is allowed enough time to iterate, convergence to the correct r0 should be achieved. However, the scaling factor of N² in the prior of Eq. (11) was expected to still cause some bias in the estimates for r0 using MacDonald's algorithm.

4.2. Partially Illuminated Scenes

Space situational awareness (SSA) is a key mission of the United States Air Force Space Command. One aspect of SSA involves using both telescope networks and radars to detect, identify, record, and track all man-made objects orbiting the earth. Knowing the exact locations of these orbiting objects in space is crucial for future space operation safety. The SSA mission has become even more important with recent events such as the Iridium and Cosmos satellite collision and the Chinese antisatellite missile test in 2007, both of which created large swaths of space debris. This debris will be an ongoing risk to US satellites for years to come as the orbits of the debris degrade toward Earth. This is merely a single example and justification of the importance for deblurring techniques applicable to astronomical images.

The following simulations will consider three separate target configurations: a single point source representative of a star, a scene with multiple point sources arranged throughout the image, and a cross bar pattern, as shown in Fig. 9. We will again consider both Poisson and negative binomial statistics with various SNR levels.

Fig. 9

(a) This scene is representative of a single star or point source. (b) This scene contains multiple point sources with varying intensities and spacings. The spacing between the two point sources in the center of the image is a single pixel. (c) This scene contains a cross bar pattern.


The testing in Sec. 4.1 revealed that both algorithms are impacted by SNR and focus error. However, the simulations also revealed that no gain in performance was realized through the introduction of the prior for r0 and that the convergence of variance technique exhibited promise for estimation of r0 as well as a deblurred scene. The results that follow add support for the hypothesis that the underestimation of r0 was a function of both SNR and focus error. The SNR for the various MCFAs used in the following simulations is identified in Table 5. As expected, the SNR for MCFAs with Poisson noise is higher than the SNR for MCFAs with negative binomial noise.

Table 5

Signal to noise ratio for simulation data.

Frames in MCFA    Point source    Multiple point sources    Cross bar pattern
Signal to noise ratio (dB) for Poisson MCFAs
1                 15.0            17.2                      30.0
10                24.8            26.2                      39.3
20                28.0            29.2                      42.9
30                29.6            30.7                      44.7
50                31.9            33.3                      46.9
Signal to noise ratio (dB) for negative binomial MCFAs
1                 11.8            13.1                      14.4
10                22.6            23.7                      25.7
20                26.3            27.0                      29.3
30                27.5            28.1                      30.5
50                29.4            30.0                      32.7

Tables 6 and 8 demonstrate the utility of the convergence of variance algorithm on partially illuminated scenes with Poisson and negative binomial noise, respectively. From these results, we conclude that, provided enough frames are properly registered and averaged to provide adequate SNR and enough time is allowed for convergence, the value of r0 can be estimated to within 1 mm. In cases of low SNR, the algorithm tends to underestimate, but this is expected since the noise power in the images masks some of the high frequency content. If insufficient time is allowed for convergence, the algorithm will produce a high estimate for r0, as can be observed when the cap on iterations was limited to 1000. Additionally, we notice that in cases of adequate SNR, we do not have a problem of underestimation of r0, since focus error was not present in these images.

Table 6

Results for partially illuminated image simulation with Poisson noise (true r0 = 3.3 cm).

          MacDonald's algorithm (cm)            Convergence of variance (cm)
Frames    Point    Multipoint    Cross bar      Point    Multipoint    Cross bar
Maximum iterations allowed—1000
1         0.1      1.5           2.6            3.2      3.2           3.4
10        2.9      3.1           3.3            3.3      3.5           4.2
20        3.2      3.2           3.4            3.3      3.6           4.6
30        3.3      3.3           3.5            3.3      3.8           5.1
50        3.3      3.3           3.6            3.4      3.9           5.9
Maximum iterations allowed—10,000
1         0.1      1.3           2.6            3.2      3.2           3.2
10        2.9      3.0           3.2            3.3      3.3           3.3
20        3.1      3.2           3.3            3.3      3.3           3.3
30        3.2      3.2           3.3            3.3      3.3           3.3
50        3.2      3.2           3.3            3.3      3.3           3.4
Maximum iterations allowed—20,000
1         0.1      1.3           2.6            3.2      3.2           3.2
10        2.9      3.0           3.2            3.3      3.3           3.3
20        3.1      3.1           3.3            3.3      3.3           3.3
30        3.1      3.2           3.3            3.3      3.3           3.3
50        3.2      3.2           3.3            3.3      3.3           3.3
Maximum iterations allowed—1,000,000
1         0.1      1.3           2.5            3.2      3.2           3.2
10        2.8      3.0           3.2            3.3      3.3           3.3
20        3.1      3.1           3.3            3.3      3.3           3.3
30        3.1      3.2           3.3            3.3      3.3           3.3
50        3.2      3.2           3.3            3.3      3.3           3.3

For demonstration purposes, the algorithm was allowed a cap of 1,000,000 iterations. Even under these conditions, underestimation was never a factor for images with adequate SNR using the convergence of variance technique. At this point, it is unknown whether it would be possible to predict the precise level of SNR at which the algorithm will converge to the correct value of r0, since the relationship appears to be scene or contrast dependent. However, we can conclude that higher SNR levels yield better results. Additionally, even at low SNR values, we achieve reasonable estimates for the deblurred scene and r0 using the convergence of variance method. On the other hand, by introducing a prior for the distribution of r0, underestimation is more prevalent.

In Fig. 10, we demonstrate the performance of the convergence of variance technique for the three target types in the presence of Poisson noise. Using the RL deconvolution algorithm with a cap on the number of iterations set to 10,000 and an MCFA consisting of 30 frames, we were able to recover the correct r0 in all cases. For the point source targets, the blurring effects of the simulated atmosphere reduce the intensity to the point that it is difficult to visually identify the various point sources. However, when deconvolution is completed, each of the sources is easily identified. For the cross bar pattern, the process of deconvolution makes it much easier to identify the structure of the target pattern. Although it may seem that a cap of 10,000 iterations is unreasonable, the algorithm will only take as much time as needed to converge within this upper bound. For instance, if the termination criterion is met before the upper bound on iterations is reached, the algorithm will terminate. Even with a standard home computer with a 2.7 GHz Intel® Core™ i5 processor and 16 GB of memory, convergence was achieved in a reasonably short period of time for all target types, as shown in Table 7. Again, as indicated in Fig. 5, the algorithm could be manually interrupted sooner if needed, at which point the lowest value of r0 that has allowed convergence would be returned as the answer. For instance, in the example shown in Fig. 10, if we interrupted the routine after 4.26 s for the cross bar pattern, we would get an estimated r0 of 4 cm. After a total of 10 s, the estimate would be within 0.5 cm of the truth at 3.8 cm. The estimate continues to be refined until the best estimate is achieved after 21.77 s. As demonstrated in Table 6, convergence at a value less than the truth is not an issue provided we have adequate SNR and the image is not further degraded by optical aberrations such as focus error.

Fig. 10

Demonstration of the convergence of variance technique using the Richardson–Lucy (RL) deconvolution algorithm with an MCFA consisting of 30 frames for the single point source (a), multiple point sources (b), and the cross bar pattern (c).


Table 7

Convergence times for simulations shown in Fig. 10.

                   Coarse convergence          Fine convergence
Target type        Iterations    Time (s)      Iterations    Time (s)    Total time (s)
Single point       627           1.66          968           2.49        4.15
Multiple points    800           1.90          1709          4.45        6.35
Cross bar          2123          4.26          8275          17.51       21.77

Table 8

Results for partially illuminated image simulation with negative binomial noise (true r0 = 3.3 cm).

          MacDonald's algorithm (cm)            Convergence of variance (cm)
Frames    Point    Multipoint    Cross bar      Point    Multipoint    Cross bar
Maximum iterations allowed—1000
1         0.1      1.0           1.8            3.1      3.0           5.0
10        2.4      2.7           2.8            3.4      3.7           5.1
20        2.9      3.0           3.0            3.5      4.0           5.8
30        3.0      3.1           3.1            3.7      4.8           6.5
50        3.2      3.2           3.2            3.9      5.0           8.5
Maximum iterations allowed—10,000
1         0.1      0.5           1.7            3.1      2.9           3.4
10        2.3      2.6           2.7            3.3      3.2           3.4
20        2.7      2.9           3.0            3.3      3.3           3.5
30        2.9      3.0           3.1            3.3      3.3           3.7
50        3.0      3.1           3.3            3.3      3.3           4.0
Maximum iterations allowed—20,000
1         0.1      0.4           1.7            3.1      2.9           3.2
10        2.3      2.6           2.7            3.2      3.2           3.2
20        2.7      2.9           3.1            3.2      3.3           3.3
30        2.9      3.0           3.1            3.3      3.3           3.3
50        3.0      3.1           3.2            3.3      3.3           3.3
Maximum iterations allowed—1,000,000
1         0.1      0.4           1.7            3.1      2.9           3.2
10        2.3      2.5           2.7            3.2      3.2           3.2
20        2.7      2.8           3.0            3.2      3.3           3.3
30        2.8      3.0           3.1            3.3      3.3           3.3
50        3.0      3.1           3.1            3.3      3.3           3.3

4.3. Extension of Convergence of Variance Technique for 3-D FLASH Laser Radar Images

3-D LADAR technology is receiving increased interest as the technology improves. Currently, commercially available sensors are severely undersampled and do not experience the effects of diffraction from atmospheric turbulence. However, as the technology continues to progress, it is expected that minimizing the effects of atmospheric turbulence will become important. Conversely, certain applications such as the imaging and tracking of space debris may require an optics configuration in which current sensors would receive properly sampled images. In those cases, the convergence of variance technique could easily be applied to identify the PSF, parameterized by r0, that will deblur the scene.

A typical full-waveform 3-D LADAR image is composed of multiple 2-D images or frames separated by a small time delta. Therefore, the 3-D image can be flattened into a 2-D image by simply removing the range information and accumulating the intensity information at each pixel across the series of individual frames. Fortunately, the atmosphere can be considered static for the laser illuminator pulse duration and subsequent detector integration times that are common to 3-D LADAR sensors.1,3 Based on this premise, this technique was originally explored as a means of identifying the best PSF parameterized by r0 to be employed in algorithms such as the multiple surface FLASH LADAR ranging algorithm.20
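The flattening step is a one-line reduction; a sketch with hypothetical dimensions (our illustration):

```python
import numpy as np

def flatten_ladar_cube(cube):
    """Collapse a 3-D FLASH LADAR cube (range bins x rows x cols) into a
    2-D intensity image by accumulating the returns in every range bin
    at each pixel, discarding range and keeping total intensity."""
    return cube.sum(axis=0)

# e.g., a hypothetical 128-bin cube over a 32 x 32 detector array
cube = np.random.poisson(5.0, size=(128, 32, 32)).astype(float)
flat = flatten_ladar_cube(cube)   # now usable by the 2-D algorithms above
```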

5. Experimental Results

The optical configuration shown in Fig. 11 was used to obtain properly sampled images. The specifications for this setup are listed in Table 1. The camera used in this configuration allowed for a 16-bit analog to digital conversion, and experiments show that it acts as a photon counting device in lower intensity regions. This allows for utilization of the Poisson and negative binomial statistical models without applying a conversion factor to the digitized images. However, as the detector approached higher intensity thresholds, the conversion between digital counts and photons became nonlinear. For that reason, images were taken in low light conditions to ensure the convergence of variance technique could be applied.

Fig. 11

Experimental sensor setup consists of a Celestron® NexStar® 6SE 1.5 m focal length telescope with a mask to reduce the aperture to 5 cm, and an Orion® Starshoot™ G3 monochrome camera.


In order to create a turbulent atmosphere to image through, a heat source was directed in front of the telescope aperture. This technique allowed for the generation of repeatable turbulent atmospheres with r0 values in the subcentimeter range. Without the use of this heat source, all of the images would likely have been at or near the diffraction limit making validation of the convergence of variance technique difficult.

The experiments for fully illuminated scenes will use the image in Fig. 12 as a target. This target will be placed indoors, 10 m from the sensor where turbulence and illumination can be controlled. The incorporation of a step in the bottom portion of the scene will permit the measurement of the true r0 for comparison with the estimated values.

Fig. 12

Scene used for each of the fully illuminated experiments below. The top portion of the scene includes various characters of decreasing size. The bottom portion of the scene has a step target to allow for measurement of the true r0.


5.1. Fully Illuminated Scenes—Natural Light

In the following two experiments, the remote scene was fully illuminated by natural lighting. Given that the light source was incoherent, the Poisson statistical model for the photon arrival rate applies.3 We will use the RL deconvolution algorithm of Eq. (7) with PSFs parameterized by a range of r0 values from 0.1 cm up to the aperture diameter of 5 cm.

In Fig. 13, we demonstrate the ability to measure r0 by measuring the step response from the collected MCFA. The impulse response is then found by taking the derivative of this measured step response. Once we have the impulse response, we can vary r0 per the relationship in Eq. (2) to find the theoretical total PSF, htot, that minimizes the error between the measured impulse response and the theoretical impulse response. In the first experiment, shown in Fig. 14, r0 was measured at 0.4 cm. Using the convergence of variance technique, we obtain an estimate of r0 = 0.5 cm, for an estimation error of only 1 mm. At this point, it is important to note that the edges of the deblurred images are distorted due to the implementation of the RL algorithm using the 2-D fast Fourier transform in Matlab. This implementation was chosen in order to decrease the time required for execution. Therefore, when computing the variance per the relationship in Eq. (12), the edges were ignored. In the collected MCFA, it is difficult to discern the smaller font sizes. However, when deconvolution is conducted with an OTF parameterized by the r0 estimate, it is possible to identify each of the characters. In the second experiment, shown in Fig. 15, r0 was measured at 1.1 cm. Using the convergence of variance technique, we obtain an estimate of r0 = 1.1 cm. Although the blurring due to turbulence was less severe in this experiment, improvement in the sharpness of the characters is again noted when deconvolution was conducted using the estimate for r0.
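A 1-D sketch of this edge-based measurement follows (our illustration, not the authors' code; the triangular diffraction-limited OTF is a square-aperture stand-in and an assumption, not the sensor's true OTF):

```python
import numpy as np

def measure_r0_from_step(edge, r0_grid, wavelength, f, D, dx):
    """Estimate r0 from a measured step response (Fig. 13): differentiate the
    edge to get the impulse response, transform to an OTF, and keep the r0
    whose theoretical total OTF matches it best in the least-squares sense."""
    h_meas = np.gradient(edge, dx)                   # measured impulse response
    h_meas = h_meas / h_meas.sum()
    H_meas = np.abs(np.fft.fft(h_meas))
    nu = np.abs(np.fft.fftfreq(edge.size, d=dx))     # image-plane spatial frequency
    H_dl = np.clip(1.0 - nu * wavelength * f / D, 0.0, None)  # assumed diffraction OTF
    best_r0, best_err = None, np.inf
    for r0 in r0_grid:
        s = np.minimum(wavelength * f * nu, D)       # clamp at the cutoff of Eq. (5)
        H_se = np.exp(-3.44 * (s / r0) ** (5 / 3) * (1.0 - (s / D) ** (1 / 3)))
        err = np.sum((H_meas - H_dl * H_se) ** 2)    # Eq. (2): total OTF is the product
        if err < best_err:
            best_r0, best_err = r0, err
    return best_r0
```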

Fig. 13

Using the step in the bottom portion of the collected remote scene (a), we can compute the mean step response for the collected MCFA (b). From this step response, we compute the experimental OTF and find the theoretical short-exposure OTF that minimizes the mean square error between the two (c).


Fig. 14

(a) Collected MCFA consisting of 100 registered frames, each with an exposure time of 0.001 s. (b) Deblurred image using the lowest r0 for which convergence was achieved (r0 = 0.5 cm).


Fig. 15

(a) Collected MCFA consisting of 100 registered frames, each with an exposure time of 0.001 s. (b) Deblurred image using the lowest r0 for which convergence was achieved (r0 = 1.1 cm).


5.2. Fully Illuminated Scenes—Laser Illumination

In the following two experiments, the remote scene was illuminated using a laser with a wavelength of 630 nm. Given that the light source was partially coherent, with a measured coherence parameter M = 10, the negative binomial statistical model for speckle noise applies.3 We will use the ML estimator developed by MacDonald in Eq. (14) with PSFs parameterized by a range of r0 values from 0.1 cm up to the aperture diameter of 5 cm.

In the first experiment, shown in Fig. 16, r0 was measured at 0.5 cm. Using the convergence of variance technique, we obtain an estimate of r0 = 0.5 cm. In the collected MCFA, it is difficult to discern the smaller font sizes, and based on where the illumination spot was centered, the top two rows of text are nearly illegible. However, when deconvolution is conducted with an OTF parameterized by the estimate for r0, it is possible to identify most of the characters. It is much easier to identify the row of Qs, and the top row of Es is faintly visible. In the second experiment, shown in Fig. 17, r0 was measured at 1.1 cm. Using the convergence of variance technique, we obtain an estimate of r0 = 1.1 cm. Although the blurring due to turbulence was less severe in this experiment, significant improvement is again noted when deconvolution was conducted using the estimate for r0. The results for the fully illuminated image experiments are summarized in Table 9.

Fig. 16

(a) Collected MCFA consisting of 100 registered frames, each with an exposure time of 0.005 s. (b) Deblurred image using the lowest r0 for which convergence was achieved (r0 = 0.5 cm).


Fig. 17

(a) Collected MCFA consisting of 100 registered frames, each with an exposure time of 0.005 s. (b) Deblurred image using the lowest r0 for which convergence was achieved (r0 = 1.1 cm).


5.3. Correlation Technique for Measurement of Atmospheric Seeing

With the previous experimental setups, the true value of r0 could be measured by imaging a step target and by taking the derivative to find the impulse response as shown in Fig. 13. The step target could be placed in line with the desired remote scene such that the measurements were made through nearly identical columns of turbulent air. However, when trying to measure the true r0 of stellar images for comparison with the experimental results, this was not a viable solution. We could average many short exposure images of a single star in order to get the average short-exposure OTF parameterized by r0; however, this requires precise tilt removal and any shift estimation errors will appear as attenuation of the short-exposure OTF and underestimation of r0. Trying to average enough frames to achieve the long-exposure OTF in order to find r0 would likely take thousands of images to converge upon the optimal solution. Based on the frame rate of the experimental collection system, it is possible that the value for r0 would change in the time required to gather this amount of data. Therefore, we considered an alternative technique for measuring the value of atmospheric seeing that considers the cross correlation of the collected images.

By considering all possible combinations of the cross correlations between a series of individual short-exposure images, $S_K$, taken of a star, we can find the cross power spectral density (CPSD), $P_S(\nu_x,\nu_y)$, where

Eq. (17)

$P_S(\nu_x,\nu_y) = E\left\{ \mathcal{F}[S_{K_1}(x,y)]\, \mathcal{F}^{*}[S_{K_2}(x,y)] \right\}_{K_1 \neq K_2}$.

In other words, the CPSD is the correlation between the normalized Fourier transforms of all possible image combinations.21 For a sequence of K images, there are a total of K(K-1)/2 nonredundant cross correlations. Additionally, the CPSD of the blurred point source can be shown to have the following relationship:

Eq. (18)

$P_S(\nu_x,\nu_y) = E\left[ |H(\nu_x,\nu_y)|^2 \right]$,
where $E[|H(\nu_x,\nu_y)|^2]$ is the speckle transfer function.22 Fortunately, the speckle transfer function can also be parameterized by r0. Therefore, in order to obtain the true value for atmospheric seeing, we can find the value of r0 that minimizes the error in Eq. (18). At this point, we must keep in mind that this technique for measuring r0 is limited to cases where we are imaging a point source. We will use this technique to demonstrate that the convergence of variance technique does in fact identify the correct r0.
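A sketch of the CPSD estimate over all nonredundant frame pairs (our illustration; normalizing each frame's spectrum by its DC term is an assumption about what "normalized Fourier transforms" means here):

```python
import numpy as np
from itertools import combinations

def cross_power_spectral_density(frames):
    """Average cross spectrum over all K(K-1)/2 nonredundant pairs of frames,
    Eq. (17). For a point source this approximates the speckle transfer
    function E[|H|^2] of Eq. (18)."""
    specs = [np.fft.fft2(frame) for frame in frames]
    specs = [S / np.abs(S[0, 0]) for S in specs]     # normalize by total intensity
    pairs = list(combinations(range(len(specs)), 2))
    cpsd = sum(specs[a] * np.conj(specs[b]) for a, b in pairs) / len(pairs)
    return cpsd.real     # imaginary part averages toward zero over many pairs
```

Fitting the r0-parameterized speckle transfer function to this estimate then yields the truth value of atmospheric seeing used for comparison below.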

5.4. Partially Illuminated Scenes

For the experiments involving partially illuminated scenes, we chose to image a star. Stars can essentially be considered point sources of light, allowing us to use the technique presented in Sec. 5.3 to obtain truth data for comparison with the convergence of variance results. Although the resultant deblurred image is intuitive, the estimated values for r0 prove that the technique can be successfully used to measure r0 for partially illuminated scenes. For the experiments, we chose a relatively bright star near Polaris to image. This minimized the relative motion between the field of view and the imaged portion of the sky. All individual images were taken using an exposure time of 0.001 s to ensure that the short-exposure model was applicable.3 Additionally, each of the MCFAs is a compilation of 20 individual frames. Some experimentation with MCFAs consisting of more than 20 frames was accomplished. However, no increase in performance was observed.

In Table 10, the estimated value for r0 was always within 0.3 cm of the measured value. Initially, the cap on the number of iterations was set to 1000. In all cases, the algorithm converged within the first 100 iterations for this scene. We then increased the cap to 10,000 and then 20,000 to see if it had an impact on the results, but it did not.

Table 9

Summary of results for fully illuminated image experiments.

Trial                          Estimated r0 (cm)    Measured r0 (cm)
Natural light—low r0           0.5                  0.4
Natural light—high r0          1.1                  1.1
Laser illumination—low r0      0.5                  0.5
Laser illumination—high r0     1.1                  1.1

Table 10

Results for partially illuminated image experiments.

Trial    Estimated r0 (cm)    Measured r0 (cm)
1        2.3                  2.1
2        2.4                  2.2
3        2.3                  2.2
4        2.4                  2.1
5        2.4                  2.1

As described in Fig. 5, we first conduct a coarse search with a step size of 1 cm, revealing that the minimum value of r0 that allows convergence is 3 cm. We then step through with a smaller step size and show that convergence can be achieved for values of r0 as low as 2.4 cm. For this image, convergence occurs after just 68 iterations for an r0 of 2.4 cm; however, the algorithm diverges for anything less than this value. In Fig. 18, we show the resultant deblurred image when using this best estimate for r0 in conjunction with the RL deconvolution algorithm and the average short-exposure OTF. As expected, the image estimate approaches more of a point source than the original image.

Fig. 18

The collected MCFA consisting of 20 registered short exposure images (a) and the deblurred image estimate when the average short exposure OTF with an r0 of 2.4 cm is used for deconvolution (b).


6. Conclusions

The focus of this work was to develop a blind deconvolution technique that could be employed in a tactical military environment where both time and computational power are limited. The convergence of variance technique detailed above allows for rapid and accurate estimates of the atmospheric OTF parameterized by the seeing parameter, r0. As shown in Fig. 4, the technique can be interrupted at any time, at which point the best available results are provided. If more time is available, the results are simply refined. Additionally, the convergence of variance technique reduces the possibility of the noise amplification common to iterative deconvolution algorithms by ceasing iteration once the statistically predicted variance is achieved.

An interesting discovery was also made through the course of this effort and is highlighted in Sec. 4.1. The technique will also attempt to recover from a minor focus error in the collected images due to the similarity between the atmospheric OTF and the OTF that arises from a minor focus error. As a result, the estimates for r0 may be lower than the true value. Therefore, if this algorithm is to be used for atmospheric seeing measurement, the images must be in focus. A potential topic for future research is the interaction between the seeing parameter and focus error for varying levels of focus error and atmospheric turbulence.

Acknowledgments

The authors would like to acknowledge the Air Force Office of Scientific Research (AFOSR) for their continued funding of research within this discipline. The views expressed in this paper are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the United States government.

References

1. R. Richmond and S. Cain, Direct-Detection LADAR Systems, SPIE Press, Bellingham, Washington (2010).

2. A. MacDonald, S. Cain, and M. Oxley, "Binary weighted averaging of an ensemble of coherently collected image frames," IEEE Trans. Image Process. 16(4), 1085–1100 (2007). http://dx.doi.org/10.1109/TIP.2007.891774

3. J. Goodman, Statistical Optics, Wiley Classics Library, John Wiley and Sons, New York (2000).

4. D. Fish, A. Brinicombe, and E. Pike, "Blind deconvolution by means of the Richardson–Lucy algorithm," J. Opt. Soc. Am. A 12(1), 58–65 (1995).

5. T. Schulz, B. Stribling, and J. Miller, "Multiframe blind deconvolution with real data: imagery of the Hubble Space Telescope," Opt. Express 1(11), 355–362 (1997). http://dx.doi.org/10.1364/OE.1.000355

6. T. Schulz, "Multiframe blind deconvolution of astronomical images," J. Opt. Soc. Am. A 10(5), 1064–1073 (1993). http://dx.doi.org/10.1364/JOSAA.10.001064

7. A. Carasso, D. Bright, and A. Vladar, "APEX method and real-time blind deconvolution of scanning electron microscope imagery," Opt. Eng. 41(10), 2499–2514 (2002). http://dx.doi.org/10.1117/1.1499970

8. A. Carasso, "APEX blind deconvolution of color Hubble Space Telescope imagery and other astronomical data," Opt. Eng. 45(10), 107004 (2006). http://dx.doi.org/10.1117/1.2362579

9. L. Carvalho et al., "Comparison of image restoration methods applied to inland aquatic systems images acquired by HR CBERS 2B sensor," in Geoscience and Remote Sensing Symposium (IGARSS), 2010 IEEE International, 523–526 (2010).

10. T. Chan and C. Wong, "Total variation blind deconvolution," IEEE Trans. Image Process. 7(3), 370–375 (1998). http://dx.doi.org/10.1109/83.661187

11. E. Lam and J. Goodman, "Iterative statistical approach to blind image deconvolution," J. Opt. Soc. Am. A 17(7), 1177–1184 (2000). http://dx.doi.org/10.1364/JOSAA.17.001177

12. A. MacDonald, S. Cain, and E. Armstrong, "Image restoration techniques for partially coherent 2-D LADAR imaging systems," Proc. SPIE 5562, 10–18 (2004). http://dx.doi.org/10.1117/12.560278

13. W. Richardson, "Bayesian-based iterative method of image restoration," J. Opt. Soc. Am. 62(1), 55–59 (1972). http://dx.doi.org/10.1364/JOSA.62.000055

14. L. Lucy, "An iterative technique for the rectification of observed distributions," Astron. J. 79(6), 745 (1974). http://dx.doi.org/10.1086/111605

15. L. Shepp and Y. Vardi, "Maximum likelihood reconstruction for emission tomography," IEEE Trans. Med. Imaging 1(2), 113–122 (1982). http://dx.doi.org/10.1109/TMI.1982.4307558

16. N. Dey et al., "A deconvolution method for confocal microscopy with total variation regularization," in IEEE Int. Symp. Biomedical Imaging: Nano to Macro, 1223–1226 (2004).

17. J. McMahon, R. Martin, and S. Cain, "Three-dimensional FLASH laser radar range estimation via blind deconvolution," J. Appl. Remote Sens. 4(1), 043517 (2010). http://dx.doi.org/10.1117/1.3386044

18. J. Goodman, Introduction to Fourier Optics, 3rd ed., Roberts and Company, Greenwood Village, Colorado (2005).

19. A. MacDonald, Blind Deconvolution of Anisoplanatic Images Collected by a Partially Coherent Imaging System, Air Force Institute of Technology (2006).

20. B. Neff, S. Cain, and R. Martin, "Discrimination of multiple ranges per pixel in 3D FLASH LADAR while minimizing the effects of diffraction," Proc. SPIE 8165, 81650J (2011). http://dx.doi.org/10.1117/12.891783

21. S. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice Hall, Upper Saddle River, New Jersey (2010).

22. M. Roggemann and B. Welsh, Imaging Through Turbulence, CRC Press, Boca Raton, Florida (1996).

Biography

Brian J. Neff received his BS in electrical engineering from the University of Pittsburgh in 1999, his MS in electrical engineering from New York Polytechnic in 2006 and his MS in flight test engineering from the U.S. Air Force Test Pilot School in 2007. He is currently involved in doctoral research studies at the Air Force Institute of Technology. His research interests include 3-D laser radar, signal processing, and blind deconvolution.

Quentin D. MacManus received his BS in electrical engineering from Johnson and Wales University in 2000 and his MS in electrical engineering from the Air Force Institute of Technology in 2011. He is currently working in research and engineering in the area of image and signal processing.

Stephen C. Cain received his BS degree in electrical engineering from the University of Notre Dame in 1992 and the MS degree in electrical engineering from Michigan Technological University in 1994. He received the PhD degree in electrical engineering from the University of Dayton in 2001. He served as an officer in the United States Air Force from 1994 to 1997. He has held the position of senior scientist at Wyle Laboratories and senior engineer at ITT Aerospace/Communications Division. He is currently an associate professor of electrical engineering at the Air Force Institute of Technology.

Richard K. Martin received dual BS degrees (summa cum laude) in physics and electrical engineering from the University of Maryland, College Park, in 1999 and an MS and PhD in electrical engineering from Cornell University in 2001 and 2004, respectively. He is currently an associate professor at the Air Force Institute of Technology (AFIT), Dayton, OH. His research interests include multicarrier equalization, blind and/or sparse adaptive filters, nonGPS navigation, source localization, image registration, and laser radar.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Brian J. Neff, Quentin D. MacManus, Stephen C. Cain, and Richard K. Martin "Image deblurring and near-real-time atmospheric seeing estimation through the employment of convergence of variance," Journal of Applied Remote Sensing 7(1), 073504 (20 September 2013). https://doi.org/10.1117/1.JRS.7.073504
Published: 20 September 2013
KEYWORDS: Sensors, Signal to noise ratio, Optical transfer functions, Atmospheric sensing, Deconvolution, Point spread functions, Detection and tracking algorithms
