1. Introduction

Both military and civilian applications have driven a significant amount of research on enhancing the quality of images received from optical sensors degraded by the effects of diffraction. When imaging through atmospheric turbulence, the representation of the remote scene is degraded by the diffraction of light from neighboring areas. A sensor can employ various techniques to illuminate and capture the remote scene, and the chosen technique is often driven by the intended application of the sensor, as shown by the following examples. In this article, we focus on applications associated with military targeting sensors. Current state-of-the-art military targeting sensors typically detect light in the infrared spectrum. This allows for numerous advantages, such as easier separation of man-made versus natural targets as well as the ability to obtain high-quality images in low-light conditions. Another class of sensor, known as a LAser Detection And Ranging (LADAR) system, captures a target's reflection from a laser illumination source. This class of sensor is receiving significant attention as improvements to the technology are made. In addition to the ability to image in low-light environments, some classes of LADAR sensors also allow for the collection of three-dimensional (3-D) images. Regardless of the sensor type, there are substantial benefits to image deblurring in military applications. These benefits include minimizing the risk of misidentifying a target and perhaps accidentally inflicting damage on nonhostile targets. Ultimately, the algorithm employed to remove atmospheric turbulence effects may depend on the type of sensor, since algorithm design is commonly based on statistical models of the received data in addition to processing time limitations. Due to various physical phenomena, images collected by each class of sensor may have differing noise distributions.
For instance, many traditional imaging sensors collect images from incoherent illumination sources, where the intensity closely follows a Poisson distribution due to the random photon arrival rate at the detector. The images received by a LADAR sensor often have more of a negative binomial distribution due to the constructive and destructive interference associated with the highly coherent illumination source.1 In some cases, the electronics used to capture an image may drive the received images toward a normal distribution. Regardless of the class of sensor and associated noise distribution, a common technique for improving the signal-to-noise ratio (SNR) of the received image is the process of multiple frame averaging.2 Many imaging systems currently employed or in development have high enough frame rates to capture and produce multiple-frame image ensembles consisting of 10 to perhaps hundreds of images over a short period of time. With proper registration, the summation of these images will tend to average out the effects of noise and thus improve the overall SNR. This technique is commonly required in situations where the number of photons received from a target is low, as in the case of imaging stellar targets or remote scenes at long slant ranges through the atmosphere. Assuming proper registration, the imaging sensor can commonly be described as shift invariant. This allows us to model the received image as the convolution between the remote scene and the point spread function (PSF).
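The SNR benefit of frame averaging can be sketched numerically. The scene values and frame counts below are illustrative assumptions for Poisson-distributed photon counts, not parameters from this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: uniform mean photon count per pixel (illustrative only).
scene = np.full((64, 64), 25.0)

def frame_average(scene, n_frames, rng):
    """Average n_frames independent Poisson realizations of the scene.

    With proper registration, averaging K frames leaves the mean unchanged
    while cutting the noise variance by 1/K, so the SNR grows as sqrt(K).
    """
    frames = rng.poisson(scene, size=(n_frames,) + scene.shape)
    return frames.mean(axis=0)

def snr(img, truth):
    """Mean signal divided by the RMS error against the noiseless truth."""
    return truth.mean() / np.sqrt(np.mean((img - truth) ** 2))

single = frame_average(scene, 1, rng)
averaged = frame_average(scene, 100, rng)
```

For this 100-frame ensemble the measured SNR improves by roughly a factor of ten over a single frame, as the sqrt(K) scaling predicts.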
The frequency representation of the PSF in an average sense is the product of the diffraction-limited optical transfer function (OTF) and the atmospheric blur.2 Finally, when using motion compensated frame average (MCFA) images comprised of the summation of numerous short-exposure frames, the atmospheric blur can accurately be described using the short-exposure OTF.3 A key advantage associated with the employment of the average short-exposure model is that the entire blurring function is reduced to a single unknown parameter (Fried's seeing parameter, r0), which this research will focus on accurately identifying. Current military targeting sensors employed in a tactical environment do not account for the effects of atmospheric blur. Reasons for this include the fact that blind deconvolution algorithms are often computationally expensive and time intensive. Additionally, adaptive optics are often too complex to employ in a highly dynamic environment. Deconvolution algorithms such as the Richardson–Lucy (RL) algorithm, in which the PSF is known, are generally accepted to perform better than blind-deconvolution algorithms, in which the PSF is unknown. However, in a tactical environment where the targeted scene and imaging sensor position are continually changing, it is practically impossible to have prior knowledge of the PSF. Considerable work has been accomplished in blind deconvolution for incoherently illuminated remote scenes.4–11 The sheer magnitude of this research supports its importance for a variety of image processing disciplines. The problem we are attempting to solve was previously addressed by MacDonald.2,12 This work by MacDonald focused on undoing the distorting effects of atmospheric turbulence in short-exposure, partially coherent imaging of remote scenes at long slant ranges. Key contributions of the algorithm originally presented by MacDonald are as follows.
A method was developed for estimating both an enhanced remote scene as well as an average OTF parameterized by the atmospheric seeing parameter where the noise was dominated by the constructive and destructive interference caused by the partially coherent illumination source. Although the effort was originally focused on providing an enhanced image, an equally important contribution was the ability to estimate Fried's seeing parameter where scintillometry or other measurement techniques were unavailable or impractical. Finally, the algorithm was tailored toward sensors employed in a tactical environment where both processing capability and available time are limited. For these reasons, we will compare our results to MacDonald's work, as his work represents the only prior attempt to solve the problem of parameterized blind deconvolution to recover atmospheric seeing. The organization of this article is as follows. A brief description of the system model, to include noise and atmospheric turbulence effects on the received image, will be provided in Sec. 2. In Sec. 3, we will discuss various techniques for removing atmospheric blur, including the maximum a posteriori (MAP) estimator previously developed for blind estimation of r0.12 Additionally, a novel technique will be presented which improves upon the ability to blindly estimate r0. A performance comparison of the two blind algorithms on both fully and partially illuminated simulated scenes distorted by various noise distributions will be provided in Sec. 4. Finally, the results will be validated with experimental data as shown in Sec. 5.

2. Background

Due to numerous physical phenomena, the images captured by a sensor intended to represent a remote scene have imperfections. First, we know that optical imperfections within the system can cause the diffraction of light to neighboring areas.
When operating a sensor within the atmosphere, optical imperfections can commonly be binned into those directly related to the manufacturing of the sensor and those that result from atmospheric turbulence. Second, the received image is generally further degraded by the effects of noise. This noise can also result from numerous sources such as readout noise, thermal noise, and noise associated with the illumination source. The total PSF or spatial impulse response of an optical sensor accounts for the diffraction effects directly attributed to the sensor optics, h_opt(x), and those that can be attributed to atmospheric turbulence, h_atm(x). This total PSF, h_total(x), is the convolution of the two primary components as shown in Eq. (1) in one dimension (1-D), where x is the spatial position or image pixel location. The impulse response of the optics can be computed by conducting a propagation experiment using known sensor parameters as shown in Ref. 1. In frequency space, the total OTF, H_total(ν), the optical OTF, H_opt(ν), and the atmospheric OTF, H_atm(ν), are simply the Fourier transforms of their respective PSFs, and the convolution operator is replaced by the multiplication operator, where ν is the spatial frequency. As a general rule of thumb, when collecting images with sufficiently short exposure times (a small fraction of a second), we can assume that the atmosphere through which the remote scene is viewed remains constant.3 Due to the ill-posed nature of blind deconvolution with two-dimensional (2-D) images, it is mathematically impossible to directly solve for the PSF impacting collected images when noise is present. Despite this hurdle, numerous algorithms have been developed to circumvent these mathematical challenges with considerable success by making various assumptions or approximations.4–11 One technique for simplifying this problem considers a parameterization of the OTF.
When using MCFA compilations of images where each individual image has a sufficiently short exposure time, the average short-exposure OTF, H_SE(ν), is reduced to a function of a single unknown parameter, Fried's seeing parameter, r0, as shown in Eq. (3). In the mathematical model for H_SE as a function of spatial frequency, the mean wavelength of light detected is λ, f is the focal length of the lens, and D is the aperture diameter of the sensor. For purposes of this research, we will substitute H_SE as our model for the atmospheric OTF, H_atm. However, the technique developed in Sec. 3.3 could also be demonstrated to work with the simpler long-exposure case where the image frames are averaged without motion compensation. The average long-exposure OTF, H_LE(ν), is shown in Eq. (4).3 Imaging devices exhibit a cutoff frequency dictated by their optical specifications according to Eq. (5). In this diffraction-limited case, the maximum spatial frequency, ν_c, can be computed by ν_c = D/(λf), where λ is the wavelength of the light of interest. While there are benefits to using long-exposure imaging in certain scenarios such as astrophotography, the loss of frequency content is often an undesired side effect. In a tactical military scenario or any other dynamic environment, it would be impractical if not impossible to point a sensor at a target long enough to warrant the use of long-exposure imaging. Figure 1 compares the frequency response for a sensor that is diffraction limited to the frequency response considering a long- and short-exposure OTF for two levels of atmospheric seeing. It is clearly evident that, provided the scenario allows, there are inherent benefits with respect to frequency content when using properly registered short-exposure images. The parameters for this demonstration were chosen to match those that will be employed to obtain the experimental results in Sec. 5 and are listed in Table 1.

Table 1 Optical system specifications.
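As a sketch of the frequency-content comparison in Fig. 1, the standard Fried forms of the long- and short-exposure atmospheric OTFs can be evaluated over the diffraction-limited passband. The optical parameters below are illustrative assumptions, not the values in Table 1.

```python
import numpy as np

# Illustrative optics (assumed values, not those in Table 1).
D = 0.05        # aperture diameter [m]
lam = 630e-9    # mean wavelength [m]
fl = 1.0        # focal length [m]

nu_c = D / (lam * fl)               # diffraction-limited cutoff frequency
nu = np.linspace(0.0, nu_c, 500)    # image-plane spatial frequency [cycles/m]
s = lam * fl * nu                   # equivalent pupil-plane separation [m]

def otf_long(r0):
    """Average long-exposure atmospheric OTF (Fried)."""
    return np.exp(-3.44 * (s / r0) ** (5.0 / 3.0))

def otf_short(r0):
    """Average short-exposure atmospheric OTF (near-field form)."""
    return np.exp(-3.44 * (s / r0) ** (5.0 / 3.0) * (1.0 - (s / D) ** (1.0 / 3.0)))
```

For any r0 below the aperture diameter, the short-exposure curve sits at or above the long-exposure curve across the entire passband, which is the frequency-content advantage the figure illustrates.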
3. Image Deconvolution

The process of deconvolution is commonly performed on collected images, d(x), that are degraded by a PSF, h(x), and additive noise, n(x). The goal of the process is to recover the true representation of the remote scene, f(x), as shown in Eq. (6). As previously mentioned, there is a plethora of algorithms designed to aid in this problem. Perhaps the most widely accepted or recognized algorithm for image deconvolution when the collected images follow a Poisson distribution is the RL algorithm.

3.1. Richardson–Lucy Deconvolution Algorithm

One of the key benefits of the RL algorithm, which applies to a wide variety of imaging applications in the presence of Poisson statistics, is that if the algorithm converges, it will converge to the maximum likelihood estimate (MLE).13–15 In Eq. (7), the iteration is written in terms of the remote scene coordinates and the detector plane coordinates, f̂ is an estimate for the scene f, and i is the expected value of the intensity received at each detector pixel given a specific PSF. However, a commonly cited drawback to this technique is the noise amplification that occurs as the number of iterations increases.9,16 If allowed to iterate indefinitely, the estimate for f would consist of only noise. Therefore, we must stop iterations before noise amplification occurs. Clearly, this could be accomplished interactively by the user; however, for an automated or blind routine, any required user interaction would be undesirable. The method proposed in this work will rely on the convergence of the estimate of the noise power and the predicted variance of the collected data to cease iterations.
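A minimal FFT-based sketch of the RL iteration follows. It assumes periodic boundaries and a PSF already centered in the array, and is illustrative rather than the exact implementation used in this work.

```python
import numpy as np

def richardson_lucy(d, psf, n_iter=50, eps=1e-12):
    """Basic Richardson-Lucy deconvolution for Poisson-distributed data.

    d:    observed (blurred, noisy) image
    psf:  point spread function, same shape as d, centered in the array
    Each iteration multiplies the current scene estimate by the
    back-projected ratio of the observed data to the reblurred estimate.
    """
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    conv = lambda x, H: np.real(np.fft.ifft2(np.fft.fft2(x) * H))
    est = np.full_like(d, d.mean())            # flat, positive initialization
    for _ in range(n_iter):
        reblurred = conv(est, otf)
        ratio = d / np.maximum(reblurred, eps)
        est = est * conv(ratio, np.conj(otf))  # conj(otf) applies the flipped PSF
    return est
```

Because each update is multiplicative, the estimate stays nonnegative, and total flux is approximately conserved when the PSF sums to one.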
Convergence of variance as a criterion to cease iterations has been used previously with success;12,17 however, the novelty in this work is that we will also use the convergence of variance to identify the best model for the PSF parameterized by r0.

3.2. Blind Estimate of Seeing Via MAP Technique

The idea of using the RL algorithm in a blind fashion to deblur an image was previously presented by Fish et al.4 However, their work did not employ the theoretical models for the long- and short-exposure transfer functions parameterized by r0.3 Perhaps the most similar work to this research was accomplished by MacDonald. He developed a blind technique that was iterative in nature like the RL algorithm, yet he introduced a priori information for images distorted by speckle noise following more of a negative binomial distribution.12 MacDonald introduced a priori information for the distribution of r0 in hopes of maximizing the likelihood at the appropriate level of seeing. If we assume independence of the measurements for every pixel in the detector array, we can state the joint probability of the observed noisy image, d, as the product of the individual pixel likelihoods. Ideally, we would like to maximize the likelihood or log likelihood over a range of r0 values to identify the appropriate PSF. Unfortunately, likelihood continually increases with r0, as illustrated by the example in Fig. 2. When the RL algorithm maximizes likelihood for a given PSF, the maximum likelihood (ML) solution does not necessarily occur when the correct PSF parameterized by r0 is chosen. This is the direct problem that MacDonald sought to solve by introducing a priori information for the distribution of r0. MacDonald hypothesized that a distribution could be applied to the value for r0 based on the intuitive observation that the seeing is seldom extremely better than the average and can often be worse.
The form of the probability density function for the random parameter r0 was assumed to be exponential, scaled by the number of pixels in the detector array, as shown in Eq. (11), where the distribution mean is the average atmospheric seeing. When applying this technique to the example illustrated in Fig. 2, we see that likelihood is maximized near the correct value of r0, as shown in Fig. 3. Although the technique is successful in this scenario, two mathematical challenges remain. First, the choice of an exponential distribution for r0 is probably inaccurate. Extremely low values of r0 are expected to be nearly as unlikely as high values. Second, the effect of scaling the exponential density function by the number of pixels in the detector array is difficult to justify for partially illuminated scenes such as astronomical images. The convergence of variance technique will remove the requirement for the prior distribution on r0 and allow us to directly converge to the correct value.

3.3. Blind Estimate of Seeing Via Convergence of Variance Technique

The convergence of variance technique works on the premise of searching for the best possible PSF parameterized by r0 in the amount of time available for processing. Given more time, the technique will provide a more refined estimate for r0. We will first explain this technique in more detail using the assumption that the images collected follow Poisson statistics. Later, we will demonstrate the technique using images that follow a negative binomial noise model. Ultimately, the technique should work regardless of the noise distribution, assuming the correct iterative deblurring algorithm and convergence criteria are used. This technique relies on a comparison of variance between the image estimate and the collected image. Iterations cease when the relationship in Eq. (12) is satisfied, at which point we can assume that any further iterations would only serve to fit the estimates to the noise.
Assuming the collected MCFA follows Poisson statistics, the deblurring algorithm employed would be the Richardson–Lucy algorithm in Eq. (7), and the predicted variance is given by Eq. (13). The relationship in Eq. (13) is allowed since the assumption can be made that the intensity captured by each detector is independent of other detectors, and each intensity can essentially be thought of as an independent Poisson random variable. The summation of these random variables is, therefore, a good approximation for the total image variance. Due to the photon counting nature of many imaging applications, the Poisson distribution is often employed as a statistical model for the detected images. However, due to the highly coherent nature of laser light, images detected by a LADAR sensor often follow more of a negative binomial distribution. Fortunately, the robustness of the convergence of variance technique allows for employment in this scenario as well. MacDonald derived an iterative MLE for the case where the noise is dominated by laser speckle, shown in Eq. (14), where M is the coherence parameter of the light.12 Using the deblurring algorithm in Eq. (14), we would again iterate until the relationship in Eq. (12) is satisfied, with the predicted variance given by Eq. (15). Equation (15) represents the variance for a negative binomial random variable, which should well approximate the variance in laser illuminated imagery. The diagram in Fig. 4 demonstrates how this technique could be employed in an operational scenario where processing time and computational power are limited. In this scenario, images are collected and fed into the estimation process. At any point in time, the best possible estimate for r0 can be drawn upon for deblurring an image. However, in parallel, the outer loop will continue to work on characterizing the current atmospheric seeing conditions. One of the key advantages to this algorithm is that it is easy to parallelize. Even with a common home computer that has a multicore processor, it is easy to simultaneously test multiple values of r0 for convergence.
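As a sketch of the Poisson case, the stopping rule compares the residual power between the collected image and the reblurred estimate against the predicted Poisson variance (the sum of the reblurred intensities, since a Poisson variate's variance equals its mean). The image size and blur below are illustrative. For the negative binomial case, the predicted variance would instead be the sum of reblurred + reblurred**2/M.

```python
import numpy as np

def rl_with_variance_stop(d, psf, max_iter=2000, eps=1e-12):
    """Richardson-Lucy iterations halted by the convergence-of-variance test.

    For Poisson data the per-pixel variance equals the per-pixel mean, so
    iterations cease once sum((d - reblurred)^2) <= sum(reblurred); beyond
    that point further iterations would only fit the noise.
    Returns (estimate, iterations_used, converged_flag).
    """
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    conv = lambda x, H: np.real(np.fft.ifft2(np.fft.fft2(x) * H))
    est = np.full_like(d, d.mean())
    for k in range(1, max_iter + 1):
        reblurred = conv(est, otf)
        if np.sum((d - reblurred) ** 2) <= np.sum(reblurred):
            return est, k, True
        ratio = d / np.maximum(reblurred, eps)
        est = est * conv(ratio, np.conj(otf))
    return est, max_iter, False
```

With the correct (or a too-large) r0 the test is eventually satisfied; with a too-small r0 the residual plateaus above the noise floor and the flag stays False.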
The r0 to employ in the deblurring algorithm would be the lowest value of r0 for which Eq. (12) is satisfied. Additionally, we can further enhance the process by first conducting a coarse estimate of r0 followed by a more refined estimate, as indicated in Fig. 5. MacDonald references the employment of a convergence test for ceasing iterations in his algorithm; however, it is apparent that he did not utilize this test as a sufficient criterion for identifying the correct value of atmospheric seeing. As previously mentioned, if allowed sufficient time, the relationship in Eq. (12) will be satisfied for the correct r0 and all higher values. However, the criterion will never be attained for low estimates of r0. The following sections will demonstrate the employment of this technique on images with Poisson and negative binomial noise to show that a priori information is not required to achieve accurate estimates for r0.

4. Simulation Results

The following results will demonstrate the utility of the convergence of variance technique and compare the results to the algorithm developed by MacDonald for images with Poisson and negative binomial noise. The optical specifications listed in Table 2 and used for the simulations were not limited to what could readily be obtained for experimentation. Rather, the specifications were chosen to mimic what could potentially be incorporated into a targeting pod design based on size limitations. The specifications allow for properly sampled images according to Eq. (5). We will first consider simulation results using a fully illuminated scene. We will then simulate conditions for astrophotography where the scenes are only partially illuminated.

Table 2 Simulated system specifications.
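The coarse-then-fine search of Fig. 5 reduces to finding the smallest candidate r0 whose deblurring run satisfies Eq. (12). The sketch below factors out that scheduling logic; the `converges` predicate is a stand-in for a full deblurring run at a candidate r0 (per the text, it succeeds for the true r0 and everything above it), and all step sizes are illustrative.

```python
def smallest_converging_r0(converges, r0_min, r0_max, coarse_step, fine_step):
    """Coarse-then-fine search for the lowest r0 satisfying Eq. (12).

    converges: callable taking a candidate r0 [m] and returning True if the
               deblurring loop meets the convergence-of-variance criterion.
    """
    # Coarse pass: scan downward from the aperture diameter until failure.
    candidate = None
    r0 = r0_max
    while r0 >= r0_min:
        if converges(r0):
            candidate = r0
        else:
            break
        r0 -= coarse_step
    if candidate is None:
        return None                      # nothing converged in the search range
    # Fine pass: refine just below the coarse estimate.
    r0 = candidate
    while r0 - fine_step >= r0_min and converges(r0 - fine_step):
        r0 -= fine_step
    return r0
```

Because every candidate test is independent, the coarse pass parallelizes trivially across the cores of a multicore processor, as noted above.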
4.1. Fully Illuminated Scenes

In this section, we will present results from images that are fully illuminated. By fully illuminated, we mean that the overwhelming majority of the scene is not dark and contains varying levels of contrast. Recall that MacDonald's algorithm defined a prior for r0 of exponential form scaled by the total number of pixels in the detector, as shown in Eq. (11). In the following examples, the cameraman photo built into Matlab® is blurred using a total OTF that is the product of the diffraction-limited OTF and an average short-exposure OTF with various levels of r0. Multiple trials will be conducted with MCFA images composed of 1, 10, 20, 30, 50, and 100 individual frames with independent realizations of Poisson noise to demonstrate the effects of SNR on each algorithm. The original, blurred, and recovered images are shown in Fig. 6. In order to implement MacDonald's algorithm, we either need an initial estimate of the average value for atmospheric seeing, or we can initialize it to the aperture diameter if no estimate can be made. For purposes of fair comparison, we will assume that no prior estimates are known for atmospheric seeing, and the average seeing will be initialized to the aperture diameter. Table 3 shows that the convergence of variance technique and MacDonald's algorithm produce nearly identical results, with the only exceptions highlighted in bold. MCFA images consisting of more frames take longer to converge due to the higher intensities at each pixel associated with the summation of individual frames. When the algorithm did not converge to the true value of r0, the estimated value was always within 0.5 cm of the true value. The estimates for r0 often appear to be lower than the true value, and this is likely due to the algorithm's attempt to remove minor focus errors that were not accounted for when assigning the image as truth data.
The PSF arising from minor focus error will have a similar shape to the PSF arising from atmospheric turbulence.18 Therefore, minor focus error could contribute to the low estimates for r0. This assessment is drawn from and supported by the fact that ideally simulated scenes using patterns and point sources, as demonstrated in Sec. 4.2, never converge below the true value regardless of the number of iterations the algorithm is allowed to perform, provided we have an adequate SNR.

Table 3 Results for fully illuminated image simulation with Poisson noise (true r0 = 2.6 cm).
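The fully illuminated simulation above (blur by the total OTF, then build an MCFA from independent Poisson realizations) can be sketched as follows. A synthetic checkerboard stands in for the Matlab cameraman image, and a Gaussian stands in for the product of the diffraction-limited and average short-exposure OTFs; both substitutions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in scene (the article uses Matlab's cameraman image).
n = 64
y, x = np.mgrid[0:n, 0:n]
scene = 40.0 + 60.0 * ((x // 8 + y // 8) % 2)

# Stand-in total OTF; the article uses the parameterized product of the
# diffraction-limited OTF and the average short-exposure OTF.
fx = np.fft.fftfreq(n)
FX, FY = np.meshgrid(fx, fx)
otf = np.exp(-(FX ** 2 + FY ** 2) / (2 * 0.05 ** 2))

blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * otf))

def make_mcfa(blurred, n_frames, rng):
    """Average n_frames independent Poisson realizations of the blurred scene."""
    frames = rng.poisson(np.maximum(blurred, 0.0), size=(n_frames,) + blurred.shape)
    return frames.mean(axis=0)

mcfa_30 = make_mcfa(blurred, 30, rng)   # higher-SNR ensemble
mcfa_1 = make_mcfa(blurred, 1, rng)     # single noisy frame
```

The 30-frame ensemble preserves the blurred scene's mean while reducing the noise power by roughly a factor of 30 relative to the single frame, mirroring the SNR trend in Table 3.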
The performance of the convergence of variance technique is based on the quality of the blurred image and the amount of time available for processing. Provided enough time is allowed, Eq. (12) will be satisfied for the best estimate of r0. However, allowing too much time does not present a problem for this algorithm. By observing the difference between the left and right sides of Eq. (12) at each iteration, we can update an estimate for the number of remaining iterations required for convergence, EI, using the relationships in Eq. (16), where k represents the iteration number, BV is the variance of the collected image, and IV is the mean square error between the collected images and the non-noisy estimate. Essentially, the relationships in Eq. (16) are used to predict how long the algorithm will have to iterate based on the current rate of convergence. Figure 7 demonstrates that when the value of r0 is too low, the estimated number of remaining iterations diverges. We now repeat this experiment in the presence of negative binomial noise to simulate the expected results from laser illuminated imagery. Figure 8 again demonstrates the results using an MCFA consisting of 30 frames and a cap on the maximum iterations set to 5000. A close inspection of the blurred/noisy image reveals a significant and visible increase in overall noise variance. As expected, this does impact the final results. Table 4 summarizes the results obtained for the convergence of variance technique, as well as MacDonald's algorithm using the introduction of a prior for the distribution of r0. However, in the case of negative binomial noise, the differences in the results are more significant. By introducing the prior, the tendency to underestimate r0 is more pronounced. This trend of underestimation of r0 was also noticed by MacDonald.19 Again, it is expected that focus error in the original image is a contributing factor in the underestimation of r0. This presents an interesting topic for potential future research.
Table 4 Results for fully illuminated image simulation with negative binomial noise (true r0 = 2.6 cm).
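The remaining-iteration predictor EI of Eq. (16) can be sketched as a simple extrapolation. The exact functional form in the article is not reproduced here, so the linear-rate version below is an illustrative reconstruction using the BV and IV quantities defined above.

```python
def estimated_remaining_iterations(iv_history, bv):
    """Predict iterations left until IV (residual power) reaches BV (predicted
    noise variance) by extrapolating the most recent per-iteration decrease.

    A diverging (non-decreasing) IV while still above BV is the signature of
    a candidate r0 that is too low, matching the behavior shown in Fig. 7.
    """
    if len(iv_history) < 2:
        return None                          # need two samples to estimate a rate
    gap = iv_history[-1] - bv
    if gap <= 0:
        return 0                             # Eq. (12) already satisfied
    rate = iv_history[-2] - iv_history[-1]   # decrease over the last iteration
    if rate <= 0:
        return float('inf')                  # not converging: candidate r0 too low
    return gap / rate
```

In practice this prediction would be refreshed every iteration, so a finite, shrinking EI signals a viable candidate while a diverging EI lets the search abandon it early.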
At this point, the functionality of the convergence of variance technique has been demonstrated for fully illuminated scenes. Further experimentation was conducted on simulated stellar targets to identify the potential for measurement of atmospheric seeing on scenes where the majority of the image consists only of background illumination and noise. Since these targets are fully simulated, and thus inherently perfectly focused, underestimation of r0 was not expected to be a problem for the convergence of variance technique. Provided the algorithm is allowed enough time to iterate, convergence to the correct r0 should be achieved. However, the scaling factor in the prior of Eq. (11) was expected to still cause some bias in the estimates for r0 using MacDonald's algorithm.

4.2. Partially Illuminated Scenes

Space situational awareness (SSA) is a key mission of the United States Air Force Space Command. One aspect of SSA involves using both telescope networks and radars to detect, identify, record, and track all man-made objects orbiting the earth. Knowing the exact locations of these orbiting objects in space is crucial for future space operation safety. The SSA mission has become even more important with recent events such as the Iridium and Cosmos satellite collision and the Chinese antisatellite missile test in 2007, both of which created large swaths of space debris. This debris will be an ongoing risk to US satellites for years to come as the orbits of the debris degrade toward Earth. This is merely a single example and justification of the importance of deblurring techniques applicable to astronomical images. The following simulations will consider three separate target configurations. We will look at a single point source that could be representative of a star, a scene that has multiple point sources arranged throughout the image, and finally, a cross bar pattern, as in Fig. 9.
We will again consider both Poisson and negative binomial statistics with various SNR levels. The testing in Sec. 4.1 revealed that both algorithms are impacted by SNR and focus error. However, the simulations also revealed that no gain in performance was realized through the introduction of the prior for r0 and that the convergence of variance technique exhibited promise for estimation of r0 as well as a deblurred scene. The results that follow add support for the hypothesis that the underestimation of r0 was a function of both SNR and focus error. The SNR for the various MCFAs used in the following simulations is identified in Table 5. As expected, the SNR for MCFAs with Poisson noise is higher than the SNR for MCFAs with negative binomial noise.

Table 5 Signal-to-noise ratio for simulation data.
Tables 6 and 8 demonstrate the utility of the convergence of variance algorithm on partially illuminated scenes with Poisson and negative binomial noise, respectively. From these results, we conclude that, provided enough frames are properly registered and averaged to provide adequate SNR and enough time is allowed for convergence, the value of r0 can be estimated to within 1 mm. In cases where we have low SNR, the algorithm will tend to underestimate, but this is expected since the noise power in the images masks some of the high frequency content. If insufficient time is allowed for convergence, the algorithm will produce a high estimate for r0, as can be observed when the cap for iterations was limited to 1,000. Additionally, we notice that in cases of adequate SNR, we do not have a problem of underestimation of r0 since focus error was not present in these images.

Table 6 Results for partially illuminated image simulation with Poisson noise (true r0 = 3.3 cm).
For demonstration purposes, the algorithm was allowed a cap of 1,000,000 iterations. Even under these conditions, underestimation was never a factor for images with adequate SNR using the convergence of variance technique. At this point, it is unknown if it would be possible to predict the precise level of SNR at which the algorithm will converge to the correct value of r0, since the relationship appears to be scene or contrast dependent. However, we can conclude that higher levels of SNR will yield better results. Additionally, even at low values of SNR, we achieve reasonable estimates for the deblurred scene and r0 using the convergence of variance method. On the other hand, by introducing a prior for the distribution of r0, underestimation is more prevalent. In Fig. 10, we demonstrate the performance of the convergence of variance technique for the three target types in the presence of Poisson noise. Using the RL deconvolution algorithm with a cap on the number of iterations set to 10,000 and an MCFA consisting of 30 frames, we were able to recover the correct r0 in all cases. For the point source targets, the blurring effects of the simulated atmosphere reduce the intensity to the point that it is difficult to visually identify the various point sources. However, when deconvolution is completed, each of the sources is easily identified. For the cross bar pattern, the process of deconvolution makes it much easier to identify the structure of the target pattern. Although it may seem that a cap of 10,000 iterations is unreasonable, the algorithm will only take as much time as needed to converge within this upper bound. For instance, if the termination criterion is met before the upper bound on iterations is reached, the algorithm will terminate. Even with a standard home computer with a 2.7 GHz Intel® Core™ i5 processor and 16 GB of memory, convergence was achieved in a reasonably short period of time for all target types, as shown in Table 7. Again, as indicated in Fig.
5, the algorithm could be manually interrupted sooner if needed, at which point the lowest value of r0 that has allowed convergence would be returned as the answer. For instance, in the example shown in Fig. 10, if we interrupted the routine after 4.26 s for the cross bar pattern, we would get an estimated r0 of 4 cm. After a total of 10 s, the estimate would be within 0.5 cm of the truth at 3.8 cm. The estimate continues to be refined until the best estimate is achieved after 21.77 s. As demonstrated in Table 6, convergence at a value less than the truth is not an issue provided we have adequate SNR and the image is not further degraded by optical aberrations such as focus error.

Table 7 Convergence times for simulations shown in Fig. 10.
Table 8 Results for partially illuminated image simulation with negative binomial noise (true r0 = 3.3 cm).
4.3. Extension of Convergence of Variance Technique for 3-D FLASH Laser Radar Images

3-D LADAR technology is receiving increased interest as the technology improves. Currently, the commercially available sensors are severely undersampled and do not experience the effects of diffraction from atmospheric turbulence. However, as the technology continues to progress, it is expected that minimizing the effects of atmospheric turbulence will be important. Conversely, certain applications such as the imaging and tracking of space debris may require an optics configuration in which the current sensors would receive properly sampled images. In those cases, the convergence of variance technique could easily be applied to identify the PSF parameterized by r0 that will deblur the scene. A typical full-waveform 3-D LADAR image is composed of multiple 2-D images or frames separated by a small time delta. Therefore, the 3-D image can be flattened into a 2-D image by simply removing the range information and accumulating the intensity information for each pixel across the series of individual frames. Fortunately, the atmosphere can be considered static for the laser illuminator pulse duration and subsequent detector integration times that are common to 3-D LADAR sensors.1,3 Based on this premise, this technique was originally explored as a means of identifying the best PSF parameterized by r0 to be employed in algorithms such as the multiple surface FLASH LADAR ranging algorithm.20

5. Experimental Results

The optical configuration shown in Fig. 11 was used to obtain properly sampled images. The specifications for this setup are listed in Table 1. The camera used in this configuration allowed for a 16-bit analog-to-digital conversion, and experiments show that it acts as a photon counting device in lower intensity regions. This allows for utilization of the Poisson and negative binomial statistical models without applying a conversion factor to the digitized images.
However, as the detector approached higher intensity thresholds, the conversion between digital counts and photons became nonlinear. For that reason, images were taken in low light conditions to ensure the convergence of variance technique could be applied. In order to create a turbulent atmosphere to image through, a heat source was directed in front of the telescope aperture. This technique allowed for the generation of repeatable turbulent atmospheres with r0 values in the subcentimeter range. Without the use of this heat source, all of the images would likely have been at or near the diffraction limit, making validation of the convergence of variance technique difficult. The experiments for fully illuminated scenes used the image in Fig. 12 as a target. This target was placed indoors, 10 m from the sensor, where turbulence and illumination could be controlled. The incorporation of a step in the bottom portion of the scene permitted measurement of the true r0 for comparison with the estimated values.

5.1 Fully Illuminated Scenes—Natural Light

In the following two experiments, the remote scene was fully illuminated by natural lighting. Given that the light source was incoherent, the Poisson statistical model for the photon arrival rate applies.3 We use the RL deconvolution algorithm (7) with PSFs parameterized by a range of r0 values from 0.1 cm up to the aperture diameter of 5 cm. In Fig. 13, we demonstrate the ability to measure r0 by measuring the step response from the collected MCFA. The impulse response is then found by taking the derivative of this measured step response. Once we have the impulse response, we can vary r0 per the relationship in Eq. (2) to find the theoretical total PSF that minimizes the error between the measured impulse response and the theoretical impulse response. In the first experiment, shown in Fig. 14, r0 was measured at 0.4 cm.
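The r0 measurement just described (differentiate the measured step response, then scan candidate r0 values for the theoretical impulse response that best matches) can be sketched as follows. This is an illustration only: `gaussian_lsf` is a stand-in model, not the paper's Eq. (2) PSF, and the grid limits simply mirror the 0.1 cm to 5 cm range used in the experiments.

```python
import numpy as np

def estimate_r0_from_edge(step_profile, r0_grid, model_lsf):
    """Differentiate a measured step (edge) response to get the line spread
    function, then return the r0 in r0_grid whose model impulse response
    minimizes the squared error against the measurement."""
    lsf = np.gradient(step_profile)
    lsf /= lsf.sum()                       # normalize to unit area
    errors = [np.sum((lsf - model_lsf(r0, lsf.size)) ** 2) for r0 in r0_grid]
    return r0_grid[int(np.argmin(errors))]

def gaussian_lsf(r0, n, k=2.0):
    """Illustrative stand-in model: a Gaussian line spread function that
    widens as turbulence worsens (i.e., as r0 shrinks). Not Eq. (2)."""
    x = np.arange(n) - n / 2
    g = np.exp(-(x * r0 / k) ** 2)
    return g / g.sum()

# Synthetic check: build an edge blurred with r0 = 0.4 cm, then recover it
edge = np.cumsum(gaussian_lsf(0.4, 101))
r0_grid = np.arange(0.1, 5.0, 0.1)        # 0.1 cm up to the 5 cm aperture
print(estimate_r0_from_edge(edge, r0_grid, gaussian_lsf))  # near 0.4
```

The same scan works with any parameterized impulse-response model; only `model_lsf` changes.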
Using the convergence of variance technique, we obtain an estimate of r0 with an error of only 1 mm. At this point, it is important to note that the edges of the deblurred images are distorted due to the implementation of the RL algorithm using the 2-D fast Fourier transform in Matlab. This implementation was chosen in order to decrease the time required for execution. Therefore, when computing the variance per the relationship in Eq. (12), the edges were ignored. In the collected MCFA, it is difficult to discern the smaller font sizes. However, when deconvolution is conducted with an OTF parameterized by the r0 estimate, it is possible to identify each of the characters. In the second experiment, shown in Fig. 15, r0 was measured at 1.1 cm. Using the convergence of variance technique, we again obtain an estimate of r0. Although the blurring due to turbulence was less severe in this experiment, improvement in the sharpness of the characters is again noted when deconvolution was conducted using the estimate for r0.

5.2 Fully Illuminated Scenes—Laser Illumination

In the following two experiments, the remote scene was illuminated using a laser with a wavelength of 630 nm. Given that the light source was partially coherent, with a measured coherence parameter, the negative binomial statistical model for speckle noise applies.3 We use the ML estimator developed by MacDonald (14) with PSFs parameterized by a range of r0 values from 0.1 cm up to the aperture diameter of 5 cm. In the first experiment, shown in Fig. 16, r0 was measured at 0.5 cm. Using the convergence of variance technique, we obtain an estimate of r0. In the collected MCFA, it is difficult to discern the smaller font sizes, and based on where the illumination spot was centered, the top two rows of text are nearly illegible. However, when deconvolution is conducted with an OTF parameterized by the estimate for r0, it is possible to identify most of the characters.
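For readers who want to see why FFT-based RL deconvolution distorts the borders, a minimal Richardson-Lucy loop is sketched below in NumPy rather than Matlab. The circular convolution implied by the FFT wraps energy around the image edges, which is exactly why edge pixels are excluded from the variance computation; the PSF is assumed centered and the same size as the image, and the 8-pixel border width is our choice, not taken from the paper.

```python
import numpy as np

def rl_deconvolve_fft(img, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy deconvolution via the 2-D FFT (circular convolution).

    `psf` must be centered and the same shape as `img`. The wrap-around
    inherent to FFT convolution distorts the image borders, so those pixels
    should be ignored when evaluating any variance-based stopping rule.
    """
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    est = np.full(img.shape, img.mean())
    for _ in range(n_iter):
        blurred = np.real(np.fft.ifft2(np.fft.fft2(est) * otf))
        ratio = img / np.maximum(blurred, eps)
        # Multiplicative RL update: correlate the ratio with the PSF
        est *= np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
    return est

def interior_variance(img, border=8):
    """Sample variance with the FFT-distorted border region excluded."""
    return np.var(img[border:-border, border:-border])
```

In the convergence of variance technique, iteration would cease once the edge-trimmed variance reaches its statistically predicted value rather than after a fixed iteration count.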
It is much easier to identify the row of Qs, and the top row of Es is faintly visible. In the second experiment, shown in Fig. 17, r0 was measured at 1.1 cm. Using the convergence of variance technique, we obtain an estimate of r0. Although the blurring due to turbulence was less severe in this experiment, significant improvement is again noted when deconvolution was conducted using the estimate for r0. The results for the fully illuminated image experiments are summarized in Table 9.

5.3 Correlation Technique for Measurement of Atmospheric Seeing

With the previous experimental setups, the true value of r0 could be measured by imaging a step target and taking the derivative to find the impulse response, as shown in Fig. 13. The step target could be placed in line with the desired remote scene such that the measurements were made through nearly identical columns of turbulent air. However, when trying to measure the true r0 of stellar images for comparison with the experimental results, this was not a viable solution. We could average many short exposure images of a single star in order to obtain the average short-exposure OTF parameterized by r0; however, this requires precise tilt removal, and any shift estimation errors will appear as attenuation of the short-exposure OTF and underestimation of r0. Averaging enough frames to achieve the long-exposure OTF in order to find r0 would likely take thousands of images to converge upon the optimal solution. Based on the frame rate of the experimental collection system, it is possible that the value of r0 would change in the time required to gather this amount of data. Therefore, we considered an alternative technique for measuring the value of atmospheric seeing that uses the cross correlation of the collected images.
By considering all possible combinations of the cross correlations between a series of individual short exposure images taken of a star, we can find the cross power spectral density (CPSD). In other words, the CPSD is the correlation between the normalized Fourier transforms of all possible image combinations.21 For a sequence of N images, there are a total of N(N-1)/2 nonredundant cross correlations. Additionally, the CPSD of the blurred point source can be related to the speckle transfer function, as shown in Eq. (18).22 Fortunately, the speckle transfer function can also be parameterized by r0. Therefore, in order to obtain the true value for atmospheric seeing, we can find the value of r0 that minimizes the error in Eq. (18). At this point, we must keep in mind that this technique for measuring r0 is limited to cases where we are imaging a point source. We will use this technique to demonstrate that the convergence of variance technique will in fact identify the correct r0.

5.4 Partially Illuminated Scenes

For the experiments involving partially illuminated scenes, we chose to image a star. Stars can essentially be considered point sources of light, allowing us to use the technique presented in Sec. 5.3 to obtain truth data for comparison with the convergence of variance results. Although the resultant deblurred image is intuitive, the estimated values for r0 prove that the technique can be successfully used to measure r0 for partially illuminated scenes. For the experiments, we chose a relatively bright star near Polaris to image. This minimized the relative motion between the field of view and the imaged portion of the sky. All individual images were taken using an exposure time of 0.001 s to ensure that the short-exposure model was applicable.3 Additionally, each of the MCFAs is a compilation of 20 individual frames. Some experimentation with MCFAs consisting of more than 20 frames was accomplished; however, no increase in performance was observed.
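The pairwise CPSD computation from Sec. 5.3 is straightforward to prototype. The sketch below assumes the frames are same-sized 2-D NumPy arrays and normalizes each Fourier transform by its DC value; that normalization choice and the function names are our assumptions for illustration.

```python
import numpy as np
from itertools import combinations

def cpsd(frames, eps=1e-12):
    """Cross power spectral density of a stack of short-exposure star images.

    Averages, over all N*(N-1)/2 nonredundant frame pairs, the product of one
    frame's normalized Fourier transform with the conjugate of the other's.
    """
    F = [np.fft.fft2(f) for f in frames]
    F = [Fi / max(abs(Fi[0, 0]), eps) for Fi in F]   # normalize by the DC term
    pairs = list(combinations(range(len(frames)), 2))
    acc = np.zeros(frames[0].shape, dtype=complex)
    for i, j in pairs:
        acc += F[i] * np.conj(F[j])
    return acc / len(pairs)
```

Fitting a speckle transfer function parameterized by r0 to this quantity then yields the truth measurement used for comparison in Sec. 5.4.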
In Table 10, the estimated value of r0 was always within 0.2 cm of the measured value. Initially, the cap on the number of iterations was set to 1000. In all cases, the algorithm converged within the first 100 iterations for this scene. We then increased the number of iterations to 10,000 and then 20,000 to see if it had an impact on the results, but it did not.

Table 9. Summary of results for fully illuminated image experiments.
Table 10. Results for partially illuminated image experiments.
As described in Fig. 5, we first conduct a coarse search with a step size of 1 cm, revealing that the minimum value of r0 that will allow convergence is 3 cm. We then step through with a smaller step size and show that convergence can be achieved for values of r0 as low as 2.4 cm. For this image, convergence occurs after just 68 iterations for an r0 of 2.4 cm; however, the algorithm is divergent for anything less than this value. In Fig. 18, we show the resultant deblurred image when using this best estimate for r0 in conjunction with the RL deconvolution algorithm and the average short-exposure OTF. As expected, the image estimate approaches more of a point source than the original image.

6. Conclusions

The focus of this work was to develop a blind deconvolution technique that could be employed in a tactical military environment where both time and computational power are limited. The convergence of variance technique detailed above allows for rapid and accurate estimation of the atmospheric OTF parameterized by the seeing parameter, r0. As shown in Fig. 4, the technique can be interrupted after any amount of time, at which point the best available results are provided; if more time is available, the results are simply enhanced. Additionally, the convergence of variance technique reduces the possibility of noise amplification common to iterative deconvolution algorithms by ceasing iteration once the statistically predicted variance is achieved. An interesting discovery was also made through the course of this effort and is highlighted in Sec. 4.1: the technique will attempt to recover from a minor focus error in the collected images due to the similarity between the atmospheric OTF and the OTF that arises from a minor focus error. As a result, the estimates for r0 may be lower than the true value. Therefore, if this algorithm is to be used for atmospheric seeing measurement, the images must be in focus.
A potential topic for future research is the interaction between the seeing parameter and focus error for varying levels of focus error and atmospheric turbulence.

Acknowledgments

The authors would like to acknowledge the Air Force Office of Scientific Research (AFOSR) for their continued funding of research within this discipline. The views expressed in this paper are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the United States government.

References

1. R. Richmond and S. Cain, Direct-Detection LADAR Systems, SPIE Press, Bellingham, Washington (2010).
2. A. MacDonald, S. Cain, and M. Oxley, "Binary weighted averaging of an ensemble of coherently collected image frames," IEEE Trans. Image Process. 16(4), 1085–1100 (2007). http://dx.doi.org/10.1109/TIP.2007.891774
3. J. Goodman, Statistical Optics, Wiley Classics Library, John Wiley and Sons, New York (2000).
4. D. Fish, A. Brinicombe, and E. Pike, "Blind deconvolution by means of the Richardson-Lucy algorithm," J. Opt. Soc. Am. A 12(1), 58–65 (1995).
5. T. Schulz, B. Stribling, and J. Miller, "Multiframe blind deconvolution with real data: imagery of the Hubble Space Telescope," Opt. Express 1(11), 355–362 (1997). http://dx.doi.org/10.1364/OE.1.000355
6. T. Schulz, "Multiframe blind deconvolution of astronomical images," J. Opt. Soc. Am. A 10(5), 1064–1073 (1993). http://dx.doi.org/10.1364/JOSAA.10.001064
7. A. Carasso, D. Bright, and A. Vladar, "APEX method and real-time blind deconvolution of scanning electron microscope imagery," Opt. Eng. 41(10), 2499–2514 (2002). http://dx.doi.org/10.1117/1.1499970
8. A. Carasso, "APEX blind deconvolution of color Hubble Space Telescope imagery and other astronomical data," Opt. Eng. 45(10), 107004 (2006). http://dx.doi.org/10.1117/1.2362579
9. L. Carvalho et al., "Comparison of image restoration methods applied to inland aquatic systems images acquired by HR CBERS 2B sensor," in IEEE Int. Geoscience and Remote Sensing Symp. (IGARSS), 523–526 (2010).
10. T. Chan and C. Wong, "Total variation blind deconvolution," IEEE Trans. Image Process. 7(3), 370–375 (1998). http://dx.doi.org/10.1109/83.661187
11. E. Lam and J. Goodman, "Iterative statistical approach to blind image deconvolution," J. Opt. Soc. Am. A 17(7), 1177–1184 (2000). http://dx.doi.org/10.1364/JOSAA.17.001177
12. A. MacDonald, S. Cain, and E. Armstrong, "Image restoration techniques for partially coherent 2-D LADAR imaging systems," Proc. SPIE 5562, 10–18 (2004). http://dx.doi.org/10.1117/12.560278
13. W. Richardson, "Bayesian-based iterative method of image restoration," J. Opt. Soc. Am. 62(1), 55–59 (1972). http://dx.doi.org/10.1364/JOSA.62.000055
14. L. Lucy, "An iterative technique for the rectification of observed distributions," Astron. J. 79(6), 745 (1974). http://dx.doi.org/10.1086/111605
15. L. Shepp and Y. Vardi, "Maximum likelihood reconstruction for emission tomography," IEEE Trans. Med. Imaging 1(2), 113–122 (1982). http://dx.doi.org/10.1109/TMI.1982.4307558
16. N. Dey et al., "A deconvolution method for confocal microscopy with total variation regularization," in IEEE Int. Symp. Biomedical Imaging: Nano to Macro, 1223–1226 (2004).
17. J. McMahon, R. Martin, and S. Cain, "Three-dimensional FLASH laser radar range estimation via blind deconvolution," J. Appl. Remote Sens. 4(1), 043517 (2010). http://dx.doi.org/10.1117/1.3386044
18. J. Goodman, Introduction to Fourier Optics, 3rd ed., Roberts and Company, Greenwood Village, Colorado (2005).
19. A. MacDonald, Blind Deconvolution of Anisoplanatic Images Collected by a Partially Coherent Imaging System, Air Force Institute of Technology (2006).
20. B. Neff, S. Cain, and R. Martin, "Discrimination of multiple ranges per pixel in 3D FLASH LADAR while minimizing the effects of diffraction," Proc. SPIE 8165, 81650J (2011). http://dx.doi.org/10.1117/12.891783
21. S. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice Hall, Upper Saddle River, New Jersey (2010).
22. M. Roggemann and B. Welsh, Imaging Through Turbulence, CRC Press, Boca Raton, Florida (1996).
Biography

Brian J. Neff received his BS in electrical engineering from the University of Pittsburgh in 1999, his MS in electrical engineering from New York Polytechnic in 2006, and his MS in flight test engineering from the U.S. Air Force Test Pilot School in 2007. He is currently involved in doctoral research at the Air Force Institute of Technology. His research interests include 3-D laser radar, signal processing, and blind deconvolution.

Quentin D. MacManus received his BS in electrical engineering from Johnson and Wales University in 2000 and his MS in electrical engineering from the Air Force Institute of Technology in 2011. He is currently working in research and engineering in the area of image and signal processing.

Stephen C. Cain received his BS degree in electrical engineering from the University of Notre Dame in 1992 and the MS degree in electrical engineering from Michigan Technological University in 1994. He received the PhD degree in electrical engineering from the University of Dayton in 2001. He served as an officer in the United States Air Force from 1994 to 1997. He has held the position of senior scientist at Wyle Laboratories and senior engineer at ITT Aerospace/Communications Division. He is currently an associate professor of electrical engineering at the Air Force Institute of Technology.

Richard K. Martin received dual BS degrees (summa cum laude) in physics and electrical engineering from the University of Maryland, College Park, in 1999, and an MS and PhD in electrical engineering from Cornell University in 2001 and 2004, respectively. He is currently an associate professor at the Air Force Institute of Technology (AFIT), Dayton, OH. His research interests include multicarrier equalization, blind and/or sparse adaptive filters, non-GPS navigation, source localization, image registration, and laser radar.