When imaging through the atmosphere, the resulting image contains not only the desired scene, but also the adverse
effects of all the turbulent air mass between the camera and the scene. These effects are viewed as a combination of nonuniform
blurring and random shifting of each point in the received short-exposure image. Corrections for both aspects of
this combined distortion have been tackled reasonably successfully by previous efforts. We presented in an earlier paper
a more robust method of restoring the geometry by redefining the place of the prototype frame and by reducing the
adverse effect of averaging in the processing sequence. We present here a variant of this method using a Minimum Sum
of Squared Differences (MSSD) cross-correlation registration algorithm implemented on a Graphics Processing Unit
(GPU). The raw speed-up achieved by the GPU code is of the order of 1000×. Two orders of magnitude speed-up on the
complete algorithm will allow for better fine-tuning of this method and for experimentation with various registration algorithms.
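As an illustration of the MSSD registration described in this abstract, here is a minimal CPU sketch in NumPy; the GPU implementation is assumed to parallelise the same exhaustive search over candidate shifts, and the function and parameter names are illustrative, not the authors' code.

```python
import numpy as np

def mssd_shift(ref, frame, search=4):
    """Estimate the integer (dy, dx) shift of `frame` relative to `ref`
    by minimising the sum of squared differences over a search window."""
    h, w = ref.shape
    # Compare only the interior, so every candidate shift stays in-frame.
    core = ref[search:h - search, search:w - search]
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame[search + dy:h - search + dy,
                         search + dx:w - search + dx]
            ssd = np.sum((core - cand) ** 2)
            if ssd < best:
                best, best_shift = ssd, (dy, dx)
    return best_shift
```

On a GPU each (dy, dx) candidate, or each tile of the image, can be evaluated by an independent thread block, which is where the reported speed-up would come from.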
Long-range surveillance imaging requires telescopic optics and fast electro-optic sensors. The intervening air distorts the imagery and its spatial frequency content, and does so non-uniformly: different regions of the image suffer dissimilar distortion, visible in the first instance as a time-varying geometrical warp, and then as region-specific blurring or "speckle". The severity of this distortion, and hence the reduction in size of regions exhibiting similar distortion, is a function of the field of view of the telescope, the height of the imaging path above ground, the range to the target, and climatic conditions.
Image processing algorithms must be run on the image sequence to correct these distortions, on the assumption that the exposure time has effectively "frozen" the turbulence. These algorithms operate without knowledge of the actual scene under investigation. Successful algorithms do manage to correct the apparent warping, and in doing so they both yield information on the bulk turbulent medium and allow reconstruction of spatial frequency content of the scene that would have been beyond the capability of the optics had there been no turbulence. This is known as turbulence-induced super-resolution.
To confirm the success of algorithms in both correction and reconstruction of such super-resolution, we have devised a field experiment in which the truth image is known and which uses independent methods to evaluate the turbulence for corroboration of the results. We report here a new algorithm, which has proved successful in satellite remote sensing, for restoring this imagery to quality beyond the diffraction limits set by the optics.
Methods to correct for atmospheric degradation of imagery and improve the "seeing" of a telescope are well known in astronomy but, to date, have rarely been applied to more earthly matters such as surveillance. The intrinsically more complicated visual fields, the dominance of low-altitude distortion effects, the requirement to process large volumes of data in near real-time, the inability to pre-select ideal sites and the desirability of ruggedness and portability all combine to pose a significant challenge.
Field Programmable Gate Array (FPGA) technology has advanced to the point where modern devices contain hundreds of thousands of logic gates, multiple "hard" processors and multi-gigabit serial communication links. Such devices present an ideal platform to tackle the demands of surveillance image processing.
We report a rugged, lightweight system which allows multiple FPGA "modules" to be added together in order to quickly and easily reallocate computing resources. The devices communicate via 2.5Gbps serial links and process image data in a streaming fashion, reducing as much data as possible on-the-fly in order to present a minimised load to storage and/or communication devices.
To maximise the benefit of such a system we have devised an open protocol for FPGA-based image processing called "OpenStream". This allows image processing cores to be quickly and easily added to or removed from the data stream, and harnesses the benefits of code reuse and standardisation. It further allows image processing tasks to be easily partitioned across multiple, heterogeneous FPGA domains, and permits a designer the flexibility to allocate cores to the most appropriate FPGA. OpenStream provides the infrastructure to facilitate rapid, graphical development of FPGA-based image processing algorithms, especially when they must be partitioned across multiple FPGAs. Ultimately it will provide a means to automatically allocate and connect resources across FPGA domains, in a manner analogous to the way logic synthesis tools allocate and connect resources within an FPGA.
The reconstruction of turbulence-affected images has been an active research topic in the field of astronomical imaging. Many approaches have been proposed in the literature. Recently, researchers have
extended the methods to the recovery of long-path terrestrial natural scene surveillance, which is affected even more by air turbulence. Some approaches from astronomical imaging also work well in the
latter problem. However, although these methods have involved statistics, such as a statistical model of atmospheric turbulence or the probability distribution of photons forming an image, they have not taken account of the statistical properties of natural scenes observed in long-path horizontal imagery. Recent research by others has made use of the fact that a real world image generally has a sparse distribution of its derivatives. In this paper, we investigate algorithms with such a constraint imposed during the restoration of turbulence-affected images. This paper proposes an iterative, blind deconvolution algorithm that follows a registration and
averaging method to remove anisoplanatic warping in a time sequence of degraded images. The use of a sparse prior helps to reduce noise, produce sharper edges and remove unwanted artifacts in the
estimated image, because it pushes only a small number of pixels to have non-zero (or large) derivatives. We test the new algorithm with simulated and natural data, and experiments show that it is effective.
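The sparse-derivative prior described above can be sketched as an objective term. This is an illustrative simplification, not the authors' full blind-deconvolution algorithm: the blur kernel is assumed known and fixed here, and the function name, the weight `lam` and the hyper-Laplacian exponent `alpha` are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def sparse_prior_objective(x, y, kernel, lam=0.01, alpha=0.8):
    """Data-fidelity term plus a hyper-Laplacian sparse-gradient prior.
    An exponent alpha < 1 favours a small number of large derivatives,
    matching the statistics of natural scenes."""
    resid = convolve2d(x, kernel, mode='same') - y   # simulated blur minus data
    gx = np.diff(x, axis=1)                          # horizontal derivatives
    gy = np.diff(x, axis=0)                          # vertical derivatives
    prior = np.sum(np.abs(gx) ** alpha) + np.sum(np.abs(gy) ** alpha)
    return np.sum(resid ** 2) + lam * prior
```

In an iterative restoration, this objective would be minimised over the latent image (and, for blind deconvolution, over the kernel as well).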
In this paper, a new image denoising method based on the uHMT (universal Hidden Markov Tree) model in the wavelet
domain is proposed. The MAP (Maximum a Posteriori) estimate is adopted to deal with ill-conditioned problems (such
as image denoising) in the wavelet domain. The uHMT model in the wavelet domain is used to construct a prior for the
MAP estimate. Using the Conjugate Gradient optimisation method, the closest approximation to the true result is
achieved. The results show that images restored by our method are much better and sharper than those produced by
other methods, both visually and quantitatively.
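A hedged sketch of the MAP-plus-Conjugate-Gradient machinery: for brevity, a simple Gaussian smoothness prior on a 1-D signal stands in for the uHMT wavelet prior, so only the quadratic normal equations and the CG solve correspond to the method described; all names and parameters are illustrative.

```python
import numpy as np
from scipy.sparse import eye, diags
from scipy.sparse.linalg import cg

def map_denoise_1d(y, lam=5.0):
    """MAP denoising of a 1-D signal under a Gaussian smoothness prior,
    solved with conjugate gradients.  The MAP estimate minimises
    ||x - y||^2 + lam * ||D x||^2, i.e. solves (I + lam D^T D) x = y."""
    n = len(y)
    # First-difference operator D (penalises large jumps between samples).
    D = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    A = eye(n) + lam * (D.T @ D)
    x, _ = cg(A, y, atol=1e-10)
    return x
```

With the uHMT prior the system matrix changes, but the structure of the solve is the same.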
We report a test of the turbulence found in real-world, horizontal imaging under high magnification. The experiment
creates a double "star" on a test chart for use with a SLODAR turbulence-profiling instrument while the chart is
simultaneously imaged by a very fast camera to determine traditional seeing parameters. A similarly located image is
then examined to determine the observed effects on the imagery as a function of turbulence location.
In our previous work we demonstrated that the perceived wander of image intensities, as seen through the "window" of
each pixel due to atmospheric turbulence, can be modelled pixel-by-pixel as a simple oscillator, and that a linear
Kalman filter (KF) can be fine-tuned to predict short-term future deformations to a certain extent. In this paper, we
expand the Kalman filter into a Hybrid Extended Kalman Filter (HEKF) that tunes itself by relaxing the oscillator
parameters at each individual pixel. Results show that the HEKF performs significantly better than the linear KF.
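The per-pixel oscillator model with a linear KF might be sketched as follows. The state vector, noise levels and Euler discretisation are illustrative assumptions, not the authors' tuned parameters; the HEKF extension (per-pixel relaxation of `omega`) is not shown.

```python
import numpy as np

def kalman_oscillator(z, omega=0.5, dt=1.0, q=1e-3, r=0.05):
    """Linear Kalman filter tracking one pixel's intensity as a simple
    oscillator (state = [value, rate]); returns one-step-ahead predictions."""
    F = np.array([[1.0, dt], [-omega ** 2 * dt, 1.0]])  # oscillator dynamics
    H = np.array([[1.0, 0.0]])                          # we observe the value only
    Q = q * np.eye(2)                                   # process noise
    R = np.array([[r]])                                 # measurement noise
    x = np.array([[z[0]], [0.0]])
    P = np.eye(2)
    preds = []
    for zk in z:
        # Update with the current measurement.
        S = H @ P @ H.T + R
        K = P @ H.T / S
        x = x + K * (zk - (H @ x)[0, 0])
        P = (np.eye(2) - K @ H) @ P
        # Predict the next value (this is the short-term forecast).
        x = F @ x
        P = F @ P @ F.T + Q
        preds.append(x[0, 0])
    return np.array(preds)
```

Running one such filter per pixel yields a predicted deformation field one frame ahead.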
Super-resolution (SR) recovery has become an important research area for remote sensing images ever since T.S. Huang
first published his frequency-domain method in 1984. With the development of computing technology, increasingly
efficient algorithms have been put forward in recent years. The Iterative Back-Projection (IBP) method is one of the
most popular SR methods. In this paper, a modified IBP is proposed for Advanced Land Observing Satellite (ALOS)
imagery. ALOS is a Japanese satellite launched in January 2006 carrying three sensors: the Panchromatic Remote-sensing
Instrument for Stereo Mapping (PRISM), the Advanced Visible and Near Infrared Radiometer type 2 (AVNIR-2)
and Phased Array type L-band Synthetic Aperture Radar (PALSAR). The PRISM has three independent optical systems
for viewing nadir, forward and backward so as to produce a stereoscopic image along the satellite's track. While PRISM
is mainly used to construct a 3-D scene, here we use these three panchromatic low-resolution (LR) images captured by
nadir, backward and forward sensors to reconstruct one SR image.
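A minimal sketch of the classic IBP iteration, for concreteness. Circular shifts and box-average downsampling stand in for the real ALOS imaging model, and the modification proposed in the paper is not reproduced; the function name and the `step` parameter are assumptions.

```python
import numpy as np

def ibp_sr(lr_images, shifts, scale=2, n_iter=30, step=0.5):
    """Iterative Back-Projection: refine a high-resolution (HR) estimate
    so that its simulated low-resolution (LR) observations match the
    measured ones."""
    h, w = lr_images[0].shape
    hr = np.kron(lr_images[0], np.ones((scale, scale)))  # initial upsample
    for _ in range(n_iter):
        for lr, (dy, dx) in zip(lr_images, shifts):
            # Simulate this LR observation: shift, then box-average.
            sim = np.roll(hr, (-dy, -dx), axis=(0, 1))
            sim = sim.reshape(h, scale, w, scale).mean(axis=(1, 3))
            # Back-project the residual into the HR grid.
            back = np.kron(lr - sim, np.ones((scale, scale)))
            hr += step * np.roll(back, (dy, dx), axis=(0, 1))
    return hr
```

Here the three PRISM views would supply the entries of `lr_images` and `shifts` after sub-pixel registration.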
When imaging through the atmosphere, the resulting image contains not only the desired scene, but also the adverse effects of all the turbulent air mass between the camera and the scene. These effects are viewed as a combination of non-uniform blurring and random shifting of each point in the received short-exposure image. Corrections for both aspects of this combined distortion have been tackled reasonably successfully by previous efforts. A potentially more robust method of restoring the geometry is presented, which is also better suited to real-time implementation. The improvements were achieved by replacing the concept of a prototype frame with the sequential registration of each frame with its nearest neighbour, and by the accurate accumulation of shift maps from any one frame to another without redundant calculation.
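The accumulation of shift maps between neighbouring frames can be sketched as follows. Note that plain addition of per-pixel shifts is a small-shift approximation: a full treatment would resample each accumulated map at the shifted positions. The function name is illustrative.

```python
import numpy as np

def accumulate_shiftmaps(pair_shifts):
    """Given per-pixel shift maps between consecutive frames
    (frame k -> frame k+1), accumulate them so that every frame can be
    referred back to frame 0 without re-registering distant pairs."""
    acc = [np.zeros_like(pair_shifts[0])]   # frame 0 relative to itself
    for s in pair_shifts:
        # shift(0 -> k+1) ~= shift(0 -> k) + shift(k -> k+1)
        acc.append(acc[-1] + s)
    return acc
```

Only N-1 registrations are needed for an N-frame sequence, yet any frame-to-frame shift is available as a difference of accumulated maps.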
The possibility of obtaining spatial frequency information normally excluded by an aperture has been surmised, obtained experimentally in the laboratory, and observed in processed real-world imagery. This opportunity arises through the intervention of a turbulent mass between the stationary wide-area object of interest and the short-exposure imaging instrument, but the frequency information is aliased and must be de-aliased to render it useful. We present evidence of super-resolution in real-world surveillance imagery processed by hierarchical registration algorithms. These algorithms have been enhanced over those we previously reported. We discuss these enhancements and give examples of the use of the algorithm to gain information about the turbulence. To further reinforce the presence of super-resolution, we present two methods for creating imagery warped by Kolmogorov turbulent phase screens, so that the results can be confirmed against true images.
Recently, we looked at applying our wide field-of-view tip-tilt turbulence visualization method to the turbulent wake behind a jet aircraft. We have previously described successful results, in telescopically derived images of the moon's surface and in horizontal surveillance imaging, in which small regions-of-interest (ROIs) within a turbulence-distorted image are registered to a prototype image. Unfortunately, when applied to a fast jet wake, the method did not produce useful results. This is because the background, which forms the reference image when the wake is absent, is heavily blurred when seen through the wake by higher-order wavefront distortions. The blurring, however, suggested applying a Wiener filter between corresponding ROIs of the turbulence-distorted image and the reference image. This paper describes a new approach to registration that uses a Wiener filter within a scanned ROI to detect a local, space-varying point spread function (PSF). This new approach provides more robust shift information than our previously used cross-correlation for describing the random wobble in the image sequence, and also provides new information on the shape of the position-dependent blur PSF.
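The Wiener-filter PSF detection between corresponding ROIs might look like the following sketch; the regularisation constant `eps` and the ROI handling are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def local_psf(ref_roi, blur_roi, eps=1e-3):
    """Estimate the local PSF relating a reference ROI to its
    turbulence-distorted counterpart via Wiener-regularised division in
    the Fourier domain.  The PSF peak location gives the local shift."""
    R = np.fft.fft2(ref_roi)
    B = np.fft.fft2(blur_roi)
    # Wiener-style regularised deconvolution: H ~= B / R.
    H = B * np.conj(R) / (np.abs(R) ** 2 + eps)
    psf = np.real(np.fft.ifft2(H))
    dy, dx = np.unravel_index(np.argmax(psf), psf.shape)
    # Map wrapped FFT indices to signed shifts.
    h, w = psf.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return psf, (dy, dx)
```

Unlike a plain cross-correlation peak, the recovered `psf` also carries the shape of the local blur.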
We describe current efforts in the development of a CMOS sensor based megapixel camera with acquisition frame rates in excess of 500 per second. The focus in this development is two fold: a low cost, easily adaptable sensor, and secondly, proposed integration with image processing hardware within the camera module. The aim of this sensor is to flexibly support applications of imaging in the presence of turbulence, for which we present such algorithms that might take advantage of processing hardware at the sensor head. Secondary applications of the processing hardware include image analysis and compression.
We postulate that, under anisoplanatic conditions involving imaging through turbulent media over a wide area, spatial frequency content that is normally lost outside the aperture of an imaging instrument under unperturbed viewing conditions can be aliased into the aperture. Simulation is presented that reinforces this premise. We apply restoration algorithms designed to correct non-uniform distortions to a real image sequence, and observe the resulting de-aliased high-frequency content. We claim this to be super-resolution, and that it is only possible under anisoplanatic imaging scenarios, where the point spread function of the image is position dependent as a result of the atmospheric turbulence.
The restoration of images formed through atmospheric turbulence is usually attempted through operating on a sequence of speckle images. The reason is that high spatial frequencies in each speckle image are effectively retained though reduced in magnitude and distorted in phase. However, speckle imaging requires that the light is quasi-monochromatic. An alternative possibility, discussed here, is to capture a sequence of images through a broadband filter, correct for any local warping due to position-dependent tip-tilt effects, and average over a large number of images. In this preliminary investigation, we simulate several optical transfer functions to compare the signal levels in each case. The investigation followed encouraging results that we obtained recently using a blind-deconvolution approach. The advantages of such a method are that narrow-band filtering is not required, simplifying the equipment and
allowing more photons for each short-exposure image, while the method lends itself to restoration over fields of view wider than the isoplanatic patch without the need to mosaic. The preliminary conclusion is that, so long as the ratio of the telescope objective diameter D to the Fried parameter r0 is less than about 5, the method may be a simple alternative to speckle imaging.
Over a wide field of view (e.g., 100 arcsec in optical astronomy) the point spread function due to atmospheric effects is found to be far from position invariant, and appears as a combination of local warping and local blurring. Recently, we discussed a method in which the first step in restoration is to register all points in every frame of a movie sequence to the corresponding points in a prototype image. After registration, each frame is de-warped and summed to form an average, motion-blur-corrected result. Previously, we applied a hierarchical, windowed cross-correlation process to obtain local x and y registration information, similar to common methods in stereo cartography. We discuss a new approach to image registration for this purpose. Suppose two images to be registered differ mainly in varying random, but spatially coherent, warping (such as occurs as one effect of a slowly varying wavefront tip-tilt over a wide field of view). Imagine that one image, the reference image, is represented by a solid surface corresponding to its intensity distribution. Imagine that the second image is also represented by a surface, but in the form of a flexible, rubber mold. If the two images are identical, then the mold fits the solid like a glove. If one image includes local warping relative to the other, then the mold or glove must be forced to fit through local distortions.
Wave-front distortion introduced as light passes through the atmosphere results in short exposure images which exhibit random warping amongst other effects. Our aim is to remove the warping to restore images to their true geometry, but this is not easy as the true geometry is generally not known. To do so, we need to understand the effect of atmospheric turbulence on short exposure images. The individual images are corrected and summed to produce a final image, which therefore has local motion blur removed and can approach the theoretical resolution limit of our optical/imaging system. An important by-product of the process is a sequence of detailed shift maps which provide, in effect, a visualization of the instantaneous turbulence field.
There has been a great deal of interest recently in pattern recognition and classification for remote sensing, using both classical statistics and artificial neural networks. An interesting neural network is Kohonen's self-organising map (SOM), which is a clustering algorithm based on competitive learning. We have found that self-organisation is a neural network paradigm especially suited to remote sensing applications, because of its power and accuracy, its conceptual simplicity and its efficiency during learning. A disadvantage of the Kohonen SOM is that there is no inherent partitioning. We have investigated a natural extension of the SOM to multiple self-organising maps, which we call MSOM, as a means of providing a framework for various remote sensing classification requirements. These include supervised and unsupervised classification, high-dimensional data analysis, multisource data fusion, spatial analysis and combined spatial and temporal classification.
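A minimal Kohonen SOM training loop, for concreteness; the grid size, decay schedules and seeding are illustrative assumptions, and the MSOM extension with multiple maps is not shown.

```python
import numpy as np

def train_som(data, grid=(5, 5), n_iter=1000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen SOM: competitive learning with a Gaussian
    neighbourhood whose width and learning rate shrink over time."""
    rng = np.random.default_rng(seed)
    gh, gw = grid
    dim = data.shape[1]
    w = rng.random((gh, gw, dim))                 # weight vectors per unit
    yy, xx = np.mgrid[0:gh, 0:gw]                 # unit coordinates on the grid
    for t in range(n_iter):
        x = data[rng.integers(len(data))]         # random training sample
        # Competitive step: find the best-matching unit (BMU).
        d = np.sum((w - x) ** 2, axis=2)
        by, bx = np.unravel_index(np.argmin(d), d.shape)
        # Decaying learning rate and neighbourhood width.
        frac = t / n_iter
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.5
        nb = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2 * sigma ** 2))
        # Pull the BMU and its neighbours toward the sample.
        w += lr * nb[:, :, None] * (x - w)
    return w
```

In MSOM, several such maps would be trained, one per class or data source, providing the partitioning that a single SOM lacks.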