Image registration is one of the most important tasks in image processing and is frequently one of the most
computationally intensive. In cases where there is a high likelihood of finding the exact template in the search image,
correlation-based methods predominate. Presumably this is because the computational complexity of a correlation
operation can be reduced substantially by transforming the task into the frequency domain. Alternative methods such as
minimum Sum of Squared Differences (minSSD) are not so tractable and are normally disfavored.
This bias is justified when dealing with conventional computer processors, since the operations must be conducted in an
essentially sequential manner; however, we demonstrate that it is normally unjustified when the processing is undertaken on
customizable hardware such as FPGAs, where tasks can be temporally and/or spatially parallelized. This is because the
gate-based logic of an FPGA is better suited to the operations of minSSD: signed-addition hardware can be implemented very cheaply
in FPGA fabric, and square operations are easily implemented via a look-up table. In contrast, correlation-based
methods require extensive use of multiplier hardware, which cannot be implemented so cheaply in the device.
Even with modern DSP-oriented FPGAs, which contain many "hard" multipliers, we achieve at least an order-of-magnitude
increase in the number of minSSD hardware modules we can implement compared with cross-correlation
modules. We demonstrate the successful use and comparison of these techniques within an FPGA for the registration and correction
of turbulence-degraded images.
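The trade-off described above can be illustrated in software. The following sketch is ours (Python/NumPy; the function names are illustrative and are not drawn from the paper's FPGA implementation): it locates a template in a search image both by exhaustive minSSD, whose inner loop of subtract/square/accumulate maps naturally onto FPGA fabric and look-up tables, and by frequency-domain cross-correlation, whose complex multiplies are the operations that consume scarce "hard" multiplier blocks.

```python
import numpy as np

def min_ssd_register(search, template):
    """Exhaustive minimum sum-of-squared-differences registration.

    Each (subtract, square-via-LUT, accumulate) step is cheap in FPGA
    fabric; here we simply loop over every candidate position.
    """
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(search.shape[0] - th + 1):
        for x in range(search.shape[1] - tw + 1):
            d = search[y:y + th, x:x + tw] - template
            ssd = float(np.sum(d * d))
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

def xcorr_register(search, template):
    """Frequency-domain cross-correlation via the convolution theorem.

    Fast on a CPU thanks to the FFT, but the element-wise complex
    multiplies demand dedicated multiplier hardware on an FPGA.
    """
    sh = search.shape
    S = np.fft.rfft2(search)
    T = np.fft.rfft2(template, s=sh)          # zero-pad to search size
    corr = np.fft.irfft2(S * np.conj(T), s=sh)
    return np.unravel_index(np.argmax(corr), corr.shape)
```

For a template actually cut from the search image, both methods report the same location; the abstract's point is that only the first maps cheaply onto FPGA logic gates.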
Methods to correct for atmospheric degradation of imagery and improve the "seeing" of a telescope are well known in astronomy but, to date, have rarely been applied to more earthly matters such as surveillance. The intrinsically more complicated visual fields, the dominance of low-altitude distortion effects, the requirement to process large volumes of data in near real-time, the inability to pre-select ideal sites and the desirability of ruggedness and portability all combine to pose a significant challenge.
Field Programmable Gate Array (FPGA) technology has advanced to the point where modern devices contain hundreds of thousands of logic gates, multiple "hard" processors and multi-gigabit serial communication links. Such devices present an ideal platform to tackle the demands of surveillance image processing.
We report a rugged, lightweight system which allows multiple FPGA "modules" to be added together in order to quickly and easily reallocate computing resources. The devices communicate via 2.5Gbps serial links and process image data in a streaming fashion, reducing as much data as possible on-the-fly in order to present a minimised load to storage and/or communication devices.
To maximise the benefit of such a system we have devised an open protocol for FPGA-based image processing called "OpenStream". This allows image processing cores to be quickly and easily added into or removed from the data stream and harnesses the benefits of code-reuse and standardisation. It further allows image processing tasks to be easily partitioned across multiple, heterogeneous FPGA domains and permits a designer the flexibility to allocate cores to the most appropriate FPGA. OpenStream is the infrastructure to facilitate rapid, graphical, development of FPGA based image processing algorithms especially when they must be partitioned across multiple FPGAs. Ultimately it will provide a means to automatically allocate and connect resources across FPGA domains in a manner analogous to the way logic synthesis tools allocate and connect resources within an FPGA.
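As an illustration only, the notion of inserting and removing image processing cores along a data stream can be sketched in software. The `Packet`/`Core` model below is our own analogy for the composition idea, not the OpenStream protocol itself:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator

# Illustrative model only: Packet, Core and this composition scheme
# are our stand-ins, not the OpenStream specification.

@dataclass
class Packet:
    row: int
    col: int
    value: int          # pixel value travelling down the stream

# A "core" consumes a pixel stream and produces a pixel stream.
Core = Callable[[Iterator[Packet]], Iterator[Packet]]

def invert(maxval: int = 255) -> Core:
    def run(stream: Iterator[Packet]) -> Iterator[Packet]:
        for p in stream:
            yield Packet(p.row, p.col, maxval - p.value)
    return run

def threshold(level: int) -> Core:
    def run(stream: Iterator[Packet]) -> Iterator[Packet]:
        for p in stream:
            yield Packet(p.row, p.col, 255 if p.value >= level else 0)
    return run

def pipeline(source: Iterable[Packet], *cores: Core) -> Iterator[Packet]:
    """Chain cores over a stream; cores can be added or removed freely,
    mirroring how OpenStream cores are inserted into the data stream."""
    stream: Iterator[Packet] = iter(source)
    for core in cores:
        stream = core(stream)
    return stream
```

Because every core shares one stream interface, reordering or repartitioning the chain (here, across function calls; in OpenStream, across FPGA domains) requires no change to the cores themselves.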
Surveillance imaging at long range requires the use of telescopic optics and fast electro-optic sensors. The intervening air distorts the imagery and its spatial frequency content, such that different regions of the image suffer dissimilar distortion, visible in the first instance as a time-varying geometrical warp and then as region-specific blurring or "speckle". The severity of this effect, and hence the reduction in size of the regions exhibiting similar distortion, is a function of the field of view of the telescope, the height of the imaging path above ground, the range to the target, and the climatic conditions.
Image processing algorithms must be run on the sequence of imagery to correct these distortions, on the assumption that the exposure time has effectively "frozen" the turbulence. These algorithms must operate without knowledge of the actual scene under investigation. Successful algorithms do manage to correct the apparent warping, and in doing so they both yield information on the bulk turbulent medium and allow reconstruction of spatial frequency content of the scene that would have been lost to the limited capability of the optics had there been no turbulence. This is known as turbulence-induced super-resolution.
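A minimal software sketch of this kind of blind registration follows, assuming (for illustration only) a single global shift per frame and the first frame as the reference; the paper's actual algorithms operate on per-region warps, which would apply the same estimator tile by tile:

```python
import numpy as np

def frame_shift(ref, frame, max_shift=4):
    """Estimate the (dy, dx) warp of `frame` relative to `ref` by
    minimum SSD over a small window of circular shifts — no knowledge
    of the true scene is required, only the frames themselves."""
    best, best_sh = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            d = np.roll(frame, (dy, dx), axis=(0, 1)) - ref
            ssd = float((d * d).sum())
            if ssd < best:
                best, best_sh = ssd, (dy, dx)
    return best_sh

def dewarp(frames, max_shift=4):
    """Register every frame to the first frame — a stand-in reference
    when the true scene is unknown — and undo the estimated shift."""
    ref = frames[0]
    return np.stack([
        np.roll(f, frame_shift(ref, f, max_shift), axis=(0, 1))
        for f in frames
    ])
```

Replacing the global shift with a shift estimated independently for each image tile yields the per-region geometric correction described above.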
To confirm the success of algorithms in both correction and reconstruction of such super-resolution, we have devised a field experiment in which the truth image is known and which uses other methods to evaluate the turbulence for corroboration of the results. We report here a new algorithm, which has proved successful in satellite remote sensing, for restoring this imagery to a quality beyond the diffraction limits set by the optics.
We report a test of the turbulence found in real-world, horizontal imaging under high magnification. The experiment
creates a double "star" on a test chart, which is observed both with a SLODAR turbulence-profiling instrument and,
simultaneously, with a very fast camera to determine traditional seeing parameters. A similarly located image is also
examined to determine how the observed degradation of the imagery varies as a function of turbulence location.