Distributed aperture sensor (DAS) systems can enhance the situational awareness of operators in both manned and
unmanned platforms. In such a system, images from multiple sensors must be registered and fused into a seamless
panoramic mosaic in real time, whilst being displayed with very low latency to an operator. This paper describes an
algorithm for solving the multiple-image alignment problem and an architecture that leverages the power of consumer
graphics processing units (GPUs) to provide a live panoramic mosaic display. We also describe further developments: the integration of high resolution imagery from an independently steerable fused TV/IR sensor into the mosaic, panorama stabilisation, and automatic target detection.
Many airborne imaging systems contain two or more sensors, but they typically allow the operator to view the output of only one sensor at a time. The sensors often capture complementary information that could benefit the operator, and hence there is a need for image fusion. Previous papers by these authors have described the techniques available for image alignment and image fusion. This paper discusses the implementation of a real-time image alignment and fusion system in a police helicopter. The need for image fusion, and the requirement for fusion systems to pre-align their input images, is reviewed. The techniques implemented for image alignment and fusion are then discussed. The hardware installed in the helicopter and the system architecture are described, as are the particular difficulties of installing a 'black box' image fusion system alongside existing sensors. The methods necessary for field of view matching and image alignment are described. The paper concludes with an illustration of the performance of the image fusion system and with feedback from the police operators who use the equipment.
Distributed Aperture Sensor (DAS) systems employ multiple sensors to obtain high resolution, wide angle video coverage of their local environment in order to enhance the situational awareness of manned and unmanned platforms. The images from multiple sensors must be presented to an operator in an intuitive manner and with minimal latency if they are to be rapidly interpreted and acted upon. This paper describes a display processor that generates a real-time panoramic video mosaic from multiple image streams, together with the algorithms for calibrating the image alignments. The architecture leverages the power of commercial graphics processing units (GPUs) to accelerate the image warping and display rendering, providing the operator with a real-time view of the environment through a virtual camera. We also discuss the possibility of integrating high resolution imagery from a zoom sensor on a pan-tilt mount directly into the mosaic, introducing a 'foveal' region of high fidelity into the panoramic image.
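As a minimal sketch of the core rendering step described above: each panorama pixel is inverse-mapped through a per-camera homography into the source image. The per-pixel inverse mapping and resampling done here on the CPU with NumPy is the operation that GPU texture hardware accelerates in a real-time system; the function name and nearest-neighbour sampling are illustrative choices, not the authors' implementation.

```python
import numpy as np

def warp_into_panorama(src, H, pano_shape):
    """Warp a source image into a panorama canvas through the 3x3
    homography H (source -> panorama) by inverse mapping with
    nearest-neighbour sampling. Illustrative sketch only."""
    Hinv = np.linalg.inv(H)
    ys, xs = np.indices(pano_shape)
    # Homogeneous coordinates of every panorama pixel.
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    u, v, w = Hinv @ pts
    ui = np.round(u / w).astype(int)
    vi = np.round(v / w).astype(int)
    # Keep only panorama pixels whose pre-image falls inside the source.
    valid = (ui >= 0) & (ui < src.shape[1]) & (vi >= 0) & (vi < src.shape[0])
    pano = np.zeros(pano_shape, dtype=src.dtype)
    pano[ys.ravel()[valid], xs.ravel()[valid]] = src[vi[valid], ui[valid]]
    return pano
```

In a multi-sensor mosaic, one such homography per camera (obtained from the alignment calibration) warps every stream into the common panoramic frame.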
Multiple camera systems have been considered for a number of applications, including infrared (IR) missile detection in modern fast jet aircraft, and soldier-aiding data fusion systems. This paper details experimental work undertaken to test image-processing and harmonisation techniques that were developed to align multiple camera systems. This paper considers systems where the camera properties are significantly different and the camera fields of view do not necessarily overlap. This is in contrast to stereo calibration alignment techniques that rely on similar resolutions, similar fields of view and overlapping imagery. Testing has involved the use of two visible-band cameras, attempting to harmonise a narrow field of view camera with a wide field of view camera. In this paper, consideration has also been given to the applicability of the algorithms to both visible-band and IR-based camera systems, the use of supplementary motion information from inertial measurement systems, and consequent system limitations.
The ability to automatically detect and track moving targets whilst stabilising and enhancing the incoming video would be highly beneficial in a range of aerial reconnaissance scenarios. We have implemented a number of image-processing algorithms on our ADEPT hardware to perform these and other useful tasks in real time. Much of this functionality is currently being migrated onto a smaller PC104 form-factor implementation that would be ideal for UAV applications. In this paper, we show results from both software and hardware implementations of our current suite of algorithms using synthetic and real airborne video. We then investigate an image processing architecture that integrates mosaic formation, stabilisation and enhancement functionality using micro-mosaics, an architecture which yields benefits for all of these processes.
Many modern imaging and surveillance systems contain more than one sensor. For example, most modern airborne imaging pods contain at least visible and infrared sensors. Often these systems have a single display that is only capable of showing data from either camera, and thereby fail to exploit the benefit of having simultaneous multi-spectral data available to the user. It can be advantageous to capture all spectral features within each image and to display a fused result rather than single band imagery. This paper discusses the key processes necessary for an image fusion system and then describes how they were implemented in a real-time, rugged hardware system. The problems of temporal and spatial misalignment of the sensors and the process of electronic image warping must be solved before the image data is fused. The techniques used to align the two inputs to the fusion system are described and a summary is given of our research into automatic alignment techniques. The benefits of different image fusion schemes are discussed and those that were implemented are described. The paper concludes with a summary of the real-time implementation of image alignment and image fusion by Octec and Waterfall Solutions and the problems that have been encountered and overcome.
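To make the fusion step concrete, the sketch below implements one simple pixel-level fusion rule: at each pixel, keep the input whose local high-frequency activity (discrete Laplacian magnitude) is larger, so detail from either band survives into the fused image. This "choose-max activity" rule is a common textbook scheme used here for illustration; it is not claimed to be the scheme the authors implemented.

```python
import numpy as np

def fuse_choose_max(a, b):
    """Fuse two co-registered images by selecting, per pixel, the input
    with the larger local Laplacian magnitude (more local detail).
    Illustrative stand-in for the fusion schemes discussed in the paper."""
    def laplacian(img):
        # 4-neighbour discrete Laplacian with edge-replicated borders.
        p = np.pad(img.astype(float), 1, mode='edge')
        return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
                - 4.0 * p[1:-1, 1:-1])
    mask = np.abs(laplacian(a)) >= np.abs(laplacian(b))
    return np.where(mask, a, b)
```

Note that this rule presupposes the two inputs are already spatially and temporally aligned, which is exactly why the alignment problems described above must be solved first.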
A number of algorithms including moving target detection, video stabilisation and image enhancement have been described in the literature as useful in aerial reconnaissance scenarios. These algorithms are often described in isolation and require a base station for off-line processing. We consider the problem of designing a single image processing architecture capable of supporting these and other useful tasks in an embedded real-time system such as a semi-autonomous UAV.
This paper describes our current algorithm suite and a versatile new architecture in development based on the formation of mosaic images. We show how these mosaics can be generated in real-time through fast image registration techniques and then exploited to accomplish typical aerial reconnaissance tasks. We also illustrate how they can be used to compress the video sequence.
We show results from synthetic and real video using both software and hardware implementations. Our embedded hardware solution, its current algorithm suite and future developments are discussed.
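One classic fast registration technique on which mosaic formation can be built is FFT-based phase correlation, which recovers a pure-translation offset between consecutive frames in O(N log N). The sketch below is illustrative of this class of method; it is not the authors' implementation, and a real mosaic pipeline would typically extend it to rotation/affine models.

```python
import numpy as np

def phase_correlate(ref, cur):
    """Estimate the integer (dy, dx) shift of `cur` relative to `ref`
    by phase correlation (pure-translation model). Illustrative sketch."""
    # Cross-power spectrum, normalised so only phase information remains.
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(cur)
    F /= np.maximum(np.abs(F), 1e-12)
    corr = np.real(np.fft.ifft2(F))
    # The correlation surface peaks at the shift.
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Wrap shifts larger than half the frame to negative values.
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)
```

Chaining such inter-frame shifts gives each frame's position in the growing mosaic, which is what makes the stabilisation, enhancement and compression functions above possible on a single architecture.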
Most modern fast jet aircraft have at least one infrared camera, a Forward-Looking Infrared (FLIR) imager. Future aircraft are likely to have several infrared cameras, and systems are already being considered that use multiple imagers in a distributed architecture. Such systems could provide the functionality of several existing systems: a pilot flying aid, a modern laser designator/targeting system and a missile approach warning system. This paper considers image-processing techniques that could be used in a distributed aperture vision system, concentrating on the harmonisation of high resolution, narrow field of view cameras with low resolution, wide field of view cameras. In this paper, consideration is given to the accuracy of the registration and harmonisation processes in situations where the complexity of the scene varies over different terrain types, and to the possible use of supplementary motion information from inertial measurement systems.
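A simple geometric core of narrow-to-wide harmonisation can be sketched as follows. If the two cameras are modelled as sharing an optical centre, a pixel in the narrow camera maps into the wide camera through the pure-rotation homography H = K_w R K_n^-1, where K_n and K_w are the 3x3 intrinsic matrices and R rotates narrow-camera rays into the wide camera's frame. The shared-centre assumption and all names below are illustrative; real installations have a baseline between apertures that this sketch ignores.

```python
import numpy as np

def narrow_to_wide(px, K_n, K_w, R):
    """Map a pixel from a narrow-FOV camera into a co-located wide-FOV
    camera via the pure-rotation homography K_w @ R @ inv(K_n).
    Illustrative sketch under a shared-optical-centre assumption."""
    H = K_w @ R @ np.linalg.inv(K_n)
    p = H @ np.array([px[0], px[1], 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Image-based harmonisation then amounts to estimating R (and refining the intrinsics) so that features projected through this mapping land on their counterparts in the wide image.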
Many airborne platforms have high performance electro-optical sensor suites mounted on them. Such sensor systems can provide vital, real time reconnaissance information to users on the platform or on the ground. However, such sensor systems require control, and they output large amounts of data of which the user may require only a relatively small amount for decision-making. This paper describes a payload management system, designed to automatically control an airborne sensor suite to improve the 'quality' of the data provided to the user and to other systems on the airborne platform. The system uses real time image-processing algorithms to provide low-level functions such as closed-loop target tracking, image stabilisation, automatic focus control and super-resolution. The system combines such real time outputs with contextual data inputs to provide higher-level surveillance functions such as recognition and ranging of navigational waypoints for geo-location, and registration of image patches for large area terrain imaging. The paper outlines the physical and processing architecture of the system and gives an overview of its algorithms and capabilities. The issues surrounding integration into existing airborne platforms are discussed.
Many future infantry weapons will incorporate a laser range finder. There is often a requirement to measure the range of small targets at distances of over one kilometre, which means that the necessary pointing accuracy is of the order of milliradians. The weapon is being aimed by a soldier who could be cold, tired and suffering from combat stress. Trials have shown that the stability of a hand held weapon is unlikely to allow an accurate range measurement on small moving targets at long range using a standard laser range finder. Octec have been investigating the use of a video tracker to control the firing or processing of a laser range finder in an attempt to significantly improve the probability of reporting the correct range. This paper presents the results of trials carried out to measure the drift or 'wobble' of a soldier's aim and goes on to demonstrate how the use of angular information provided by a tracker could help provide an accurate range measurement on small moving targets. The equipment necessary to test a tracker-controlled laser range finder in the field is described, as are the results of simulations indicating an increase in the probability of a correct lase on small, moving targets.
Airborne electro-optic surveillance from a moving platform currently requires regular interaction from a trained operator. Even simple tasks such as fixating on a static point on the ground can demand constant adjustment of the camera orientation to compensate for platform motion. In order to free up operator time for other tasks such as navigation and communication with ground assets, an automatic gaze control system is needed. This paper describes such a system, based purely on tracking points within the video image. A number of scene points are automatically selected and their inter-frame motion tracked. The scene motion is then estimated using a model of a planar projective transform. For reliable and accurate camera pointing, the modelling of the scene motion must be robust to common problems such as scene point obscuration, objects moving independently within the scene and image noise. This paper details a COTS-based system for automatic camera fixation and describes ways of preventing moving objects in the scene, or poor motion estimates, from corrupting the scene motion model.
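The robustness idea above can be sketched in a few lines: fit a global motion model to the tracked point pairs, then discard points whose residual is large relative to the median (independently moving objects, bad tracks) and refit. For compactness this sketch fits an affine model rather than the full planar projective transform the paper uses, and the median-based rejection is one simple choice among several (RANSAC being another); it is illustrative, not the authors' method.

```python
import numpy as np

def robust_scene_motion(src, dst):
    """Fit an affine scene-motion model dst ~= [x, y, 1] @ A to tracked
    point pairs (Nx2 arrays), rejecting points with large residuals and
    refitting. Returns the 3x2 affine matrix and the inlier mask."""
    def fit(s, d):
        X = np.hstack([s, np.ones((len(s), 1))])      # N x 3
        A, *_ = np.linalg.lstsq(X, d, rcond=None)     # 3 x 2
        return A
    A = fit(src, dst)
    X = np.hstack([src, np.ones((len(src), 1))])
    res = np.linalg.norm(X @ A - dst, axis=1)
    # Points much worse than the typical residual are treated as outliers.
    inliers = res < 3.0 * np.median(res) + 1e-9
    if inliers.sum() >= 3:                            # need 3 points for affine
        A = fit(src[inliers], dst[inliers])
    return A, inliers
```

The resulting model drives the camera pointing, while the rejected points are exactly the independently moving objects that must not be allowed to corrupt the fixation.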
Airport congestion is becoming a major problem, with many airports stretched to capacity. Monitoring of airport traffic is becoming of increased importance as airport operators try to maximize their efficiency whilst maintaining a high safety standard. This paper describes a fully automatic electro-optic tracking system, designed to track aircraft whilst on, or near, the runway. The system uses a single camera and several surveyed landmarks to predict the 3D location of the aircraft. Two modes of operation are available, take-off and landing, with aircraft statistics recorded for each. Aircraft are tracked until they are clear of the runway, either airborne or having turned off onto a taxiway. Statistics and video imagery are recorded for each aircraft movement, detailing the time interval between landings or take-offs and the time taken to clear the runway, as well as, for landing aircraft, details of approach speed, glide slope, point of touch-down and which exit taxiway was used. This information can be analyzed to monitor efficiency and to highlight violations of safety regulations.
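One way a single camera plus surveyed landmarks yields ground positions is through a homography between the image and the (flat) runway plane, estimated from four or more landmarks by the standard direct linear transform (DLT). The sketch below shows that step; it is illustrative of the geometry only, and the paper's system may estimate 3D location differently (e.g. with a full camera model for airborne aircraft).

```python
import numpy as np

def fit_ground_homography(img_pts, world_pts):
    """Estimate the 3x3 homography mapping image points to runway-plane
    coordinates from >= 4 surveyed landmarks, via the standard DLT."""
    rows = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        # Each correspondence contributes two linear constraints on H.
        rows.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        rows.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    # The homography is the null vector of the constraint matrix.
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 3)

def image_to_world(H, pt):
    """Map an image point onto the runway plane through H."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[0] / v[2], v[1] / v[2]
```

Once H is calibrated, tracking the aircraft's image position frame to frame directly yields runway-plane position, from which clearance times, touch-down point and exit taxiway follow.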