Distributed Aperture Sensor (DAS) systems employ multiple sensors to obtain high-resolution, wide-angle video coverage of their local environment in order to enhance the situational awareness of manned and unmanned platforms. The images from multiple sensors must be presented to an operator in an intuitive manner and with minimal latency if they are to be rapidly interpreted and acted upon. This paper describes a display processor that generates a real-time panoramic video mosaic from multiple image streams, together with the algorithms for calibrating the image alignments. The architecture leverages the power of commercial graphics processing units (GPUs) to accelerate the image warping and display rendering, providing the operator with a real-time virtual environment viewed through a virtual camera. High-resolution imagery from a zoom sensor on a pan-tilt mount can also be integrated directly into the mosaic, introducing a 'foveal' region of high fidelity into the panoramic image.
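The warping step that the GPU accelerates can be illustrated with a minimal CPU sketch. The code below inverse-warps a single-channel source frame into the mosaic frame through a planar homography, with nearest-neighbour sampling; the function name and per-pixel approach are illustrative only, not details of the actual display processor.

```python
import numpy as np

def warp_into_mosaic(src, H, mosaic_shape):
    """Inverse-warp a source image into the mosaic frame.

    H maps source pixel coords (x, y, 1) to mosaic coords; we invert it
    and, for every mosaic pixel, look up the corresponding source pixel
    (nearest-neighbour sampling for brevity).
    """
    Hinv = np.linalg.inv(H)
    h, w = mosaic_shape
    ys, xs = np.mgrid[0:h, 0:w]
    ones = np.ones_like(xs)
    pts = np.stack([xs, ys, ones], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous
    src_pts = Hinv @ pts
    src_pts = src_pts[:2] / src_pts[2]                        # perspective divide
    sx = np.round(src_pts[0]).astype(int).reshape(h, w)
    sy = np.round(src_pts[1]).astype(int).reshape(h, w)
    mosaic = np.zeros(mosaic_shape, dtype=src.dtype)
    valid = (sx >= 0) & (sx < src.shape[1]) & (sy >= 0) & (sy < src.shape[0])
    mosaic[valid] = src[sy[valid], sx[valid]]
    return mosaic
```

On a GPU the same inverse mapping is typically performed through texture lookups in a fragment shader, which removes the per-pixel CPU cost entirely.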
Many Command-to-Line-of-Sight missile systems use ground-based electro-optic sensors to track their targets. Both optical and infra-red systems can be affected by launch effects, which can include camera shake on launch and target obscuration due to the missile exhaust plume. Further effects can be encountered during flight, including aimpoint disturbance, launch debris and countermeasures.
An automatic video tracking system (AVT) is required to cope with all of these distractions, whilst maintaining track on the primary target. If track is broken during the engagement, the AVT needs to employ a strategy that will enable reacquisition of the primary target with the minimum of delay. This task can be significantly more complicated in a cluttered scene.
This paper details such a reacquisition algorithm, the primary purpose of which is to correctly identify the primary target whilst reducing the reacquisition timeline. Results are presented for both synthetic imagery and actual missile firings.
The ability to automatically detect and track moving targets whilst stabilising and enhancing the incoming video would be highly beneficial in a range of aerial reconnaissance scenarios. We have implemented a number of image-processing algorithms on our ADEPT hardware to perform these and other useful tasks in real time. Much of this functionality is currently being migrated onto a smaller PC104 form-factor implementation that would be ideal for UAV applications. In this paper, we show results from both software and hardware implementations of our current suite of algorithms using synthetic and real airborne video. We then investigate an image-processing architecture that integrates mosaic formation, stabilisation and enhancement functionality using micro-mosaics, an architecture which yields benefits for all of these processes.
A number of algorithms including moving target detection, video stabilisation and image enhancement have been described in the literature as useful in aerial reconnaissance scenarios. These algorithms are often described in isolation and require a base station for off-line processing. We consider the problem of designing a single image processing architecture capable of supporting these and other useful tasks in an embedded real-time system such as a semi-autonomous UAV.
This paper describes our current algorithm suite and a versatile new architecture in development based on the formation of mosaic images. We show how these mosaics can be generated in real-time through fast image registration techniques and then exploited to accomplish typical aerial reconnaissance tasks. We also illustrate how they can be used to compress the video sequence.
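One fast registration technique commonly used for real-time mosaic formation is phase correlation, which recovers the inter-frame translation from a single correlation peak computed via the FFT. The sketch below assumes integer-pixel, translation-only motion between frames; it is a generic illustration and the registration method used in our architecture may differ.

```python
import numpy as np

def register_translation(ref, img):
    """Estimate the integer-pixel shift between two frames by phase
    correlation: the normalised cross-power spectrum of the two images
    transforms back to a sharp peak at the relative displacement."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(img)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size into negative values.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)  # (dy, dx) to apply to img to align it with ref
```

Each new frame can then be pasted into the mosaic at its registered offset, and the same offsets drive stabilisation and temporal enhancement.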
We show results from synthetic and real video using both software and hardware implementations. Our embedded hardware solution, its current algorithm suite and future developments are discussed.
Robust automatic detection and tracking of small targets in cluttered environments is becoming increasingly important; this is especially true in the surveillance of areas of high strategic importance. This paper describes an unattended electro-optical tracking system, designed to automatically detect and track moving targets in cluttered environments. Such a system has to have a low false alarm rate whilst maintaining a high probability of detection. Once a target has been detected, a security alert can be issued, the security personnel are automatically shown the relevant images, and a risk factor can be applied to each target. Surveillance systems will often be positioned such that there are sources of false alarms within view. Rejection of these sources is critical; however, rejection of genuine targets close to them must be avoided. Methods of rejecting clutter are investigated; these include rejection of known features <i>e.g.</i> vegetation, and rejection of targets that conform to expected patterns <i>e.g.</i> vehicles on a road. Further to this, target tracks are maintained in a 'local' Cartesian coordinate set, allowing the target to be pin-pointed and a track to be maintained whilst the camera scans.
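As an illustration of maintaining tracks in a local Cartesian frame, the sketch below converts a sensor line of sight into a ground position under a flat-earth assumption. The angle conventions, units and function name are assumptions for the example, not details of the system described.

```python
import math

def ground_point(azimuth_deg, elevation_deg, sensor_height_m):
    """Project the sensor line of sight onto flat ground, returning a
    local Cartesian (east, north) offset from the sensor in metres.
    Elevation is negative when looking down; flat-earth assumption."""
    el = math.radians(elevation_deg)
    if el >= 0:
        raise ValueError("line of sight does not intersect the ground")
    ground_range = sensor_height_m / math.tan(-el)
    az = math.radians(azimuth_deg)  # measured clockwise from north
    return ground_range * math.sin(az), ground_range * math.cos(az)
```

Tracks held in such a frame are independent of the camera pointing angles, which is what allows a track to survive while the camera scans.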
Many airborne platforms carry high-performance electro-optical sensor suites. Such sensor systems can provide vital, real-time reconnaissance information to users on the platform or on the ground. However, they require control and output large amounts of data, of which the user may need only a relatively small amount for decision-making. This paper describes a payload management system, designed to automatically control an airborne sensor suite to improve the 'quality' of the data provided to the user and other systems on the airborne platform. The system uses real-time image-processing algorithms to provide low-level functions, <i>e.g.</i> closed-loop target tracking, image stabilization, automatic focus control and super-resolution. The system combines such real-time outputs with contextual data inputs to provide higher-level surveillance functions, such as recognition and ranging of navigational waypoints for geo-location, and registration of image patches for large-area terrain imaging. The paper outlines the physical and processing architecture of the system and gives an overview of its algorithms and capabilities. The issues surrounding integration into existing airborne platforms are also discussed.
Airborne electro-optic surveillance from a moving platform currently requires regular interaction from a trained operator. Even simple tasks such as fixating on a static point on the ground can demand constant adjustment of the camera orientation to compensate for platform motion. In order to free up operator time for other tasks such as navigation and communication with ground assets, an automatic gaze control system is needed. This paper describes such a system, based purely on tracking points within the video image. A number of scene points are automatically selected and their inter-frame motion tracked. The scene motion is then estimated using a planar projective transform model. For reliable and accurate camera pointing, the modeling of the scene motion must be robust to common problems such as scene point obscuration, objects moving independently within the scene, and image noise. The paper details a COTS-based system for automatic camera fixation and describes how moving objects and poor motion estimates are prevented from corrupting the scene motion model.
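A standard way to make a planar projective (homography) estimate robust to independently moving objects is to wrap a direct linear transform (DLT) fit in a RANSAC loop over the tracked scene points. The sketch below is a generic illustration of that technique, not the paper's exact implementation; thresholds and iteration counts are arbitrary example values.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: homography mapping src -> dst (>= 4 points)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=200, thresh=2.0, rng=None):
    """Robustly fit a planar projective transform, rejecting scene points
    that move independently of the dominant plane (outliers)."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)   # minimal sample
        try:
            H = fit_homography(src[idx], dst[idx])
        except np.linalg.LinAlgError:
            continue
        proj = np.c_[src, np.ones(len(src))] @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set for the final model.
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

The inlier mask identifies the scene points consistent with the dominant planar motion; points on independently moving objects simply fail the reprojection test and never corrupt the model.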
Airport congestion is becoming a major problem, with many airports stretched to capacity. Monitoring of airport traffic is of increasing importance as airport operators try to maximize efficiency whilst maintaining high safety standards. This paper describes a fully automatic electro-optic tracking system, designed to track aircraft whilst on, or near, the runway. The system uses a single camera and several surveyed landmarks to predict the 3D location of the aircraft. Two modes of operation are available, take-off and landing, with aircraft statistics recorded for each. Aircraft are tracked until they are clear of the runway, either airborne or having turned off onto a taxiway. Statistics and video imagery are recorded for each aircraft movement, detailing the time interval between landings or take-offs and the time taken to clear the runway, as well as, for landing aircraft, the approach speed, glide slope, point of touch-down and exit taxiway used. This information can be analyzed to monitor efficiency and to highlight violations of safety regulations.
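Because aircraft on the runway lie close to a known plane, surveyed landmarks can define a mapping from image pixels to runway-plane coordinates. The sketch below illustrates one common way to do this, solving an image-to-plane homography by least squares; the function names and the planar assumption are illustrative, and the actual system's 3D prediction may be more elaborate.

```python
import numpy as np

def image_to_runway_homography(image_pts, runway_pts):
    """Solve the 8 parameters of an image -> runway-plane homography
    (with H[2,2] fixed to 1) from >= 4 surveyed landmarks, by least squares."""
    A, b = [], []
    for (x, y), (X, Y) in zip(image_pts, runway_pts):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def locate_on_runway(H, pixel):
    """Map a tracked image point to runway-plane (X, Y) coordinates."""
    u, v, w = H @ np.array([pixel[0], pixel[1], 1.0])
    return np.array([u / w, v / w])
```

With the tracked position expressed in runway coordinates, quantities such as approach speed, touch-down point and runway-occupancy time follow directly from successive frames and their timestamps.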