The Panoramic Area Surveillance System (PASS) provides a unique imaging and processing capability for a wide range
of security and situational awareness applications. PASS comprises a network of multi-modal cameras and its
operational performance is derived from a range of extensive image and data processing functions implemented as real-time
software on commercially available hardware. The development of PASS has presented a number of design
challenges, including the balance between implementation constraints and system performance. Within this paper, the
PASS system and its development challenges are described and its operation is illustrated through a range of application examples.
Conventional air-to-ground target acquisition processes treat the image stream in isolation from external data sources.
This ignores information that may be available through modern mission management systems which could be fused into
the detection process in order to provide enhanced performance. By way of an example relating to target detection, this
paper explores the use of a-priori knowledge and other sensor information in an adaptive architecture with the aim of
enhancing performance in decision making. The approach taken here is to use knowledge of target size, terrain elevation,
sensor geometry, solar geometry and atmospheric conditions to characterise the expected spatial and radiometric
characteristics of a target in terms of probability density functions. An important consideration in the construction of the
target probability density functions is the known errors in the a-priori knowledge. Potential targets are identified in the
imagery and their spatial and expected radiometric characteristics are used to compute the target likelihood. The adaptive
architecture is evaluated alongside a conventional non-adaptive algorithm using synthetic imagery representative of an
air-to-ground target acquisition scenario. Lastly, future enhancements to the adaptive scheme are discussed as well as
strategies for managing poor quality or absent a-priori information.
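The combination of spatial and radiometric evidence described above can be sketched as follows. This is an illustrative fragment only, not the paper's implementation: the Gaussian form of the probability density functions, the independence assumption between the two error sources, and all function and parameter names are assumptions made here for clarity.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Gaussian density; sigma encodes the known error in the a-priori knowledge."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def target_likelihood(area, radiance, exp_area, sig_area, exp_rad, sig_rad):
    """Combine spatial and radiometric evidence for a candidate detection,
    assuming the two error sources are independent."""
    return gaussian_pdf(area, exp_area, sig_area) * gaussian_pdf(radiance, exp_rad, sig_rad)
```

A candidate whose measured size and radiance both sit close to the expected values scores a higher likelihood than one that departs from either expectation.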
Multiple, high-sensitivity sensors can be usefully exploited within military airborne enhanced vision systems
(EVS) to provide enhanced situational awareness. To realise such benefits, the imagery from the discrete sensors must be
accurately combined and enhanced prior to image presentation to the aircrew. Furthermore, great care must be taken
not to introduce artefacts or false information through the image processing routines. This paper outlines developments
made to a specific system that uses three collocated low light level cameras. As well as seamlessly merging the
individual images, sophisticated processing techniques are used to enhance image quality as well as to remove optical
and sensor artefacts such as vignetting and CCD charge smear. The techniques have been designed and tested to be
robust across a wide range of scenarios and lighting conditions, and the results presented here highlight the increased
performance of the new algorithms over standard EVS image processing techniques.
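One of the optical artefacts mentioned, vignetting, is conventionally removed by flat-field division. The sketch below shows that standard approach; it is not necessarily the method used in the system described, and the function names and the normalisation choice are assumptions made here.

```python
import numpy as np

def correct_vignetting(image, flat_field, eps=1e-6):
    """Remove radial lens fall-off by dividing out a gain map estimated
    from a flat-field (uniformly illuminated) reference frame."""
    gain = flat_field / flat_field.max()   # normalise so the brightest region has gain 1
    return image / np.maximum(gain, eps)   # guard against division by zero at dark pixels
```

Applied to an image of a uniform scene, the correction recovers a constant brightness across the frame.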
There is an increasing emphasis on the intelligent use of multiple sensor assets within military applications which is
driven by a number of factors. Firstly, the deployment of multiple, co-operative sensors can provide a much greater
situational awareness which is a key factor in military decision making at both strategic and tactical levels. Secondly,
through careful and timely asset management, military tempo and effectiveness can be maintained and even enhanced
such that the mission objectives are optimally prosecuted. Thirdly, intrinsic limitations of individual sensors and their
processing demands can be reduced or even eliminated. From a mission perspective, this renders the constraints and
frailties associated with the sensor network transparent to the military end users. Underpinning all of these factors
is the need to adaptively control and manipulate the various sensor search vectors in both space and time. Such a design
and operational capability is provided through Cerberus, an advanced design tool developed by Waterfall Solutions Ltd.
Within this paper, investigations into a range of different military applications using the Cerberus design environment
are reported and assessed in terms of the associated military objectives. These applications include the use of both
manned and uninhabited air vehicles as well as land- and sea-based sensor platforms. Available a priori knowledge,
such as digital terrain data and mission intelligence, can also be exploited within the Cerberus environment to great
military advantage.
The degrading effect of the atmosphere on hyperspectral imagery has long been recognised as a major issue in applying
techniques such as spectrally-matched filters to hyperspectral data. There are a number of algorithms available in the
literature for the correction of hyperspectral data. However, most of these approaches rely either on identifying objects
within a scene (e.g. water, whose spectral characteristics are known) or on measuring the relative effects of certain
absorption features and using this to construct a model of the atmosphere which can then be used to correct the image. In
the work presented here, we propose an alternative approach which makes use of the fact that the effective number of
degrees of freedom in the atmosphere (transmission, path radiance and downwelling radiance with respect to
wavelength) is often substantially less than the number of degrees of freedom in the spectra of interest. This allows the
definition of a fixed set of invariant features (which may be linear or non-linear) from which reflectance spectra can be
approximately reconstructed irrespective of the particular atmosphere. The technique is demonstrated on a range of data
across the visible to near infra-red, mid-wave and long-wave infra-red regions, where its performance is quantified.
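In the linear case, the idea of a fixed set of invariant features can be sketched with a low-rank basis learnt from spectra observed under many atmospheric states. This is a simplified illustration under assumptions made here (a purely linear basis fitted by SVD, hypothetical function names); the paper also admits non-linear features, which this fragment does not cover.

```python
import numpy as np

def fit_invariant_basis(radiance_spectra, n_features):
    """Learn a fixed linear basis from spectra gathered under many atmospheres;
    the low effective dimensionality of the atmosphere means a small basis
    captures most of the observed variation."""
    mean = radiance_spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(radiance_spectra - mean, full_matrices=False)
    return mean, vt[:n_features]

def reconstruct(spectrum, mean, basis):
    """Project onto the invariant features and reconstruct approximately."""
    return mean + basis.T @ (basis @ (spectrum - mean))
```

When the data genuinely occupy a low-dimensional affine subspace, the reconstruction is exact; real spectra are reconstructed only approximately, irrespective of the particular atmosphere.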
Many image fusion systems involving passive sensors require the accurate registration of the sensor data prior to
performing fusion. Since depth information is not readily available in such systems, all registration algorithms are
intrinsically approximations based upon various assumptions about the depth field. Although often overlooked, many
registration algorithms can break down in certain situations and this may adversely affect the image fusion performance.
In this paper, we discuss a framework for quantifying the accuracy and robustness of image registration algorithms
which allows a more precise understanding of their shortcomings. In addition, some novel algorithms have been
investigated that overcome some of these limitations. A second aspect of this work has considered the treatment of
images from multiple sensors whose angular and spatial separation is large and where conventional registration
algorithms break down (typically greater than a few degrees of separation). A range of novel approaches is reported
which exploit the use of parallax to estimate depth information and reconstruct a geometrical model of the scene. The
imagery can then be combined with this geometrical model to render a variety of useful representations of the data.
These techniques (which we term Volume Registration) show great promise as a means of gathering and presenting 3D
and 4D scene information for both military and civilian applications.
Physical growth processes give rise to a number of hyperspectral vegetation background clutter properties which degrade
the ability to detect targets in such backgrounds. In order to gain insight into this complex problem a novel three-fold
method is proposed: the first appeals to growth processes to produce generative models of the background clutter; the
second examines the phenomenology of these models and compares it to real data; and the third devises new anomaly
detectors to mitigate the effects of these background clutter properties. Studies of model cellular automata are reported
here. These models aim to replicate the local conditions necessary for successful growth of the vegetation species and as
a result produce spatial correlations that match real vegetation. Non-competitive and competitive growth models, in
particular, are studied and produce hyperspectral images through the use of Cameosim. In general, no degrading effects
of using an enhanced spectral library were observed, suggesting that the dominant factor in reducing anomaly detector
effectiveness is the spatial inhomogeneity of vegetation abundances. In addition, evidence for several important
properties of the hyperspectral background is also reported. These support the conclusion that vegetation background
clutter distributions are non-Gaussian. Insight gleaned from these studies has been used to develop several new and
improved anomaly detectors, whose results are also reported and benchmarked against existing algorithms.
The range and scope of electro-optical (EO) sensor systems within security and surveillance (S&S) is growing, and this places a corresponding demand on the image processing functionality required to meet end users' requirements. Increasingly, these requirements include the ability to monitor wide areas with multiple affordable cameras, and for good quality imagery to be available 24 hours a day. This paper presents the results from some real-time systems which offer this capability, and focuses on a number of the image processing techniques used to deliver a high-quality, wide-angle, day/night capability. These include the production of a seamless image mosaic from multiple sensors, the removal of artefacts from scenes, enhancements to take account of changing environmental conditions, and a means of allowing the system to automatically focus on an area of interest highlighted by an operator. In addition, the cost of some high-performance S&S systems may be reduced by omitting physical calibration elements and performing sensor calibration using scene-derived data instead, and so a method for achieving this is also reported. The paper considers both the theoretical aspects of the algorithms presented and the issues involved in real-time implementation for S&S applications.
Proc. SPIE. 6736, Unmanned/Unattended Sensors and Sensor Networks IV
KEYWORDS: Target detection, Detection and tracking algorithms, Sensors, Computer simulations, Monte Carlo methods, Systems engineering, Algorithm development, Systems modeling, Unattended ground sensors, Data fusion
The nature of co-operating Uninhabited Vehicle (UV) systems is such that performance enhancements are likely to be a
result of greatly increased system complexity. Complexity emerges through the interaction of multiple autonomous UVs
reacting to their current surroundings. This complexity presents a fundamental challenge to the specification, design
and evaluation of such systems, and drives the need for new approaches to the systems engineering. For applications
involving multiple autonomous UVs, research into collective and emergent behaviour offers potential benefits in terms
of improved system performance and the utilisation of individual UVs with lower processing complexity.
This paper reports on the development of a new simulation framework that addresses the systems engineering issues and
allows novel algorithms to be created and assessed. Examples are given of how the framework has been used to develop
and assess the performance of individual and multiple UVs, as well as unattended ground sensors. Furthermore, a
variety of novel algorithms developed using the framework are described and example results are provided. These
include co-operative UV missions requiring improved detection performance and the improved management of
unattended ground sensors to minimise power usage.
In this paper an end-to-end hyperspectral imaging system model is described which has the ability to predict the
performance of hyperspectral imaging sensors in the visible through to the short-wave infrared regime for sub-pixel
targets. The model represents all aspects of the system including the target signature and background, the atmosphere,
the optical and electronic properties of the imaging spectrometer, as well as details of the processing algorithms
employed. It provides an efficient means of Monte-Carlo modelling for sensitivity analysis of model parameters over a
wide range. It is also capable of representing certain types of
non-Gaussian hyperspectral clutter arising from
heterogeneous backgrounds. The capabilities of the model are demonstrated in this paper by considering Uninhabited
Airborne Vehicle scenarios and comparing both multispectral and hyperspectral sensors. Both anomaly detection and
spectral matched-filter algorithms are characterised in terms of Receiver Operating Characteristic curves. Finally, some
results from a preliminary validation exercise are presented.
There are many reconnaissance tasks which involve an image analyst viewing data from hyperspectral imaging systems and attempting to interpret it. Hyperspectral image data is intrinsically hard to understand, even when armed with mathematical knowledge and a range of current processing algorithms. This research is motivated by the search for new ways to convey information about the spectral content of imagery to people. In order to develop and assess the novel algorithms proposed, we have developed a tool for transforming different aspects of spectral imagery into sounds that an analyst can hear. Trials have been conducted which show that the use of these sonic mappings can assist a user in tasks such as rejecting false alarms generated by automatic detection algorithms. This paper describes some of the techniques used and reports on the results of user trials.
The SeaWolf Mid-Life Update (SWMLU) programme is a major upgrade to the UK Royal Navy's principal point defence weapon system. The update includes the addition of an Electro-Optic (EO) sensor to upgraded 'I' and 'K' band radars. The update presents a significant engineering challenge both in terms of hardware integration and software processing. The processing of sensor data into a coherent fused picture is a key element of the overall system design, and is critical to achieving the required system performance. Further to the fusion of object locations, derived object properties from both the spatial and temporal domains are also incorporated to create a highly detailed picture.
At the core of the data fusion process are the association of objects between sensors and the labelling of objects as targets or own missiles. The data association results have a direct influence on overall system performance, and the accuracy of object labelling is crucial to satisfying the system performance requirements.
This paper discusses the data association and object labelling process followed in the SWMLU system and highlights sources of error and confusion for the EO sensor case. The effects of incorrect data associations are presented at the system-level. A number of software test environments for the EO sensor subsystem are introduced and analysed with a focus on data association.
Detection of anomalies in hyperspectral clutter is an important task in military surveillance. Most algorithms for unsupervised anomaly detection make either explicit or implicit assumptions about hyperspectral clutter statistics: for instance that the abundance is either normally distributed or elliptically contoured. In this paper we investigate the validity of such claims. We show that while non-elliptical contouring is not necessarily a barrier to anomaly detection, it may be possible to do better. In this paper we show how various generative models which replicate the competitive behaviour of vegetation at a mathematically tractable level lead to hyperspectral clutter statistics which do not have Elliptically Contoured (EC) distributions. We develop a statistical test and a method for visualizing the degree of elliptical contouring of real data. Having observed that in common with the generative models much real data fails to be elliptically contoured, we develop a new method for anomaly detection that has good performance on non-EC data.
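Tests of elliptical contouring typically start from the whitening step sketched below: the squared Mahalanobis distances of the pixels from the sample mean. This fragment does not reproduce the statistical test or visualisation developed in the paper; it only shows the standard quantity on which such diagnostics build, with function names assumed here.

```python
import numpy as np

def mahalanobis_sq(data):
    """Squared Mahalanobis distance of each sample (row) from the sample mean,
    using the sample covariance. For elliptically contoured data the whitened
    directions are uniform on the sphere, independent of these radii."""
    mu = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)       # unbiased (ddof=1) covariance estimate
    centred = data - mu
    inv = np.linalg.inv(cov)
    return np.einsum('ij,jk,ik->i', centred, inv, centred)
```

A useful sanity check is the identity that these squared distances, computed with the unbiased sample covariance, sum to (n-1)d for n samples in d dimensions, whatever the underlying distribution.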
It is now established that hyperspectral images of many natural backgrounds have statistics with fat-tails. In spite of this, many of the algorithms that are used to process them appeal to the multivariate Gaussian model. In this paper we consider biologically motivated generative models that might explain observed mixtures of vegetation in natural backgrounds. The degree to which these models match the observed fat-tailed distributions is investigated. Having shown how fat-tailed statistics arise naturally from the generative process, the models are put to work in new anomaly detection and un-mixing algorithms. The performance of these algorithms is compared with more traditional approaches.
This paper explores three related themes: the statistical nature of hyperspectral background clutter; why should it be like this; and how to exploit it in algorithms. We begin by reviewing the evidence for the non-Gaussian and in particular fat-tailed nature of hyperspectral background distributions. Following this we develop a simple statistical model that gives some insight into why the observed fat tails occur. We demonstrate that this model fits the background data for some hyperspectral data sets. Finally we make use of the model to develop hyperspectral detection algorithms and compare them to traditional algorithms on some real world data sets.
Traditional missile warning systems (MWSs) have tended to use the ultra-violet waveband, where the ambient intensity levels tend to be low and the resultant false alarm rate is comparatively small. The development of modern infrared imagers has generated interest in the use of infrared imagers in MWSs. Infrared cameras can detect the heat signatures of missile plumes, which peak in the mid-wave (3-5 micron) infrared band, but they can also contain appreciable levels of noise: including intermittent defects that are of the same size as the potential targets. Typically, both missiles and defects will only occupy a few pixels in each image. This paper reviews a project concerned with developing an MWS algorithm toolbox for use in evaluating infrared MWSs. In particular, the paper discusses some of the main problems associated with detecting and tracking missiles in infrared imagery from a moving platform in the presence of localised image noise.
The Seawolf Mid-Life Update (SWMLU) programme is a major upgrade of the UK Royal Navy's principal point defence weapon system. The addition of an Electro-Optic sensor to the pre-existing 'I' and 'K' band radars presents a significant engineering challenge. The processing of the data from the three sensors into a fused picture, giving a coherent view of which objects represent targets and own missiles, is a key element of the overall system design and is critical to achieving the required system performance. Without this coherent view of the detected objects, incorrect guidance commands will be issued to the Seawolf missiles, resulting in a failure to successfully intercept the target. This paper reviews the sensor data association problem as it relates to the SWMLU system and outlines identified solution strategies. The SWMLU sensors provide complementary data that can be exploited through data association to maximise tracking accuracy and to maintain performance under sensor lost-lock conditions. The sensor data association approach utilises radar and EO properties from the spatial and temporal domains. These characteristics are discussed in terms of their contribution to the SWMLU data association problem, where it is shown that the use of object attributes from the EO sensor, and their behaviour over time, is a critical performance factor.
Simple statistical models for clutter are desirable for parametric modelling of sensors and the development of constant false alarm rate detection processing. In the case of radar sensors and sea clutter there are a few widely known and accepted 'standard' models that can be employed. For passive infra-red sensors there are fewer models and no such widely accepted model applicable to sea clutter. In this paper a statistical model for the behaviour of sea clutter in the long-wave infra-red is presented. The model is based upon many of the same assumptions that lead, in the case of radar, to the well-known and widely used K-distribution model. It is compared with real long-wave infra-red sea clutter data gathered in trials from a variety of locations.
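The compound construction that underlies the K-distribution (and motivates the model above) can be sketched directly: exponentially distributed speckle intensity whose local mean, the texture, is itself gamma distributed. This is an illustrative sampler under assumptions made here (parameter names, unit mean convention), not the paper's infra-red clutter model itself.

```python
import random

def sample_k_intensity(n, nu, mean=1.0, seed=None):
    """Draw K-distributed intensity samples via the compound model:
    exponential speckle modulated by a gamma-distributed local mean
    ('texture') with shape parameter nu. Smaller nu gives spikier clutter."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        texture = rng.gammavariate(nu, mean / nu)       # E[texture] = mean
        samples.append(rng.expovariate(1.0 / texture))  # speckle given the texture
    return samples
```

Averaged over many draws the sample mean approaches the specified mean, while the tail is heavier than that of a pure exponential.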
This paper describes the use of an image query database (IQ-DB) tool as a means of implementing a validation strategy for synthetic long-wave infrared images of sea clutter. Specifically it was required to determine the validity of the synthetic imagery for use in developing and testing automatic target detection algorithms. The strategy adopted for exploiting synthetic imagery is outlined and the key issues of validation and acceptance are discussed in detail. A wide range of image metrics has been developed to achieve pre-defined validation criteria. A number of these metrics, which include post processing algorithms, are presented. Furthermore, the IQ-DB provides a robust mechanism for configuration management and control of the large volume of data used. The implementation of the IQ-DB is reviewed in terms of its cardinal point specification and its central role in synthetic imagery validation and EOSS progressive acceptance.
Proc. SPIE. 5093, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX
KEYWORDS: Target detection, Hyperspectral imaging, Statistical analysis, Detection and tracking algorithms, Data modeling, Sensors, Image processing, Monte Carlo methods, Statistical modeling, Data analysis
Anomaly detection in hyperspectral imagery is a potentially powerful approach for detecting objects of military interest because it does not require atmospheric compensation or target signature libraries. A number of methods have been proposed in the literature; most of these require a parametric model of the background probability distribution to be estimated from the data. There are two potential difficulties with this. First, a parametric model must be postulated that is capable of describing the background statistics to an adequate approximation. Most work has made use of the multivariate normal distribution. Second, the parameters must be estimated sufficiently accurately; this can be problematic for the covariance matrix of high dimensional hyperspectral data. In this paper we present an alternative view and investigate the capabilities of anomaly detection algorithms starting from a minimal set of assumptions. In particular we only require the background pixels to be samples from an independent and identically distributed (iid) process, but do not require the construction of a model for this distribution. We investigate a number of simple measures of the 'strangeness' of a given pixel spectrum with respect to the observed background. An algorithm is proposed for detecting anomalies in a self-consistent way. The effectiveness of the algorithms is compared with a well-known anomaly detection algorithm from the literature on real hyperspectral data sets.
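One plausible instantiation of such a model-free 'strangeness' measure is the mean distance to the k nearest background spectra. This sketch is not the paper's algorithm; the choice of Euclidean distance, the value of k, and all names are assumptions made here to show the flavour of a non-parametric score.

```python
import numpy as np

def strangeness(pixel, background, k=5):
    """Model-free anomaly score: mean Euclidean distance from a pixel
    spectrum to its k nearest background spectra. Only the iid assumption
    on the background is used; no parametric density is fitted."""
    dists = np.linalg.norm(background - pixel, axis=1)
    return np.sort(dists)[:k].mean()
```

A pixel lying well away from the cloud of background spectra receives a higher score than one inside it, with no covariance matrix ever estimated.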
The addition of an advanced EO subsystem to an in-service tracker system is reviewed in terms of the sensor modelling and proving activities. For the latter, emphasis is placed on model verification and validation techniques that will lead to a validation case which will then be used to gain equipment acceptance with the UK Royal Navy. The approach to modelling encompasses parametric and image-flow models. The relationship between these different representations is described together with their interaction with the EO equipment and the project development lifecycle. The algorithms generated for the image flow model will be used as the basis for the EO subsystem detection, tracking, and data association software. Issues arising from model validation activities are addressed in detail and include the validation approach, appropriate metrics, coverage of the operational envelope and the use of synthetic imagery to augment trials data.
In this paper we consider the tracking of small distant objects using Radar and Electro-Optical (EO) sensors. In particular we address the problem of data association after coalescence - this happens when two objects become sufficiently close (in angular terms) that they can no longer be resolved by the EO sensor. Some moments later they de-coalesce and the resulting detections must be associated with the existing tracks in the EO sensor. Traditionally this would be solved by making use of the velocity vectors of the objects prior to coalescence. This approach can work well for crossing objects, but when the objects are largely moving in a direction radial to the sensor it becomes problematic. Here we investigate the use of data fusion to combine Radar range with a brightness measure derived from an EO sensor to enhance the accuracy of data association. We present a number of results on the performance of this approach taking into account target motion, atmospheric conditions and sensor noise.
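The use of fused attributes in association can be sketched with a simple weighted-cost, greedy nearest-neighbour assignment over radar range and EO brightness. This is a deliberately simplified illustration, not the fusion scheme evaluated in the paper; the cost form, the weights, and the greedy strategy are assumptions made here.

```python
def associate(tracks, detections, w_range=1.0, w_bright=1.0):
    """Greedily pair each track with the unassigned detection minimising a
    weighted cost over radar range and EO brightness differences.
    tracks / detections: lists of (range_m, brightness) tuples."""
    def cost(t, d):
        return w_range * abs(t[0] - d[0]) + w_bright * abs(t[1] - d[1])

    pairs, free = [], set(range(len(detections)))
    for i, t in enumerate(tracks):
        if not free:
            break
        j = min(free, key=lambda j: cost(t, detections[j]))
        pairs.append((i, j))
        free.discard(j)
    return pairs
```

For radially moving objects whose angular velocities carry little information, the brightness term can break ties that range alone cannot.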
Proc. SPIE. 4731, Sensor Fusion: Architectures, Algorithms, and Applications VI
KEYWORDS: Target detection, Detection and tracking algorithms, Sensors, Matrices, Error analysis, Monte Carlo methods, Target recognition, Filtering (signal processing), Global Positioning System, Data fusion
This paper examines the requirement for accurate estimates of the statistical correlations between measurements in a distributed air-to-ground targeting system. The study uses results from a distributed multi-platform targeting simulation based on a level-1 data fusion system to assess the extent to which correlated measurements can degrade system performance, and the degree to which these effects need to be included to obtain a required level of accuracy. The data fusion environment described in the paper incorporates a range of target tracking and data association algorithms, including several variants of the standard Kalman filter, probabilistic association techniques and Reid's multiple hypothesis tracker. A variety of decentralized architectures are supported, allowing comparison with the performance of equivalent centralized systems. In the analysis, consideration is given to constraints on the computational complexities of the fusion system, and the availability of estimates of the measurement correlations and platform-dependent biases. Particular emphasis is placed on the localisation accuracy achieved by different algorithmic approaches and the robustness of the system to errors in the estimated covariance matrices.
The accuracy with which an object can be localized is key to the success of a targeting system. Localization is generally achieved by a single sensor, most notably Synthetic Aperture Radar (SAR) or an Infra-Red (IR) device, supported by an Inertial Navigation System (INS) and/or a Global Positioning System (GPS). Future targeting systems are expected to contain (or to have access to data from) multiple sensors, thus enabling data fusion to be conducted and an improved estimate of target location to be deduced. This paper presents a sensor fusion testbed for fusing data from multiple sensors. Initially, a simple, optimal static fusion scheme is illustrated; then, focusing on air-to-ground targeting applications, example results are given for single and multiple platform sorties involving a variety of sensor combinations. Consideration is given to the most appropriate sensor mix across single and multiple aircraft, as well as architectural implementation issues and effects. The sensitivity of the fusion method to key parameters is then discussed before some conclusions are drawn about the behavior, implications and benefit of this approach to improving targeting.
The accuracy of aircraft/weapon navigation systems has improved dramatically since the introduction of global positioning systems and terrain-referenced navigation systems into integrated navigation suites. Future improvements, in terms of reliability and accuracy, could arise from the inclusion of navigation systems based on the correlation of known ground features with imagery from a visual band or infrared sensor, often called scene matching and area correlation or scene-referenced navigation. This paper considers the use of multi-platform fusion techniques to improve on the performance of individual scene-referenced navigation systems. Consideration is also given to the potential benefits of multi-platform fusion for scene-referenced object localization algorithms that could be used in association with infrared targeting aids.
This paper describes a signal processing technique that has been developed for a vibration-sensing laser radar. The sensor has successfully acquired data from moving objects. Vibrations on the surface of the object can be induced by internal machinery and, when stationary, would normally be seen as modulations about a fixed carrier frequency. Thus a straightforward demodulation technique can be used to identify any important vibration characteristics. However, for a moving object, the laser transmit frequency is Doppler-shifted upon reflection by an amount proportional to the object's velocity resolved along the line-of-sight of the sensor. Therefore, the carrier frequency of the return signal is not known and the range of frequencies that it could occupy is large in comparison to the bandwidth of the modulations. The algorithm locates the carrier frequency within some large range (typically tens of megahertz) and generates a synthetic mixing signal that allows the carrier to be down-shifted to baseband. Tracking is performed using a series of Kalman filters on all likely signal candidates and the synthetic mixing signal is made up of the set that scores highly in terms of carrier-to-noise ratio, for example. Following the mix, the resultant signal is decimated so that modulations corresponding to the surface vibration can be studied. This paper illustrates the signal tracking technique applied to a number of real data sets and discusses the benefits of using a predictive method.
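The locate-and-downshift step can be illustrated in its simplest form: find the dominant spectral peak and mix it to baseband with a synthetic oscillator. This sketch omits the Kalman-filter tracking and decimation stages described above, and the function names and single-peak assumption are simplifications made here.

```python
import numpy as np

def locate_and_downshift(signal, fs):
    """Locate the unknown (Doppler-shifted) carrier as the dominant spectral
    peak, then mix with a synthetic local oscillator to bring it to baseband,
    where the narrowband vibration sidebands can be studied."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    f_carrier = freqs[np.argmax(np.abs(spectrum))]
    t = np.arange(len(signal)) / fs
    baseband = signal * np.exp(-2j * np.pi * f_carrier * t)
    return baseband, f_carrier
```

In practice the predictive tracking stage is what keeps the estimate locked as the object's radial velocity, and hence the carrier, drifts between blocks.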
Emerging Hyper-Spectral imaging technology allows the acquisition of data 'cubes' which simultaneously have high-resolution spatial and spectral components. There is a wealth of information in this data and effective techniques for extracting and processing this information are vital. Previous work by ERIM on man-made object detection has demonstrated that there is a huge amount of discriminatory information in hyperspectral images. This work used the hypothesis that the spectral characteristics of natural backgrounds can be described by a multivariate Gaussian model. The Mahalanobis distance (derived from the covariance matrix) between the background and other objects in the spectral data is the key discriminant. Other work (by DERA and Pilkington Optronics Ltd) has confirmed these findings, but indicates that in order to obtain the lowest possible false alarm probability, a way of including higher order statistics is necessary. There are many ways in which this could be done, ranging from neural networks to classical density estimation approaches. In this paper we report on a new method for extending the Gaussian approach to more complex spectral signatures. By using ideas from the theory of Support Vector Machines we are able to map the spectral data into a higher dimensional space. The co-ordinates of this space are derived from all possible multiplicative combinations of the original spectral line intensities, up to a given order d -- which is the main parameter of the method. The data in this higher dimensional space are then analyzed using a multivariate Gaussian approach. Thus when d equals 1 we recover the ERIM model -- in this case the mapping is the identity. In order for such an approach to be at all tractable we must solve the 'combinatorial explosion' problem implicit in this mapping for large numbers of spectral lines in the signature data.
In order to do this we note that in the final analysis of this approach it is only the inner (dot) products between vectors in the higher dimensional space that need to be computed. This can be done by efficient computations in the original data space. Thus the computational complexity of the problem is determined by the amount of data -- rather than the dimensionality of the mapping. The novel combination of non-linear mapping and high dimensional multivariate Gaussian analysis, only possible by using techniques from SVM theory, allows the practical application to hyperspectral imagery. We note that this approach also generates the non-linear Principal Components of the data, which have applications in their own right. In this paper we give a mathematical derivation of the method from first principles. The method is illustrated on a synthetic data set where complete control over the true statistics is possible. Results on this data show that the method is very powerful. It naturally extends the Gaussian approach to a variety of more complex probability distributions, including multi-modal and other manifestly non-Gaussian examples. Having shown the potential of this approach it is then applied to real hyperspectral trials data. The relative improvement in performance over the Gaussian approach is demonstrated for the real data.
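The computational shortcut described here is the standard polynomial-kernel identity from SVM theory: the inner product between order-d feature vectors can be evaluated directly in the original data space. The sketch below illustrates the homogeneous order-d case (the paper's mapping, which takes all combinations up to order d, corresponds to an inhomogeneous variant); the function names are ours, not the paper's.

```python
import itertools
import numpy as np

def poly_features(x, d):
    """Explicit order-d feature map: one coordinate per ordered d-tuple
    of input indices, i.e. all multiplicative combinations of order d."""
    return np.array([np.prod(x[list(idx)])
                     for idx in itertools.product(range(len(x)), repeat=d)])

def poly_kernel(x, y, d):
    """The same inner product, computed in the original data space:
    <phi(x), phi(y)> = (x . y)^d."""
    return np.dot(x, y) ** d

x = np.array([1.0, 2.0, -0.5])
y = np.array([0.3, -1.0, 2.0])
d = 3
explicit = np.dot(poly_features(x, d), poly_features(y, d))
implicit = poly_kernel(x, y, d)
# explicit and implicit agree to floating-point precision; the feature
# space has 3**3 = 27 dimensions, but the kernel never forms it.
```

This is why the cost scales with the number of data points rather than with the (exponentially large) dimensionality of the mapped space: every covariance and Mahalanobis computation in the feature space can be rewritten in terms of such kernel evaluations.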
An extensive set of measurements of scintillation over a 17.55 km path have been made using point sources at wavelengths of 633 nm and 10.6 micrometers, and using an extended thermal source at 3 - 5 micrometers and 8 - 12 micrometers. The basic data consist of normalized variances, probability histograms and normalized autocorrelation functions of intensity. The main aim was to produce a set of data that might be used as inputs to models for scintillation. The measurements, as expected, showed a very large range of observed fluctuations, with a highest recorded normalized variance at 633 nm of approximately 34 and an average value of 4.8 (averaged over 130 data sets), with a standard deviation of 4.1. The probability histograms have been fitted using log-normal, exponential, log-normally modulated exponential and K distributions. As a general rule, the log-normal model gives a good fit in a large number of cases. Power spectra and correlation functions were measured and show the expected trends with wavelength, with average correlation times (defined in the text) in the range 10 msec (visible) to 68 msec (CO2).
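For concreteness, the two headline statistics, the normalized intensity variance (scintillation index) and a log-normal fit, can be sketched as follows. The sample size and seed are arbitrary, and the target index of 4.8 is simply the paper's reported average; this is an illustration of the definitions, not of the trials data.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_variance(intensity):
    """Scintillation index: sigma_I^2 = <I^2> / <I>^2 - 1."""
    m = intensity.mean()
    return (intensity ** 2).mean() / m ** 2 - 1.0

# For log-normal intensity with <I> = 1 the index is exp(s^2) - 1,
# so choose s to reproduce an index of about 4.8.
s = np.sqrt(np.log(1.0 + 4.8))
I = rng.lognormal(mean=-s ** 2 / 2, sigma=s, size=1_000_000)

# Fitting a log-normal reduces to Gaussian moment estimates of log(I).
mu_hat, s_hat = np.log(I).mean(), np.log(I).std()
nv = normalized_variance(I)   # close to 4.8, up to sampling fluctuation
```

The same moment-based fit applied to measured intensity histograms is one way the log-normal comparison in the paper could be carried out; the exponential and K-distribution fits require their own parameter estimators.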
The SPIRIT system is a spectrally agile IR imaging airborne camera, with the capability to select any of the multiple filters on a frame-by-frame basis. The implemented solution employs advanced, but proven, technology to meet the objectives, and achieved good spatial and thermal performance in all modes. Sophisticated electronic design has resulted in a flexible unit, which can respond to the changing requirements of the user. Initial SPIRIT flight trials were undertaken in summer 1998 with more scheduled to continue through 1999. The sensor was installed onto DERA's TIARA research platform, a modified Tornado F2. The flight trials to date have been conducted over a variety of scenarios, collecting spectral data in up to 12 bands, of other aircraft, tanks, and fixed targets. Further ground-based trials, with the sensor mounted on a pan and tilt tracking platform, have been performed on characterized targets and against further air targets. Data from these initial trials are currently being processed to assess whether sufficient spectral information is available to discriminate between target types at militarily significant ranges. Sample hyperspectral imagery from SPIRIT and some results are presented.
Non co-operative target identification using laser vibrometry is typically based upon characterization of the frequency spectra obtained after demodulating the vibrometer's output signal. The characterization uses information gleaned from certain identifiable features, such as tonals, which can be extracted from the vibration spectra. The success of this classification is dependent upon the performance of the demodulation scheme adopted. This paper investigates a number of different digital demodulation strategies that can be used on down-shifted and digitized vibrometer output in which the gross Doppler term has been removed. This paper presents an assessment of the likely impact of demodulation schemes upon several critical classification cues, for example signal-to-noise ratio, frequency and bandwidth. These cues help to parameterize the vibration spectrum and may in themselves be used to classify targets. Only digital schemes are considered here, in contrast to conventional vibrometer technology which uses analogue demodulation schemes. The investigations take the form of a general review, followed by a detailed description of each demodulation method. Each method is applied to a representative modulated signal, and its performance assessed qualitatively. Of key importance in this analysis are the different time-frequency representations (TFRs) of the digitized vibrometer signal, in addition to phase-differencing methods, which are used to derive the instantaneous frequency. TFRs which have been examined are the short-time Fourier Transform, Wigner and Choi-Williams distributions.
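Of the digital schemes surveyed, the phase-differencing demodulator is the simplest to state: the angle between successive complex samples is proportional to the instantaneous frequency. A toy numpy sketch follows, with the gross Doppler term assumed already removed (as in the paper); the signal parameters are illustrative assumptions.

```python
import numpy as np

def instantaneous_frequency(z, fs):
    """Phase-differencing demodulator: the angle between consecutive
    complex samples, scaled by fs / (2*pi), gives the instantaneous
    frequency in Hz (valid while the per-sample phase step stays
    within +/- pi)."""
    dphi = np.angle(z[1:] * np.conj(z[:-1]))
    return dphi * fs / (2 * np.pi)

fs = 50_000.0
t = np.arange(int(fs * 0.2)) / fs            # 0.2 s of data
f_vib, f_dev = 80.0, 300.0                   # vibration tone and peak deviation (Hz)

# FM signal whose instantaneous frequency is f_dev * cos(2*pi*f_vib*t).
phase = (f_dev / f_vib) * np.sin(2 * np.pi * f_vib * t)
z = np.exp(1j * phase)
f_inst = instantaneous_frequency(z, fs)
# f_inst tracks the 80 Hz sinusoidal frequency modulation, whose spectrum
# would then yield the tonal cues used for classification.
```

The TFR-based alternatives (short-time Fourier, Wigner, Choi-Williams) trade this one-line estimator for a full time-frequency image, at the cost of resolution trade-offs or cross-terms.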