Proc. SPIE 7698, Signal and Data Processing of Small Targets 2010
KEYWORDS: Sensors, Chemical analysis, Environmental sensing, Chemical detection, Detection and tracking algorithms, Signal processing, Chemical fiber sensors, Spectrometers, Pollution control, Biological research
Reliable detection of hazardous materials is a fundamental requirement of any national security program. Such
materials can take a wide range of forms including metals, radioisotopes, volatile organic compounds, and
biological contaminants. In particular, detection of hazardous materials under highly challenging conditions, such as cluttered ambient environments where complex collections of analytes are present and where the sensors lack specificity for the analytes of interest, is an important part of a robust security infrastructure. Sophisticated
single sensor systems provide good specificity for a limited set of analytes but often have cumbersome hardware
and environmental requirements. On the other hand, simple, broadly responsive sensors are easily fabricated
and efficiently deployed, but such sensors individually have neither the sensitivity nor the selectivity to address
analyte differentiation in challenging environments. However, arrays of broadly responsive sensors can provide
much of the sensitivity and selectivity of sophisticated sensors but without the substantial hardware overhead.
Unfortunately, arrays of simple sensors are not without their challenges: the selectivity of such arrays can be realized only if the data are first distilled by advanced signal processing algorithms. In this paper we demonstrate how powerful estimation algorithms, based on those commonly used within the target tracking community, can be extended to the chemical detection arena. Herein our focus is on algorithms that
not only provide accurate estimates of the mixture of analytes in a sample, but also provide robust measures of
ambiguity, such as covariances.
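As a concrete illustration (not the paper's algorithm), consider a minimal sketch assuming a linear response model y = Ax + n for an array of broadly responsive sensors, with a known response matrix A and Gaussian noise covariance R; both are hypothetical here. Under this model the weighted least-squares mixture estimate comes with a covariance that directly quantifies the ambiguity of the estimate.

```python
import numpy as np

def estimate_mixture(y, A, R):
    """Weighted least-squares estimate of analyte concentrations under
    an assumed linear model y = A @ x + n, n ~ N(0, R). Returns the
    estimate and its covariance (the inverse Fisher information)."""
    R_inv = np.linalg.inv(R)
    info = A.T @ R_inv @ A            # Fisher information
    cov = np.linalg.inv(info)         # covariance of the estimate
    x_hat = cov @ A.T @ R_inv @ y
    return x_hat, cov

# Hypothetical array: six broadly responsive sensors, three analytes.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(6, 3))   # assumed-known response matrix
x_true = np.array([0.5, 0.0, 1.2])       # true analyte concentrations
R = 0.01 * np.eye(6)                     # sensor noise covariance
y = A @ x_true + rng.multivariate_normal(np.zeros(6), R)

x_hat, cov = estimate_mixture(y, A, R)
print("estimate:", x_hat)
print("1-sigma ambiguity:", np.sqrt(np.diag(cov)))
```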
The fusion of Chemical, Biological, Radiological, and Nuclear (CBRN) sensor readings from both point and
stand-off sensors requires a common space in which to perform estimation. In this paper we suggest a common
representational space that allows us to properly assimilate measurements from a variety of different sources
while still maintaining the ability to correctly model the structure of CBRN clouds. We design this space with
sparse measurement data in mind, so that we can estimate not only the location of the cloud but also our uncertainty in that estimate. We contend that a treatment of the uncertainty of an estimate is essential to deriving actionable information from any sensor system, especially one designed to operate with minimal sensor data. A companion paper1 further extends and evaluates the uncertainty management introduced
here for assimilating sensor measurements into a common representational space.
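As a minimal sketch of what fusion in such a common space buys, assume each sensor report has already been reduced to a Gaussian estimate of the cloud centroid in a shared local coordinate frame; the two-sensor scenario and the Gaussian reduction are our assumptions, not the paper's representational space. Fusing in information form shows how a point sensor and a stand-off sensor with complementary geometries shrink the joint uncertainty.

```python
import numpy as np

def fuse_gaussian(mu1, P1, mu2, P2):
    """Fuse two independent Gaussian estimates of the same state
    (e.g., a cloud centroid in a shared frame) by summing information."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)
    mu = P @ (I1 @ mu1 + I2 @ mu2)
    return mu, P

# Hypothetical reports: the point sensor pins down x but not y,
# while the stand-off sensor gives the complementary picture.
mu_point, P_point = np.array([100.0, 250.0]), np.diag([4.0, 400.0])
mu_standoff, P_standoff = np.array([110.0, 240.0]), np.diag([400.0, 9.0])

mu, P = fuse_gaussian(mu_point, P_point, mu_standoff, P_standoff)
print("fused centroid:", mu)
print("fused covariance:\n", P)
```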
The coordinated use of multiple distributed sensors, linked by network communication, has the potential to substantially
improve track state estimates even in the presence of enemy countermeasures. In the modern electronic warfare
environment, a network-centric tracking system must function in a variety of jamming scenarios. In some
scenarios, hostile electronic countermeasures (ECM) will endeavor to deny range and range-rate information,
leaving friendly sensors to depend on passive angle information for tracking. In these cases the detrimental
effects of ECM can be at least partially ameliorated through the use of multiple networked sensors, due to the
inability of the ECM to deny angle measurements and the geometric diversity provided by having sensors in
distributed locations. Herein we demonstrate algorithms for initiating and maintaining tracks in such hostile operating environments, with a focus on maximum likelihood estimators, and provide Cramér-Rao bounds on the performance one can expect to achieve.
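The flavor of the angle-only problem can be captured in a short sketch: two netted passive sensors, Gaussian bearing noise, a Gauss-Newton maximum likelihood fix, and the Cramér-Rao bound taken as the inverse Fisher information at the fix. The sensor positions, noise level, and solver below are illustrative assumptions, not the paper's scenario.

```python
import numpy as np

def bearing(sensor, target):
    """Bearing from a sensor to the target, measured from the x-axis."""
    d = target - sensor
    return np.arctan2(d[1], d[0])

def ml_fix(sensors, z, sigma, x0, iters=20):
    """Gauss-Newton maximum likelihood position fix from bearings-only
    measurements; also returns the Cramer-Rao covariance bound."""
    x = x0.astype(float)
    for _ in range(iters):
        J = np.zeros((len(sensors), 2))
        r = np.zeros(len(sensors))
        for i, s in enumerate(sensors):
            d = x - s
            r2 = d @ d
            J[i] = np.array([-d[1], d[0]]) / r2      # d(bearing)/d(position)
            r[i] = np.angle(np.exp(1j * (z[i] - bearing(s, x))))  # wrapped residual
        info = J.T @ J / sigma**2                    # Fisher information
        x = x + np.linalg.solve(info, J.T @ r / sigma**2)
    return x, np.linalg.inv(info)                    # CRLB at the fix

# Hypothetical scenario: two netted passive sensors, range information denied.
sensors = [np.array([0.0, 0.0]), np.array([50.0, 0.0])]
target = np.array([30.0, 40.0])
sigma = np.radians(1.0)                              # 1-degree bearing noise
rng = np.random.default_rng(1)
z = np.array([bearing(s, target) + rng.normal(0, sigma) for s in sensors])

x_hat, crlb = ml_fix(sensors, z, sigma, x0=np.array([25.0, 25.0]))
print("ML fix:", x_hat)
print("CRLB 1-sigma:", np.sqrt(np.diag(crlb)))
```

Geometric diversity enters directly through the Jacobian rows: nearly parallel bearings make the Fisher information ill-conditioned, which is why distributed sensor locations tighten the bound.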
In distributed tracking systems, multiple non-collocated trackers cooperate to fuse local sensor data into a
global track picture. Generating this global track picture at a central location is fairly straightforward, but the
single point of failure and excessive bandwidth requirements introduced by centralized processing motivate the
development of decentralized methods. In many decentralized tracking systems, trackers communicate with their
peers via a lossy, bandwidth-limited network in which dropped, delayed, and out-of-order packets are typical.
Oftentimes the decentralized tracking problem is viewed as a local tracking problem with a networking twist;
we believe this view can underestimate the network complexities to be overcome. Indeed, a subsequent 'oversight' layer is often introduced to detect and handle track inconsistencies arising from a lack of robustness to network imperfections.
We instead pose the decentralized tracking problem as a distributed database problem, enabling us to draw
inspiration from the vast extant literature on distributed databases. Using the two-phase commit algorithm, a
well-known technique for resolving transactions across a lossy network, we describe several ways in which one
may build a distributed multiple hypothesis tracking system from the ground up to be robust to typical network
intricacies. We pay particular attention to the dissimilar challenges presented by network track initiation versus track maintenance and suggest a hybrid system that balances speed and robustness by utilizing two-phase commit for
only track initiation transactions. Finally, we present simulation results contrasting the performance of such a
system with that of more traditional decentralized tracking implementations.
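A minimal sketch of the idea follows, with hypothetical Tracker and initiate_track abstractions (real messaging, timeouts, and retries over the lossy network are omitted): a proposed track enters every peer's track picture only on a unanimous phase-one vote, so two trackers cannot commit conflicting tracks over the same detections.

```python
import enum

class Vote(enum.Enum):
    COMMIT = enum.auto()
    ABORT = enum.auto()

class Tracker:
    """A peer tracker acting as a two-phase-commit participant for
    new-track proposals (illustrative sketch, not the paper's system)."""

    def __init__(self, name):
        self.name = name
        self.tracks = {}    # committed tracks: id -> set of detection ids
        self.pending = {}   # prepared but uncommitted proposals

    def prepare(self, track_id, detections):
        """Phase 1: vote COMMIT only if no committed or pending track
        already claims one of the proposed detections."""
        claimed = set()
        for dets in list(self.tracks.values()) + list(self.pending.values()):
            claimed |= dets
        if claimed & set(detections):
            return Vote.ABORT
        self.pending[track_id] = set(detections)
        return Vote.COMMIT

    def commit(self, track_id):
        """Phase 2 (success): promote the pending proposal to the picture."""
        self.tracks[track_id] = self.pending.pop(track_id)

    def abort(self, track_id):
        """Phase 2 (failure): discard the pending proposal."""
        self.pending.pop(track_id, None)

def initiate_track(track_id, detections, peers):
    """Coordinator side: commit the new track only on a unanimous
    COMMIT vote, otherwise abort everywhere."""
    votes = [p.prepare(track_id, detections) for p in peers]
    if all(v is Vote.COMMIT for v in votes):
        for p in peers:
            p.commit(track_id)
        return True
    for p in peers:
        p.abort(track_id)
    return False

a, b = Tracker("A"), Tracker("B")
print(initiate_track("T1", {"d1", "d2"}, [a, b]))  # True: committed at A and B
print(initiate_track("T2", {"d2", "d3"}, [a, b]))  # False: d2 already claimed
```

Reserving this unanimity round for initiation only, as the hybrid system above suggests, keeps the expensive voting off the high-rate track maintenance path.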