Road networks and associated traffic flow information are topics with innumerable applications,
ranging from highway planning to military intelligence. Despite the importance of these networks, archival databases
that often have update rates on the order of years or even decades have historically been the main source for obtaining
and analyzing road network information. This somewhat static view of a potentially changing infrastructure can therefore render the information incomplete or incorrect. Furthermore, these road databases are not only static, but rarely provide information beyond a simple two-dimensional view of a road, in which a divided highway is represented in the same manner as a rural dirt road. It is for these reasons that the use of Ground Moving Target Indicator (GMTI) data
and tracks to create road networks is explored. These data lend themselves not to providing a single static snapshot of a network that is then treated as the network for years, but to providing a consistently accurate, continually updated picture of the environment. The approach employed for creating a road network from GMTI tracks includes a
technique known as Continuous Dynamic Time Warping (CDTW), as well as a general fusion routine.
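The alignment step underlying CDTW can be illustrated with its discrete counterpart; the sketch below is a minimal classic DTW between two point tracks, with illustrative coordinates and a simple Euclidean step cost, not the paper's continuous implementation:

```python
import math

def dtw_distance(track_a, track_b):
    """Classic discrete DTW between two point sequences (lists of (x, y))."""
    n, m = len(track_a), len(track_b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(track_a[i - 1], track_b[j - 1])
            # Warping path may match, insert, or delete a point.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Two hypothetical GMTI-like tracks following roughly the same road
road = [(0, 0), (1, 0.1), (2, 0.0), (3, -0.1)]
noisy = [(0, 0.2), (1, 0.0), (2, 0.1), (3, 0.0)]
print(dtw_distance(road, noisy))
```

A small alignment cost suggests the two tracks traverse the same road segment and are candidates for fusion into one network edge.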
The tracking of objects and phenomena exhibiting nonlinear motion is a topic that has application in many areas ranging
from military surveillance to weather forecasting. Observed nonlinearities can come not only from the nonlinear
dynamic motion of the object, but also from nonlinearities in the measurement model. Many techniques have been
developed that attempt to deal with this issue, including the development of various types of filters, such as the Extended
Kalman Filter (EKF) and the Unscented Kalman Filter (UKF), variants of the Kalman Filter (KF), as well as other filters
such as the Particle Filter (PF). Determining the effectiveness of any of these techniques in nonlinear scenarios is not
straightforward. Testing needs to be conducted against scenarios whose degree of nonlinearity is known if reliable assessments of the effectiveness of nonlinearity mitigation techniques are to be made. In this
effort, three techniques were investigated regarding their ability to provide useful measures of nonlinearity for
representative scenarios. These techniques were the Parameter Effects Curvature (PEC), the Normalized Estimation
Error Squared (NEES), and the Normalized Innovation Squared (NIS). Results indicated that the NEES was the most
effective, although it does require truth values in its formulation.
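The two normalized measures can be sketched directly from their standard definitions; the state, covariance, and measurement values below are illustrative assumptions:

```python
import numpy as np

def nees(x_true, x_est, P):
    """Normalized Estimation Error Squared; requires the true state."""
    e = x_true - x_est
    return float(e @ np.linalg.inv(P) @ e)

def nis(z, z_pred, S):
    """Normalized Innovation Squared; needs only the innovation and its covariance."""
    v = z - z_pred
    return float(v @ np.linalg.inv(S) @ v)

# For a consistent filter, NEES averaged over Monte Carlo runs should be
# close to the state dimension (the chi-square mean).
P = np.diag([1.0, 4.0])
x_true = np.array([10.0, 2.0])
x_est = np.array([10.5, 1.0])
print(nees(x_true, x_est, P))   # (0.5^2)/1 + (1^2)/4 = 0.5
```

Note how the NEES formula makes the abstract's caveat concrete: `x_true` appears explicitly, so truth data must be available, whereas NIS works from the innovation alone.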
Although the Kalman filter is efficient and effective for computing state estimates of a moving target, it can produce
poor results when tracking a maneuvering target. The problem is that the Kalman filter must employ large plant noise
and/or large tracking gates to keep the target in track. This can result in larger errors in the state estimate as well as
larger uncertainties in these estimates. To track these maneuvering targets, a better approach would be to exploit the
kinematic constraints of the target to restrict the state estimates to only those where the target transition was possible.
Unfortunately, the Kalman filter cannot fully capture the physical constraints of the target motion. To address this
problem, several alternative approaches have been pursued including Kalman filter variants, particle filters, and grid-based filters. Although grid-based filters can be effective, it seems they have been avoided due to their perceived
exponential computational requirements. A new approach for using a grid-based filter has been developed that can track
targets moving in two dimensions by using a well-confined, two-dimensional grid. As a result, this grid-based approach
is enormously more computationally efficient and can effectively exploit the kinematic constraints of the target. This
paper describes this grid-based filter, along with the inclusion of the kinematically-constrained target motion model. The
paper will then compare the tracking performance of this filter against a Kalman filter for maneuvering target scenarios.
The improved target state estimations from this grid-based filter will be shown and analyzed via Monte Carlo analysis.
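The predict/update cycle of a grid-based Bayes filter can be sketched in one dimension; the grid, transition kernel, and Gaussian measurement likelihood below are illustrative assumptions, not the paper's two-dimensional design:

```python
import numpy as np

def predict(grid, move_kernel):
    """Convolve the posterior with a kinematically constrained transition kernel."""
    return np.convolve(grid, move_kernel, mode="same")

def update(grid, likelihood):
    """Pointwise Bayes update followed by renormalization."""
    post = grid * likelihood
    return post / post.sum()

cells = np.linspace(0.0, 10.0, 101)            # 1D grid over position
prior = np.full(cells.size, 1.0 / cells.size)  # uniform prior
kernel = np.array([0.0, 0.25, 0.5, 0.25, 0.0]) # target can move at most 2 cells/step
z, sigma = 6.0, 0.5                            # measurement and its std. dev.
lik = np.exp(-0.5 * ((cells - z) / sigma) ** 2)
post = update(predict(prior, kernel), lik)
print(cells[np.argmax(post)])
```

The kernel is where the kinematic constraint lives: cells outside the reachable set receive zero transition probability, which is exactly what a Kalman filter's Gaussian plant noise cannot express.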
In multi-sensor fusion applications, various sources of data are combined to create a coherent situational picture. The
ability to track multiple targets using multiple sensors is an important problem. The data provided by these sensors can
be of varying quality, such as data from RADAR and AIS. Does this varied quality of data negatively impact the
tracking performance when compared to using the best data source alone? From an information-theoretic standpoint, the
answer would be no. However, this paper investigates this issue and exposes a few caveats. In particular, this study
addresses how the relative update rate of varying quality sensors affects tracking performance and answers the question
'Is more data always better?'
This paper presents a multiple interacting multiple model (MIMM) procedure to estimate the state of thrusting/
ballistic projectiles in the atmosphere for the purpose of impact point prediction (IPP). Given a very short
time span of observations, the strong interaction between drag and thrust in the dynamic model, in the sense of
ambiguity in the estimation, significantly affects the estimation performance and the final IPP accuracy. This
leads to the need to use an MIMM estimator with various initial drag coefficient estimates. The modes of each
IMM estimator are for the thrusting and the ballistic phases, and different extended Kalman filters (EKFs) are used as the mode-matched filters, with states of different dimensions. A novel unbiased mixing procedure for an IMM
estimator is introduced to deal with state estimates with unequal dimensions, as is the case for the thrusting and
ballistic models. The IPP is carried out at the end of the observation period by using the most probable mode
of the selected IMM estimator, the latter being the one with the highest likelihood in the MIMM approach.
This paper presents the results of an integrated target tracking, pursuit and intercept strategy. It is designed to maximize
the overlap between the engagement envelope of a data-linked weapon and the possible predicted locations of an agile
target. Once the target track has been initialized, a Markov Model calculates all possible locations of the target up to the
time of intercept, approximately 30 seconds from launch. These locations and associated probabilities are updated during
the tracking process. This includes target maneuvers, which are detected using an IMM estimator. The engagement
envelope is maximized at fixed points in time. In doing so, the intercept decision is delayed until there is a high
probability of a successful interception.
Combining line-of-sight (LOS) measurements from passive sensors (e.g., satellite-based IR, ground-based cameras,
etc.), assumed to be synchronized, into a single composite Cartesian measurement (full position in 3D) via
maximum likelihood (ML) estimation, can circumvent the need for nonlinear filtering. This ML estimate is
shown to be statistically efficient, and as such, the covariance matrix obtainable from the Cramér-Rao lower
bound provides a consistent measurement noise covariance matrix for use in a target tracking filter.
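Combining several lines of sight into one Cartesian point can be sketched as a linear least-squares intersection of 3D lines; under small Gaussian angle noise this approximates the ML composite measurement. The sensor positions and target below are illustrative assumptions:

```python
import numpy as np

def triangulate(sensors, dirs):
    """Least-squares point closest to a set of 3D lines x = s + t*u."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for s, u in zip(sensors, dirs):
        u = u / np.linalg.norm(u)
        M = np.eye(3) - np.outer(u, u)   # projector orthogonal to the LOS
        A += M                            # normal equations: (sum M_i) x = sum M_i s_i
        b += M @ s
    return np.linalg.solve(A, b)

target = np.array([2.0, 3.0, 1.0])
sensors = [np.array([0.0, 0.0, 0.0]),
           np.array([10.0, 0.0, 0.0]),
           np.array([0.0, 10.0, 5.0])]
dirs = [target - s for s in sensors]      # noiseless LOS directions
print(triangulate(sensors, dirs))
```

With noiseless directions the solve recovers the target exactly; with angle noise, the spread of the residuals is what the CRLB-derived covariance characterizes for the downstream tracking filter.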
The most popular and well-studied estimation method is the Kalman filter (KF), which was introduced in the
1960s. It yields a statistically optimal solution for linear estimation problems. The smooth variable structure
filter (SVSF) is a relatively new estimation strategy based on sliding mode theory, and has been shown to be
robust to modeling uncertainties. The SVSF makes use of an existence subspace and of a smoothing boundary
layer to keep the estimates bounded within a region of the true state trajectory. This article discusses the
application of two estimation strategies (the KF and the SVSF) on a multi-target tracking problem.
The Maximum Likelihood Probabilistic Multi-Hypothesis tracker (ML-PMHT) is an algorithm that works well
against low-SNR targets in an active multistatic framework with multiple transmitters and multiple receivers.
The ML-PMHT likelihood ratio formulation allows for multiple targets as well as multiple returns from any
given target in a single scan, which is realistic in a multi-receiver environment where data from different receivers
is combined together. Additionally, the likelihood ratio can be optimized very easily and rapidly with
the expectation-maximization (EM) algorithm. Here, we apply ML-PMHT to two multistatic data sets: the TNO
blind 2008 data set and the Metron 2009 data set. Results are compared with previous work that employed the
Maximum Likelihood Probabilistic Data Association (ML-PDA) tracker, an algorithm with a different assignment
algorithm and as a result a different likelihood ratio formulation.
Emerging technologies of high performance computing facilitate increased data collection for wide area sensing;
however, joint data management concepts of operations (CONOPs) are important to fully realize system-level
performance. Joint data management (JDM) includes the hardware (e.g. sensors/targets), software (e.g.
processing/algorithms), entities (e.g. service-based collections), and operations (scenario-based environments) of data
exchange that enable persistent surveillance in the context of a layered sensing or data-to-decision (D2D) information
fusion enterprise. Key attributes of an information fusion enterprise system require pragmatic assessment of data and
information management, distributed communications, knowledge representation as well as a sensor mix, algorithm
choice, life-cycle data management, and human-systems interaction. In this paper, we explore the various issues
surrounding Wide-Area Video Exploitation (WAVE) in a layered-sensing environment to include improvements in Joint
Data Management such as (1) data collection, construction, and transformation, (2) feature generation, extraction and
selection, and (3) information evaluation, presentation, and dissemination. Throughout the paper, we seek to describe
the current technology, research directions, and metrics that instantiate a realizable JDM product. We develop the
methods for joint data management for structured and unstructured WAVE data in the context of decision making.
Discerning accurate track and identification target information from WAVE JDM provides a moving intelligence capability.
In this paper, we compare the information-theoretic metrics of the Kullback-Leibler (K-L) and Rényi (α) divergence
formulations for sensor management. Information-theoretic metrics have been well suited for sensor management as they
afford comparisons between distributions resulting from different types of sensors under different actions. The difference
in distributions can also be measured as entropy formulations to discern the communication channel capacity (i.e.,
Shannon limit). In this paper, we formulate a sensor management scenario for target tracking and compare various
metrics for performance evaluation as a function of the design parameter (α) so as to determine which measures might
be appropriate for sensor management given the dynamics of the scenario and design parameter.
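The two divergences can be sketched for discrete distributions; the distributions below are illustrative, and the sketch shows the standard property that the Rényi divergence approaches K-L as α → 1:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def renyi(p, q, alpha):
    """Renyi alpha-divergence; recovers KL in the limit alpha -> 1."""
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

p = np.array([0.6, 0.3, 0.1])   # e.g., posterior after a candidate sensor action
q = np.array([0.4, 0.4, 0.2])   # e.g., prior before the action
print(kl(p, q), renyi(p, q, 0.5), renyi(p, q, 0.999))
```

In a sensor-management loop, the action maximizing such a divergence between prior and predicted posterior is selected; the design parameter α tunes how heavily the tails of the distributions are weighted.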
Modern tracking and fusion settings involve multiple platforms in different locations, tracking different target tracks,
focusing on different regions of interest, while using different update rates and sensor resolutions with the goal of
providing increased situation awareness in the region by fusing together the diversity of information from each platform.
In this paper, a decentralized, distributed fusion architecture is presented along with results and trade studies comparing
performance to that of a centralized fusion architecture. The decentralized distributed architecture is designed to work
with legacy tracking systems and uses an efficient message passing scheme to share information and coordinate tracks
across a diverse group of platforms. This system does not rely on a central node and allows for track information to be
maintained at the local level while utilizing track information from other platforms to increase situation awareness. We
compare the performance between our distributed approach and a centralized system using simulated airborne sensors
operating in overlapping regions of interest with target densities and routes chosen to demonstrate tradeoffs between the
different architectures. Preliminary results show that the decentralized distributed system provides similar performance
to the centralized fusion system in terms of situation awareness relative to traditional tracking metrics, but at the cost of
using an increased communication bandwidth to provide frequent updates to neighboring platforms. Results demonstrate
the tradeoff between flexibility and optimality: configuring the distributed decentralized system for increased flexibility and robustness comes at the cost of reduced situation awareness as compared to the centralized architecture.
In multisensor target tracking systems, receiving out-of-sequence measurements (OOSMs) from local sensors is a common situation. In the last decade, many algorithms have been proposed to update a target state with an OOSM
optimally or suboptimally. However, what one faces in the real world is multiple OOSMs, which arrive at the
fusion center in, generally, arbitrary orders, e.g., in succession or interleaved with in-sequence measurements.
A straightforward approach to deal with this multi-OOSM problem is by sequentially applying a given OOSM
algorithm; however, this simple solution does not guarantee optimal update under the multi-OOSM scenario. The
present paper discusses the differences between the single-OOSM processing and the multi-OOSM processing, and
presents the general solution to the multi-OOSM problem, called the complete in-sequence information (CISI)
approach. Given an OOSM, in addition to updating the target state at the most recent time, the CISI approach
also updates the states between the OOSM time and the most recent time, including the state at the OOSM
time. Three novel CISI methods are developed in this paper: the information filter-equivalent measurement
(IF-EqM) method, the CISI fixed-point smoothing (CISI-FPS) method and the CISI fixed-interval smoothing
(CISI-FIS) method. Numerical examples are given to show the optimality of these CISI methods under various multi-OOSM scenarios.
In many applications where communication delays are present, measurements with earlier time stamps can arrive
out-of-sequence, i.e., after state estimates have been obtained for the current time instant. To incorporate such
an Out-Of-Sequence Measurement (OOSM), many algorithms have been proposed in the literature to obtain or
approximate the optimal estimate that would have been obtained if the OOSM had arrived in-sequence. When OOSMs occur repeatedly, the approximate estimates resulting from incorporating one OOSM must serve as the basis for incorporating yet another OOSM. The question of whether the "approximation of approximation" is
well behaved, i.e., whether approximation errors accumulate in a recursive setting, has not been adequately
addressed in the literature. This paper draws attention to the stability question of recursive OOSM processing
filters, formulates the problem in a specific setting, and presents some simulation results that suggest that such
filters are indeed well-behaved. Our hope is that more research will be conducted in the future to rigorously
establish stability properties of these filters.
Multitarget detection and tracking algorithms typically presume that sensors are spatially registered, i.e., that all sensor states are precisely specified with respect to some common coordinate system. In actuality, sensor
observations may be contaminated by unknown spatial misregistration biases. This paper demonstrates that
these biases can be estimated by exploiting the data collected from a sufficiently large number of unknown targets, in a unified methodology in which sensor registration and multitarget tracking are performed jointly. We show how to (1) model single-sensor bias, (2) integrate the biased sensors into a
single probabilistic multiplatform-multisensor-multitarget system, (3) construct the optimal solution to the joint
registration/tracking problem, and (4) devise a principled computational approximation of this optimal solution.
The approach does not presume the availability of GPS or other inertial information.
In this paper we present methods for multimodel filtering of space object states based on the theory of finite-state, time-nonhomogeneous càdlàg Markov processes and the filtering of partially observable space object trajectories.
The state and observation equations of space objects are nonlinear and therefore it is hard to estimate the conditional
probability density of the space object trajectory states given EO/IR, radar or other nonlinear observations. Moreover,
space object trajectories can suddenly change due to abrupt changes in the parameters affecting a perturbing force or
due to unaccounted forces. Such trajectory changes can lead to the loss of existing tracks and may cause collisions
with vital operating space objects such as weather or communication satellites. The presented estimation methods will
aid in preventing the occurrence of such collisions and provide warnings for collision avoidance.
Recent interest in multi-object filtering has focussed on the problem of discrete-time filtering, where sets of
measurements are collected at regular intervals from the sensor. Many sensors do not provide multiple measurements
at regular intervals but instead provide single-measurement reports at irregular time-steps. In this
paper we study the multi-object filtering problem for estimation from measurements where the target and clutter
processes provide measurements with Poisson arrival rates. In particular, we show that the Probability Hypothesis
Density (PHD) filter can be adapted to Poisson arrival rate measurements by modelling the probability of
detection with an exponential distribution. We demonstrate the approach in simulated scenarios.
This paper considers the effect of sensor ordering on the iterated-corrector PHD update. It is known that changing the order of the updates results in different PHDs; however, these are usually not significantly different.
This paper considers a multisensor scenario using a single poor quality sensor in combination with good sensors,
where the bad sensor is modelled using a low probability of detection. It is shown that the quality of the updated
PHD varies significantly depending on whether the sensor is used first or last in the iterated-corrector update.
The degradation in performance of the iterated PHD filter is illustrated using a comparison of different
multisensor configurations. The OSPA error is shown to be greatest when a sensor with low probability of
detection is used in the final update of the iterated form of the PHD filter. The performance of the product multisensor PHD filter is also considered. The product multisensor filter is shown to perform significantly better due to its invariance to sensor ordering.
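The OSPA error used in the comparison can be sketched directly from its definition; the brute-force assignment and the illustrative sets below are a minimal sketch, practical only for small cardinalities:

```python
import itertools, math

def ospa(X, Y, c=10.0, p=2):
    """OSPA metric between two finite point sets (brute-force assignment)."""
    if len(X) > len(Y):
        X, Y = Y, X
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    # Best assignment of the smaller set into the larger, distances cut off at c
    best = min(
        sum(min(math.dist(x, y), c) ** p for x, y in zip(X, perm))
        for perm in itertools.permutations(Y, m)
    )
    # Unassigned elements incur the full cutoff penalty (cardinality error)
    return ((best + c ** p * (n - m)) / n) ** (1.0 / p)

truth = [(0.0, 0.0), (5.0, 5.0)]
est_good = [(0.1, 0.0), (5.0, 4.9)]
est_missed = [(0.1, 0.0)]            # one target dropped: cardinality penalty
print(ospa(truth, est_good), ospa(truth, est_missed))
```

The cutoff `c` is what makes a missed detection (as with a low probability-of-detection sensor in the final iterated-corrector update) dominate the error.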
Since the derivation of the PHD filter, a number of track management schemes have been proposed to adapt the PHD filter for
determining the tracks of multiple objects. Nevertheless, the problem remains that such approaches can fail when targets
are too close or are crossing. In this paper, we propose to improve the tracking by maintaining a set of locally-based
trackers and managing the tracks with an assignment method. Furthermore, the new algorithm is based on a Gaussian
mixture implementation of the CPHD filter, by clustering neighbouring Gaussians before the update step and updating
each cluster with the CPHD filter update. In order to be computationally efficient, the algorithm includes gating techniques
for the local trackers and constructs local cardinality distributions for the targets and clutter within the gated regions. An
improvement in multi-object estimation performance is demonstrated on both synthetic and real IR data scenarios.
The Set JPDA (SJPDA) filter is a recently developed multi-target tracking filter that utilizes the relation
between the density of a random finite set and the ordinary density of a state vector to improve on the Joint
Probabilistic Data Association (JPDA) filter. One advantage of the filter is the improved accuracy of the Gaussian approximations of the JPDA, which results in the avoidance of track coalescence. Another advantage is an
improved estimation accuracy in terms of a measure which disregards target identity. In this paper we extend the
filter to also consider multiple motion models. As a basis for the extension we use the Interacting Multiple Model
(IMM) algorithm. We derive three alternative filters that we jointly refer to as Set IMMJPDA (SIMMJPDA).
They are based on two alternative descriptions of the IMMJPDA filter. In the paper, we also present simulation
results for a two-target tracking scenario, which show improved tracking performance for the Set IMMJPDA
filter when evaluated with a measure that disregards target identity.
Bayes' rule and Dempster's combination are typically presumed to be radically different procedures for fusing
evidence. This paper demonstrates that measurement-update using Dempster's combination is a special case
of measurement-update using Bayes' rule. The demonstration is based on an analogy with the Kalman filter.
Suppose that the data consists of linear-Gaussian point measurements. Then ask, What additional assumptions
must be made so that the Bayes filter can be solved in algebraically closed form? The Kalman filter is the
result. In similar fashion, suppose that the data consists of measurements that are "uncertain" in a Dempster-
Shafer sense. Then ask, What additional assumptions must be made so that the Bayes filter can be solved in
algebraically closed form? Dempster's combination turns out to be the result. Stated differently: Both the
Kalman measurement-update equations and Dempster's combination are corrector steps of the recursive Bayes
filter, given that it has been restricted to two different types of measurements.
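Dempster's combination itself is a short computation; the sketch below implements the standard rule over mass functions with set-valued focal elements, with an illustrative two-hypothesis frame:

```python
def dempster(m1, m2):
    """Dempster's rule for mass functions keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2       # mass assigned to the empty set
    k = 1.0 - conflict                    # renormalize away the conflict
    return {s: w / k for s, w in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
AB = frozenset({"A", "B"})
m1 = {A: 0.6, AB: 0.4}                    # one "uncertain" measurement
m2 = {B: 0.3, AB: 0.7}                    # another
print(dempster(m1, m2))
```

The pairwise-product-then-renormalize structure is what the paper identifies as a Bayes corrector step specialized to Dempster-Shafer "uncertain" measurements.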
Situational awareness in a Persistent Surveillance System (PSS) can be significantly improved by fusing data from physical (hard) sensors with information provided by human observers (as soft/biological sensors) in the field. A major challenge this trend brings about, however, is integrating and fusing the sensory data collected from hard sensors with the soft data gathered from human agents in a consistent and cohesive way. This paper presents an approach for the semantic labeling of non-stationary vehicular acoustic events in the context of a PSS. Two techniques
for feature extraction based on discrete wavelet and short-time Fourier transforms are described. A correlation-based
classifier is proposed for the classification and semantic labeling of vehicular acoustic events. The presented results demonstrate that the proposed solution is both reliable and effective, and can be extended to future PSS applications.
A new BN structure learning method using a cloud-based adaptive immune genetic algorithm (CAIGA) is
proposed. Since the probabilities of crossover and mutation in CAIGA are adaptively varied by an X-conditional cloud generator, it improves the diversity of the structure population and avoids local optima. This is due to the stochastic nature and stable tendency of the cloud model. Moreover, the offspring structure population is simplified using immune theory to reduce its computational complexity. The experimental results reveal that this method can be effectively
used for BN structure learning.
In addition to computing the posterior distributions for hidden variables in Bayesian networks, one other important inference task is to find the most probable explanation (MPE). MPE provides the most likely configurations
to explain away the evidence and helps to manage hypotheses for decision making. In recent years, researchers
have proposed a few methods to find the MPE for discrete Bayesian networks. However, finding the MPE for
hybrid networks remains challenging. In this paper, we first briefly review the current state of the art in the literature regarding various explanation methods. We then present an algorithm that uses a modified max-product clique tree to find the MPE in hybrid Bayesian networks. A detailed example is given to demonstrate the algorithm.
Underwater mines are inexpensive and highly effective weapons. They are difficult to detect and classify. Hence
detection and classification of underwater mines is essential for the safety of naval vessels. This necessitates a
formulation of highly efficient classifiers and detection techniques. Current techniques primarily focus on signals from
one source. Data fusion is known to increase the accuracy of detection and classification. In this paper, we formulated a
fusion-based classifier and a Gaussian mixture model (GMM) based classifier for classification of underwater mines.
The emphasis has been on sound navigation and ranging (SONAR) signals due to their extensive use in current naval
operations. The classifiers have been tested on real SONAR data obtained from University of California Irvine (UCI)
repository. The performance of both the GMM-based and fusion-based classifiers clearly demonstrates their superior classification accuracy over conventional single-source cases and validates our approach.
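A class-conditional Gaussian classifier (a one-component stand-in for the paper's GMM) can be sketched in a few lines; the "sonar feature" values and class names below are hypothetical:

```python
import math

def gaussian_logpdf(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def fit_class(samples):
    """Single-Gaussian class model fitted by maximum likelihood."""
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / len(samples)
    return mu, var

def classify(x, models):
    """Assign x to the class with the highest class-conditional likelihood."""
    return max(models, key=lambda c: gaussian_logpdf(x, *models[c]))

# Hypothetical 1D sonar feature (e.g., mean spectral energy) per class
models = {"mine": fit_class([0.8, 0.9, 1.0, 0.85]),
          "rock": fit_class([0.2, 0.3, 0.25, 0.35])}
print(classify(0.95, models), classify(0.28, models))
```

A full GMM replaces `fit_class` with an EM-fitted mixture per class, and a fusion-based classifier combines such likelihoods across multiple feature sources before taking the maximum.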
A classification system such as an Automatic Target Recognition (ATR) system might yield better performance when fused
sequentially than in parallel. Most fused systems have a parallel architecture, but the medical community often uses sequential tests due to cost constraints. We define the different types of sequential fusion and investigate their characteristics.
We compare parallel fused systems with sequential fused systems. Another goal of this paper is to compare competing sequential
fused systems to arrive at an optimal architecture design given the systems at hand. These systems may be legacy
systems whose performances are well known. If these systems have known Receiver Operating Characteristic (ROC)
curves/manifolds, then we derive a formula that yields the ROC curve/manifold for the resultant sequentially fused system, thus enabling one to make these comparisons. This formula is distribution-free. We give an example to demonstrate the
utility of our method, and show that one can play "what if" scenarios.
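One simple instance of sequential fusion is the two-stage cascade in which the second test runs only on alarms from the first; assuming conditionally independent tests (a simplified stand-in for the paper's distribution-free formula), the fused operating points multiply:

```python
def sequential_and(roc1, roc2):
    """ROC points of the 'declare only if both stages alarm' cascade,
    assuming the two tests are conditionally independent."""
    return [(pf1 * pf2, pd1 * pd2)
            for pf1, pd1 in roc1 for pf2, pd2 in roc2]

roc_a = [(0.1, 0.8), (0.3, 0.9)]    # (Pfa, Pd) operating points of system A
roc_b = [(0.2, 0.7), (0.4, 0.95)]   # (Pfa, Pd) operating points of system B
fused = sequential_and(roc_a, roc_b)
print(fused)
```

The cascade trades detection probability for a much lower false-alarm rate while never invoking the (possibly expensive) second test on most inputs, which is the cost argument behind sequential medical testing.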
We have solved the well known and important problem of
particle degeneracy for particle filters. Our filter is
roughly seven orders of magnitude faster than standard
particle filters for the same estimation accuracy. The new
filter is four orders of magnitude faster per particle, and it
requires roughly three orders of magnitude fewer particles
to achieve the same accuracy as a standard particle filter.
Typically we beat the EKF or UKF accuracy by approximately two orders of magnitude for difficult nonlinear problems.
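The degeneracy that particle flow avoids is conventionally diagnosed with the effective sample size and countered by resampling; the sketch below shows that standard machinery, with illustrative weights:

```python
import random

def effective_sample_size(weights):
    """ESS = 1 / sum(w^2) for normalized weights; a low ESS signals degeneracy."""
    return 1.0 / sum(w * w for w in weights)

def resample(particles, weights, seed=0):
    """Multinomial resampling back to uniform weights."""
    rng = random.Random(seed)
    n = len(particles)
    picks = rng.choices(particles, weights=weights, k=n)
    return picks, [1.0 / n] * n

uniform = [0.25] * 4                    # healthy: ESS equals the particle count
degenerate = [0.97, 0.01, 0.01, 0.01]   # collapsed: ESS barely above one
print(effective_sample_size(uniform), effective_sample_size(degenerate))
```

In a standard particle filter the pointwise Bayes multiplication drives the ESS toward one; the particle-flow idea replaces that multiplication entirely, which is where the claimed speedups originate.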
In this paper we show five movies of particle flow to
provide insight and intuition about this new algorithm.
The particle flow solves the well known and important
problem of particle degeneracy. Bayes' rule is
implemented by particle flow rather than as a pointwise
multiplication. This theory is roughly seven orders of
magnitude faster than standard particle filters, and it often
beats the extended Kalman filter by two orders of
magnitude in accuracy for difficult nonlinear problems.
The Benes filtering problem has been shown to be related to the quantum mechanical simple harmonic oscillator.
In a previous paper, the exact fundamental solution for the filtering problem was derived. The methods employed
included the method of separation of variables for solving PDEs, results from Sturm-Liouville theory, and
properties of the Hermite special function. In this paper, the results are rederived more simply and directly using
Feynman path integral methods. Numerical examples are included that demonstrate the correctness of formulas
and their utility in solving continuous-discrete filtering problems with Benes drift and nonlinear measurement models.
The output of a GMTI tracker (such as the VS-IMM) over an extended period of time can be viewed as generating
a string sequence (namely the mode sequence) that defines the trajectory. In previous work, it was demonstrated
with real data that the target trajectory could be (probabilistically) parsed in real-time, assuming any string
sequence output from the tracker could arise only from a stochastic context-free grammar (SCFG). In this
paper, a GMTI data processing chain, with a view towards the application to syntactic parsing, is presented. An emphasis is placed on the Bayesian formulations, which provide a unified description of the processing chain.
The Set JPDA (SJPDA) filter is a recently developed multi-target tracking filter that utilizes the relation
between the density of a random finite set and the ordinary density of a state vector to improve on the Joint
Probabilistic Data Association (JPDA) filter. One advantage of the filter is the improved accuracy of the Gaussian approximations of the JPDA, which results in the avoidance of track coalescence. In the original presentation of the
SJPDA filter, the focus was on problems where target identity is not relevant, and it was shown that the filter
performs better than the JPDA filter for such problems. The improved performance of the SJPDA is due to
its relaxation of the labeling constraint that hampers most tracking approaches. However, if track identity is
of interest a record of it may be kept even with a label-free approach such as the SJPDA: label-free targets are
localized via the SJPDA, and then the identities are recalled as an overlay.
For decades, there have been discussions on measures of merit (MOM) that include measures of effectiveness (MOE)
and measures of performance (MOP) for system-level performance. As the amount of sensed and collected data becomes
increasingly large, there is a need to look at the architectures, metrics, and processes that provide the best methods for
decision support systems. In this paper, we overview some information fusion methods in decision support and address
the capability to measure the effects of the fusion products on user functions. The current standard Information Fusion
model is the Data Fusion Information Group (DFIG) model that specifically addresses the needs of the user in an
information fusion system. Decision support implies that information methods augment user decision making as opposed
to the machine making the decision and displaying it to the user. We develop a list of suggested measures of merit that facilitate decision support: Measures of Effectiveness (MOE) metrics of quality, information gain, and robustness, derived from analysis based on the Measures of Performance (MOPs) of timeliness, accuracy,
confidence, throughput, and cost. We demonstrate in an example with motion imagery to support the MOEs of quality
(time/decision confidence plots), information gain (completeness of annotated imagery for situation awareness), and
robustness through analysis of imagery over time and repeated looks for enhanced target identification confidence.
The paper defines three distinct classes of binary fusion, extending an evolving first-principles-based
theoretical fusion framework under development for several years. The paper focuses on non-traditional data sources due to their relevance to the development of a comprehensive fusion theory. Three fusion classes are defined and
discussed relative to both conventional hard target and text-based information fusion applications. The concept of
entity specificity is then introduced to generalize the three-class binary fusion problem. Finally, fusion class 1 and 2
products from a prototype fusion system developed for the US Department of the Army are presented to clarify the
concepts; class-3 fusion applications to soft data will be addressed in a future paper.
Answering the questions "What can the adversary do?" and "What will the adversary do?" are critical functions of
intelligence analysis. These questions require processing many sources of information, which is currently performed
manually by analysts, leading to missed opportunities and potential mistakes. We have developed a system for
Assessment of Capability and Capacity via Intelligence Analysis (ACACIA) to help analysts assess the capability,
capacity, and intention of a nation state or non-state actor. ACACIA constructs a Bayesian network (BN) to model the
objectives and means of an actor in a situation. However, a straightforward BN implementation is insufficient, since
objectives and means are different in every situation. Additionally, we wish to apply knowledge about an element gained
from one situation to another situation containing the same element. Furthermore, different elements of the same kind
usually share the same model structure with different parameters. We use the probabilistic programming language
Figaro, which allows models to be constructed using the power of programming languages, to address these issues,
generating BNs for diverse situations while maximizing sharing. We learn the parameters of a program from training
instances. Experiments show ACACIA is capable of making accurate inferences and that learning effectively improves inference accuracy.
This paper investigates the effects of operation parameters on multitarget tracking in proximity sensor networks. In
such a network, the sensors report a detection when a target is within the proximity; otherwise, the sensors report
no detection. Previous work has revealed the potential of multitarget tracking via the particle-based probability
hypothesis density (PHD) filter when incorporating these binary reports. This work investigates how the sensor
density, sensing range, and target separation affect the ability of the PHD filter to estimate the number of targets
in the scene and to localize these targets (as measured by four different metrics). Two possible measurement
models are considered. The disc model assumes target detection within a sensing radius, and the probabilistic
model assumes 1/r^α propagation decay of the source signal, so that the probability of detection decreases with
range r. The simulations demonstrate that the simplistic disc model is inadequate for the PHD filter to estimate the
number of targets, and that the filter with the disc model has difficulty localizing widely separated targets at
low sensor densities. On the other hand, the more realistic probabilistic model leads to a PHD filter that can
accurately estimate the number and locations of targets even for small target separations.
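The two measurement models can be sketched as simple detection-probability functions; the radius, exponent, and cap values below are illustrative, not the paper's:

```python
def p_detect_disc(r, r_sense):
    """Disc model: detection if and only if the target is within the sensing radius."""
    return 1.0 if r <= r_sense else 0.0

def p_detect_prob(r, r0=1.0, alpha=2.0, p0=1.0):
    """Probabilistic model: detection probability decays with the received
    signal power ~ 1/r^alpha as range r grows (capped at p0 near the sensor)."""
    return min(p0, p0 * (r0 / max(r, r0)) ** alpha)
```

The disc model is a hard cutoff, while the probabilistic model degrades gracefully with range, which is what lets the PHD filter exploit near-miss sensors.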
The problem of Track-to-Track Fusion (T2TF) is very important for distributed tracking systems. It allows the
use of the hierarchical fusion structure, where local tracks are sent to the fusion center (FC) as summaries of
local information about the states of the targets, and fused to get the global track estimates. Compared to the
centralized measurement-to-track fusion (CTF), the T2TF approach has low communication cost and is more
suitable for practical implementation. Although T2TF has been widely investigated in the literature, most
algorithms deal with the fusion of homogeneous tracks that share the same target state space. However, in
general, local trackers may use different motion models for the same target and have different state spaces. This
raises the problem of Heterogeneous Track-to-Track Fusion (HT2TF). In this paper, we propose an algorithm for
HT2TF based on the generalized Information Matrix Fusion (GIMF) to handle the fusion of heterogeneous tracks
in the presence of possible communication delays. Compared to fusion based on the LMMSE criterion, the
proposed algorithm does not require the cross-covariance between the tracks, which greatly simplifies
its implementation. Simulation results show that the proposed HT2TF algorithm has good consistency and fusion accuracy.
The tracking process captures the state of an object. The state of an object is defined in terms of its dynamic and static
properties, such as location, speed, color, temperature, and size. The set of properties relevant to tracking depends
largely on the agency doing the tracking: the police, for example, need a different set of properties to track people
than to track a vehicle, and a different set than the air force needs. The tracking scenario also affects the selection of
parameters. Tracking is done by a system referred to in this paper as a "Tracker": a system consisting of a set of input
devices, such as sensors, and a set of algorithms that process the data captured by those devices. The process of
tracking has three distinct steps: (a) object discovery, (b) identification of the discovered object, and (c) object
introduction to the input devices. In this paper we focus mainly on object discovery, with a brief discussion of the
introduction and identification steps. We develop a formal tracking framework (model) called the "Discover, Identify,
and Introduce Model (DIIM)" for building efficient tracking systems. Our approach is heuristic and uses reasoning
leading to learning to develop a knowledge base for object discovery. We also develop a tracker for the Air Force system called N-CET.
Ad-hoc networks of simple, omni-directional sensors present an attractive solution for low-cost, easily deployable,
fault-tolerant target tracking systems. In this paper, we present a tracking algorithm that relies on real-time observation of the
target power, received by multiple sensors. We remove target position dependency on the emitted target power by taking
ratios of the power observed by different sensors, and apply the natural logarithm to effectively transform to another
coordinate system. Further, we derive noise statistics in the transformed space and demonstrate that the observation in
the new coordinates is linear in the presence of additive Gaussian noise. We also show how a typical dynamic model in
Cartesian coordinates can be adapted to the new coordinate system. As a consequence, the problem of tracking target
position with omni-directional sensors can be adapted to the conventional Kalman filter framework. We validate the
proposed methodology through simulations under different noise, target movement, and sensor density conditions.
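The power-ratio trick can be sketched in a noise-free setting: with received power falling off as 1/d^α, the unknown emitted power cancels in the ratio of two sensors' observations, leaving a quantity linear in log-range. The power model and α below are illustrative:

```python
import math

def received_power(p_emit, dist, alpha=2.0):
    """Idealized received power: emitted power attenuated as 1/d^alpha."""
    return p_emit / dist ** alpha

def log_power_ratio(p_i, p_j):
    """Log of the ratio of powers seen by two sensors; the unknown emitted
    target power cancels in the ratio."""
    return math.log(p_i / p_j)
```

In the noise-free case, log_power_ratio equals alpha * (log d_j - log d_i) regardless of the emitted power, which is what makes the transformed observation amenable to a linear (Kalman) measurement model.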
Reducing the number of sensors in a sensor network is of great interest for a variety of surveillance and target
tracking scenarios. The time and resources needed to process the data from additional sensors can delay reaction
time to immediate threats and consume extra financial resources. There are many methods to reduce the number
of sensors by considering hardware capabilities alone. However, by incorporating an estimate of environment
and agent dynamics, sensor reduction for a given scenario may be achieved using Bellman optimality principles.
We propose a method that determines the capture regions where sensors can be eliminated. A capture region is
defined as a section of the surveillance field where, using a causal relationship to the other sensors, an event may
be determined using fast marching semi-Lagrangian (FMSL) solution techniques. This method is applied to a
crowded hallway scenario with two possible exits, one primary, and one alternate. It is desired to determine if a
target deviates from the crowd and moves toward the alternate exit. A proximity sensor grid is placed above the
crowd to record the number of people that pass through the hallway. Our results show that the Bellman optimal
approximation of the capture set for the alternate exit identifies the region of the surveillance field where sensors
are needed, allowing the others to be removed.
A spatially distributed network of sensor nodes with onboard seismic and acoustic sensors is an important class of
emerging networked systems for various security applications. One of the main tasks in these applications is target
detection. To achieve this with improved accuracy, the sensors must process and share information
efficiently. This paper presents a novel approach to fusing the data of acoustic and seismic sensors based on correlation
measures, so that a high detection range and/or detection rate can be achieved. The method calculates weighted values
for both sensors and adjusts the weights as changes in the correlation measures are observed: the sensor whose
signal shows the greater correlation measure receives the greater weight, and vice versa. One advantage of this
method is that the sensor weights are adjusted dynamically for real-time data, and the method does
not depend on prior information about the sensors' data. The method also accounts for the limited range of both the
acoustic and the seismic sensors and fuses the signals in terms of the maximum possible detection range. In the case
of failure of one of the sensors, the method still provides the target information.
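The correlation-based weighting can be sketched as follows. Since the abstract does not pin down the exact correlation measure, this sketch assumes each sensor is weighted by the correlation of its signal with a common reference signal; that choice, and the normalization, are illustrative:

```python
import numpy as np

def correlation_weights(acoustic, seismic, reference):
    """Weight each sensor by the (absolute) correlation of its signal with a
    common reference; the larger correlation gets the larger weight, and the
    weights sum to 1."""
    c_a = abs(np.corrcoef(acoustic, reference)[0, 1])
    c_s = abs(np.corrcoef(seismic, reference)[0, 1])
    total = c_a + c_s
    # If one sensor fails (zero correlation), the other carries full weight.
    return c_a / total, c_s / total

def fuse(acoustic, seismic, w_a, w_s):
    """Weighted combination of the two sensor signals."""
    return w_a * np.asarray(acoustic) + w_s * np.asarray(seismic)
```

Because the weights renormalize on every update, a failed sensor's weight decays toward zero and the surviving sensor dominates the fused output, matching the failure behavior described above.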
Sub-pixel registration is critical in object tracking and image super-resolution. Motion segmentation algorithms using the
gradient can be applied prior to image registration to improve its accuracy and computational runtime. This paper
proposes a new segmentation method, adaptive variation segmentation, which applies local variances computed at
different block sizes to the sum of absolute image differences. In this paper, two motion segmentation and
four image registration methods are tested to optimize the registration accuracy in visible and thermal imagery. Two
motion segmentation methods, flux tensor and adaptive variation segmentation, are quantitatively tested by comparing
calculated regions of movement with accepted areas of motion. Four image registration methods, including two optical
flow, feature correspondence, and correlation methods, are tested in two steps: gross shift and sub-pixel shift
estimations. Gross shift estimation accuracy is assessed by comparing estimated shifts against a ground truth. Sub-pixel
shift estimation accuracy is assessed by simulated, down-sampled images. Evaluations show that the best segmentation
results are achieved using either the flux tensor or adaptive segmentation methods. For well-defined objects, feature
correspondence and correlation registration produce the most accurate gross shift registrations. For poorly defined
objects, the correlation method produces the most accurate gross and sub-pixel shift registration.
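The block-variance idea behind adaptive variation segmentation can be sketched on an absolute frame difference; the single block size and the k-times-median threshold rule here are illustrative simplifications of the multi-scale method described above:

```python
import numpy as np

def block_variance_map(abs_diff, block):
    """Local variance of the absolute frame difference over non-overlapping
    block x block tiles; high variance flags candidate motion."""
    h, w = abs_diff.shape
    h2, w2 = h // block, w // block
    tiles = abs_diff[:h2 * block, :w2 * block].reshape(h2, block, w2, block)
    return tiles.var(axis=(1, 3))

def motion_mask(abs_diff, block=8, k=2.0):
    """Adaptive threshold: a tile is 'moving' if its variance exceeds
    k times the median tile variance (k is an illustrative choice)."""
    v = block_variance_map(abs_diff, block)
    return v > k * np.median(v)
```

Applied before registration, such a mask restricts the shift estimation to regions of genuine motion, which is what improves both accuracy and runtime.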
Imagery analysis systems utilize Automatic Target Recognition (ATR) methods in order to improve the accuracy of
human-based analysis and save time. Often, ATR methods fall short of these objectives due to reliance
on outdated prior information, while human operators possess updated information that goes unused. This paper
presents an interactive target recognition (ITR) application. The operator marks sample target pixels via an intuitive
user interface; machine-learning techniques then generate algorithms tailored to recognizing those targets in imagery. The
resulting detection map is dynamically controlled by the operator to suit their needs. The application enables target
recognition in zero-prior-information environments.
The Rapid Serial Visual Presentation (RSVP) protocol for EEG has recently been discovered as a useful tool for
high-throughput filtering of images into simple target and nontarget categories. This concept can be extended to the
detection of objects and anomalies in images and videos that are of interest to the user (observer) in an
application-specific context. For example, an image analyst looking for a moving vehicle in wide-area imagery will consider such an
object to be a target or Item Of Interest (IOI). The ordering of images in the RSVP sequence is expected to have an impact
on the detection accuracy. In this paper, we describe an algorithm for learning the RSVP ordering that employs a user
interaction step to maximize the detection accuracy while simultaneously minimizing false alarms. With user feedback,
the algorithm learns the optimal balance of image distance metrics in order to closely emulate the human's own
preference for image order. It then employs the fusion of various perceptual and bio-inspired image metrics to emulate
the human's sequencing ability for groups of image chips, which are subsequently used in RSVP trials. Such a method
can be employed in human-assisted threat assessment in which the system must scan a wide field of view and report any
detections or anomalies in the landscape. In these instances, automated classification methods might fail. We will
describe the algorithm and present preliminary results on real-world imagery.
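At its core, the fusion of image distance metrics described above reduces to a weighted combination of per-metric distances. In this sketch the weights are supplied directly; in the algorithm they would come from the user-feedback learning step:

```python
import numpy as np

def fused_distance(feats_a, feats_b, weights):
    """Combine several per-metric image distances into a single fused
    distance via a normalized weighted sum. feats_a and feats_b hold one
    scalar value per metric for each image; weights are the learned
    per-metric coefficients (illustrative here)."""
    d = np.abs(np.asarray(feats_a) - np.asarray(feats_b))  # per-metric distances
    w = np.asarray(weights) / np.sum(weights)
    return float(w @ d)
```

Ordering image chips by this fused distance is then a matter of sorting candidate successors by their distance to the current image.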
This work considers the problem of combining high-dimensional data acquired from multiple sensors for the
purpose of detection and classification. The sampled data are viewed as a geometric object living in a
high-dimensional space. Through an appropriate, distance-preserving projection, those data are reduced to a
low-dimensional space. In this reduced space it is shown that different physics of the sampled phenomena reside on
different portions of the resulting "manifold," allowing for classification. Moreover, we show that data acquired
from multiple sources collected from the same underlying physical phenomenon can be readily combined, i.e. fused, in the
low-dimensional space. The process is demonstrated on maritime imagery collected from a visible-band sensor.
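One standard distance-preserving reduction of this kind is classical PCA/MDS; the sketch below is a generic instance, not necessarily the projection the authors use:

```python
import numpy as np

def pca_embed(X, dim=2):
    """Project the rows of X onto the top `dim` principal directions; for
    Euclidean data this is the classical MDS embedding, which preserves
    pairwise distances in the least-squares sense."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dim].T
```

Data from multiple sensors observing the same phenomenon can be embedded jointly (stacked as rows), which is the sense in which fusion happens "in the low-dimensional space."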
This paper presents visual detection and recognition of flying targets (e.g. planes, missiles) based on automatically extracted
shape and object texture information, for application areas like alerting, recognition and tracking. Targets are
extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done
by comparison to shape- and texture-based query results on a previously gathered real-life object dataset. Application areas
involve passive defense scenarios, including automatic object detection and tracking with cheap commodity hardware
components (CPU, camera and GPS).
Recently developed millimeter wave radar has advantages for target identification over conventional microwave
radar, which typically uses lower frequencies. We describe the pertinent features involved in the construction
of the new millimeter wave radar: the pseudo-optical cavity source and the quasi-optical duplexer. The
wavelength, long relative to that of light, allows the radar beam to penetrate most weather, because the wavelength is
larger than the particle size of dust, drizzle, rain, and fog. Further, the mm-wave beam passes through an atmospheric
transmission window that provides a dip in attenuation. The higher frequency provides higher Doppler frequencies
than conventional radar such as X-band. We show by simulation that small characteristic
vibrations and slow turns of an aircraft become visible so that the Doppler signature improves identification.
The higher frequency also reduces beam width, which increases transmit and receive antenna gains. For the
same power the transmit beam extends to farther range and the increase in receive antenna gain increases signal
to noise ratio for improved detection and identification. The narrower beam can also reduce clutter and reject
other noise more readily. We show by simulation that the radar can be used at lower elevations over the sea than conventional radar.
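The Doppler advantage can be checked with the monostatic relation f_d = 2 v f / c; this quick sketch compares an assumed 94 GHz mm-wave carrier with 10 GHz X-band (the carrier frequencies are illustrative choices for the two bands):

```python
def doppler_shift(v_radial, freq_hz, c=3.0e8):
    """Doppler shift f_d = 2 v f / c for a monostatic radar."""
    return 2.0 * v_radial * freq_hz / c

# A 1 m/s radial vibration component seen at the two carrier frequencies.
fd_mm = doppler_shift(1.0, 94e9)  # mm-wave, ~627 Hz
fd_x = doppler_shift(1.0, 10e9)   # X-band, ~67 Hz
```

The same slow motion thus produces a nearly tenfold larger Doppler shift at mm-wave, which is why small vibrations and slow turns become visible in the signature.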
Optical (visual) tracking is an important research area in computer vision with a wide range of useful and critical
applications in defence and industry. The tracking of targets that pose a threat or potential threat to a country's
assets and resources is a critical component in defence and security. In order to complement radar sensing
applications, an optical tracker provides additional functions such as target detection, target identification and
intent detection at the visual level. A tracker for the maritime environment is an optical system that performs
the automatic tracking of an above water target. Ideally, a track of the target is required for as long as is
possible. Some examples of targets include boats, yachts, ships, jet-skis and aircraft. A number of factors
degrade the performance of such a system: changes in target appearance, target occlusions, platform vibration,
and scintillation in the atmosphere are some common examples. We present the implementation of a first-generation
system that is robust to platform vibration, target appearance changes, and short-term occlusions.
The optical tracker is developed using a particle filter and an appearance model that is updated online. The
system achieves real-time tracking through the use of non-specialized computer hardware. Promising results are
presented for a number of real-world videos captured during field trials.
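A minimal bootstrap particle-filter cycle of the kind underlying such a tracker can be sketched as follows, in 1-D, with the online appearance model abstracted into a likelihood function; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observe_likelihood, motion_std=2.0):
    """One predict-update-resample cycle of a bootstrap particle filter."""
    # Predict: diffuse particles with random motion.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: reweight by the appearance/observation likelihood.
    weights = weights * observe_likelihood(particles)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

In the full tracker the likelihood comes from the online-updated appearance model, and the state is the target's image position and scale rather than a scalar.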
Intelligent vehicles have many applications in the military, aerospace, and other industries, including land-mine
detection for the military, patient transportation in hospitals, and many other domains that often require automation
to reduce risks to the human operators. One of the important tasks of intelligent vehicles is navigation, whose goal is
to extract and determine the appropriate path that leads to a destination based on perceived environmental
information. The objective of our work is to develop a simple and effective method to detect and extract road lanes
and boundaries. We propose a solution by incorporating the planar information of road surfaces. We first detect all
possible edges in the captured images. The straight lanes and boundaries are extracted as straight lines, which
generate a vanishing point. The straight lines are described with Hough transform. A cluster analysis in Hough
space is used to detect the vanishing point on the road. Further, we search for lines passing through the vanishing
point, from 180 to 270 degrees and from 0 to negative 90 degrees. The two strongest such lines are
extracted as the road boundaries.
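Once candidate lines are available in Hough normal form, the vanishing point can be recovered as a least-squares intersection; this is a small sketch, not the paper's exact cluster-analysis procedure:

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of lines given in Hough (rho, theta) normal
    form x*cos(theta) + y*sin(theta) = rho; a cluster of lane lines meeting
    near a single point yields the vanishing point."""
    A = np.array([[np.cos(t), np.sin(t)] for _, t in lines])
    b = np.array([r for r, _ in lines])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

With more than two lines, the least-squares solution averages out localization error in the individual Hough peaks, which is the benefit of clustering in Hough space.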
Existing computer simulations of aircraft infrared signature do not account for the dispersion induced by uncertainty
in input data, such as aircraft aspect angles and meteorological conditions. As a result, they are of little
use for estimating the detection performance of IR optronic systems: in that case, the scenario encompasses many
possible situations that cannot be singly simulated. In this paper, we focus on low resolution infrared sensors
and we propose a methodological approach for predicting simulated infrared signature dispersion of poorly
known aircraft, and performing aircraft detection and classification on the resulting set of low resolution infrared
images. It is based on a Quasi-Monte Carlo survey of the code output dispersion, on a new detection test
exploiting level set estimation, and on a maximum likelihood classification exploiting Bayesian
dense deformable template model estimation.
Small and medium sized UAVs such as the German LUNA have long endurance and, combined with sophisticated
image exploitation algorithms, constitute a very cost-efficient platform for surveillance. At Fraunhofer IOSB, we have
developed the video exploitation system ABUL with the goal of meeting the demands of small and medium sized
UAVs. Several image exploitation algorithms, such as multi-resolution and super-resolution processing, image
stabilization, geocoded mosaicking, and stereo-images/3D-models, have been implemented and are used with several UAV systems.
Among these algorithms is the moving target detection with compensation of sensor motion. Moving objects
are of major interest during surveillance missions, but due to movement of the sensor on the UAV and small
object size in the images, it is a challenging task to develop reliable detection algorithms under the constraint of
real-time demands on limited hardware resources. Based on compensation of sensor motion by fast and robust
estimation of geometric transformations between images, independent motion is detected relative to the static
background. From independent motion cues, regions of interest (bounding-boxes) are generated and used as
initial object hypotheses. A novel classification module is introduced to perform an appearance-based analysis of
the hypotheses. Various texture features are extracted and evaluated automatically for achieving a good feature
selection to successfully classify vehicles and people.
In this paper, we investigate the multiple hypothesis problems of target detection and tracking
in sensor systems. In many practical situations, the observational data may be expensive to acquire
and decision speed can be degraded by an unnecessary amount of observational data. Motivated
by the importance of accuracy and efficiency of sensor systems, we propose novel adaptive statistical
inferential methods to reduce the amount of required observational data while achieving acceptable
level of accuracy. Toward this goal, we propose adaptive methods in the general framework of testing
multiple hypotheses for the detection and classification problems. The feasibility and optimality of the
methods have been established.
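One classical instance of such an adaptive test is Wald's sequential probability ratio test (SPRT), which stops as soon as the accumulated evidence crosses a threshold, so the number of observations adapts to how informative the data are. The paper's multiple-hypothesis procedures generalize this two-hypothesis case:

```python
import math

def sprt(llr_increments, alpha=0.01, beta=0.01):
    """Wald's SPRT: accumulate per-observation log-likelihood ratios and stop
    at the first threshold crossing. alpha/beta are the target false-alarm
    and miss probabilities that set the thresholds."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    s = 0.0
    for n, inc in enumerate(llr_increments, 1):
        s += inc
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", n
```

Strong evidence for either hypothesis ends the test after only a few observations, which is exactly the data-reduction behavior the abstract describes.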