This PDF file contains the front matter associated with SPIE Proceedings Volume 8137, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
This paper explores the use of an error metric based on intensity gradients in an automatic camera pose recovery method for 2D-3D image registration. The method extracts lines from the 3D image and then uses intensity gradients to register them onto the 2D image. This approach overcomes the limitations of feature matching for registering the 2D and 3D images. The goal of our algorithm is to estimate the pose parameters without any a priori knowledge (such as GPS) and in less processing time. We demonstrate the validity of our approach by experiments on perspective views using lines as features.
This study describes a hybrid processing algorithm for use during calibration/validation of near-infrared spectroscopic signals, based on spectral cross-correlation and filtering combined with partial least squares (PLS) regression analysis. In the first step of the algorithm, exceptional signals (outliers) are detected and removed based on a spectral correlation criterion we have developed. Then, signal filtering based on direct orthogonal signal correction (DOSC) is applied, before the data are used in the PLS model, to filter out background variance. After outlier screening and DOSC treatment, a PLS calibration model matrix is formed. Once this matrix has been built, it is used to predict the concentration of the unknown samples. Common statistics such as the standard error of cross-validation, mean relative error, and coefficient of determination were computed to assess the fitting ability of the algorithm. Algorithm performance was tested on several hundred blood samples prepared at different hematocrit and glucose levels using blood materials from thirteen healthy human volunteers. During measurements, these samples were subjected to variations in temperature, flow rate, and sample pathlength. Experimental results highlight the potential, applicability, and effectiveness of the proposed algorithm in terms of low error of prediction, high sensitivity and specificity, and few false negative (Type II error) samples.
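As a rough illustration of the screening-plus-regression idea (not the authors' implementation), the sketch below drops spectra that correlate poorly with the median spectrum and fits a PLS regression on the survivors; the DOSC step, the exact correlation criterion, and all data are placeholders.

```python
# Hedged sketch of the two calibration steps described above: screen spectra
# whose correlation with the median spectrum is low, then fit a PLS regression.
# The DOSC filtering stage and the paper's correlation criterion are omitted.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def screen_outliers(spectra, threshold=0.95):
    """Keep spectra that correlate strongly with the median spectrum."""
    reference = np.median(spectra, axis=0)
    corr = np.array([np.corrcoef(s, reference)[0, 1] for s in spectra])
    return corr >= threshold

rng = np.random.default_rng(0)
wavelengths = np.linspace(0, 1, 100)
spectra = np.sin(4 * wavelengths) + 0.02 * rng.normal(size=(60, 100))
glucose = spectra[:, 40] * 10 + rng.normal(scale=0.1, size=60)   # synthetic reference values

keep = screen_outliers(spectra)
model = PLSRegression(n_components=5).fit(spectra[keep], glucose[keep])
print(model.predict(spectra[:3]).ravel())
```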
False alarms generated by sensors pose a substantial problem to a variety of fusion applications. We focus
on situations where the frequency of a genuine alarm is "rare" but the false alarm rate is high. The goal is
to mitigate the false alarms while retaining power to detect true events. We propose to utilize data streams
contaminated by false alarms (generated in the field) to compute statistics on a single sensor's misclassification
rate. The nominal misclassification rate of a deployed sensor is often suspect because it is unlikely that these
rates were tuned to the specific environmental conditions in which the sensor was deployed. Recent categorical
measurement error methods will be applied to the collection of data streams to "train" the sensors and provide
point estimates along with confidence intervals for the parameters characterizing sensor performance. By pooling
a relatively small collection of random variables arising from a single sensor and using data-driven misclassification
rate estimates along with estimated confidence bands, we show how one can transform the stream of
categorical random variables into a test statistic with a limiting standard normal distribution. The procedure
shows promise for normalizing sequences of misclassified random variables coming from different sensors (with
a priori unknown population parameters) to comparable test statistics; this facilitates fusion through various
downstream processing mechanisms. We have explored some possible downstream processing mechanisms that
rely on false discovery rate (FDR) methods. The FDR methods exploit the test statistics we have computed in a
chemical sensor fusion context where reducing false alarms and maintaining substantial power is important. FDR
methods also provide a framework to fuse signals coming from non-chem/bio sensors in order to improve performance.
Simulation results illustrating these ideas are presented. Extensions, future work and open problems are
also briefly discussed.
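For readers unfamiliar with the downstream FDR step, the following minimal sketch (our own illustration, not the paper's code) converts approximately standard normal test statistics into one-sided p-values and applies the Benjamini-Hochberg procedure; the statistics and the FDR level are invented for the example.

```python
# Minimal sketch: turn per-sensor test statistics with an approximately
# standard normal null distribution into p-values, then apply the
# Benjamini-Hochberg false discovery rate procedure.
import numpy as np
from scipy.stats import norm

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean mask of discoveries controlling FDR at level q."""
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)
    thresholds = q * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresholds
    discoveries = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.where(passed)[0])   # largest rank meeting the BH condition
        discoveries[order[: k + 1]] = True
    return discoveries

# Example: z-statistics from several sensors; large positive values suggest a true event.
z_stats = np.array([0.3, 4.1, -0.7, 3.5, 1.2])
p_vals = norm.sf(z_stats)                 # one-sided p-values under the N(0, 1) null
print(benjamini_hochberg(p_vals, q=0.05))
```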
In recent years' sessions we presented devices and methods for laser-based trajectory measurement of handgun bullets, which also included the estimation of light scatter from such bodies in free flight. While our first approach to gathering the scatter qualities of arbitrary projectiles using metrological image processing resulted in a polar Fourier series of the outer contour of the projectile, which could be used to calculate the scatter behavior, the question arose of determining the contour as a parameterized ogive, which could then also be used to determine the type of projectile literally on the fly, assuming sufficiently high image resolution. The present paper is understood as a continuation of last years' papers, with a focus on the scatter characteristics of ogive-shaped bodies and the metrological identification of the ogives of projectiles.
In EO tracking, target spatial and spectral features can be used to improve performance since they help distinguish the
targets from each other when confusion occurs during normal kinematic tracking. In this paper we introduce a method
to encode a target's descriptive spatial information into a multi-dimensional signature vector, allowing us to convert the
problem of spatial template matching into a form similar to spectral signature matching. This allows us to apply multivariate algorithms commonly used with hyperspectral data to the problem of exploiting panchromatic imagery. We
show how this spatial signature formulation naturally leads to a hybrid spatial-spectral descriptor vector that supports
exploitation using commonly-used spectral algorithms.
We introduce a new descriptor called Spectral DAISY for encoding spatial information into a signature vector, based on
the concept of the DAISY dense descriptor. We demonstrate the process on real data and show how the combined
spatial/spectral feature can be used to improve target/track association over spectral or spatial features alone.
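A toy sketch of the general hybrid idea is shown below; it is not the Spectral DAISY descriptor itself, only an illustration of concatenating a unit-normalized spatial descriptor with a spectral signature so a single vector comparison (here a plain dot product) can score candidates. The array sizes and the weighting are arbitrary.

```python
# Illustrative sketch only (not the paper's Spectral DAISY): stack a
# normalized spatial descriptor and a spectral signature into one vector
# that can be scored with a spectral-style similarity measure.
import numpy as np

def hybrid_signature(spatial_desc, spectral_sig, w_spatial=0.5):
    s1 = spatial_desc / (np.linalg.norm(spatial_desc) + 1e-12)
    s2 = spectral_sig / (np.linalg.norm(spectral_sig) + 1e-12)
    return np.concatenate([w_spatial * s1, (1.0 - w_spatial) * s2])

def similarity(a, b):
    # cosine-style score; higher means a better target/track match
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

rng = np.random.default_rng(0)
target = hybrid_signature(rng.random(64), rng.random(8))
candidate = hybrid_signature(rng.random(64), rng.random(8))
print(similarity(target, candidate))
```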
Frequency modulated continuous wave (FMCW) radar has become commonplace in many roadside traffic and on-board vehicle safety systems. The accuracy of these systems depends on the underlying calibration of the sensors, which can be a time-consuming and costly process. In our approach, using an uncalibrated commercial-off-the-shelf (COTS) radar sensor, vehicles were monitored along a roadside. A moving target indication (MTI) technique is used to reduce background clutter, with thresholding and CFAR techniques used for signal detection. These detections are fed into an extended Kalman filter, and using different association approaches, the results are compared to GPS ground truth.
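A cell-averaging CFAR detector of the kind referred to above can be sketched as follows; the guard/training window sizes and threshold scale are illustrative choices, not the values used in the paper.

```python
# Hedged sketch of a cell-averaging CFAR detector over a range profile.
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=3.0):
    """Return indices of range cells declared detections."""
    n = len(power)
    hits = []
    for i in range(train + guard, n - train - guard):
        lead = power[i - guard - train : i - guard]
        lag = power[i + guard + 1 : i + guard + train + 1]
        noise = np.mean(np.concatenate([lead, lag]))  # local clutter/noise estimate
        if power[i] > scale * noise:
            hits.append(i)
    return hits

rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 200)
profile[120] += 25.0                                  # injected target return
print(ca_cfar(profile))
```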
Image segmentation decomposes a given image into segments, i.e. regions containing "similar" pixels, which aids computer vision applications such as face, medical, and fingerprint recognition as well as scene characterization. Effective segmentation requires domain knowledge or strategies for object designation, as no universal segmentation algorithm exists. In this paper, we propose a similarity-based image segmentation approach based on game theory methods. The essential idea behind our approach is that the similarity-based clustering problem can be considered a "clustering game". Within this context, the notion of a cluster turns out to be equivalent to a classical equilibrium concept from game theory, as the game equilibrium reflects both the internal and external cluster conditions. We also show that there exists a correspondence between these equilibria and the local solutions of a polynomial, linearly-constrained optimization problem, and provide an algorithm for finding the equilibria. Experiments on image segmentation problems, using a common data set of visual images, show the superiority of the proposed clustering game image segmentation (CGIS) approach in autonomy, speed, and efficiency.
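Clustering-game formulations are often approached with replicator-dynamics style iterations on a similarity matrix; the sketch below illustrates that generic idea and is not the CGIS algorithm itself. The similarity matrix is a toy example.

```python
# Generic replicator-dynamics iteration for a similarity-based clustering game:
# fixed points correspond to game equilibria whose support indicates one cluster.
import numpy as np

def replicator_cluster(similarity, iters=500, tol=1e-8):
    """similarity: nonnegative symmetric matrix with zero diagonal."""
    n = similarity.shape[0]
    x = np.full(n, 1.0 / n)                        # start from the barycenter of the simplex
    for _ in range(iters):
        payoff = similarity @ x
        new_x = x * payoff / (x @ payoff + 1e-12)  # replicator update stays on the simplex
        if np.linalg.norm(new_x - x, 1) < tol:
            x = new_x
            break
        x = new_x
    return x                                       # large components indicate one cluster

A = np.array([[0.0, 0.9, 0.8, 0.1],
              [0.9, 0.0, 0.85, 0.05],
              [0.8, 0.85, 0.0, 0.1],
              [0.1, 0.05, 0.1, 0.0]])
print(replicator_cluster(A))                       # first three items form a cluster
```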
We present a design method for sparse optimal Finite Impulse Response (FIR) filters that improve the visibility
of a desired stochastic signal corrupted with white Gaussian noise. We emphasize that the filters we seek are
of high-order but sparse, thus significantly reducing computational complexity. An optimal FIR filter for the
estimation of a desired signal corrupted with white noise can be designed by maximizing the signal-to-noise ratio
(SNR) of the filter output with the constraint that the magnitude (in 2-norm) of the FIR filter coefficients is set to unity [1, 2]. This optimization problem is in essence a maximization of the Rayleigh quotient and is thus equivalent to finding the eigenvector with the largest eigenvalue [3]. While such filters are optimal, they are rarely sparse. To ensure sparsity, one must introduce a cardinality constraint in the optimization procedure. For high-order filters such constraints are computationally burdensome due to the combinatorial search space. We relax the cardinality constraint by using the 1-norm approximation of the cardinality function. This is a relaxation heuristic similar to the recent sparse filter design work of Baran, Wei, and Oppenheim [4]. The advantage of this relaxation heuristic is that the solutions tend to be sparse and the optimization procedure reduces to a convex program, thus ensuring global optimality. In addition to our proposed optimization procedure for deriving sparse FIR filters, we show examples where sparse high-order filters perform significantly better than low-order filters, while complexity is reduced by a factor of 10.
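The non-sparse step described above (maximize the Rayleigh quotient under a unit-norm constraint) can be sketched as a dominant-eigenvector computation; the sketch below assumes white noise and a toy autocorrelation sequence, and only indicates in a comment where the 1-norm relaxation would enter.

```python
# Sketch of the non-sparse optimal FIR step: with unit-norm taps and white
# noise, maximizing output SNR is a Rayleigh-quotient problem solved by the
# dominant eigenvector of the desired signal's covariance matrix.
import numpy as np
from scipy.linalg import toeplitz, eigh

def optimal_fir(signal_autocorr, order):
    """signal_autocorr: autocorrelation sequence of the desired stochastic signal."""
    R_s = toeplitz(signal_autocorr[:order])        # covariance of the desired signal
    eigvals, eigvecs = eigh(R_s)                   # ascending eigenvalues
    h = eigvecs[:, -1]                             # dominant eigenvector = max-SNR taps
    return h / np.linalg.norm(h)

r = 0.9 ** np.arange(32)                           # toy AR(1)-like autocorrelation
h = optimal_fir(r, order=32)
print(np.round(h, 3))
# A sparse variant would add an l1 penalty on h and solve the resulting convex
# program, as the relaxation heuristic described above suggests.
```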
We have invented a new theory of exact particle flow for nonlinear filters. The flow of particles corresponding to Bayes'
rule is computed from the gradient of the solution of Poisson's equation, and it is analogous to Coulomb's law. Our
theory is a radical departure from other particle filters in several ways: (1) we compute Bayes' rule using a flow of
particles rather than as a pointwise multiplication; (2) we never resample particles; (3) we do not use a proposal density;
(4) we do not use importance sampling or any other MCMC algorithm; and (5) our filter is roughly 6 to 8 orders of
magnitude faster than standard particle filters for the same accuracy.
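For the linear-Gaussian special case, the exact-flow expressions reported in the particle-flow literature can be integrated numerically as sketched below; the Euler discretization, variable names, and toy measurement model are our own illustrative choices, not the authors' code.

```python
# Hedged illustration of particle flow for the linear-Gaussian special case:
# particles are migrated from prior to posterior by integrating dx/dlam = A x + b
# over the homotopy parameter lam in [0, 1].
import numpy as np

def exact_flow_update(particles, x_bar, P, H, R, z, n_steps=30):
    d_lam = 1.0 / n_steps
    I = np.eye(P.shape[0])
    x = particles.copy()
    lam = 0.0
    for _ in range(n_steps):
        lam += d_lam
        S = lam * H @ P @ H.T + R
        A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
        b = (I + 2 * lam * A) @ ((I + lam * A) @ P @ H.T @ np.linalg.solve(R, z) + A @ x_bar)
        x = x + d_lam * (x @ A.T + b)              # Euler step for all particles at once
    return x

rng = np.random.default_rng(1)
P = np.diag([4.0, 1.0])
prior = rng.multivariate_normal([0.0, 0.0], P, size=500)
H = np.array([[1.0, 0.0]]); R = np.array([[0.5]]); z = np.array([3.0])
post = exact_flow_update(prior, np.zeros(2), P, H, R, z)
print(post.mean(axis=0))                           # moves toward the Kalman posterior mean
```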
In many target tracking applications, estimation of target position and velocity is performed in Cartesian coordinates.
Use of Cartesian coordinates for estimation stands in contrast to the measurements, which are traditionally
the range, azimuth and elevation measurements of the spherical coordinate system. It has been shown in previous
works that the classical nonlinear transformation from spherical to Cartesian coordinates introduces a bias in
the position measurement. Various means to negate this bias have been proposed. In many active sonar and
radar applications, the sensor also provides a Doppler, or equivalently range rate, measurement. Use of Doppler
in the estimation process has also been proposed by various authors. First, the previously proposed unbiased
conversions are evaluated in dynamic situations, where the performance of the tracking filter is affected by the
correlation between the filter gains and the errors in the converted position measurements. Following this, the
"decorrelated unbiased converted measurement" approach is presented and shown to be superior to the previous
approaches. Second, an unbiased conversion is derived for Doppler measurements from a moving platform.
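The multiplicative debiasing at the heart of the unbiased conversion can be illustrated in 2-D as follows; the full converted-measurement covariance and the moving-platform Doppler conversion derived in the paper are not reproduced.

```python
# Minimal sketch (2-D, stationary sensor) of the multiplicatively unbiased
# polar-to-Cartesian conversion: the classical conversion is biased by the
# factor exp(-sigma_theta^2 / 2), which is divided out here.
import numpy as np

def unbiased_conversion(r_meas, theta_meas, sigma_theta):
    lam = np.exp(-sigma_theta**2 / 2.0)            # E[cos(bearing noise)]
    x = r_meas * np.cos(theta_meas) / lam
    y = r_meas * np.sin(theta_meas) / lam
    return x, y

# Example: target at roughly (1000 m, 30 deg) observed with 2-degree bearing noise.
sigma_theta = np.deg2rad(2.0)
print(unbiased_conversion(1000.0, np.deg2rad(30.0), sigma_theta))
```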
We show numerical results for a new nonlinear filtering algorithm that is analogous to Coulomb's law. We have
invented a new theory of exact particle flow for nonlinear filters. The flow of particles corresponding to Bayes' rule is
computed from the gradient of the solution of Poisson's equation, and it is analogous to Coulomb's law. Our theory is a
radical departure from other particle filters in several ways: (1) we compute Bayes' rule using a flow of particles rather
than as a pointwise multiplication; (2) we never resample particles; (3) we do not use a proposal density; (4) we do not
use importance sampling or any other MCMC algorithm; and (5) our filter is roughly 6 to 8 orders of magnitude faster
than standard particle filters for the same accuracy.
This paper discusses how methods used for conventional multiple hypothesis tracking (MHT) can be extended to
domain-agnostic tracking of entities from non-kinematic constraints such as those imposed by cyber attacks in a
potentially dense false alarm background. MHT is widely recognized as the premier method to avoid corrupting tracks
with spurious data in the kinematic domain but it has not been extensively applied to other problem domains. The
traditional approach is to tightly couple track maintenance (prediction, gating, filtering, probabilistic pruning, and target
confirmation) with hypothesis management (clustering, incompatibility maintenance, hypothesis formation, and N-association pruning). However, by separating the domain-specific track maintenance portion from the domain-agnostic
hypothesis management piece, we can begin to apply the wealth of knowledge gained from ground and air tracking
solutions to the cyber (and other) domains. These realizations led to the creation of Raytheon's Multiple Hypothesis
Extensible Tracking Architecture (MHETA).
In this paper, we showcase MHETA for the cyber domain, plugging in a well-established method, CUBRC's INFormation Engine for Real-time Decision making (INFERD), for the association portion of the MHT. The result is a CyberMHT. We demonstrate the power of MHETA-INFERD using simulated data. Using metrics from both the tracking and cyber domains, we show that while no tracker is perfect, by applying MHETA-INFERD, advanced non-kinematic tracks can be captured in an automated way, performance is better than with non-MHT approaches, and analyst response time to cyber threats is decreased.
In this paper, we address the problem of robust detection of dismounts from low-resolution video data sequences. We
outline a methodology based on SSCI's ultra-fast image alignment algorithm, and a combination of static and kinematic
features for dismount detection. We perform the dismount detection classification using a learning classifier algorithm.
Our results are promising and very valuable for low-resolution imagery where previous techniques for dismount
detection such as SURF and SIFT features do not perform very well.
This paper presents a method for tracking dismounts/humans in a potentially dense clutter background. The proposed
approach uses Multiple Hypothesis Tracking (MHT) for data association and Interacting Multiple Model (IMM)
filtering. The problem is made difficult by the presence of random and persistent clutter, such as produced by moving
tree branches. There may also be moving targets (such as vehicles and animals) that are not of interest to the user of the
tracking system, but that must be tracked in order to separate these targets from the targets of interest. Thus, a joint
tracking and identification method has been developed to utilize the features that are associated with dismount targets.
This method uses a Dempster-Shafer (D-S) approach to combine feature data to determine the target type (dismount
versus other). Feature matching is also included in the computation of the track score used for MHT data association.
The paper begins by giving an overview of the features that have been proposed in the literature for distinguishing
humans from other types of targets. These features include radar cross section, target dynamics, and spectral and gait
characteristics. For example, the number of secondary peaks around the main peak corresponding to the mean Doppler
shift is one feature that is sent to the tracker. A large number of secondary peaks will be an indication that the
observation is from an animal, rather than a vehicle. Also, if spectral analysis of the variation in Doppler shift due to
torso motion yields a distinct periodic pattern with a peak at about 2 Hz, this can be used to identify the target as a
human and, along with the target speed, may even be used as a target signature. The manner in which these features are
estimated during signal processing and how this data is included in the track score is described.
A test program conducted to produce data for analysis and development is described. Typical results derived from real
data, collected during this test program, are presented to show how feature data is used to enhance the tracking solution.
These results show that the proposed methods are effective in separating the tracks on dismounts from those formed on
clutter and other objects.
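As an illustration of the Dempster-Shafer combination step (not the paper's implementation), the sketch below fuses two feature-derived mass functions on the frame {dismount, other}; the mass values are invented for the example.

```python
# Hedged sketch of Dempster's rule of combination on the two-class frame
# {'D' (dismount), 'O' (other)}.
def combine_ds(m1, m2):
    """Each mass function maps frozenset subsets of {'D', 'O'} to masses summing to 1."""
    frame = [frozenset({'D'}), frozenset({'O'}), frozenset({'D', 'O'})]
    combined = {a: 0.0 for a in frame}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] += mb * mc
            else:
                conflict += mb * mc               # mass assigned to contradictory pairs
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Illustrative masses from a Doppler-spectrum feature and a gait feature.
doppler_feature = {frozenset({'D'}): 0.6, frozenset({'O'}): 0.1, frozenset({'D', 'O'}): 0.3}
gait_feature = {frozenset({'D'}): 0.5, frozenset({'O'}): 0.2, frozenset({'D', 'O'}): 0.3}
print(combine_ds(doppler_feature, gait_feature))
```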
Providing accurate state estimates of a maneuvering target is an important problem. This problem occurs when tracking
maneuvering boats or even people wandering around. In our earlier paper, a specialized grid-based filter (GBF) was
introduced as an effective method to produce accurate state estimates of a target moving in two dimensions, while
requiring only a two-dimensional grid. The paper showed that this GBF produces accurate state estimates because the
filter can capture the kinematic constraints of the target directly, and thus account for them in the estimation process. In
this paper, the relative performance of a GBF to a Kalman filter is investigated. The state estimates (position and
velocity) from a GBF are compared to those from a Kalman filter, against a maneuvering target. This study will employ
the comparison paradigm presented by Kirubarajan and Bar-Shalom. The paradigm incrementally increases the
maneuverability of a target to determine how the two different track filters compare as the target becomes more
maneuverable. The intent of this study is to determine how maneuverable the target must be to gain the benefit from a
GBF over a Kalman filter. The paper will discuss the target motion model, the GBF implementation, and the Kalman
filter used for the study. Our results show that the GBF outperforms a Kalman filter, especially as the target becomes
more maneuverable. A disadvantage of the GBF is that it is more computationally demanding than a Kalman filter. The paper will discuss the grid and sample sizing needed to obtain quality estimates from a GBF. It will be shown that the sizes are much smaller than what may be expected and are quite stable over a large range of sizes. Furthermore, this GBF can exploit parallelization of the computations, making the processing time significantly shorter.
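A one-dimensional toy version of a grid-based Bayes recursion is sketched below to make the predict/update structure concrete; the paper's filter is two-dimensional and encodes the target's kinematic constraints directly, which this toy omits.

```python
# Generic 1-D grid-based Bayes filter sketch: convolve the belief with a motion
# kernel (predict), then multiply by the measurement likelihood (update).
import numpy as np

def gbf_step(grid, prior, motion_sigma, z, meas_sigma):
    offsets = np.arange(-3, 4)
    kernel = np.exp(-0.5 * (offsets / motion_sigma) ** 2)
    kernel /= kernel.sum()
    predicted = np.convolve(prior, kernel, mode='same')      # predict
    likelihood = np.exp(-0.5 * ((grid - z) / meas_sigma) ** 2)
    posterior = predicted * likelihood                        # Bayes' rule
    return posterior / posterior.sum()

grid = np.linspace(0.0, 100.0, 201)
belief = np.full(grid.size, 1.0 / grid.size)                  # uniform initial belief
for z in [40.0, 42.5, 45.0]:                                  # measurements of a slowly moving target
    belief = gbf_step(grid, belief, motion_sigma=2.0, z=z, meas_sigma=3.0)
print(grid[np.argmax(belief)])
```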
While standard Kalman-based filters, Gaussian assumptions, and covariance-weighted metrics work remarkably
well in data-rich tracking environments such as air and ground, their use in the data-sparse environment of space
surveillance is more limited. In order to properly characterize non-Gaussian density functions arising in the
problem of long term propagation of state uncertainties in the two-body problem, a framework for a Gaussian
sum filter is described which achieves uncertainty (covariance) consistency and an accurate approximation to
the Fokker-Planck equation up to a prescribed accuracy. The filter is made efficient and practical by (i) using
coordinate systems adapted to the physics (i.e., orbital elements), (ii) only requiring a Gaussian sum to be
defined along one of the six state space dimensions, and (iii) the ability to initially select the component means,
covariances, and weights by way of a lookup table generated by solving an offline nonlinear optimization problem.
The efficacy of the Gaussian sum filter and the improvements over the traditional unscented Kalman filter are
demonstrated within the problems of data association and maneuver detection.
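The measurement update of a generic Gaussian sum filter can be sketched as a bank of Kalman updates with likelihood-weighted mixture weights, as below; the orbital-element parameterization and the offline component-placement table described above are not reproduced.

```python
# Generic Gaussian sum measurement update: each component gets a Kalman update
# and its weight is rescaled by its measurement likelihood.
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_sum_update(weights, means, covs, H, R, z):
    new_w, new_m, new_P = [], [], []
    for w, m, P in zip(weights, means, covs):
        S = H @ P @ H.T + R                        # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        new_m.append(m + K @ (z - H @ m))
        new_P.append(P - K @ H @ P)
        new_w.append(w * multivariate_normal.pdf(z, mean=H @ m, cov=S))
    new_w = np.array(new_w)
    return new_w / new_w.sum(), new_m, new_P       # renormalized mixture weights

w, m, P = gaussian_sum_update(
    weights=[0.5, 0.5],
    means=[np.array([0.0, 0.0]), np.array([5.0, 0.0])],
    covs=[np.eye(2), np.eye(2)],
    H=np.array([[1.0, 0.0]]), R=np.array([[0.2]]), z=np.array([4.8]))
print(np.round(w, 3))                              # weight shifts to the consistent component
```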
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target
Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood
of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions
of the PHD filter to the multiple model (MM) framework have been published and were implemented
either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple
model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps
of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case
of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Group moving targets are a number of targets that move independently in a physical space while keeping their relative order or pattern invariant. State-of-the-art multi-target tracking (MTT) data association methods (GNN, JPDA, MHT) easily fail on group target tracking problems, since the track-to-observation ambiguity cannot be resolved using only individual track-to-observation information. A hypergraph G is represented by G = {V, E}, where V is a set of elements called nodes or vertices and E is a set of non-empty subsets, each containing a d-tuple of vertices, called hyperedges. It can be used as a new mathematical tool to represent a group of moving targets if we let each target be a vertex and each d-target subset be a hyperedge. Under this representation, this paper reformulates the traditional MTT data association problem as a hypergraph matching problem between the hypergraphs formed from tracks and observations, and shows that the traditional approach (which uses only vertex-to-vertex information) is a special case of the proposed framework. Since the hyperedge-to-hyperedge information is used in building the assignment matrix in addition to the vertex-to-vertex information, the hypergraph matching based algorithms give better performance than the traditional methods on group target tracking problems. We demonstrate this claim with simulations as well as video-based geotracking examples.
Data association is the crucial part of any multitarget tracking algorithm in a scenario with multiple closely
spaced targets, low probability of detection and high false alarm rate. Multiframe assignment, which solves the
data association problem as a constrained optimization, is one of the widely accepted methods to handle the
measurement origin uncertainty. If the targets do not maneuver, then multiframe assignment with one or two
frames will be enough to find the correct data association. However, more frames must be considered in the
data association for maneuvering targets. Also, a target maneuver might be hard to detect when the maneuvering index, which is a function of the sampling time, is small. In this paper, we propose an improved multiframe data association with better cost calculation using backward multiple model recursion, which increases the maneuvering index. The effectiveness of the proposed algorithm is demonstrated with simulated data.
We show that the flow of particles corresponding to Bayes' rule has a number of striking similarities with the big bang,
including cosmic inflation and cosmic acceleration. We derive a PDE for this flow using a log-homotopy from the prior probability density to the posterior probability density. We solve this PDE using the gradient of the solution to Poisson's equation, which is computed using an exact Green's function and the standard Monte Carlo approximation of integrals. The resulting flow is analogous to Coulomb's law in electromagnetics. We have used no physics per se to derive this flow, but rather we have only used Bayes' rule, the definition of normalized probability, and a log-homotopy parameter that could be interpreted as time. The details of this big bang resemble very recent theories much
more closely than the so-called new inflation models, which postulate enormous inflation immediately after the big bang.
Target tracking in high clutter or low signal-to-noise environments presents many challenges to tracking systems.
Joint Maximum Likelihood estimator combined with Probabilistic Data Association (JML-PDA) is a well-known
parameter estimation solution for the initialization of tracks of very low observable and low signal-to-noise-ratio
targets in higher clutter environments. On the other hand, the Joint Probabilistic Data Association (JPDA)
algorithm, which is commonly used for track maintenance, lacks automatic track initialization capability. This
paper presents an algorithm to automatically initialize and maintain tracks using an integrated JPDA and
JML-PDA approach that seamlessly shares information on existing tracks between the JML-PDA (used for
initialization) and JPDA (used for maintenance) components. The motivation is to share information between
the maintenance and initialization stages of the tracker, that are always on-going, so as to enable the tracking of
an unknown number of targets using the JPDA approach in heavy clutter. The effectiveness of the new algorithm
is demonstrated on a heavy clutter scenario, and its performance is tested on neighbouring targets with association ambiguity using angle-only measurements.
We show the results of numerical experiments for tracking ballistic missiles using only angle measurements. We
compare the performance of an extended Kalman filter with a new nonlinear filter using particle flow to compute Bayes'
rule. For certain difficult geometries, the particle flow filter is an order of magnitude more accurate than the EKF.
Angle-only tracking is of interest for several different sensors; for example, passive optics and radars in which range and
Doppler data are spoiled by jamming.
We define the notion of an "identity variance" for expressing the level of uncertainty between target identities
in a PDF representing the states of multiple targets. This, coupled with an OSPA covariance introduced in
past work, can form a basis for evaluating both the accuracy of the state estimates as well as the confidence
in the identities of the states. A potential application of the identity variance is as a criterion for choosing
between waveforms optimized for producing accurate position estimates versus good classification information
on a multifunction radar.
An adaptive and selective multi-target tracker combines various methods for object detection and tracking such as
feature-based methods and trajectory-based tracking, thus providing meaningful results with variable shape, brightness
and size of the target images. In the case of a highly dynamic scene, dynamically varying parameter sets must be used for
the detection of object images and object tracking. This requires an automatic generation of parameter sets by adaptive
adjustment. In this paper, the selected process chains for the automatic and seamless tracking of multiple object images
and for the interpretation of target objects in image sequences are presented.
Electronically scanned array radars as well as mechanically steered rotating antennas return measurements with different time stamps during the same scan while sweeping from one region to another. Data association algorithms process the measurements at the end of the scan in order to satisfy the common one-measurement-per-track assumption. Data processing at the end of a full scan results in delayed target state updates. This issue becomes more apparent while tracking fast moving targets with low scan rate sensors. In this paper, we present a new dynamic sector processing algorithm using 2D assignment for continuously scanning radars. A
complete scan can be divided into sectors, which could be as small as a single detection, depending on the
scanning rate and sparsity of targets. Data association followed by filtering and target state update is done
dynamically while sweeping from one end to another. Along with the benefit of immediate track updates,
continuous tracking results in challenges such as multiple targets spanning multiple sectors and targets crossing
consecutive sectors. Also, associations performed in the current sector may require changes in association done
in previous sectors. Such difficulties are resolved by the proposed 2D assignment algorithm that implements
an incremental Hungarian assignment technique. The algorithm offers flexibility with respect to assignment
variables for fusing measurements received in consecutive sectors. Furthermore, the proposed technique can
be extended to multiframe assignment for jointly processing data from multiple scanning radars. Experimental
results based on rotating radars are presented.
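A stripped-down version of the per-sector assignment step might look like the following, using a Hungarian-type solver on a gated cost matrix; the incremental revisiting of associations made in earlier sectors, which is the paper's contribution, is not shown.

```python
# Simplified sketch of one sector's 2-D assignment: gate detections against
# predicted track positions and solve the resulting cost matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_sector(track_preds, detections, gate=50.0):
    """Return (track_idx, det_idx) pairs for one sector of a rotating radar scan."""
    cost = np.linalg.norm(track_preds[:, None, :] - detections[None, :, :], axis=2)
    cost[cost > gate] = 1e6                        # effectively forbid out-of-gate pairings
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e6]

tracks = np.array([[100.0, 20.0], [300.0, -40.0]])
dets = np.array([[102.0, 23.0], [500.0, 10.0], [298.0, -35.0]])
print(assign_sector(tracks, dets))                 # track 0 -> det 0, track 1 -> det 2
```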
The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could
be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod can be repositioned in exactly the same
place in the future. A benchmark in computer terms is the result of running a computer program, or a set of programs, in
order to assess the relative performance of an object by running a number of standard tests and trials against it. This
paper will discuss the history of simulation benchmarks that are being used by multiple branches of the military and
agencies of the US government. These benchmarks range from missile defense applications to chemical biological
situations. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with
variability and the range of possible inputs. We will also describe problems that can be solved by a benchmark.
The state of the art of tracking has matured and consequently, the priorities for improved performance and expanded or
new processing capabilities have changed. Future directions in algorithm development in tracking and related data
processing are not easy to predict with accuracy. The future priorities of development tasks predicted in this
presentation are subjective; that is, simply the author's view. While there will continue to be algorithm development to
improve many aspects of tracking, the emphasis is expected to change in favor of expanded or new capabilities. This
paper concentrates on the interactions between the fusion tracker and the other fusion functions. Fusion is expected to
have higher priority for algorithm development than tracking with single sensor data. The interactions between the
fusion tracker and the other fusion functions are expected to be of special interest to achieve advanced fusion
performance. To facilitate this discussion, the categories of the state of the art of tracking are expanded beyond the
previous paper.
Many aspects of single sensor, multiple target tracking have matured during the last 20 years but room for improvement
remains. In contrast, fusion of data from multiple distributed sensors is far less mature and interest is expected to
continue to increase. Many fusion systems pose challenges that are not of much concern in tracking with data from a
single sensor, and algorithm development for those aspects of fusion will continue to be needed. The capability of the many functions and users of the output of trackers needs to be improved and expanded. Consequently, an increase is expected in the need to improve the interactions between the fusion tracker and the other fusion functions. Some of the important interactions between the fusion tracker and the other fusion functions of a high performance multiple sensor fusion system are discussed, and a dialogue on this topic is encouraged.
In this work, we applied several data compression techniques to simulated data and the Turbofan engine degradation
simulation data set from NASA, with the goal of comparing their performance when coupled with the
Support Vector Machine (SVM) classifier and the SVM regression (SVR) predictor. We consistently attained
correct rates in the neighborhood of 90% for the simulated data set, with Principal Component Analysis (PCA), Sparse Reconstruction by Separable Approximation (SpaRSA), and Partial Least Squares (PLS) having a slight edge over the other data reduction methods for data classification. We achieved a 22% error rate with SRM for Turbofan data set 1 and a 40% error rate with PCA for Turbofan data set 2. Throughout the tests we have
performed, PCA proved to be the best data reduction method.
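A generic PCA-plus-SVM pipeline of the kind evaluated above can be sketched with scikit-learn as follows; the data are synthetic stand-ins, and the Turbofan preprocessing, SpaRSA, and PLS variants are not reproduced.

```python
# Generic sketch of a data-reduction + classification pipeline (PCA -> SVM).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 50))                    # stand-in for high-dimensional sensor features
y = (X[:, :5].sum(axis=1) > 0).astype(int)        # synthetic two-class label

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel='rbf'))
print(cross_val_score(clf, X, y, cv=5).mean())    # cross-validated correct rate
```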
Mobile sensor networks (MSNs) have wide applications such as military target detection and tracking, detection of toxic chemicals in contaminated environments, and search and rescue after disasters. In many applications, a core problem is to conduct cooperative scalar field mapping (or searching) over a large area of interest. Centralized solutions to scalar field mapping may not be suitable for large mobile sensor networks due to the single-point-of-failure problem and limited scalability. In this paper, autonomous mobile sensor networks are deployed to map a scalar field in a cooperative and distributed fashion. We develop a cooperative sensor fusion algorithm based on distributed consensus filters. In this algorithm, each agent receives measurements from its
neighboring agents within its communication range, and iteratively updates the estimate of the unknown scalar
field and an associated confidence map. A motion planning algorithm is used to obtain a path for complete
coverage of the field of interest. A distributed flocking control algorithm is adopted to drive the center of the
mobile sensor network to track the desired paths. Computer simulations are conducted to validate the proposed
algorithms. We evaluate the mapping performance by comparing it with a centralized mapping algorithm. Such
a cooperative sensing approach can be used in many military surveillance applications where targets may be
small and elusive.
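The consensus ingredient can be illustrated with a toy averaging iteration in which each agent repeatedly moves toward its neighbors' estimates; the paper's full mapping filter also maintains a confidence map and motion planning, which this sketch omits.

```python
# Toy distributed-consensus sketch: agents iteratively average with their
# communication neighbors, so local estimates converge without a central node.
import numpy as np

def consensus_step(estimates, adjacency, eps=0.2):
    """estimates: per-agent values; adjacency: symmetric 0/1 communication graph."""
    updates = np.zeros_like(estimates)
    for i in range(len(estimates)):
        neighbors = np.flatnonzero(adjacency[i])
        updates[i] = eps * np.sum(estimates[neighbors] - estimates[i])
    return estimates + updates

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])                       # a simple chain of four agents
x = np.array([1.0, 3.0, 5.0, 7.0])                 # noisy local samples of the same field value
for _ in range(100):
    x = consensus_step(x, A)
print(x)                                           # all agents approach the network average (4.0)
```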
There has been interest in overhead tracking of automobiles on our roadways using optical sensors. Tracking of multiple
vehicles can be accomplished with a single band high-resolution sensor as long as the vehicles are continuously in view.
However, in many cases the vehicles pass through or behind blackouts, such as through tunnels or behind tall buildings.
In these cases, the vehicles of interest must be reacquired and recognized from the collection of vehicles present after the
blackout. The approach considered here is to add an additional sensor to assist a single band high-resolution tracking
sensor, where the adjunct sensor measures the vehicle signatures for recognition and reacquisition. The subject of this
paper is the recognition of targets of interest amongst the observed objects and the reacquisition after a blackout. A
Generalized Likelihood Ratio Test (GLRT) algorithm is compared with the Spectral Angle Mapper (SAM) and Euclidean distance algorithms. All three algorithms were evaluated on a database of signatures created by measuring samples from old automobile gas doors. The GLRT was the most successful in recognizing the target after a blackout and could achieve a 95% correct reacquisition rate. The results show the feasibility of using a hyperspectral sensor to assist a multi-target tracking sensor by providing target recognition for reacquisition.
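The Spectral Angle Mapper baseline mentioned above amounts to comparing the angle between signatures, which makes it insensitive to overall illumination scaling; a minimal sketch with an invented two-entry signature library follows.

```python
# Sketch of Spectral Angle Mapper matching against a small signature library.
import numpy as np

def spectral_angle(sig_a, sig_b):
    cos = np.dot(sig_a, sig_b) / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b))
    return np.arccos(np.clip(cos, -1.0, 1.0))      # small angle = good match

library = {"vehicle_1": np.array([0.12, 0.30, 0.45, 0.50]),
           "vehicle_2": np.array([0.40, 0.35, 0.20, 0.10])}
observed = 0.8 * np.array([0.13, 0.29, 0.46, 0.52])   # same shape, darker illumination
best = min(library, key=lambda k: spectral_angle(library[k], observed))
print(best)                                            # reacquires vehicle_1
```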
Experimental remote sensing data from the 8 to 12 μm wavelength NASA Thermal Infrared Multispectral Scanner
(TIMS) have been a valuable resource for multispectral algorithm proof-of-concept, a prime example being a Constant
False Alarm Rate (CFAR) spectral small target detector founded on maximum likelihood theory; CFAR tests on low
signal-to-clutter ratio rural Australian TIMS imagery yielded a detection rate of 5 out of 7 (71%) for small extended
targets, e.g. buildings ~10 meters in extent, at a 10⁻⁶ false alarm rate. Separately, techniques such as Independent
Component Analysis (ICA) have since shown good promise for small target detection as well as terrain feature
extraction. In this study, we first provide higher-confidence CFAR performance estimates by incorporating a larger set
of imagery including ASTER satellite multi-band imagery and ground truth. Secondly, alongside CFAR we perform
ICA, which effectively separates many non-natural features from the highly cluttered natural terrain background; in
particular, our TIMS results show that a surprisingly small subset of ICA components contains the majority of non-natural "signal", such as paved roads, amid the clutter of soil, rock, and vegetation.
In this paper, a compact, low-cost and open mobile sensor platform consisting of multiple ASCCbots for networked
surveillance is presented. This platform is based on commercial off-the-shelf components and open source
software. Compared to existing research platforms, our platform is reliable, reconfigurable and easy to duplicate.
We develop novel algorithms for object detection on the ASCCbot. Due to the distributed computing nature of the platform, we also conduct collaborative target localization, which is realized by fusing data from an omnidirectional camera, a laser range finder, and sensor pose estimation. The performance of the mobile surveillance
system is evaluated through experiments. The results from the experiments prove that the proposed platform is
a promising tool for networked surveillance research and practice.
In this paper, we study the performance of the multipath-assisted multitarget tracking using multiframe assignment
for initiating and tracking multiple targets by employing one or more transmitters and receivers. The basis
of the technique is to use the posterior Cramer-Rao lower bound (PCRLB) to quantify the optimal achievable
accuracy of target state estimation. When resolved multipath signals are present at the sensors, if proper measures
are not taken, multiple tracks will be formed for a single target. In typical radar systems, these spurious
tracks are removed from tracking, and therefore the information carried in such target return tracks is wasted. In a multipath environment, in every scan the number of sensor measurements from a target is equal to the number of resolved signals received via different propagation modes. The data association becomes more complex, as this is contrary to the standard data association problem, in which the total number of sensor measurements from a target is at most one. This leads to the challenging problem of fusing the direct and multipath measurements from the same target. We show in our evaluations that incorporating multipath information improves
the performance of the algorithm significantly in terms of estimation error. Simulation results are presented to
show the effectiveness of the proposed method.
There are stringent false alarm probability demands on laser radar systems, although their operation is often accompanied by a complex target environment in which the signal-to-noise ratio is low. A new signal processing technique for laser radar systems is suggested. The technique provides detection of a backscattered signal from a target during an interval between receiver noise bursts. A pulse shot is matched with the trailing edge of a noise burst, and the signal presence decision is made according to the leading edge of the next burst. There is a contradiction between the impulse frequency and the false alarm probability demands, which is why double-threshold processing is offered. The lower level induces outpulsing while the higher one determines target detection performance. Since the duration of such time intervals is random, statistical analysis was performed via a numerical model. The technique is aimed at providing low false alarm probability and energy efficiency of the system at the same time.