This PDF file contains the front matter associated with SPIE
Proceedings Volume 6567, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Multisensor Fusion, Multitarget Tracking, and Resource Management I
The neural extended Kalman filter is an adaptive estimation technique that has been shown to learn on-line the
maneuver model of the trajectory of a target. This improved motion model can be used to better predict the location of
a target at a given point in time, especially when the target, such as a mortar shell, has limited maneuvering capabilities.
In this paper, the neural extended Kalman filter is used to predict the impact point and impact time of a ballistic-like
projectile from measurement reports provided by multiple sensor systems, when the drag on the shell is not accurately
modeled in the motion model. In previous work, the neural extended Kalman filter was shown to work well with a single sensor
with a uniform sample rate. Multiple sensors introduce two major differences into the problem. The first
difference is that of the multiple aspect angles and uncertainty that are used in the model adaptation. The second
difference is that of a non-uniform update rate of the measurements to the tracking system. While most tracking
systems can easily handle this difference, the adaptation of the neural network training parameters can be deleteriously
affected by these variations. The first of these two differences, a potential concern for the neural extended Kalman
filter's implementation, is investigated in this effort, where the performance of this adaptive and predictive scheme with
multiple sensors in a three-dimensional space is shown to provide a quality impact estimate.
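The prediction task above can be illustrated with a minimal point-mass propagation: integrate a ballistic model with an assumed quadratic-drag term until ground impact. This is only a sketch of the motion model being adapted, not the neural extended Kalman filter itself; the drag coefficient, gravity constant, and integration step are illustrative assumptions.

```python
import math

def predict_impact(x, y, vx, vy, drag=0.0, g=9.81, dt=0.01):
    """Propagate a 2-D point-mass projectile until ground impact.

    drag is a simple quadratic-drag coefficient (absorbing mass and air
    density); drag=0.0 recovers the vacuum ballistic model.  Returns
    (impact_x, impact_time) from forward-Euler integration.
    """
    t = 0.0
    while y > 0.0 or t == 0.0:
        v = math.hypot(vx, vy)          # current speed
        ax = -drag * v * vx             # quadratic drag opposes velocity
        ay = -g - drag * v * vy
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
        t += dt
    return x, t
```

Comparing drag=0.0 with a nonzero coefficient shows how an unmodeled drag term shifts the impact point, which is exactly the mismatch the adaptive filter must learn on-line.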
Highly accurate predictions of tracking performance usually require high fidelity Monte Carlo simulations that entail
significant implementation time, run time, and complexity. In this paper we consider the use of Markov Chains as a
simpler alternative that models critical aspects of the tracking process and provides reasonable estimates of tracking
performance, while maintaining much lower cost and complexity. We describe a general procedure for Markov-chain-based
performance prediction, and illustrate the use of this procedure in the context of an airborne system that employs
a steerable EO/IR sensor to track single targets or multiple targets in non-overlapping fields of view. We discuss the
effects of key model parameters, including measurement sampling rates, track termination, target occlusions, and
missed detections. We also present plots of performance as a function of occlusion probability and target recognition
probability that exemplify the use of the model.
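A minimal version of such a Markov-chain performance model can be sketched as follows; the three track states and the transition probabilities are illustrative assumptions, not values from the paper:

```python
def track_state_dist(P, p0, n):
    """Propagate a discrete state distribution p0 through n steps of
    the Markov chain with row-stochastic transition matrix P."""
    p = list(p0)
    for _ in range(n):
        p = [sum(p[i] * P[i][j] for i in range(len(p)))
             for j in range(len(p))]
    return p

# States: 0 = in-track, 1 = coasting (occlusion / missed detection),
# 2 = dropped (absorbing).  Probabilities here are purely illustrative.
P = [[0.90, 0.09, 0.01],   # detection keeps the track; a miss starts a coast
     [0.60, 0.30, 0.10],   # a coasting track is reacquired or terminated
     [0.00, 0.00, 1.00]]   # a dropped track stays dropped
```

Propagating an initial in-track distribution through the chain gives the probability of still holding the track after n measurement opportunities, at a tiny fraction of the cost of a Monte Carlo simulation.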
In this paper, two new solutions to the localization of an emitter using time difference of arrival (TDOA)
measurements are proposed. The maximum likelihood estimation for this problem will result in a nonlinear and
nonconvex optimization problem, which is very difficult to solve. The solutions presented in this paper consider
an alternate formulation, which is based on the sensor-emitter geometry. This formulation results in a quadratic
(though nonconvex) optimization problem.
The first solution relaxes the original optimization problem into a semidefinite program (SDP). Using the
solution to this relaxed SDP, the emitter is localized using a randomization technique. The second solution forms
the Lagrangian dual of the original problem, and it is shown that the dual problem is an SDP. From the solution
to the dual problem, a solution to the original problem is found. It should be noted that the solution obtained
using the optimal dual variable is optimal for the original problem only if strong duality holds; this has not
been proven analytically in this paper. Extensive simulations suggest that strong duality may
hold for this problem.
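The underlying least-squares criterion can be illustrated without any SDP machinery: the sketch below simply grid-searches the squared range-difference residual that the relaxations above are designed to minimize. It is a crude stand-in, not the paper's method; the sensor layout, propagation speed, and grid resolution are all assumptions.

```python
import math

def tdoa_localize(sensors, tdoas, c=343.0, span=500, step=5):
    """Grid-search least-squares emitter localization from TDOAs,
    where tdoas[i] is the arrival time at sensor i+1 minus sensor 0.
    Minimizes the squared range-difference residual by exhaustive
    search over a coarse grid (c is the propagation speed)."""
    def cost(x, y):
        r0 = math.hypot(x - sensors[0][0], y - sensors[0][1])
        return sum((math.hypot(x - sx, y - sy) - r0 - c * dt) ** 2
                   for (sx, sy), dt in zip(sensors[1:], tdoas))
    grid = range(-span, span + 1, step)
    return min((cost(x, y), (x, y)) for x in grid for y in grid)[1]
```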
Considerable interest has arisen in recent years in utilizing inexpensive acoustic sensors on the battlefield to identify and classify targets of interest. Acoustic sensor arrays offer many advantages: they are low cost, have relatively low power consumption, require no line of sight, and provide capabilities for target detection, bearing estimation, target tracking, classification, and identification. Furthermore, they can cue other sensors, and the responses of multiple acoustic sensors can be combined and triangulated to localize an energy-source target in the field. In practice, however, many environmental noise factors affect their ability to detect targets of interest reliably and accurately. In this paper, we propose a novel approach for the detection, classification, and identification of moving target vehicles. The approach is based on Singular Value Decomposition (SVD) coupled with a Particle Filtering (PF) technique. Using SVD, dominant features of vehicle acoustic signatures are extracted efficiently. These feature vectors are then employed for robust identification and classification of target vehicles through a particle filtering scheme. The performance of the proposed approach was evaluated on a set of experimental acoustic data from multiple vehicle test runs. The approach is demonstrated to yield very promising results when an array of acoustic sensors is used to detect, identify, and classify target vehicles in the field.
Integration of electro-optical and radar generated tracks is critical for identifying accurate time and space position information in target tracking and providing a single integrated picture (SIP) of the dynamic situation. This paper proposes a new, robust, real-time algorithm to (i) correctly correlate data from several sensors and the existing system track, (ii) improve target tracking accuracy, and (iii) identify when the data represent new tracks. The proposed algorithm uses metric data and linear and area features extracted from optical and radar images. The major novelty of the algorithm is its use of robust, affine-invariant structural relations built on the features for accurate correlation. These features are combined with intelligent adaptation of a Kalman filter using neural networks. A proposed measure of confidence in the correlation decision is based on both structural and metric similarities of tracks, to estimate both bias and random errors. The similarities are based on concepts from abstract algebraic systems, generalized Gauss-Markov stochastic processes, and Kalman filters for n-dimensional time series that explicitly model measurement dependence on the k previous measurements, M(t|t-1,t-2,...,t-k). These techniques are naturally combined with a hierarchical matching approach to increase the overall track accuracy. The proposed approach and algorithm for track correlation/matching is suitable for both centralized and distributed computing architectures.
This paper addresses the issue of tracking partially occluded targets in videos recorded by moving cameras, either
handheld or airborne. We propose a fast geometric-constraint global motion algorithm that dramatically reduces the
computational overhead and the effect caused by outliers from moving targets. A recursive least-squares
filter with forgetting factor is utilized to filter out disturbances and to provide a better estimation of
the target's position in the current frame as well as the prediction of the position and velocity for the next
frame. The filter uses the affine model and the primary search result to construct a kinetic model. After that,
a compact search region is formed based on the prediction to reduce mismatch and improve computation speed.
Adaptive template matching is applied to improve the performance further. With these important steps, a
tracking algorithm is developed and tested on real video sequences.
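A recursive least-squares update with a forgetting factor, of the kind described above, can be sketched as follows. The two-parameter position/velocity regression, the forgetting factor, and the initial covariance are illustrative assumptions, not the paper's implementation.

```python
def rls_update(theta, P, phi, y, lam=0.95):
    """One recursive-least-squares step with forgetting factor lam.
    theta: [position, velocity] estimate; P: 2x2 inverse-information
    matrix; phi: regressor [1, t]; y: measured position at time t."""
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    k = [Pphi[0] / denom, Pphi[1] / denom]              # gain vector
    err = y - (theta[0] * phi[0] + theta[1] * phi[1])   # innovation
    theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
    P = [[(P[0][0] - k[0] * Pphi[0]) / lam, (P[0][1] - k[0] * Pphi[1]) / lam],
         [(P[1][0] - k[1] * Pphi[0]) / lam, (P[1][1] - k[1] * Pphi[1]) / lam]]
    return theta, P
```

The forgetting factor lam discounts old measurements, so disturbances fade while the predicted next-frame position is simply theta[0] + theta[1] * (t + 1).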
Multisensor Fusion, Multitarget Tracking, and Resource Management II
A fuzzy logic resource manager that enables a collection of unmanned aerial vehicles (UAVs) to automatically cooperate
to make meteorological measurements will be discussed. Once in flight no human intervention is required. Planning
and real-time control algorithms determine the optimal trajectory and points each UAV will sample, while taking into
account the UAVs' risk, risk tolerance, reliability, mission priority, fuel limitations, mission cost, and related
uncertainties. The control algorithm permits newly obtained information about weather and other events to be
introduced to allow the UAVs to be more effective. The approach is illustrated by a discussion of the fuzzy decision tree
for UAV path assignment and related simulation. The different fuzzy membership functions on the tree are described in
mathematical detail. The different methods by which this tree is obtained are summarized including a method based on
using a genetic program as a data mining function. A second fuzzy decision tree that allows the UAVs to automatically
collaborate without human intervention is discussed. This tree permits three different types of collaborative behavior
between the UAVs. Simulations illustrating how the tree allows the different types of collaboration to be automated are
provided. Simulations also show the ability of the control algorithm to allow UAVs to effectively cooperate to increase
the UAV team's likelihood of success.
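A triangular membership function, one common building block for fuzzy decision trees of this kind, can be sketched as follows. The specific "low risk" and "adequate fuel" shapes and the min-as-AND rule are illustrative assumptions, not the paper's actual tree.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def path_priority(risk, fuel):
    """Toy two-input fuzzy rule: a path is favored when risk is 'low'
    AND fuel margin is 'adequate' (min used as the fuzzy AND)."""
    low_risk = tri(risk, -0.5, 0.0, 0.5)
    adequate_fuel = tri(fuel, 0.2, 1.0, 1.8)
    return min(low_risk, adequate_fuel)
```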
This paper describes an integrated approach to sensor fusion and resource management applicable to sensor networks.
The sensor fusion and tracking algorithm is based on the theory of random sets. Tracking is herein considered to be the
estimation of parameters in a state space such that for a given target certain components, e.g., position and velocity, are
time varying and other components, e.g., identifying features, are stationary. The fusion algorithm provides at each
time step the posterior probability density function, known as the global density, on the state space, and the control
algorithm identifies the set of sensors that should be used at the next time step in order to minimize, subject to
constraints, an approximation of the expected entropy of the global density. The random set approach to target tracking
models association ambiguity by statistically weighting all possible hypotheses and associations. Computational
complexity is managed by approximating the posterior global density using a Gaussian mixture density and using an
approach based on the Kullback-Leibler metric to limit the number of components in the Gaussian mixture
representation. A closed form approximation of the expected entropy of the global density, expressed as a Gaussian
mixture density, at the next time step for a given set of proposed measurements is developed. Optimal sensor selection
involves a search over subsets of sensors, and the computational complexity of this search is managed by employing the
Möbius transformation. Field and simulated data from a sensor network comprised of multiple range radars and
acoustic arrays that measure angle of arrival are used to demonstrate the approach to sensor fusion and resource
management.
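Two ingredients of the mixture-reduction step, the closed-form Kullback-Leibler divergence between Gaussian components and a moment-preserving merge, can be sketched in one dimension. The univariate setting is a simplification of the paper's multivariate global density.

```python
import math

def kl_gauss(m0, s0, m1, s1):
    """Closed-form KL divergence D(N(m0, s0^2) || N(m1, s1^2))."""
    return math.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

def merge_gauss(w0, m0, s0, w1, m1, s1):
    """Moment-preserving merge of two weighted Gaussian components:
    the merged component matches the pair's weight, mean, and variance."""
    w = w0 + w1
    m = (w0 * m0 + w1 * m1) / w
    var = (w0 * (s0**2 + (m0 - m)**2) + w1 * (s1**2 + (m1 - m)**2)) / w
    return w, m, math.sqrt(var)
```

A common reduction heuristic is to repeatedly merge the pair of components whose (symmetrized) divergence is smallest until the mixture is small enough.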
Today's battlefield environment contains a large number of sensors, and sensor types, onboard multiple platforms. The
set of sensor types includes SAR, EO/IR, GMTI, AMTI, HSI, MSI, and video, and for each sensor type there may be
multiple sensing modalities to select from. In an attempt to maximize sensor performance, today's sensors employ either
static tasking approaches or require an operator to manually change sensor tasking operations. In a highly dynamic
environment this leads to a situation whereby the sensors become less effective as the sensing environment deviates
from the assumed conditions.
Through a Phase I SBIR effort we developed a system architecture and a common tasking approach for solving the
sensor tasking problem for a multiple sensor mix. As part of our sensor tasking effort we developed a genetic algorithm
based task scheduling approach and demonstrated the ability to automatically task and schedule sensors in an end-to-end
closed loop simulation. Our approach allows for multiple sensors as well as system and sensor constraints. This provides
a solid foundation for our future efforts including incorporation of other sensor types.
This paper will describe our approach for scheduling using genetic algorithms to solve the sensor tasking problem in the
presence of resource constraints and required task linkage. We will conclude with a discussion of results for a sample
problem and of the path forward.
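A genetic-algorithm scheduler of the general kind described above can be sketched as follows. This toy version assigns each task to one sensor so as to minimize the busiest sensor's load; the encoding, operators, and parameters are illustrative assumptions, not the SBIR implementation.

```python
import random

def ga_schedule(costs, pop=40, gens=200, seed=1):
    """Tiny genetic algorithm for sensor tasking: costs[t][s] is the
    time sensor s needs for task t; a chromosome assigns each task to
    one sensor, and fitness is the makespan (busiest sensor's load)."""
    rng = random.Random(seed)
    n_tasks, n_sensors = len(costs), len(costs[0])

    def makespan(chrom):
        load = [0.0] * n_sensors
        for t, s in enumerate(chrom):
            load[s] += costs[t][s]
        return max(load)

    popn = [[rng.randrange(n_sensors) for _ in range(n_tasks)]
            for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=makespan)
        elite = popn[:pop // 4]            # keep the fittest quarter
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)    # parents drawn from the elite
            cut = rng.randrange(1, n_tasks)
            child = a[:cut] + b[cut:]      # one-point crossover
            if rng.random() < 0.3:         # mutation: reassign one task
                child[rng.randrange(n_tasks)] = rng.randrange(n_sensors)
            children.append(child)
        popn = elite + children
    return min(popn, key=makespan)
```

Resource constraints and task linkage, as in the paper, would enter through penalty terms in the fitness function or through repair operators on infeasible chromosomes.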
The goal of sensor resource management (SRM) is to allocate resources appropriately in order to gain as much
information as possible about a system. In our previous paper, we introduced a centralized non-myopic planning
algorithm, C-SPLAN, that uses sparse sampling to estimate the value of resource assignments. Sparse sampling is related
to Monte Carlo simulation. In the SRM problem we consider, our network of sensors observes a set of tracks; each
sensor can be set to operate in one of several modes and/or viewing geometries. Each mode incurs a different cost and
provides different information about the tracks. Each track has a kinematic state and is of a certain class; the sensors can
observe either or both of these, depending on their mode of operation. The goal is to maximize the overall rate of
information gain, i.e. rate of improvement in kinematic tracking and classification accuracy of all tracks in the Area of
Interest. We compared C-SPLAN's performance on several tracking and target identification problems to that of other
algorithms. In this paper we extend our approach to a distributed framework and present the D-SPLAN algorithm. We
compare the performance as well as the computational and communications costs of C-SPLAN, D-SPLAN, and
near-term planners.
The availability of flexible sensors offers new opportunities for enhanced tracking performance. The management
of such sensors must consider their characteristics and the tracking situation picture. Shannon's information
measure provides a means of quantifying the potential gains from various sensor deployment options.
The problem of tracking targets with an electronically scanned array radar is addressed. The radar can
be commanded to conduct surveillance of the search volume in a manner that is analogous to a mechanically
scanned radar. In conjunction with the surveillance mode, a revisit mode permits the radar to be commanded
to form beams to illuminate a specific volume, such as in the vicinity of a track. An expected information
gain is computed based on the probability of detecting the tracked target with the revisit beam, the predicted
track uncertainty in the event that the target is not detected, and the expected fused track uncertainty if the
target is detected. A high value for the expected information gain occurs when a measurement is likely to yield
a significant improvement in the track uncertainty and there is a sufficiently high probability of detecting the
target being tracked. Results from implementing the expected information gain in an integrated tracking and
radar management system are presented and discussed.
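The expected-information-gain computation has a simple one-dimensional analogue: the entropy reduction of a Gaussian track from a fused measurement, weighted by the probability of detection. The sketch below simplifies the paper's formulation (in particular, it assigns zero gain to the missed-detection branch, whereas the paper accounts for the predicted uncertainty in that event).

```python
import math

def expected_info_gain(sigma_pred, sigma_meas, p_detect):
    """Expected entropy reduction (nats) for a revisit beam on a 1-D
    Gaussian track.  On detection, the fused variance follows the
    usual inverse-variance combination; on a miss, the track keeps
    its predicted variance (zero gain in this simplification)."""
    var_fused = 1.0 / (1.0 / sigma_pred**2 + 1.0 / sigma_meas**2)
    gain_if_detected = 0.5 * math.log(sigma_pred**2 / var_fused)
    return p_detect * gain_if_detected
```

As the abstract describes, the gain is high when the track is uncertain relative to the measurement and the detection probability is sufficiently high, which is what makes it a useful revisit-priority score.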
The nonlinear operation of sensor managers and their non-stationary stochastic environment require the use of simulation
techniques to quantify and verify their behavior. This is particularly evident when comparing different approaches to
sensor management. It is important to consider which performance metrics are of greatest interest and are useful to the
evaluation and comparison of competing designs. The environment, sensors, tracking, and fusion simulation must all be
unbiased in order to provide a level playing field for evaluating and comparing alternative approaches to sensor
management while still having sufficient fidelity to be useful and conclusive. This paper discusses a distributed
simulation environment for the evaluation of an information-based sensor management system developed to detect,
identify, and track targets. The difficulties and solutions to the need for a top level design, accurate data, computational
complexity, required storage, and inter-modular communications are also examined. Much of the simulation has been
written in Matlab under Linux though the design ideas do not necessarily preclude other environments. The paper
concludes with a preliminary comparison of the performance of a conventional rule based sensor management system
with an information based sensor management system.
Over the past several years, the military has grown increasingly reliant upon the use of unmanned aerial vehicles
(UAVs) for surveillance missions. There is an increasing trend towards fielding swarms of UAVs operating as
large-scale sensor networks in the air. Such systems tend to be used primarily for the purpose of acquiring sensory
data with the goal of automatic detection, identification, and tracking of objects of interest. These trends have
been paralleled by advances in both distributed detection, image/signal processing and data fusion techniques.
Furthermore, swarmed UAV systems must operate under severe constraints on environmental conditions and
sensor limitations. In this work, we investigate the effects of environmental conditions on target detection and
recognition performance in a UAV network. We assume that each UAV is equipped with an optical camera, and
use a realistic computer simulation to generate synthetic images. The detection algorithm relies on Haar-based
features while the automatic target recognition (ATR) algorithm relies on Bessel K features. The performance of
both algorithms is evaluated using simulated images that closely mimic data acquired in a UAV network under
realistic environmental conditions. We design several fusion techniques and analyze both the case of a single
observation and the case of multiple observations of the same target.
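The Haar-based features used by the detector are typically evaluated with an integral image, which makes any rectangle sum an O(1) lookup. A minimal sketch follows; the specific two-rectangle vertical-edge feature is illustrative, not necessarily one used in the paper.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum over the w-by-h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_vertical_edge(ii, x, y, w, h):
    """Two-rectangle Haar feature: left half minus right half."""
    return (rect_sum(ii, x, y, w // 2, h)
            - rect_sum(ii, x + w // 2, y, w // 2, h))
```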
Real-time object detection is still a challenging computer vision problem in uncontrolled
environments. Unlike traditional classification problems, where the training data can properly
describe the statistical models, it is much harder to discriminate a certain object class from the rest of
the world with limited negative training samples. Due to the large variation among negatives,
the intra-class difference may sometimes be even larger than the difference between
objects and non-objects. Beyond this, there are many other problems that obstruct object
detection, such as pose variation, illumination variation, and occlusion.
Previous studies also demonstrated that infrared (IR) imagery provides a promising
alternative to visible imagery. Detectors using IR imagery are robust to illumination variations
and are able to detect objects under all lighting conditions, including total darkness, where detectors
based on visible imagery generally fail. However, IR imagery has several drawbacks of its own, and visible
imagery is more robust in situations where IR fails. This suggests that a better detection
system can be built by fusing the information from both visible and IR imagery.
Moreover, the object detector requires an exhaustive search in both the spatial and scale domains,
which inevitably leads to a high computational load.
In this paper, we propose boosting-based vehicle detection in both infrared and visible
imagery. The final decision is a combination of detection results from both the IR and visible
images. Experiments are carried out using an ATR helmet device with both EO and IR sensors.
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach
is to take multiple, independent observations of a phenomenon and perform a feature level or a decision level fusion for
ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data
when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques
project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal
Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While
some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not
necessarily optimal projections. In this paper, we present a divide and conquer approach to address the small sample size
problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information
within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each
subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to
determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we
study the efficacy of higher order statistical information (using average mutual information) for a bottom up band
grouping. We also propose a confidence measure based decision fusion technique, where the weights associated with
various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all
classifiers are used for weight assignment in the fusion process of test pixels. The proposed methods are tested using
hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target
recognition accuracies.
Recent work has suggested that target shadows in synthetic aperture radar (SAR) images can be used effectively to aid in target classification. The method outlined in this paper has four
steps: segmentation, representation, modeling, and selection. Segmentation is the process by which a smooth, background-free representation of the target's shadow is extracted from an image chip. A chain code technique is then used to represent the shadow boundary. Hidden Markov modeling is applied to sets of chain codes for multiple targets to create a suitable bank of target representations. Finally, an ensemble framework is proposed for classification. The proposed model selection process searches for an optimal ensemble of models based on various target model configurations. A five-target subset of the MSTAR database is used for testing. Since the shadow is a back-projection of the target profile, some aspect angles will contain more discriminatory information than others. Therefore, performance is investigated as a function of aspect angle. Additionally, the case of multiple target looks is considered. The capability of the shadow-only classifier to enhance more traditional classification techniques is examined.
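The chain-code step can be sketched as follows: an ordered list of 8-connected boundary pixels is converted into Freeman direction codes, which then serve as the observation sequences for the hidden Markov models. The direction ordering below is one common convention, assumed here.

```python
# 8-connected directions, indexed 0..7 counter-clockwise from east.
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_code(boundary):
    """Freeman chain code of an ordered list of 8-connected boundary
    pixels (x, y); consecutive points must differ by one king move."""
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
        codes.append(DIRS.index((x1 - x0, y1 - y0)))
    return codes
```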
Radar offers unique advantages over other sensors, such as visual or seismic sensors, for human target detection.
Many situations, especially military applications, prevent the placement of video cameras or the implantation of
seismic sensors in the area being observed, because of security or other threats. However, radar can operate far away
from potential targets, and functions during daytime as well as nighttime, in virtually all weather conditions. In
this paper, we examine the problem of human target detection and identification using single-channel, airborne,
synthetic aperture radar (SAR). Human targets are differentiated from other detected slow-moving targets by
analyzing the spectrogram of each potential target. Human spectrograms are unique, and can be used not
just to identify targets as human, but also to determine features about the human target being observed, such
as size, gender, action, and speed. A 12-point human model, together with kinematic equations of motion
for each body part, is used to calculate the expected target return and spectrogram. A MATLAB simulation
environment is developed including ground clutter, human and non-human targets for the testing of spectrogram-based
detection and identification algorithms. Simulations show that spectrograms have some ability to detect
and identify human targets in low noise. An example gender discrimination system correctly detected 83.97%
of males and 91.11% of females. The problems and limitations of spectrogram-based methods in high clutter
environments are discussed. The SNR loss inherent to spectrogram-based methods is quantified. An alternate
detection and identification method that will be used as a basis for future work is proposed.
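The spectrograms used for discrimination can be sketched with a naive short-time DFT. A real implementation, such as the MATLAB environment described above, would use an FFT; the frame length, hop, and Hann window here are illustrative assumptions.

```python
import cmath
import math

def spectrogram(signal, frame, hop):
    """Magnitude spectrogram via a naive DFT of Hann-windowed frames.
    Returns one magnitude spectrum (length frame // 2) per frame."""
    frames = []
    for start in range(0, len(signal) - frame + 1, hop):
        seg = [signal[start + n] *
               (0.5 - 0.5 * math.cos(2 * math.pi * n / frame))   # Hann window
               for n in range(frame)]
        spec = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / frame)
                        for n in range(frame)))
                for k in range(frame // 2)]
        frames.append(spec)
    return frames
```

Micro-Doppler signatures from limb motion appear as time-varying ridges in such a time-frequency map, which is what the human-detection features are built on.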
Earlier, we reported on predictive anomaly detection (PAD) for nominating targets within data streams generated by
persistent sensing and surveillance. This technique is purely temporal and does not directly depend on the physics
attendant on the sensed environment. Since PAD adapts to evolving data streams, there are no determinacy assumptions.
We showed PAD to be general across sensor types, demonstrating it using synthetic chaotic data and in audio, visual,
and infrared applications. Defense-oriented demonstrations included explosions, muzzle flashes, and missile and aircraft
detection. Experiments were ground-based and air-to-air.
As new sensors come on line, PAD offers immediate data filtering and target nomination. Its results can be taken
individually, pixel by pixel, for spectral analysis and material detection/identification. They can also be grouped for
shape analysis, target identification, and track development. PAD analyses reduce data volume by around 95%,
depending on target number and size, while still retaining all target indicators.
While PAD's code is simple when compared to physics codes, PAD tends to build a huge model. A PAD model for 512
x 640 frames may contain 19,660,800 Gaussian basis functions. (PAD models grow linearly with the number of pixels
and the frequency content, in the FFT sense, of the sensed scenario's background data). PAD's complexity in terms of
computational and data intensity is an example of what one sees in new algorithms now in the R&D pipeline, especially
as DoD seeks capability that runs fully automatic, with little to no human interaction.
Work is needed to improve algorithms' throughput while employing existing infrastructure, yet allowing for growth in
the types of hardware employed. In this present paper, we discuss a generic cluster interface for legacy codes that can be
partitioned at the data level. The discussion's foundation is the growth of PAD models to accommodate a particular
scenario and the need to reduce false alarms while preserving all targets. The discussion closes with a view of future
software and hardware opportunities.
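The model-size figure quoted above implies a fixed basis count per pixel. A quick arithmetic check (the per-pixel count of 60 is inferred from the quoted total; the text does not state it directly):

```python
# Sanity check on the PAD model size quoted above: for a 512 x 640 frame,
# 19,660,800 Gaussian basis functions corresponds to a whole number of
# basis functions per pixel (the per-pixel count is inferred, not stated).
pixels = 512 * 640               # 327,680 pixels per frame
total_basis = 19_660_800
per_pixel = total_basis // pixels
assert per_pixel * pixels == total_basis
print(per_pixel)  # -> 60
```

This is consistent with the statement that model size grows linearly with pixel count.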
An approach for processing sonar signals with the ultimate goal of ocean bottom sediment classification and
underwater buried target classification is presented in this paper. Work reported for sediment classification is
based on sonar data collected by one of the AN/AQS-20's sonars. Synthetic data, simulating data acquired by
parametric sonar, is employed for target classification. The technique is based on the Fractional Fourier Transform
(FrFT), which is better suited for sonar applications because FrFT uses linear chirps as basis functions. In the
first stage of the algorithm, FrFT requires finding the optimum order of the transform that can be estimated based
on the properties of the transmitted signal. Then, the magnitude of the Fractional Fourier transform for optimal
order applied to the backscattered signal is computed in order to approximate the magnitude of the bottom
impulse response. Joint time-frequency representations of the signal offer the possibility to determine the time-frequency
configuration of the signal as its characteristic features for classification purposes. The classification
is based on singular value decomposition of the time-frequency distributions applied to the impulse response.
A set of the largest singular values provides the discriminant features in a reduced dimensional space. Various
discriminant functions are employed and the performance of the classifiers is evaluated. Of particular interest
for underwater under-sediment classification applications are long targets such as cables of various diameters,
which need to be identified as different from other strong reflectors or point targets. Synthetic test data are
used to exemplify and evaluate the proposed technique for target classification. The synthetic data simulates
the impulse response of cylindrical targets buried in the seafloor sediments. Results are presented that illustrate
the processing procedure. An important characteristic of this method is that good classification accuracy of an
unknown target is achieved having only the response of a known target in the free field. The algorithm shows an
accurate way to classify buried objects under various scenarios, with high probability of correct classification.
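The singular-value feature extraction step can be sketched as follows. The "time-frequency distributions" here are toy stand-ins (a concentrated rank-1 ridge versus diffuse noise energy), not AN/AQS-20 data; the point is only that the largest singular values summarize how energy is concentrated.

```python
import numpy as np

def svd_features(tfd: np.ndarray, k: int = 5) -> np.ndarray:
    """Largest-k singular values of a time-frequency distribution,
    used as discriminant features in a reduced-dimensional space."""
    s = np.linalg.svd(tfd, compute_uv=False)   # returned sorted descending
    return s[:k]

# Hypothetical example: a chirp-like concentrated ridge vs. broadband noise
rng = np.random.default_rng(0)
chirp_tfd = np.outer(np.hanning(64), np.hanning(128))   # rank-1 ridge
noise_tfd = np.abs(rng.standard_normal((64, 128)))      # diffuse energy

f1, f2 = svd_features(chirp_tfd), svd_features(noise_tfd)
# The concentrated pattern puts most of its energy in the first singular value
assert f1[0] / f1.sum() > f2[0] / f2.sum()
```

A classifier then operates on these k-dimensional feature vectors rather than on the full time-frequency plane.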
Recently, a super-resolution reconstruction (SRR) method based on
low-dimensional face subspaces has been proposed for
face recognition. However, the reconstructed features obtained from the face-specific super-resolution subspace
contain no class information. This paper proposes a novel method for super-resolution reconstruction of class-specific features that aims at improving the discriminant power of recognition systems. Our experimental results on the Yale and ORL face databases are very encouraging. Furthermore, the performance of our proposed
approach on the MSTAR database is also tested for preliminary evaluation.
This invited panel discussion, "Issues and challenges in uncertainty representation and management with applications to real-world problems," includes viewgraphs and presentation papers on the following topics: Research challenges: dependence issues in feature/declaration
data fusion; Conceptual and methodological issues
in evidential reasoning; The uncertainty and knowledge challenge in distributed systems: an information fusion standpoint; Statistical modeling and management of uncertainty: a position paper; On conditioning in the Dempster-Shafer context; Dempster-Shafer theory made tractable and stable; and Collaborative distributed data fusion architecture using multi-level Markov decision processes.
Multisensor Fusion Methodologies and Applications I
The probability hypothesis density (PHD) filter has attracted increasing interest since the author first introduced it in 2000. Potentially practical computational implementations of this filter have been devised, based on sequential Monte Carlo or on Gaussian mixture techniques. Research groups in at least a dozen different nations are investigating the PHD filter and its generalization, the CPHD filter, for use in various applications. Some of this work suggests that these filters may, under certain circumstances, outperform conventional multitarget filters such as MHT and JPDA. This paper summarizes these research efforts and their findings.
Bayesian target detection, tracking, and identification is based on the recursive Bayes filter and its generalizations.
This filter requires that measurements be transformed into likelihood values. Conventional likelihoods
model the randomness of conventional measurements. Other measurement types involve not only randomness but
also imprecision, vagueness, uncertainty, and contingency. Conventional measurements and target states are also
mediated by precise, deterministic models. But in general these models can also involve imprecision, vagueness, or
uncertainty. This paper describes three major types of generalized measurements and their associated generalized
likelihood functions. If measurements are "UGA measurements" then fuzzy, Dempster-Shafer, and rule-based
measurement fusion can be rigorously reformulated as special cases of Bayes' rule.
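The idea of a generalized likelihood can be sketched in a minimal form (this is not the full UGA construction): a vague report such as "the target is roughly at x ≈ 10" is encoded as a fuzzy membership function and used in place of a conventional likelihood in Bayes' rule. The state space, prior, and trapezoidal membership below are all illustrative assumptions.

```python
import numpy as np

x = np.linspace(0, 20, 201)                  # discretized scalar state space
prior = np.exp(-0.5 * ((x - 8.0) / 3.0) ** 2)
prior /= prior.sum()

# Trapezoidal membership: fully plausible on [9, 11], tapering to 0 at 7 and 13
membership = np.clip(np.minimum((x - 7.0) / 2.0, (13.0 - x) / 2.0), 0.0, 1.0)

posterior = prior * membership               # Bayes' rule with the membership
posterior /= posterior.sum()                 # acting as a generalized likelihood
```

All posterior mass falls where the fuzzy report has nonzero membership, and the peak shifts from the prior mode toward the report's core, which is the qualitative behavior one expects of fuzzy measurement fusion recast in Bayesian form.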
A theoretical formulation for mission-based sensor management and
information fusion, using advanced tools of probability theory and stochastic
processes, is presented.
We apply Bayesian belief network methods to fuse features and determine
a tactical significance function, which is used by the sensor management objective
function. The resulting estimated multi-sensor multi-target posterior
reflects tactical significance and is used to determine the course of action
for the given mission. We demonstrate the performance of the algorithm on the simple mission of
reaching a pre-specified location while avoiding threatening targets, and
discuss the results.
Sensor management for space situational awareness presents a daunting theoretical and practical challenge as
it requires the use of multiple types of sensors on a variety of platforms to ensure that the space environment is
continuously monitored. We demonstrate a new approach utilizing the Posterior Expected Number of Targets (PENT)
as the sensor management objective function, an observation model for a space-based EO/IR sensor platform, and a
Probability Hypothesis Density Particle Filter (PHD-PF) tracker. Simulation results using actual geostationary
satellites are presented. We also demonstrate enhanced performance by applying the Progressive Weighting Correction
(PWC) method for regularization in the implementation of the PHD-PF tracker.
Under a United States Army Small Business Technology Transfer (STTR) project, we have developed a MATLAB toolbox called PFLib to facilitate the exploration, learning, and use of particle filters by a general user. This paper describes its object-oriented design and programming interface. The software is available under a GNU
GPL license.
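For readers unfamiliar with the algorithm family PFLib implements, the bootstrap (SIR) particle filter can be sketched generically. This is not PFLib's MATLAB API; the scalar random-walk model and all noise parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000                     # particle count
q, r = 0.1, 0.5              # process / measurement noise std-devs (assumed)

particles = rng.normal(0.0, 1.0, N)
truth, estimates = 0.0, []
for _ in range(50):
    truth += rng.normal(0.0, q)                      # simulate the true state
    z = truth + rng.normal(0.0, r)                   # noisy measurement
    particles += rng.normal(0.0, q, N)               # predict step
    w = np.exp(-0.5 * ((z - particles) / r) ** 2)    # weight by likelihood
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                      # multinomial resampling
    particles = particles[idx]
    estimates.append(particles.mean())               # posterior-mean estimate
```

A toolbox such as PFLib wraps these predict/weight/resample steps behind a reusable interface so that users can swap models and resampling schemes.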
Multisensor Fusion Methodologies and Applications II
Decision-level fusion is an appealing extension to automatic/assisted target recognition (ATR) as it is a low-bandwidth
technique bolstered by a strong theoretical foundation that requires no modification of the source
algorithms. Despite the relative simplicity of decision-level fusion, there are many options for fusion application
and fusion algorithm specifications. This paper describes a tool that allows trade studies and optimizations
across these many options, by feeding an actual fusion algorithm via models of the system environment. Models
and fusion algorithms can be specified and then exercised many times, with accumulated results used to compute
performance metrics such as probability of correct identification. Performance differences between the best of
the contributing sources and the fused result constitute examples of "gain." The tool, constructed as part of the
Fusion for Identifying Targets Experiment (FITE) within the Air Force Research Laboratory (AFRL) Sensors
Directorate ATR Thrust, finds its main use in examining the relationships among conditions affecting the target,
prior information, fusion algorithm complexity, and fusion gain. ATR as an unsolved problem provides the
main challenges to fusion in its high cost and relative scarcity of training data, its variability in application, the
inability to produce truly random samples, and its sensitivity to context. This paper summarizes the mathematics
underlying decision-level fusion in the ATR domain and describes a MATLAB-based architecture for exploring
the trade space thus defined. Specific dimensions within this trade space are delineated, providing the raw
material necessary to define experiments suitable for multi-look and multi-sensor ATR systems.
The US Air Force Research Laboratory (AFRL) is exploring the decision-level fusion (DLF) trade space in the Fusion
for Identifying Targets Experiment (FITE) program. FITE is surveying past DLF approaches and experiments. This
paper reports preliminary findings from that survey, which ultimately plans to place the various studies in a common
framework, identify trends, and make recommendations on the additional studies that would best inform the trade space
of how to fuse ATR products and how ATR products should be improved to support fusion. We tentatively conclude
that DLF is better at rejecting incorrect decisions than at adding correct decisions, a larger ATR library is better (for a
constant Pid), a better source ATR has many mild attractors rather than a few large attractors, and fusion will be more
beneficial when there are no dominant sources. Dependencies between the sources diminish performance, even when
that dependency is well modeled. However, poor models of dependencies do not significantly further diminish
performance. Distributed fusion is not driven by performance, so centralized fusion is an appropriate focus for FITE.
For multi-ATR fusion, the degree of improvement may depend on the participating ATRs having different OC
sensitivities. The machine learning literature is an especially rich source for the impact of imperfect (learned in their
case) models. Finally and perhaps most significantly, even with perfect models and independence, the DLF gain may be
quite modest and it may be fairly easy to check whether the best possible performance is good enough for a given
application.
The Receiver Operating Characteristic (ROC) curve can be used to quantify the performance of Automatic Target Recognition
(ATR) systems. When multiple classification systems are fused, the assumption of independence is usually made
in order to mathematically combine the individual ROC curves for each of these classification systems into one fused
ROC curve. However, correlation may exist between the classification systems and the outcomes used to generate each
ROC curve. This paper will demonstrate a method for creating a ROC curve of the fused classification systems which
incorporates the correlation that exists between the individual classification systems. Specifically, we will use the derived
covariance between multiple classification systems to compute the existing correlation and thus the level of dependence
between pairs of classification systems. Then, given a fusion rule, two systems, and the correlation between them, the ROC
curve for the fused system is produced. We generate the formula for the Boolean OR and AND rules, giving the resultant
ROC curve for the fused system. This paper extends our previous work in which bounds for the ROC curve of the fused,
correlated classification systems were presented.
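The Boolean fusion rules can be written down directly. The independence case is the textbook baseline; the paper's contribution is the correlated extension, which amounts to adding a covariance term to each pairwise product below (since for indicator variables Cov = P(A and B) − p1·p2). The numeric operating points are illustrative.

```python
# Fused detection / false-alarm probabilities for two systems under the
# Boolean OR and AND rules. cov is the covariance between the two systems'
# decision indicators; cov=0 recovers the independence assumption.
def fuse_or(p1, p2, cov=0.0):
    # P(A or B) = p1 + p2 - P(A and B), with P(A and B) = p1*p2 + cov
    return p1 + p2 - (p1 * p2 + cov)

def fuse_and(p1, p2, cov=0.0):
    return p1 * p2 + cov

pd_or = fuse_or(0.80, 0.90)     # fused detection probability, OR rule
pfa_or = fuse_or(0.05, 0.10)    # fused false-alarm probability, OR rule
```

Sweeping the two systems' thresholds and applying these formulas at each operating point traces out the fused ROC curve.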
A new cascaded fuzzy classifier (CFC) is proposed to perform
ground-moving target classification locally at
sensor nodes in wireless sensor networks (WSN). The CFC is composed of three binary fuzzy classifiers (BFCs)
in the seismic signal channel and two in the acoustic signal channel, in order to classify persons, light-wheeled (LW)
vehicles, and heavy-wheeled (HW) vehicles in the presence of environmental background noise. Based on the CFC, a new basic belief assignment
(bba) function is defined for each component BFC, so that it outputs a piece of evidence instead of a hard decision label. An
evidence generator synthesizes the available evidence from the BFCs into channel evidence, and the channel evidence
is further temporally fused. Finally, acoustic-seismic modality fusion using the Dempster-Shafer method is performed. In
leave-one-out experiments, our implementation gives significantly better performance than an implementation
using a majority-voting fusion method.
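The Dempster-Shafer combination step at the heart of this fusion can be sketched over the frame {person, LW, HW}. The mass values below are invented for illustration; each component BFC would emit a bba of this shape.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two bbas (dicts mapping frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                 # mass assigned to the empty set
    k = 1.0 - conflict                          # normalization constant
    return {s: w / k for s, w in combined.items()}

# Hypothetical channel evidence (masses are illustrative, not measured)
seismic = {frozenset({'person'}): 0.6, frozenset({'LW', 'HW'}): 0.3,
           frozenset({'person', 'LW', 'HW'}): 0.1}
acoustic = {frozenset({'person'}): 0.5, frozenset({'LW'}): 0.2,
            frozenset({'person', 'LW', 'HW'}): 0.3}
fused = dempster_combine(seismic, acoustic)
```

Unlike majority voting, the combined bba preserves graded support for composite hypotheses such as {LW, HW} while concentrating mass on the singleton both channels agree on.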
Multisensor Fusion Methodologies and Applications III
We assess the impact of supplementing two-dimensional video with three-dimensional geometry for persistent vehicle
tracking in complex urban environments. Using recent video data collected over a city with minimal terrain content, we
first quantify erroneous sources of automated tracking termination and identify those which could be ameliorated by
detailed height maps. They include imagery misregistration, roadway occlusion and vehicle deceleration. We next
develop mathematical models to analyze the tracking value of spatial geometry knowledge in general and high resolution
ladar imagery in particular. Simulation results demonstrate how 3D information could eliminate large numbers of false
tracks passing through impenetrable structures. Spurious track rejection would permit Kalman filter coasting times to be
significantly increased. Track lifetimes for vehicles occluded by trees and buildings as well as for cars slowing down at
corners and intersections could consequently be prolonged. We find high resolution 3D imagery can ideally yield an
83% reduction in the rate of automated tracking failure.
Sensor networks are emplaced throughout the world to remotely track activity. Typically, these sensors report data such
as target direction or target classification. This information is reported to a personnel-based monitor or a command and
control center. The ideal sensor system will have a long mission life capability and will report information-rich
actionable intelligence with high data integrity at near real-time latency. This paper discusses a multi-layered approach
that includes data fusion at the Sensor Node, Sensor Field, and Command and Control Center Layer to create cohesive
reports that mitigate false alarms and multiple reports of the same target while providing accurate tracking data on a
situational awareness level. This approach is influenced by low-power architecture, and designed to maximize information density and reduce flooding of sensor networks.
Optical RF networks open a new era for high-precision measurements. The present paper
discusses their application to target detection and tracking with a precision that could not be achieved by
current radar systems. Such precision is needed for future air traffic control and for the success of national
missile defense systems. Optical RF networks can also be used to monitor earthquakes and the safety of dams,
bridges, and buildings, as well as for accurate attitude determination of moving platforms such as satellites,
ships, aircraft, and helicopters.
Multisensor Fusion Methodologies and Applications IV
Resource management (or process refinement) is critical for information fusion operations in that users, sensors, and
platforms need to be informed, based on mission needs, on how to collect, process, and exploit data. To meet these
growing concerns, a panel session was conducted at the International Society of Information Fusion Conference in 2006
to discuss the various issues surrounding the interaction of Resource Management with Level 2/3 Situation and Threat
Assessment. This paper briefly consolidates the discussion of the invited panelists. The common themes include:
(1) Addressing the user in system management, sensor control, and knowledge based information collection
(2) Determining a standard set of fusion metrics for optimization and evaluation based on the application
(3) Allowing dynamic and adaptive updating to deliver timely information needs and information rates
(4) Optimizing the joint objective functions at all information fusion levels based on decision-theoretic analysis
(5) Providing constraints from distributed resource mission planning and scheduling; and
(6) Defining L2/3 situation entity definitions for knowledge discovery, modeling, and information projection
Prediction of adversarial course of actions (COA) is critical to many applications including: crime prediction, Unmanned
Aerial Vehicle (UAV) threat prediction, and terrorism attack prevention. Researchers have shown that integrating
behavior features (or preferences/patterns/modes) into prediction systems, which utilize random process theory and
likelihood estimation calculations, can improve prediction accuracy. However, these calculations currently assume
behavior features that are static and will not change during a long time horizon, which make such models difficult to
adapt to adversary behavior feature changes. This paper provides an approach for dynamically predicting changes of
behavior features utilizing the tenets of game theory. An example scenario and extensive simulations illustrate the
feature prediction capability of this model.
One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian Networks, and Possibility Theory, in the form of Fuzzy Logic systems, has recently been introduced to provide a rigorous framework for high-level inference. In previous research, the theoretical basis and benefits of the hybrid approach have been developed. However, lacking is a concrete experimental comparison of the hybrid framework with traditional fusion methods, to demonstrate and quantify this benefit. The goal of this research, therefore, is to provide a statistical analysis on the comparison of the accuracy and performance of hybrid network theory, with pure Bayesian and Fuzzy systems and an inexact Bayesian system approximated using Particle Filtering. To accomplish this task, domain specific models will be developed under these different theoretical approaches and then evaluated, via Monte Carlo Simulation, in comparison to situational ground truth to measure accuracy and fidelity. Following this, a rigorous statistical analysis of the performance results will be performed, to quantify the benefit of hybrid inference to other fusion tools.
Within the dynamic environment of an Air Operations Center (AOC), effective decision-making is highly dependent on
timely and accurate situation assessment. In previous research efforts the capabilities and potential of a Bayesian belief
network (BN) model-based approach to support situation assessment have been demonstrated. In our own prior research,
we have presented and formalized a hybrid process for situation assessment model development that seeks to ameliorate
specific concerns and drawbacks associated with using a BN-based model construct. Specifically, our hybrid
methodology addresses the significant knowledge acquisition requirements and the associated subjective nature of using
subject matter experts (SMEs) for model development. Our methodology consists of two distinct functional elements: an
off-line mechanism for rapid construction of a Bayesian belief network (BN) library of situation assessment models
tailored to different situations and derived from knowledge elicitation with SMEs; and an on-line machine-learning-based
mechanism to learn, tune, or adapt BN model parameters and structure. The adaptation supports the ability to
adjust the models over time to respond to novel situations not initially available or anticipated during initial model
construction, thus ensuring that the models continue to meet the dynamic requirements of performing the situation
assessment function within dynamic application environments such as an AOC. In this paper, we apply and demonstrate
the hybrid approach within the specific context of an AOC-based air campaign monitoring scenario. We detail both the
initial knowledge elicitation and subsequent machine learning phases of the model development process, as well as
demonstrate model performance within an operational context.
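The on-line parameter-tuning idea can be sketched in a standard Bayesian form: treat an SME-elicited conditional probability table (CPT) row as a Dirichlet prior and update it with observed case counts. This is the textbook conjugate update, not necessarily the paper's exact learning mechanism, and the elicited row and counts below are hypothetical.

```python
import numpy as np

def update_cpt_row(prior_probs, equivalent_sample_size, counts):
    """Dirichlet update of one CPT row: elicited probabilities scaled by an
    equivalent sample size act as pseudo-counts, then data counts are added."""
    alpha = np.asarray(prior_probs, dtype=float) * equivalent_sample_size
    posterior = alpha + np.asarray(counts, dtype=float)
    return posterior / posterior.sum()

# Hypothetical elicited row, e.g. P(threat_level | radar_contact=yes)
elicited = [0.2, 0.5, 0.3]
observed = [8, 2, 0]          # counts from 10 newly observed cases
tuned = update_cpt_row(elicited, equivalent_sample_size=10, counts=observed)
```

The equivalent sample size controls how strongly the SME's elicitation resists being pulled toward the new data, which is exactly the trade-off an adaptive situation assessment model must manage.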
The extraction of 3D building geometric information from
high-resolution electro-optical imagery is becoming a key
element in numerous geospatial applications. Indeed, producing 3D urban models is a requirement for a variety of
applications such as spatial analysis of urban design, military simulation, and site monitoring of a particular geographic
location. However, almost all operational approaches developed over the years for 3D building reconstruction are semiautomated
ones, where a skilled human operator is involved in the 3D geometry modeling of building instances, which
results in a time-consuming process. Furthermore, such approaches usually require stereo image pairs, image sequences,
or laser scanning of a specific geographic location to extract the 3D models from the imagery. Finally, with current
techniques, the 3D geometric modeling phase may be characterized by the extraction of 3D building models with a low
accuracy level. This paper describes the Automatic Building Detection (ABD) system and embedded algorithms
currently under development. The ABD system provides a framework for the automatic detection of buildings and the
recovery of 3D geometric models from single monocular electro-optic imagery. The system is designed in order to cope
with multi-sensor imaging of arbitrary viewpoint variations, clutter, and occlusion. Preliminary results on monocular
airborne and spaceborne images are provided. Accuracy assessment of detected buildings and extracted 3D building
models from single airborne and spaceborne monocular imagery of real scenes are also addressed. Embedded algorithms
are evaluated for their robustness to deal with relatively dense and complicated urban environments.
Image registration is usually a required first processing step for such activities as surveillance, video tracking, change
detection, and remote sensing. Often, different sensors are used for the collection of the test and reference imagery. The
sensor phenomenology differences can present problems for automatic selection of registration algorithm parameters
because of different cross-sensor feature manifestation. In previous work involving edge-based multisensor image
registration, we applied a previously-developed automated approach to parameter selection, designed specifically for
edge detection. In this work, we adapt and apply a dynamic feature selection algorithm (DFSA) that we recently
developed for use in registration algorithm selection for registering images with varying scene content type. We adapt
and apply the DFSA to the problem of selecting appropriate registration algorithm parameter values in an edge-based
registration algorithm. The approach involves generating
test-to-reference feature match scores over a sampling of the
transform hypothesis space. The approach is scene-adaptive, thereby requiring no a priori information on image scene
content. Furthermore, in the DFSA we leverage prior match-score calculations generated in a hierarchical grid search to
reduce additional computational expense. We give a brief overview of the registration algorithmic framework, and
present a description of the dynamic feature selection algorithm. Numerical results are presented for performing test
SAR-to-reference EO image registration to show the registration convergence performance improvement resulting from
use of the DFSA. Numerical results are generated over images exhibiting different scene content types. We also evaluate
the effect of match score normalization on the registration convergence performance improvement.
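The match-score-over-hypothesis-space idea can be sketched in a few lines. This is an illustrative stand-in, not the authors' implementation: `edge_match_score`, the binary edge maps, and the exhaustive translation-only grid are all assumptions (the framework described above searches a richer transform space hierarchically).

```python
import numpy as np

def edge_match_score(test_edges, ref_edges, dx, dy):
    """Fraction of test edge pixels that land on reference edge pixels
    after applying a candidate (dx, dy) translation hypothesis."""
    h, w = ref_edges.shape
    ys, xs = np.nonzero(test_edges)
    ys, xs = ys + dy, xs + dx
    valid = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
    if not valid.any():
        return 0.0
    return ref_edges[ys[valid], xs[valid]].mean()

# Score every hypothesis on a small translation grid.
ref = np.zeros((64, 64), dtype=bool)
ref[20, 10:50] = True                      # an edge feature in the reference
test = np.roll(ref, (3, 5), axis=(0, 1))   # test image shifted by dy=3, dx=5

scores = {(dx, dy): edge_match_score(test, ref, dx, dy)
          for dx in range(-8, 9) for dy in range(-8, 9)}
best = max(scores, key=scores.get)         # peak of the match-score surface
```

The peak of the score surface recovers the translation that undoes the shift; a hierarchical search would refine the grid around this peak instead of scoring it densely.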
Accurate geo-location of imagery produced from airborne imaging sensors is a prerequisite for precision targeting and
navigation. However, the geo-location metadata often has significant errors which can degrade the performance of
applications using the imagery. When reference imagery is available, image registration can be performed as part of a
bundle-adjustment procedure to reduce metadata errors. Knowledge of the metadata error statistics can be used to set the
registration transform hypothesis search space size. In setting the search space size, a compromise is often made between
computational expediency and search space coverage. It therefore becomes necessary to detect cases in which the true
registration solution falls outside of the initial search space. To this end, we develop a registration verification metric, for
use in a multisensor image registration algorithm, which measures the verity of the registration solution. The verification
metric value is used in a hypothesis testing problem to make a decision regarding the suitability of the search space size.
Based on the hypothesis test outcome, we close the loop on the verification metric in an iterative algorithm. We expand
the search space as necessary, and re-execute the registration algorithm using the expanded search space. We first
provide an overview of the registration algorithm, and then describe the verification metric. We generate numerical
results of the verification metric hypothesis testing problem in the form of Receiver Operating Characteristics curves
illustrating the accuracy of the approach. We also discuss normalization of the metric across scene content.
The ability to detect significant change in images is reduced when there is parallax. Fixed objects above or below the
ground plane displace from their true position by an amount that depends on their height and the look angle of the sensor,
reducing background cancellation and limiting the effectiveness of change detection. A method for reducing the effect of
parallax shifts using dynamic time warping (DTW) to align images along the epipolar direction is described. A
performance model is developed for predicting the processing gain of change detection (CD) over single image object
detection as a function of the parallax. The model is used to predict CD performance with and without DTW. A 10x
improvement in processing gain is demonstrated on optical imagery, which results in a significant reduction in the
number of false alarms.
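As a sketch of the alignment step, here is textbook 1-D dynamic time warping applied to a scanline and a parallax-shifted copy; aligning along the epipolar direction would apply this per epipolar line. The absolute-difference cost and the backtracking rule are generic DTW choices, not details from the paper.

```python
import numpy as np

def dtw_align(a, b):
    """Classic dynamic time warping of two 1-D scanlines.
    Returns the accumulated cost and the warping path (index pairs)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

# A scanline and its parallax-shifted copy align with zero residual cost,
# so the warped images cancel where a direct difference would not.
line = np.array([0., 0., 1., 5., 1., 0., 0.])
shifted = np.array([0., 0., 0., 1., 5., 1., 0.])
cost, path = dtw_align(line, shifted)
```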
A feature-based approach for detecting anomalies in spectral, spatial, temporal, and other domains is described. When
the frequency of occurrence is small relative to the background, anomalies such as man-made objects in natural image
backgrounds do not form their own clusters, but are instead assigned the nearest background cluster, becoming an outlier
(statistical anomaly) in that cluster. Our method clusters data, which may be spectral, spatial, or temporal in nature, into
one or more background types and computes the Mahalanobis distance between the data and assigned model
(background cluster). The detection of a variety of objects and phenomena in panchromatic and multispectral imagery
and video is illustrated.
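The pipeline described above — cluster into background types, then score each sample by Mahalanobis distance to its assigned cluster model — can be sketched as follows. The nearest-mean cluster assignment and the synthetic three-band data are illustrative assumptions.

```python
import numpy as np

def mahalanobis_scores(data, labels):
    """Score each sample by the Mahalanobis distance to the Gaussian
    model (mean, covariance) of its assigned background cluster."""
    scores = np.empty(len(data))
    for k in np.unique(labels):
        members = data[labels == k]
        mu = members.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(members.T))
        d = members - mu
        scores[labels == k] = np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))
    return scores

rng = np.random.default_rng(0)
grass = rng.normal(0.0, 1.0, (400, 3))    # background type 1 (3 bands)
trees = rng.normal(10.0, 1.0, (400, 3))   # background type 2
target = np.array([[5.0, 5.0, 5.0]])      # rare man-made object
data = np.vstack([grass, trees, target])

# The rare object is too infrequent to form its own cluster: assign every
# sample to the nearest background mean, so it becomes an outlier there.
centers = np.array([grass.mean(axis=0), trees.mean(axis=0)])
labels = np.argmin(((data[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
scores = mahalanobis_scores(data, labels)
```

The target's score dwarfs every background score even though it was absorbed into a background cluster, which is exactly the outlier behavior the abstract exploits.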
SAR imaging has been extensively used in several applications including automatic target detection and recognition. In
this paper, a wavelet/fractal (WF)-based target detection technique is presented. The technique computes a fractal-based
feature on an edge image, as opposed to existing fractal methods that compute the fractal dimension on the original
image. The edge image is produced through the use of wavelets. The technique is evaluated for target detection in SAR
images, and compared with a previous fractal-based approach, namely the extended fractal (EF) model. Experimental
results illustrate that WF provides lower false alarm rates for the same probability of detection compared to EF.
Furthermore, it is shown that WF provides higher spatial resolution capabilities for the detection of closely located
targets.
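A fractal-based feature computed on a binary edge map (rather than on the original image) can be estimated by standard box counting, sketched below. This is a generic estimator for illustration; the wavelet edge detector and the extended fractal comparison are not reproduced.

```python
import numpy as np

def box_counting_dimension(edges):
    """Estimate the fractal dimension of a binary edge map as the slope
    of log(occupied box count) vs log(1 / box size)."""
    n = edges.shape[0]               # assumes a square, power-of-two side
    sizes, counts = [], []
    s = n
    while s >= 2:
        s //= 2
        blocks = edges.reshape(n // s, s, n // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
        sizes.append(s)
    sizes, counts = np.array(sizes), np.array(counts)
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# A straight edge behaves ~1-D; a filled region behaves ~2-D, so the
# feature separates thin natural edges from extended bright returns.
img = np.zeros((64, 64), dtype=bool)
img[32, :] = True
line_dim = box_counting_dimension(img)

img2 = np.ones((64, 64), dtype=bool)
area_dim = box_counting_dimension(img2)
```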
Effective missile warning and countermeasures continue to be an unfulfilled goal for the Air Force and DOD
community. To make the expectations a reality, sensors exhibiting the required sensitivity, field of regard, and spatial
resolution are being pursued. The largest concern is in the first stage of a missile warning system, detection, in which all
targets need to be detected with a high confidence and with very few false alarms. Typical sensors are limited in their
detection capability by the presence of heavy background clutter, sun glints, and inherent sensor noise. Many threat
environments include false alarm sources like burning fuels, flares, exploding ordnance, and industrial emitters.
Multicolor discrimination is one of the effective ways of improving the performance of missile warning sensors,
particularly for heavy clutter situations. Its utility has been demonstrated in multiple fielded systems. Utilization of the
background and clutter spectral content, coupled with additional spatial and temporal filtering techniques, has resulted
in a robust adaptive real-time algorithm to increase signal-to-clutter ratios against point targets. The algorithm is
outlined, and results against tactical data are summarized and compared in terms of the computational cost expected for
implementation on a real-time field-programmable gate array (FPGA) processor.
Target segmentation plays an important role in the entire target
tracking process. This process decides whether the current pixel
belongs to the target region or not. In previous work, the
target region was extracted according to whether the intensity of
each pixel exceeds a certain value, but simple binarization
using a single feature, i.e., intensity, can easily fail to track as
conditions change. In this paper, we employ several features, such as
intensity, deviation over a time window, and matching error,
rather than intensity alone, and each feature is weighted by a
weighting logic that compares its characteristics in the target
region with those in the background region. The weighting logic
gives a higher weight to a feature that differs strongly
between the target region and the background region, so the
proposed segmentation method can adaptively control the priority of
features and is robust to changing conditions.
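The weighting logic can be sketched as a per-feature separation measure between target-region and background-region statistics. The Fisher-style ratio and the two synthetic features below are assumptions for illustration, not the paper's exact logic.

```python
import numpy as np

def feature_weights(target_feats, background_feats):
    """Give each feature a weight proportional to how well it separates
    the target region from the background region (Fisher-like ratio of
    mean separation to pooled spread), normalized to sum to 1."""
    sep = np.abs(target_feats.mean(axis=0) - background_feats.mean(axis=0))
    spread = target_feats.std(axis=0) + background_feats.std(axis=0) + 1e-9
    w = sep / spread
    return w / w.sum()

rng = np.random.default_rng(0)
# Feature 0 (intensity) barely separates target and background here;
# feature 1 (deviation over time) separates them well, so it dominates.
target = np.column_stack([rng.normal(0.5, 0.2, 200), rng.normal(3.0, 0.3, 200)])
backgnd = np.column_stack([rng.normal(0.5, 0.2, 200), rng.normal(0.0, 0.3, 200)])
w = feature_weights(target, backgnd)
# A combined per-pixel score would then be the w-weighted sum of features.
```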
A real-time color transfer system for low-light-level visible (LLLV) and infrared (IR) images is built on three
multimedia DSPs (TM1300). The computing load is split among the three TM1300s: two of them preprocess the
dual-band images and calculate their means and standard deviations, while the third
executes fusion and color transfer in YUV space. First, the two preprocessed images are fused into one
primary color image (the source image) in which hot targets appear in warm colors and cold targets in cool colors. Then the
means and standard deviations of the source image's Y, U, and V components are derived from the preprocessed images'
pixel values and their means and standard deviations. Finally, the Y, U, and V components of the source image are scaled
by the ratio of the statistics of a day-time color image (the target image) to those of the source image. The color and
luminance distribution of the target image is thus transferred to the source image, giving it a day-time color appearance.
Compared to the commonly used lαβ space, color transfer in YUV space avoids iterated color-space transformations and
logarithmic and exponential calculations, and is thus well suited to real-time implementation while the color-transferred
results remain acceptable.
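The statistics-transfer step in YUV has the same shape as Reinhard-style color transfer: per channel, scale the source deviations by the ratio of standard deviations and add the target mean. A minimal sketch, with random arrays standing in for the fused source and the day-time target image:

```python
import numpy as np

def transfer_yuv(source, target):
    """Per Y/U/V channel: scale source deviations by the std ratio, then
    add the target mean. Working directly in YUV avoids the log/exp
    conversions required by the lab-style color spaces."""
    out = np.empty_like(source)
    for c in range(3):
        s_mu, s_sd = source[..., c].mean(), source[..., c].std()
        t_mu, t_sd = target[..., c].mean(), target[..., c].std()
        out[..., c] = (source[..., c] - s_mu) * (t_sd / (s_sd + 1e-9)) + t_mu
    return out

rng = np.random.default_rng(0)
fused = rng.normal(0.3, 0.05, (32, 32, 3))     # fused LLLV/IR source image
daytime = rng.normal(0.6, 0.15, (32, 32, 3))   # day-time reference image
result = transfer_yuv(fused, daytime)
# result now carries the day-time image's per-channel mean and spread.
```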
In this paper, we present a fuzzy rule base system for object-based feature extraction and classification on remote sensing imagery. First, object primitives are generated in the segmentation step. Object primitives are defined as individual regions with a set of attributes computed on the regions; the attributes include spectral, texture, and shape measurements. Crisp rules are very intuitive to users. They are usually represented as "GT (greater than)", "LT (less than)", and "IB (in between)" with numerical values. Features can be generated manually by querying the attributes with these crisp rules and monitoring the resulting selected object primitives. However, the attributes of different features usually overlap; the information is inexact and not suitable for traditional digital on/off decisions. Here a fuzzy rule base system is built to better model the uncertainty inherent in the data and vague human knowledge. Rather than representing attributes in linguistic terms like "Small", "Medium", and "Large", we propose a new method for automatic fuzzification of the traditional crisp concepts "GT", "LT", and "IB". Two sets of membership functions are defined to model these concepts: one based on piecewise-linear functions, the other on S-type membership functions. A novel concept, "fuzzy tolerance", is proposed to control the degree of fuzziness of each rule. Experimental results on classification and on extracting features such as water, buildings, trees, fields, and urban areas show that this newly designed fuzzy rule base system is intuitive and allows users to easily generate fuzzy rules.
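A minimal sketch of fuzzifying the crisp concepts with piecewise-linear memberships, where a single tolerance parameter plays the role of the "fuzzy tolerance" control (the S-type variant and the exact parameterization are not reproduced here):

```python
def fuzzy_gt(x, threshold, tolerance):
    """Fuzzified 'greater than': membership ramps linearly from 0 to 1
    across [threshold - tolerance, threshold + tolerance]. As the
    tolerance approaches 0, this recovers the crisp on/off rule."""
    lo, hi = threshold - tolerance, threshold + tolerance
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def fuzzy_lt(x, threshold, tolerance):
    """Fuzzified 'less than' is the complement of fuzzy GT."""
    return 1.0 - fuzzy_gt(x, threshold, tolerance)

def fuzzy_ib(x, low, high, tolerance):
    """Fuzzified 'in between': AND (min) of fuzzy GT(low) and LT(high)."""
    return min(fuzzy_gt(x, low, tolerance), fuzzy_lt(x, high, tolerance))
```

A rule such as "area GT 500" then returns a degree of membership instead of a hard yes/no, so object primitives near the threshold contribute partially rather than being cut off.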
Digital and analog design approaches are reviewed for handheld
low-cost electronic signal processing boxes
for close-up optical detection and identification of phosphor markers for authentication of paper money, legal
documents, pharmaceuticals, clothing materials, and military friend-or-foe identification. For extending the
range to longer distances of over a meter (several feet) we propose a novel low-cost handheld lock-in amplifier
that uniquely identifies a phosphor at a distance of several feet in a noisy environment of daylight, sunlight,
electronic noise and reflection of the stimulating beam. The lock-in amplifier differs from a conventional one by
sampling the detector out of synchronization with the source to avoid reflections, which would otherwise mask the phosphor
luminescence and provide opportunities for counterfeiters. The luminescence decays slowly after stimulation
is removed. We simulate the lock-in amplifier to determine the
trade-off between speed of authentication and distance. Only 40 ms of integration in the lock-in amplifier suffices to block noise at frequencies differing by more than 1% from the modulation frequency, allowing authentication at over a meter.
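The integration-versus-rejection trade can be illustrated with a basic complex lock-in: multiply by a reference at the modulation frequency and average over the 40 ms window. The out-of-sync detector sampling described above is omitted, and the 10 kHz modulation frequency, sample rate, and amplitudes are illustrative assumptions.

```python
import numpy as np

fs, f_mod, T = 100_000.0, 10_000.0, 0.040   # sample rate, modulation, 40 ms
t = np.arange(0.0, T, 1.0 / fs)
ref = np.exp(-2j * np.pi * f_mod * t)       # complex lock-in reference

def lockin_amplitude(signal):
    """Average signal * reference over the window: the in-band component
    survives while off-frequency components average toward zero."""
    return 2.0 * abs(np.mean(signal * ref))

phosphor = 0.01 * np.cos(2 * np.pi * f_mod * t)    # weak luminescence
interferer = np.cos(2 * np.pi * 1.02 * f_mod * t)  # 100x stronger, 2% off

a_clean = lockin_amplitude(phosphor)
a_noisy = lockin_amplitude(phosphor + interferer)
# a_noisy stays near 0.01: the off-frequency interferer is rejected.
```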
The problem of automatic modulation classification is to identify the modulation type of a received signal from the
signal parameters. Modulation classification has both military and civilian applications and has been the subject
of intensive research for more than two decades. In this paper we use a hierarchical neural network in which the
first network identifies the modulation class while a second set of networks identify the constellation size (order)
of that modulation class. The set of features we use include normalized standard deviations of amplitude, phase
and frequency, as well as the fourth and sixth order cumulants of the signal samples. Identifying the constellation
size of quadrature amplitude modulation (QAM) has been particularly difficult in the past. In this paper we
introduce two new approaches for computing the features of a QAM signal. The first uses the concatenated
in-phase and quadrature components of the signal to compute the features. The second method maps the in-phase
and quadrature components to the first quadrant of the constellation by calculating the absolute value of each
separately. The mean of the resulting constellation points is then subtracted before calculating the features.
Simulation results are presented for classification of several digital modulation schemes including FSK, PSK,
ASK and QAM. Our results show that the proposed method significantly reduces the classification error.
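The second proposed preprocessing — fold I and Q into the first quadrant via absolute values and subtract the mean before computing features — can be sketched with a fourth-order cumulant as the example feature. The C40 estimator and the noise-free constellations are illustrative; the full feature set also includes normalized standard deviations and sixth-order cumulants.

```python
import numpy as np

def quadrant_folded(symbols):
    """Fold I and Q into the first quadrant via absolute values, then
    subtract the mean of the folded constellation."""
    folded = np.abs(symbols.real) + 1j * np.abs(symbols.imag)
    return folded - folded.mean()

def c40(x):
    """Fourth-order cumulant C40 = E[x^4] - 3 E[x^2]^2 for zero-mean x."""
    return np.mean(x ** 4) - 3 * np.mean(x ** 2) ** 2

qam4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
levels = np.array([-3.0, -1.0, 1.0, 3.0])
qam16 = np.array([i + 1j * q for i in levels for q in levels])

f4 = c40(quadrant_folded(qam4))    # folded QAM4 collapses to one point
f16 = c40(quadrant_folded(qam16))  # folded QAM16 keeps its structure
```

The two constellation orders produce clearly distinct feature values after folding, which is what lets the second-stage network identify the order.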
In this paper, we present a real-time 3-D tracking system that uses two cameras with substantially arbitrary
geometries. The primary goal of the proposed system is to capture incoming stereo vision feeds, examine and compare
targets across the cameras, and, using a derived camera calibration matrix, project them into real-world 3-D coordinates in
real time. The system is divided into two main components: camera calibration, and tracking and cross-camera object
matching. In the proposed system, algorithms such as 8-point feature matching form the basis for the camera
calibration/pose estimation mechanism, while a color histogram forms a robust feature for tracking across cameras. The
proposed system is robust and applicable to heterogeneous moving objects such as people, vehicles, and boats.
A unique polarization camera has been fabricated from a wire-grid polarizer attached to the surface of an
InGaAs FPA. The wire grid was configured as a Stokes polarimeter. Data has been collected for both space
and earthbound applications using both active and passive illumination. A mini-range and scaled targets of
representative materials were constructed to simulate space based distances for both resolved and
unresolved targets. For the purpose of providing advanced warning for rotorcraft, data has been collected on
power lines to test the feasibility and appropriateness of this type of technology to aid in their detection.
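For a wire-grid design, the linear Stokes parameters follow from the intensities measured behind 0°, 45°, 90°, and 135° grid orientations; the sketch below uses that standard reduction (the camera's actual pixel layout and calibration are not reproduced, and the circular S3 term would require a retarder a wire grid alone does not provide).

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from wire-grid polarizer intensities at
    0, 45, 90, and 135 degrees, plus two derived polarization metrics."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)           # total intensity
    s1 = i0 - i90                                # horizontal vs vertical
    s2 = i45 - i135                              # diagonal components
    dolp = np.hypot(s1, s2) / s0                 # degree of linear polarization
    aop = 0.5 * np.degrees(np.arctan2(s2, s1))   # angle of polarization
    return s0, s1, s2, dolp, aop

# Fully horizontally polarized light: i0 = 1, i90 = 0, i45 = i135 = 0.5.
s0, s1, s2, dolp, aop = linear_stokes(1.0, 0.5, 0.0, 0.5)
```

Strong, oriented linear polarization (high DoLP at a consistent angle) is the signature that makes man-made features such as power lines stand out against natural backgrounds.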
We present a model for calculating the Spatial Frequency Response (SFR) for Bayer pattern color detectors. The model
is based on the color detector response to B/W scenes. When a Bayer color detector is compared to a B/W detector, SFR
difference results from the interpolation process. This process exists only in the Bayer pattern detectors. In this work we
ascribe the MTF and the spurious response to the interpolation process.
The model may be applied to any linear interpolation. Although the interpolation is linear, it is not Shift Invariant
(SI). Therefore, calculating the interpolation MTF is not a trivial task. Furthermore, the interpolation creates a spurious
response. In order to calculate the interpolation SFR, we introduce a separable constraint (for x and y directions) by using
a scene that varies only on one axis and is fixed on the other. We further assume integration in the direction of the fixed
axis. By using these two assumptions, we have been able to separate the response into two axes and calculate the SFR.
For distant scenes, color saturation decreases, the colors are less visible, and mostly grey colors are sensed. In these
cases the Johnson Criterion can be roughly applied. In order to apply the Johnson Criterion it is required to know the
MTF of the sensing system. The sensing system MTF includes the interpolation MTF. We show that the interpolation
process degrades the system performance compared to a B/W sensor. Another application of the model is in comparing
different interpolation algorithms.
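The non-shift-invariant character of the interpolation, and its spurious response, can be seen numerically in one dimension: interpolate the missing samples of a 2:1-sampled cosine and read off the amplitudes at the input and aliased frequencies. The linear-neighbor kernel and the periodic boundary are illustrative assumptions, not the paper's model.

```python
import numpy as np

def interp_response(f_cycles, n=256):
    """Amplitude response of 1-D linear interpolation of a 2:1-sampled
    row (odd pixels missing, as for one Bayer color plane along one
    axis) to a cosine with f_cycles cycles across n pixels. Returns the
    amplitude at the input frequency (an MTF sample) and at the aliased
    frequency n/2 - f_cycles (the spurious response)."""
    x = np.arange(n)
    scene = np.cos(2 * np.pi * f_cycles * x / n)
    interp = scene.copy()
    # Replace each missing odd pixel by the mean of its even neighbors.
    interp[1::2] = 0.5 * (scene[0::2] + np.roll(scene, -2)[0::2])
    spec = np.abs(np.fft.rfft(interp)) / (n / 2)
    return spec[f_cycles], spec[n // 2 - f_cycles]

mtf, spurious = interp_response(32)   # 32 cycles / 256 px = Nyquist / 4
```

For this kernel the result matches the closed form (1 + cos(2*pi*f/n)) / 2 for the MTF and (1 - cos(2*pi*f/n)) / 2 for the spurious term, so energy lost from the passband reappears at the aliased frequency.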
Different methods of energy estimation for a differential absorption lidar (DIAL) system at NASA Langley Research
Center in Virginia are investigated in this paper. The system is a 2-μm wavelength coherent Doppler lidar called
VALIDAR that has been traditionally used for measuring wind. Recent advances in laser wavelength control have
allowed the new use of this lidar for measuring atmospheric CO2 concentration by a DIAL technique. In order to realize
accurate DIAL measurements, optimal signal processing techniques are required to represent the energy of the
heterodyned backscatter signals. The noise energy was estimated by minimizing the mean square error in its estimate
and was used to normalize its adverse influence on accurate estimation of the concentration of CO2 in the atmosphere.
The impact of different methods on the statistics of CO2 concentration measurements is compared.
The design of optical sensor systems is based on the interaction between the photons of electromagnetic radiation and suspended
particles in water. The objective of this study is to design and develop a dual-detector optical sensor for measuring the
concentration of total suspended solids (TSS) in polluted water samples. An algorithm which requires calibration analysis has been
derived for estimating TSS concentrations. The proposed optical system uses a single light-emitting diode (LED) as an emitter and two
phototransistors as detectors. Detected radiation was measured at scattering angles of 90° and 180° between the source and the detectors. The algorithm produced a high correlation coefficient and a low root-mean-square error value.
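A calibration of the kind described — regressing known TSS concentrations on the two detector readings — can be sketched with least squares. The channel models, coefficients, and concentration standards below are synthetic stand-ins, not the paper's data.

```python
import numpy as np

known_tss = np.array([0., 50., 100., 150., 200., 250.])   # mg/L standards
# Hypothetical channel behavior: the 90-degree scatter signal rises with
# concentration, while the through-beam channel attenuates (Beer-Lambert).
v_scatter = 0.004 * known_tss + 0.02
v_through = np.exp(-0.006 * known_tss)

# Linearize the attenuating channel with a log, then fit by least squares.
X = np.column_stack([v_scatter, np.log(v_through), np.ones_like(known_tss)])
coef, *_ = np.linalg.lstsq(X, known_tss, rcond=None)

predict = X @ coef
rmse = np.sqrt(np.mean((predict - known_tss) ** 2))
r = np.corrcoef(predict, known_tss)[0, 1]
```

On real samples the fit quality would be judged exactly as in the abstract: by the correlation coefficient and the root-mean-square error of the calibration.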
This paper describes an approach that can effectively reduce envelope delay, improving on the traditional filter. In some
applications a traditional filter is applied to obtain the envelope of a signal, but the long envelope delay of such a
filter is unsuitable for real-time systems, such as ground moving target detection in a wireless sensor network. This
paper presents a weighted filter approach to reduce the envelope delay.
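The delay argument can be made concrete: a length-L uniform smoothing filter delays the envelope by (L-1)/2 samples, while weighting the taps toward the most recent samples shrinks the effective delay (the tap-weighted mean age). The exponential weighting below is an illustrative choice, not the paper's filter.

```python
import numpy as np

L = 51
ages = np.arange(L)                  # tap age: 0 = most recent sample

uniform = np.ones(L) / L             # traditional moving-average taps
weighted = 0.85 ** ages              # emphasize recent samples
weighted /= weighted.sum()

# Effective envelope delay = tap-weighted mean of the tap ages.
delay_uniform = (ages * uniform).sum()    # (L - 1) / 2 = 25 samples
delay_weighted = (ages * weighted).sum()  # much smaller than 25
```

The reduced delay comes at the cost of less smoothing, which is the trade a weighted-filter design must balance.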
We present a new receiver design for spatially distributed
apertures to detect targets in an urban environment.
A distorted-wave Born approximation is used to model the scattering
environment. We formulate the received signals at different
receive antennas in terms of the received signal at the first
antenna. The detection problem is then formulated as a binary
hypothesis test. The receiver is chosen as the optimal linear filter
that maximizes the signal-to-noise ratio (SNR) of the
corresponding test statistic. The receiver operation amounts to
correlating a transformed version of the measurement at the first
antenna with the rest of the measurements. In the
free-space case the transformation applied to the measurement from the
first
antenna reduces to a delay operator. We evaluate the performance of
the receiver on a real data set collected in a multipath- and
clutter-rich urban environment and on simulated data corresponding to a simple
multipath scene. Both the experimental and simulation results show that
the proposed receiver design offers significant improvement in
detection performance compared to conventional matched
filtering.