In maritime operations, target tracking and localization, also called target motion analysis (TMA), is an important problem. If an active sensor is used, the tracking process is observable, since target range and bearing can both be measured without difficulty. The major disadvantage of active sources is that enemy targets can easily detect the ship's position, so tracking with active sources becomes a risky proposition. The alternative is passive tracking, but in this case the tracking process is unobservable because only the target bearing can be measured. The range can be estimated by triangulation using at least two platforms. Another method is to find the range using a geometrical approach, obtaining at least one accurate range that can then be used to construct the track under some assumptions. In this paper, a geometrical approach to bearings-only tracking is introduced. The target range is derived using a few bearing measurements. Several own-ship/target geometries have been set up for this purpose. To compute the target range, the own ship must execute an admissible maneuver. The geometrical approach presented provides acceptable performance and can be used for a short period in the tracking process to provide a reasonable estimate of the range; the tracker can then use this range to generate the target track and hence reduce the bias.
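The two-platform triangulation mentioned above can be sketched as follows (a minimal planar illustration under assumed conventions: bearings measured from the +x axis rather than compass north, and noise-free measurements; this is not the authors' algorithm):

```python
import math

def triangulate(p1, b1, p2, b2):
    """Locate a target from two bearing lines.

    p1, p2 : (x, y) observer positions
    b1, b2 : bearings in radians, measured from the +x axis
    Returns the (x, y) intersection of the two bearing rays.
    """
    # Each bearing defines a line p_i + t * (cos b_i, sin b_i).
    # Solve the 2x2 linear system [d1, -d2] [t; s] = p2 - p1 by Cramer's rule.
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearing lines are parallel; target not observable")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

With noisy bearings the accuracy of the intersection degrades with range and geometry, which is one motivation for the single-ship geometrical approach the paper pursues.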
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.* *Shibboleth/Open Athens users, please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
A single, stationary observer cannot determine a unique target track from bearings-only measurements. In the land environment, for tactical reasons, the observer typically remains stationary but can measure the target range with a laser rangefinder (LRF). Bearings-only tracking of a non-maneuvering target is a nonlinear problem, and solutions by iteration or by the extended Kalman filter suffer from a high computational load and possible filter divergence. In contrast, the pseudo-linear formulation permits the application of a linear Kalman filter, but the range estimate has a bias, which can be eliminated through instrumental variables. Earlier development showed that even though the target track is indeterminate for a stationary observer, a unique target heading is still available from the bearings-only measurements. Then, after an LRF range measurement Rl, future estimates of target position and velocity become determinate. This paper presents a new tracking scheme for a stationary observer that expresses the range estimate as a function of Rl, the target heading, and the bearings. The estimation equation follows from the trigonometric Law of Sines and is simple to implement. The estimator is unbiased, and simulation experiments have shown that the estimates are close to the Cramér-Rao lower bound.
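The Law of Sines construction can be sketched for a stationary observer at the origin (a hypothetical planar setup with all angles measured from the +x axis; variable names are illustrative, and this is the geometric idea rather than the paper's full scheme):

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def range_from_heading(r_lrf, b0, b1, heading):
    """Range at the time of bearing b1, given one LRF range r_lrf taken
    at the time of bearing b0, for a stationary observer at the origin.

    heading is the target's course angle (direction of motion).
    """
    # Triangle: observer O, target position P0 (at the LRF fix),
    # target position P1 (at the later bearing).
    alpha = abs(wrap(b1 - b0))                    # angle at the observer
    gamma = abs(wrap(heading - (b0 + math.pi)))   # angle at P0, between
                                                  # P0->P1 (course) and P0->O
    # Law of Sines: r1 / sin(gamma) = r_lrf / sin(pi - alpha - gamma)
    return r_lrf * math.sin(gamma) / math.sin(alpha + gamma)
```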
The Interacting Multiple Model (IMM) estimator is a suboptimal hybrid filter that has been shown to be one of the most cost-effective hybrid state estimation schemes. The algorithm can estimate the state of a dynamic system with several modes that can switch from one to another, and it is considered the best compromise between complexity and performance. It is mainly used for tracking highly maneuvering targets in the presence of clutter by invoking Probabilistic Data Association (PDA) in the estimator structure, a combination called IMM-PDA. Recently, it has been shown that the PDA technique does not perform well when tracking targets at low signal-to-noise ratios (SNRs). An alternative to data association is Fuzzy Data Association (FDA), which can track targets in clutter and in a low-SNR environment. In this paper, an IMM-FDA technique is proposed for tracking highly maneuvering targets in clutter and in a low-SNR environment. Simulations have been conducted to compare the performance of the proposed approach with that of the IMM-PDA, using a typical highly maneuvering target scenario as a tracking example. The results reveal that both trackers perform well when tracking the maneuvering target at high SNR; at low SNR, only the IMM-FDA is able to track the target accurately.
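The mode-mixing step that gives the IMM estimator its cost-effectiveness can be sketched as follows (a generic textbook IMM mixing computation, not tied to this paper's specific motion models):

```python
def imm_mixing(mu, P):
    """Mixing probabilities for an IMM estimator.

    mu : current model probabilities, mu[i]
    P  : Markov mode-transition matrix, P[i][j] = Prob(mode j | mode i)
    Returns (mix, cbar) where mix[j][i] is the probability that the
    prior estimate of model i feeds model j, and cbar[j] is the
    predicted probability of mode j.
    """
    n = len(mu)
    # Predicted mode probabilities: cbar_j = sum_i P_ij * mu_i
    cbar = [sum(P[i][j] * mu[i] for i in range(n)) for j in range(n)]
    # Mixing weights: mu_{i|j} = P_ij * mu_i / cbar_j
    mix = [[P[i][j] * mu[i] / cbar[j] for i in range(n)] for j in range(n)]
    return mix, cbar
```

Each model's filter is then re-initialized with a weighted combination of the model-conditioned estimates before the mode-matched filtering step.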
This paper develops a multiple-frame multiple-hypothesis tracking (MF-MHT) method and applies it to the problem of maintaining track on a single moving target from dim images of the target scene. From measurements collected over several frames, the MF-MHT method generates multiple hypotheses concerning the trajectory of the target. Taken together, these hypotheses provide a smoothed and reliable estimate of the target state. This work supports TENET, an Air Force Research Laboratory project that is developing nonlinear estimation techniques for tracking. TENET software was used to simulate both target dynamics and sensor measurements over a series of Monte Carlo experiments conducted at various signal-to-noise ratios (SNRs). Results are presented that compare the computational complexity and accuracy of MF-MHT with two previously documented nonlinear approaches to predetection tracking: a finite difference scheme and a particle filter method. Results show that MF-MHT requires about 2-3 dB more SNR to compete with the nonlinear methods on an equal footing.
An algorithm is derived for signature-aided tracking that uses features (e.g., high range resolution radar (HRRR) profiles), or functions of features, in addition to kinematic measurements to associate measurements with known tracks, clutter, or new tracks. The approach taken here is to derive the probability of the measurement-to-track association hypotheses incorporating the likelihood of the features as well as the traditional kinematic measurement likelihood. It is assumed that the probability density function (PDF) of the features (or some function of the features) is available from a library. The probabilistic characterization of the profile PDFs relies on the availability of a class-specific library for each target type, which characterizes the profiles conditioned on the target class from which the profile originated and the aspect at which the profile was obtained. The algorithm is evaluated using the SLAMEM simulation.
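The combined association score described above, the product of the kinematic likelihood and a library feature likelihood, can be sketched as follows (a toy one-dimensional illustration; the track dictionary fields are hypothetical names, and real HRRR features are high-dimensional):

```python
import math

def gaussian(x, mean, var):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def association_score(z_kin, z_feat, track):
    """Joint likelihood of a measurement under one track hypothesis:
    kinematic likelihood times the class-conditional feature likelihood
    drawn from a (hypothetical) profile library."""
    l_kin = gaussian(z_kin, track["pred_pos"], track["innov_var"])
    l_feat = gaussian(z_feat, track["feat_mean"], track["feat_var"])
    return l_kin * l_feat
```

When two tracks are kinematically ambiguous, the feature term breaks the tie, which is the point of signature-aided association.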
We have developed techniques for Simultaneous Localization and Map Building (SLAM) based on the augmented-state Kalman filter, and demonstrated them in real time using laboratory robots. Here we report the results of experiments conducted outdoors in an unstructured, unknown, representative environment, using a van equipped with a laser range finder for sensing the external environment and GPS to provide an estimate of ground truth. The goal is simultaneously to build a map of an unknown environment and to use that map to navigate a vehicle that otherwise would have no way of knowing its location. In this paper we describe the system architecture, the nature of the experimental setup, and the results obtained, which are compared with the estimated ground truth. We show that SLAM is both feasible and useful in real environments. In particular, we explore its repeatability and accuracy, and discuss some practical implementation issues. Finally, we look at the way forward for a real implementation on ground and air vehicles operating in very demanding, harsh environments.
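The augmented-state mechanism underlying this kind of SLAM filter can be sketched by the step that grows the state when a new landmark is first observed (a minimal sketch assuming a 3-state planar vehicle and a range/bearing laser observation; names and the vehicle model are illustrative, not the authors' implementation):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_T(A):
    return [list(row) for row in zip(*A)]

def augment_state(x, P, z_range, z_bearing, R):
    """Append a newly observed landmark to an augmented-state SLAM filter.

    x : [xv, yv, heading] vehicle state; P : 3x3 covariance
    z_range, z_bearing : laser observation of the new landmark
    R : 2x2 observation noise covariance
    """
    th = x[2] + z_bearing
    lx = x[0] + z_range * math.cos(th)   # landmark position from the
    ly = x[1] + z_range * math.sin(th)   # vehicle pose and the observation
    # Jacobians of the landmark-initialization function g(x, z)
    Gx = [[1.0, 0.0, -z_range * math.sin(th)],
          [0.0, 1.0,  z_range * math.cos(th)]]
    Gz = [[math.cos(th), -z_range * math.sin(th)],
          [math.sin(th),  z_range * math.cos(th)]]
    PGxT = mat_mul(P, mat_T(Gx))       # 3x2 vehicle/landmark cross term
    GxP = mat_T(PGxT)                  # Gx P, using symmetry of P
    GxPGxT = mat_mul(GxP, mat_T(Gx))
    GzRGzT = mat_mul(mat_mul(Gz, R), mat_T(Gz))
    Pll = [[GxPGxT[i][j] + GzRGzT[i][j] for j in range(2)] for i in range(2)]
    # Assemble the augmented 5-state vector and 5x5 covariance
    x_aug = x + [lx, ly]
    P_aug = [P[i] + PGxT[i] for i in range(3)] + \
            [GxP[i] + Pll[i] for i in range(2)]
    return x_aug, P_aug
```

The cross-covariance blocks are what let later observations of any landmark improve the vehicle estimate, and vice versa.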
Previously, we have developed techniques for Simultaneous Localization and Map Building based on the augmented-state Kalman filter. Here we report the results of experiments conducted over multiple vehicles, each equipped with a laser range finder for sensing the external environment and a laser tracking system to provide highly accurate ground truth. The goal is simultaneously to build a map of an unknown environment, to use that map to navigate a vehicle that otherwise would have no way of knowing its location, and to distribute this process over several vehicles. We have constructed an on-line, distributed implementation to demonstrate the principle. In this paper we describe the system architecture, the nature of the experimental setup, and the results obtained, which are compared with the estimated ground truth. We show that distributed SLAM has a clear advantage in that it offers a potential super-linear speed-up over single-vehicle SLAM. In particular, we explore the time taken to achieve a given quality of map, and consider the repeatability and accuracy of the method. Finally, we discuss some practical implementation issues.
Goal lattices are a method for ordering the goals of a system and associating with each goal the value of performing it, in terms of how much it contributes to the accomplishment of the topmost goal of the system. Since the lowest goals in the lattice are the real, measurable actions such as sensor observations, this method associates a value with each possible action, allowing one to rank actions that are mutually exclusive. This paper presents the results of using the GMU Goal Lattice Engine (GMUGLE) to enter a set of goals and relative values for a reconnaissance mission management problem. The automatic expansion of the partially ordered set of goals to form a lattice, which results from the strict application of a least upper bound (lub) and greatest lower bound (glb) for each pair of goals, is documented, as is the automatic creation of pseudo-goals.
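The value-apportionment idea, where each goal's value flows down to the actions that accomplish it, might be sketched like this (a toy goal set with hypothetical names; GMUGLE's actual lattice machinery, lub/glb computation, and pseudo-goal creation are not reproduced):

```python
def propagate_values(children, order, top_value=1.0):
    """Flow value down a goal hierarchy.

    children : {goal: {child: weight}}, each goal apportioning its value
               among its immediate children (weights sum to 1)
    order    : goals listed parents-first (a topological order)
    Returns {goal: value}; leaf values rank the measurable actions.
    """
    values = {g: 0.0 for g in order}
    values[order[0]] = top_value  # the topmost goal carries all the value
    for g in order:
        for child, w in children.get(g, {}).items():
            values[child] += w * values[g]
    return values
```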
Modern naval battleforces generally include many different platforms, each with onboard sensors such as radar, ESM, and communications. Sharing information measured by local sensors via communication links across the battlegroup should allow optimal or near-optimal decisions. A fuzzy logic algorithm has been developed that automatically allocates electronic attack (EA) resources in real time. The fuzzy logic approach allows the direct incorporation of expertise, so decisions can be made based on expert rules. Genetic-algorithm-based optimization is conducted to determine the form of the membership functions for the fuzzy root concepts. The resource manager is made up of five parts: the isolated platform model, the multi-platform model, the communication model, the fuzzy parameter selection tree, and the fuzzy strategy tree. Automatic determination of the fuzzy decision tree structure using a genetic program, an algorithm that creates other computer programs, is discussed, and a tree obtained with a genetic program is compared with one constructed from expertise. The automatic discovery through genetic algorithms of multi-platform techniques, rules, and strategies is discussed. Two new multi-platform power allocation algorithms based on fuzzy number theory and linear and nonlinear programming are introduced, and methods of validating the algorithms are examined.
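The fuzzy root concepts whose membership functions the genetic algorithm tunes can be illustrated with a toy rule (hypothetical concepts and breakpoints; the real resource manager's trees are far richer):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function with feet a, d and shoulders b, c;
    a genetic algorithm would tune these breakpoints."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

def attack_priority(threat_range_km, emitter_level):
    """Toy fuzzy rule: priority = AND(close, strong), with fuzzy AND = min."""
    close = trapezoid(threat_range_km, -1.0, 0.0, 20.0, 60.0)
    strong = trapezoid(emitter_level, 0.2, 0.5, 1.5, 2.0)
    return min(close, strong)
```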
The problem of managing swarms of UAVs consists of multi-agent collection (i.e., distributed robust data fusion and interpretation) and multi-agent coordination (i.e., distributed robust platform and sensor monitoring and control). This paper deals with a specific aspect of the latter, namely collision avoidance and trajectory generation in the presence of pop-up threats. These problems are difficult to solve because the set of feasible solutions is non-convex, possibly infinite-dimensional, and defined in terms of an infinite number of constraints. Recent approximation methods use a finite-dimensional parameterization of solutions and impose constraints on a finite grid in time; they result in large feasibility problems and are at best only sufficient. We show that there is no need to grid if the admissible trajectories are restricted to polynomials of degree no larger than a specified bound. A finite-dimensional necessary and sufficient condition, whose size depends only on the assumed degree bound, is derived in this paper. It is used to improve some existing trajectory generation methods, develop new approximation methods, and solve a collision avoidance problem exactly. Our techniques clearly show how certain non-convex constraints arising in path planning can be converted to mixed integer/LMI constraints.
Reliance on Automated Target Recognition (ATR) technology is essential to the future success of Intelligence, Surveillance, and Reconnaissance (ISR) missions. Although benefits may be realized through ATR processing of a single data source, fusion of information across multiple images and multiple sensors promises significant performance gains. A major challenge, as ATR fusion technologies mature, will be the establishment of sound methods for evaluating ATR performance in the context of data fusion. This paper explores the issues associated with evaluations of ATR algorithms that exploit data fusion. Three major areas of concern are examined as we develop approaches for addressing the fusion-based evaluation problem. (1) Characterization of the testing problem: the concept of operating conditions, which characterize the test problem, requires some generalization in the fusion setting. For example, conditions such as articulation or model variant, which are of concern for synthetic aperture radar (SAR) data, may be of minor importance for hyperspectral imaging (HSI) methods; conversely, solar illumination conditions, which have no effect on the SAR signature, are critical for spectral-based target recognition. In addition, the fusion process may introduce new operating conditions, such as registration accuracy. (2) Developing image truth and scoring rules: the introduction of multiple data sources raises questions about what constitutes successful target detection, and ground truth must be associated with multiple data sources to score performance. (3) Performance metrics: new metrics, going beyond simple detection, identification, and false alarm rates, are needed to characterize performance in the context of image fusion. In particular, algorithm developers would benefit from an understanding of the salient features from each data source and how these features interact to produce the observed system performance.
This research demonstrates the application of decision analysis (DA) techniques to decisions made within Automatic Target Recognition (ATR) technology development, with the aim of improving the means by which ATR technologies are evaluated. The first step in this research was to create a flexible decision analysis framework that could be applied to several decisions across different ATR programs evaluated by the Comprehensive ATR Scientific Evaluation (COMPASE) Center of the Air Force Research Laboratory (AFRL). For the purposes of this research, a single COMPASE Center representative provided the value, utility, and preference functions for the DA framework. The framework employs performance measures collected during ATR classification system (CS) testing to calculate value and utility scores. The authors gathered data from the Moving and Stationary Target Acquisition and Recognition (MSTAR) program to demonstrate how the decision framework could be used to evaluate three different ATR CSs. A decision-maker may use the resulting scores to gain insight into any of the decisions that occur throughout the lifecycle of ATR technologies. Additionally, a means of evaluating ATR CS self-assessment ability is presented; this is a new criterion that emerged from this study, for which no existing evaluation metric is known.
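The core of a weighted additive value model of the kind used in such DA frameworks can be sketched as follows (illustrative measures, value functions, and weights, not the COMPASE Center's elicited ones):

```python
def utility_score(measures, value_fns, weights):
    """Weighted additive value model: each performance measure is mapped
    through a single-attribute value function onto [0, 1], then combined
    with elicited weights."""
    return sum(w * value_fns[m](measures[m]) for m, w in weights.items())
```

Swapping in different value functions or weights lets the same framework score different decisions without re-testing the classification systems.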
The wavelet domain is explored for use in image correspondence correction: the alteration of one image to correspond more closely to a matching image taken at a different time and under different photometric conditions. The ability to perform such an alteration would be highly useful for applications such as surveillance, terrain database collection, exploration, and map building. We explore the use of both global and localized features, in both the wavelet and spatial domains, for this task. Global features may be able to correct for image differences arising from global changes, such as lighting conditions, but may have difficulty with locally varying conditions. We show that local features, on the other hand, can be more sensitive to noise than global features, and we propose fixes to this problem.
Fusing information from sensors with very different phenomenology is an attractive and challenging option for autonomous target acquisition (ATA) systems, because correct target detections should correlate between sensors while false alarms might not. In this paper, we present a series of algorithms for detecting and segmenting targets from their background in passive millimeter wave (PMMW) and laser radar (LADAR) data. PMMW sensors provide a consistent signature for metallic targets and can operate effectively under adverse weather conditions; however, they exhibit poor angular resolution. LADAR sensors produce high-resolution range and reflectance images, but are sensitive to adverse weather conditions. Sensor fusion techniques are applied with the goal of maintaining a high probability of detection while decreasing the false alarm rate.
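The fusion goal, keeping detections that correlate across the two phenomenologies, can be sketched with a simple confirmation gate (an illustrative rule, not the paper's algorithms; `gate` stands in for the registration tolerance between the two sensors):

```python
import math

def fuse_detections(pmmw_dets, ladar_dets, gate):
    """Confirm a PMMW detection only if a LADAR detection lies within
    `gate` of it in a common coordinate frame. Correct targets should
    appear in both sensors; independent false alarms rarely coincide,
    so the fused false alarm rate drops."""
    fused = []
    for p in pmmw_dets:
        if any(math.hypot(p[0] - l[0], p[1] - l[1]) <= gate for l in ladar_dets):
            fused.append(p)
    return fused
```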
There have been many years of research and development in the Automatic Target Recognition (ATR) community. This development has resulted in numerous algorithms to perform target detection automatically. The morphing of the ATR acronym to Aided Target Recognition provides a succinct commentary regarding the success of the automatic target recognition research. Now that the goal is aided recognition, many of the algorithms which were not able to provide autonomous recognition may now provide valuable assistance in cueing a human analyst where to look in the images under consideration. This paper describes the MUSIC system being developed for the US Air Force to provide multisensor image cueing. The tool works across multiple image phenomenologies and fuses the evidence across the set of available imagery. MUSIC is designed to work with a wide variety of sensors and platforms, and provide cueing to an image analyst in an information-rich environment. The paper concentrates on the current integration of algorithms into an extensible infrastructure to allow cueing in multiple image types.
A technique is presented for resolving closely spaced objects when the point spread function is not well known. The technique uses a Bayesian approach without the use of contrived penalty terms for model complexity.
Multisensor Fusion Methodologies and Applications I
In conventional single-sensor, single-target statistics, many techniques depend on the ability to apply Newtonian calculus to functions of a continuous variable, such as the posterior density, the sensor likelihood function, and the Markov motion-transition density. Unfortunately, such techniques cannot be directly generalized to multitarget situations, because conventional multitarget density functions f(X) are inherently discontinuous with respect to changes in target number. That is, the multitarget state variable X experiences discontinuous jumps in its number of elements: X = ∅, X = {x1}, X = {x1, x2}, and so on. In this paper we show that it is often possible to render a multitarget density function f(X) continuous and differentiable by extending it to a function defined on a fully continuous multitarget state variable. This is accomplished by generalizing the concept of a point target, with state vector x, to that of a point target-cluster, with augmented state vector (a, x), interpreted as multiple targets co-located at target-state x whose expected number is a ≥ 0. Consequently, it becomes possible to define a Newtonian differential calculus of multitarget functions that can potentially be used in developing practical computational techniques.
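The extension can be summarized as follows (notation reconstructed from the description above; the ring accent is an assumed stand-in for the paper's augmented-state symbol, and the second line is one natural consistency condition rather than a quoted result):

```latex
% A point target with state x generalizes to a point target-cluster
% with augmented state: multiple targets co-located at x, with
% expected number a >= 0.
\mathring{x} = (a, x), \qquad a \ge 0
% A conventional multitarget density f(X), X = \{x_1,\dots,x_n\}, is
% recovered when every cluster carries exactly one expected target:
f(\{x_1,\dots,x_n\}) = \mathring{f}\bigl(\{(1,x_1),\dots,(1,x_n)\}\bigr)
```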
In certain applications it is sometimes not necessary to detect and track individual targets with accuracy. Examples include applications with very large track densities, in which the overall distribution of forces is of greater interest than individual targets; or group target processing, in which detection and tracking of force-level objects (brigades, battalions, etc.) is of greater interest than detection and tracking of the individual targets which constitute them. The usual strategy is to attempt to detect and track individual targets first and then deduce group behavior from them. The approach described in this paper employs the opposite philosophy: it detects and tracks target groupings first and sorts out individual targets only as data quantity and quality permits. It is based on a multitarget statistical analog of the simplest approximate single-target filter: the constant-gain Kalman filter. Our approximate multitarget filter propagates a first-order statistical moment of the entire multitarget system. This moment, the probability hypothesis density (PHD), is the density function whose integral in any region of state space is the expected number of targets in the region. We describe the behavior of an implementation of the PHD filter in some simple bulk-tracking scenarios.
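The defining property of the PHD stated above can be written directly (standard notation, with X the random multitarget state set and S any region of single-target state space):

```latex
% The PHD D(x) is the first-order multitarget moment: its integral
% over any region S is the expected number of targets in S.
\int_{S} D(x)\,dx \;=\; \mathrm{E}\bigl[\,\lvert X \cap S \rvert\,\bigr]
```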
For the past two years in this conference, we have described techniques for robust identification of motionless ground targets using single-frame Synthetic Aperture Radar (SAR) data. By robust identification, we mean the problem of determining target ID despite the existence of confounding, statistically uncharacterizable signature variations. Such variations can be caused by effects such as mud, dents, attachment of nonstandard equipment, nonstandard attachment of standard equipment, turret articulations, etc. When faced with such variations, optimal approaches can often behave badly, e.g., by misidentifying a target type with high confidence. A basic element of our approach has been to hedge against unknowable uncertainties in the sensor likelihood function by specifying a random error bar (random interval) for each value of the likelihood function corresponding to any given value of the input data. In this paper, we summarize our recent results, including a description of the fuzzy maximum a posteriori (MAP) estimator. The fuzzy MAP estimate is essentially the set of conventional MAP estimates that are plausible, given the assumed uncertainty in the problem. Despite its name, the fuzzy MAP is derived rigorously from first probabilistic principles based on random interval theory.
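One simple way to realize "the set of plausible MAP estimates" from interval-valued posteriors is interval dominance (a sketch, not necessarily the paper's random-interval construction; the class names and numbers are invented):

```python
def fuzzy_map(posterior_lo, posterior_hi):
    """Set of plausible MAP estimates under interval-valued posteriors:
    keep every class whose upper posterior reaches at least the best
    lower posterior. A confident single answer emerges only when one
    class dominates the intervals of all the others."""
    best_lo = max(posterior_lo.values())
    return {c for c, hi in posterior_hi.items() if hi >= best_lo}
```

Wide intervals (large signature uncertainty) produce a larger plausible set, which is exactly the hedging behavior sought against uncharacterizable variations.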
The paper develops a multirate filtering/smoothing approach for out-of-sequence (OOS) measurements. There are two major steps in OOS filtering/smoothing: retrospection from the current time to the OOS time (smoothing), and updating the current estimate with the OOS measurement (filtering); together these impose a high computation and memory burden on implementations of OOS filtering. The multirate approach provides an excellent framework for efficient information retrospection and forward update. A multirate interacting multiple model (MRIMM) filter is developed to track a target, with or without maneuvering behavior, in an environment of out-of-sequence measurement reporting.
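The two steps can be written schematically for a delayed measurement z(τ) with τ < t_k (process noise over the lag and the gain derivation are omitted; this shows the structure, not the paper's multirate formulation):

```latex
% 1. Retrospection (smoothing): carry the current estimate back
%    to the OOS time:
\hat{x}(\tau \mid k) = F(\tau, t_k)\,\hat{x}(k \mid k)
% 2. Update (filtering): correct the current estimate with the
%    OOS measurement:
\hat{x}(k \mid k,\tau) = \hat{x}(k \mid k)
  + W(\tau)\bigl[\, z(\tau) - H\,\hat{x}(\tau \mid k) \,\bigr]
```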
The work presented here is a continuation of research first reported in Mahler et al. Our earlier efforts included integrating the Statistical Features algorithm with a Bayesian nonlinear filter, allowing simultaneous determination of target position, velocity, pose, and type via maximum a posteriori estimation. We then considered three alternative classifiers: the first based on a principal component decomposition, the second on a linear discriminant approach, and the third on a wavelet representation. In addition, preliminary results were given regarding the assignment of a measure of confidence to the output of the wavelet-based classifier. In this paper we continue to address the problem of target classification based on high range resolution radar signatures. In particular, we examine the performance of a variant of the principal-component-based classifier as the number of principal components is varied, quantifying the performance in terms of the Bhattacharyya distance. We also present further results regarding the assignment of confidence values to the output of the wavelet-based classifier.
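The Bhattacharyya distance used here to quantify classifier performance has a closed form for Gaussians; the univariate case is shown below (a standard formula, illustrating the separability measure rather than the paper's multivariate computation):

```python
import math

def bhattacharyya_gauss(m1, v1, m2, v2):
    """Bhattacharyya distance between two univariate Gaussians
    N(m1, v1) and N(m2, v2); larger values mean the two class
    distributions are easier to separate."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * math.sqrt(v1 * v2))))
```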
In classification problems where multiple features are extracted from the observations of one or more sensors, the features often exhibit some degree of correlation, or a functional relationship. Frequently, this is expected and arises because of the mapping between the parameters that define the object's equation of state and the sensor observables. Therefore, it is of interest to develop representations of the objects and classification algorithms that exploit the correlations between the features. An approach for developing these types of representations makes use of Differential Geometry. In this approach, the objects are represented as a mean surface in feature space. When the functional relationship between features can be expressed analytically, Differential Geometry is used to develop analytical expressions for class surfaces and classification algorithms. More complex problems require the use of numerical techniques. In this paper, some of the mathematical foundations of this approach are reviewed. In an example, tensor product non-uniform rational b-splines are employed to develop the description of class surfaces along with the associated metric tensor and geodesic equations, leading to classification algorithms. The resulting Surface Classifier performance is compared with that of a traditional Quadratic Classifier.
Particle-based nonlinear filters have proven to be effective and versatile methods for computing approximations to difficult filtering problems. We introduce a novel hybrid particle method, thought to strike an excellent compromise between the non-adaptive nature of weighted particle methods and the overly random resampling in classical interacting particle methods, and compare this new method to our previously introduced refining branching particle filter. Our experiments involve various fixed numbers of particles and compare the computational efficiency of our new method to the incumbent. The hybrid method is demonstrated to outperform two previous particle filters on our simulated test problems. To highlight the flexibility of particle filters, we choose to test our methods on a rectangularly-constrained Markov signal that does not satisfy a typical stochastic equation but rather a Skorohod, local time formulation. Whereas normal diffusive behavior occurs in the interior of the rectangular domain, immediate reflections are enforced at the boundary. The test problem involves a fish signal with boundary reflections and is motivated by the fish farming industry.
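As a rough illustration of the reflecting-boundary setting described above, here is one predict/weight/resample cycle of a plain bootstrap particle filter for a diffusion confined to a rectangle by instantaneous reflection at the walls. This is a generic sketch under assumed Gaussian motion and observation noise, not the authors' hybrid or branching method:

```python
import numpy as np

rng = np.random.default_rng(0)

def reflect(x, lo, hi):
    """Fold coordinates back into [lo, hi] (instantaneous boundary reflection)."""
    x = np.abs(x - lo) + lo          # reflect at the lower wall
    x = hi - np.abs(hi - x)          # reflect at the upper wall
    return x

def bootstrap_step(particles, obs, sigma_motion, sigma_obs, lo, hi):
    """One cycle of a bootstrap particle filter for a signal confined
    to the square [lo, hi]^2; diffusive in the interior, reflected at walls."""
    # predict: diffuse, then reflect strays back into the domain
    particles = particles + rng.normal(0.0, sigma_motion, particles.shape)
    particles = reflect(particles, lo, hi)
    # weight by a Gaussian observation likelihood centered on the observation
    sq = np.sum((particles - obs) ** 2, axis=1)
    w = np.exp(-0.5 * sq / sigma_obs ** 2)
    w /= w.sum()
    # multinomial resampling
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

The reflection map assumes step sizes small relative to the domain (a single bounce per wall), which holds for the diffusive interior behavior the abstract describes.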
For the last three years at this conference we have been describing the implementation of a unified, scientific approach to performance estimation for various aspects of data fusion: multitarget detection, tracking, and identification algorithms; sensor management algorithms; and adaptive data fusion algorithms. The proposed approach is based on finite-set statistics (FISST), a generalization of conventional statistics to multisource, multitarget problems. Finite-set statistics makes it possible to directly extend Shannon-type information metrics to multisource, multitarget problems in such a way that information can be defined and measured even though any given end-user may have conflicting or even subjective definitions of what "informative" means. In this presentation, we will show how to extend our previous results to two new problems: first, evaluating the robustness of multisensor, multitarget algorithms; and second, evaluating the performance of multisource-multitarget threat assessment algorithms.
In multi-hypothesis target tracking, given the time-predicted tracks, we consider the sensor management problem of directing the sensors' Field of View (FOV) in such a way that the target detection rate is improved. Defining a (squared) distance between a sensor and a track as the (squared) Euclidean distance between the centers of their respective Gaussian distributions, weighted by the sum of the covariance matrices, the problem is formulated as the minimization of the Hausdorff distance from the set of tracks to the set of sensors. An analytical solution for the single sensor case is obtained, and is extended to the multiple sensors case. This extension is achieved by performing the following: (1) it is first proved that for an optimal solution, there exists a partition of the set of tracks into subsets, and an association of each subset with a sensor, such that each subset-sensor pair is optimal in the Hausdorff distance sense; (2) a brute force search is then conducted to check all possible subset-partitions of the tracks as well as the permutations of sensors; (3) for each subset-sensor pair, the optimal solution is obtained analytically; and (4) the configuration with the smallest Hausdorff distance is declared the optimal solution for the given multi-target multi-sensor problem. Some well-established loopless algorithms for generating set partitions and permutations are implemented to reduce the computational complexity. A simulation result demonstrating the proposed sensor management algorithm is also presented.
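The two building blocks of the formulation above, the covariance-weighted squared distance and the directed Hausdorff distance from tracks to sensors, can be sketched directly. The partition/permutation search and the analytical single-sensor solution are omitted; the function names are illustrative, not the authors':

```python
import numpy as np

def weighted_sq_dist(mu_s, cov_s, mu_t, cov_t):
    """Squared Euclidean distance between the sensor-FOV center and the
    track center, weighted by the sum of their covariance matrices."""
    d = np.asarray(mu_t, float) - np.asarray(mu_s, float)
    return float(d @ np.linalg.solve(cov_s + cov_t, d))

def hausdorff_tracks_to_sensors(sensors, tracks):
    """Directed Hausdorff distance from the set of tracks to the set of
    sensors: the worst-off track, measured to its best-matching sensor.
    Both arguments are lists of (mean, covariance) pairs."""
    return max(
        min(weighted_sq_dist(ms, cs, mt, ct) for ms, cs in sensors)
        for mt, ct in tracks
    )
```

Minimizing this quantity over sensor placements pushes every track to be well covered by at least one sensor, which is exactly the worst-case coverage criterion the abstract optimizes.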
Multisensor Fusion Methodologies and Applications II
The 1999 Joint Directors of Laboratories (JDL) revised model incorporates five levels for fusion methodologies: level 0 for preprocessing, level 1 for object refinement, level 2 for situation refinement, level 3 for threat refinement, and level 4 for process refinement. The model was developed to define the fusion process. However, the model addresses only automatic processing by a machine and does not account for human processing. Typically, a fusion architecture supports a user, and thus we propose a Level 5, User Refinement, to delineate the human from the machine in process refinement. Typical human-in-the-loop models do not deal with a machine fusion process, but only present information to the human on a display. We seek to address issues for designing a fusion system which supports a user: trust, workload, attention and situation awareness. In this paper, we overview the need for a Level 5, the issues concerning the human for realizable fusion architectures, and examples where the human is instrumental in the fusion process, such as group tracking.
Distributed decision fusion has attracted considerable interest for many years, beginning with Tenney and Sandell in the early 1980s. Since then, Bayesian detection fusion has made a great deal of progress, adding considerable depth and flexibility to the decision process. It has also added a corresponding degree of complexity to the evaluation process. This paper will address these complexities in terms of fusion performance. In addition, we will show that the CFAR process can be an effective means of fusion rule selection. The conditions under evaluation involve the use of three image change detection algorithms (two using SAR images, and one using electro-optical imagery). Each change detection algorithm provides a unique observation of the environment. The Adaptive Boolean Decision Fusion (ABDF) process provides a basis for fusing and interpreting these change events.
The automatic detection of significant changes in imagery is important in a number of intelligence, surveillance, and reconnaissance (ISR) tasks. An automated capability known as the Order of Battle Change Fusion (OBCF) system is described for detecting, fusing, and tracking changes over time in multi-sensor imagery. OBCF uses multiple change detection algorithms to exploit different aspects of change in multi-sensor images, normalcy models that provide a physical basis for detecting change and estimating the performance of change detection algorithms, algorithm fusion to combine the results from multiple change detection algorithms in order to enhance and maintain performance over changing operating conditions, and stationary tracking to provide a seamless history of image changes over time across different sensing modalities. Preliminary experimental results using electro-optical (EO) and synthetic aperture radar (SAR) imagery are presented.
Based on a multi-valued mapping from a probability space (X, Ω, μ) to a space S, a probability measure over the class 2^S of subsets of S is defined. Then, using the product combination rule for multiple information sources, the Dempster-Shafer combination rule is derived. The investigation of the two rules indicates that the Dempster rule and the Dempster-Shafer combination rule are for different spaces. Some problems of the Dempster-Shafer combination rule are interpreted via the product combination rule that is used for multiple independent information sources. A technique to improve the method is proposed. Finally, an error in multi-valued mappings in [20] is pointed out and proved.
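The Dempster-Shafer combination rule itself, which the abstract derives from the product combination rule, can be sketched directly with mass functions represented as dicts over frozenset focal elements. This is the standard rule, not the authors' improved technique:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster-Shafer combination of two basic mass assignments.
    m1, m2: dicts mapping frozensets (focal elements) to mass in [0, 1]."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # product mass landing on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    k = 1.0 - conflict                   # Dempster normalization constant
    return {s: m / k for s, m in combined.items()}
```

The renormalization by 1 - conflict is exactly the step whose behavior under highly conflicting sources motivates the critiques and improvements discussed in abstracts like this one.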
In the field of pattern recognition, more specifically in the area of supervised, feature-vector-based classification, various classification methods exist but none of them always returns correct results for every kind of data. Each classifier behaves differently, having its own strengths and weaknesses. Some are more efficient than others in particular situations. The performance of these individual classifiers can be improved by combining them into one multiple classifier. In order to make more realistic decisions, the multiple classifier can analyze internal values generated by each classifier and can also rely on statistics learned from previous tests, such as reliability rates and confusion matrices. The individual classifiers studied in this project are Bayes, k-nearest neighbors, and neural network classifiers. They are combined using the Dempster-Shafer theory. The problem reduces to finding weights that best represent the individual classifier evidences. A particular approach has been developed for each of them, and for all of them it has proven better to rely on the classifiers' internal information rather than on statistics. When tested on a database comprising 8 different kinds of military ships, represented by 11 features extracted from FLIR images, the resulting multiple classifier gave better results than the others reported in the literature and tested in this work.
Situation analysis is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of situation awareness, for the decision maker. Data fusion is a key enabler to meeting the demanding requirements of military situation analysis support systems. According to the data fusion model maintained by the Joint Directors of Laboratories' Data Fusion Group, impact assessment estimates the effects on situations of planned or estimated/predicted actions by the participants, including interactions between action plans of multiple players. In this framework, the appraisal of actual or potential threats is a necessary capability for impact assessment. This paper reviews and discusses in detail the fundamental concepts of threat analysis. In particular, threat analysis generally attempts to compute, for the individual tracks, some threat value that estimates the degree of severity with which engagement events will potentially occur. Presenting relevant tracks to the decision maker in some threat list, sorted from the most threatening to the least, is clearly in line with the cognitive demands associated with threat evaluation. A key parameter in many threat value evaluation techniques is the Closest Point of Approach (CPA). Along this line of thought, threatening tracks are often prioritized based upon which ones will reach their CPA first. Hence, the Time-to-CPA (TCPA), i.e., the time it will take for a track to reach its CPA, is also a key factor. Unfortunately, a typical assumption for the computation of the CPA/TCPA parameters is that the track velocity will remain constant. When a track is maneuvering, the CPA/TCPA values will change accordingly. These changes will in turn impact the threat value computations and, ultimately, the resulting threat list. This is clearly undesirable from a command decision-making perspective.
In this regard, the paper briefly discusses threat value stabilization approaches based on neural networks and other mathematical techniques.
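The CPA/TCPA quantities identified above as key threat-evaluation parameters have a simple closed form under the constant-velocity assumption that the paper critiques. A minimal sketch, assuming a stationary defended asset (function and argument names are illustrative):

```python
import numpy as np

def cpa_tcpa(track_pos, track_vel, own_pos):
    """Closest Point of Approach (distance) and Time-to-CPA for a
    constant-velocity track relative to a stationary asset at own_pos."""
    r = np.asarray(track_pos, float) - np.asarray(own_pos, float)
    v = np.asarray(track_vel, float)
    vv = v @ v
    if vv == 0.0:                        # stationary track: closest point is now
        return float(np.linalg.norm(r)), 0.0
    t = max(0.0, -(r @ v) / vv)          # clamp: CPAs in the past are not pending
    cpa = float(np.linalg.norm(r + t * v))
    return cpa, t
```

Because both outputs depend on the current velocity vector, any track maneuver instantly changes them, which is the source of the threat-list instability the paper addresses.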
The author previously published a unified perceptual reasoning system framework for adaptive sensor fusion and situation assessment. This framework is re-examined to highlight the role of human perceptual reasoning and to establish the relationship between human perceptual reasoning and the Joint Directors of Laboratories (JDL) fusion levels. Mappings between the fusion levels and the elements of perceptual reasoning are defined. Methods to populate the knowledge bases associated with each component of the perceptual reasoning system are highlighted. The concept and application of perception, the resultant system architecture and its candidate renditions using distributed interacting software agents (ISA) are discussed. The perceptual reasoning system is shown to be a natural governing mechanism for extracting, associating and fusing information from multiple sources while adaptively controlling the fusion level processes for optimum fusion performance. The unified modular system construct is shown to provide a formal framework to accommodate various implementation alternatives. The application of this architectural concept is illustrated for distributed fusion systems architectures and is used to illustrate the benefits of the adaptive perceptual reasoning system concept.
Discriminant Feature Extraction (DFE) is widely recognized as an important pre-processing step in classification applications. Most DFE algorithms are linear and thus can only exploit the linear discriminant information among the different classes. Recently, there have been several promising attempts to develop nonlinear DFE algorithms, among which is Kernel-based Feature Extraction (KFE). The efficacy of KFE has been experimentally verified on both synthetic data and real problems. However, KFE has some known limitations. First, KFE does not work well for strongly overlapped data. Second, KFE employs all of the training set samples during the feature extraction phase, which can incur significant computational cost when applied to very large datasets. Finally, KFE can result in overfitting. In this paper, we propose a substantial improvement to KFE that overcomes the above limitations by using a representative dataset, which consists of critical points that are generated from data-editing techniques and centroid points that are determined by using the Frequency Sensitive Competitive Learning (FSCL) algorithm. Experiments show that this new KFE algorithm performs well on significantly overlapped datasets, and it also reduces computational complexity. Further, by controlling the number of centroids, the overfitting problem can be effectively alleviated.
Georegistration of an image typically requires either 3-5 control points measured in the target image or 6-10 tie points to at least two georegistered reference images. Often control points are not available, and tie points are difficult to find across sensor types, particularly for automatic processes. This work shows that registration can be achieved by measuring 3-5 lines in a target image and two reference images. The same ultimate registration accuracy can be achieved with tie points alone, lines alone, or a combination of both. Line triangulation enables automatic cross-sensor georegistration since lines can be found reliably across sensor types. Lines are measured by indicating two or more image positions on corresponding lines in each image. There is no need to identify corresponding points between images. There is no need for a priori line information, but such information can be exploited. Initial estimates of the lines can be made from the image measurements and a priori sensor models. The evaluation of image registration accuracy is discussed. Examples of image registration with line triangulation are presented.
Feature extraction is a major processing step in pattern recognition. To classify similar objects into the correct object class, the selected image features should exhibit the desired object invariance. This means any two objects which are similar according to the given similarity postulate should have identical features, so that the classifier maps them to the same object class. If the similarity postulate requires invariance under translation, scaling, and rotation, then geometric moments have been shown to exhibit appropriate properties. As an extension to the traditional use of geometric moments, it is possible to assign physical dimensions to geometric moments. By this means the application of dimensional analysis becomes possible. For the case of color images, the spectral power distribution can be used directly to derive dimensionless features for color objects. The construction of these dimensionless color features and their properties for color object classification will be discussed.
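The translation- and scale-normalized central moments underlying the invariance discussion above can be sketched for a grayscale image as follows. This is the standard construction (rotation invariants such as Hu moments are built on top of it), not the paper's dimensional-analysis extension:

```python
import numpy as np

def normalized_central_moments(img, max_order=3):
    """Central moments of a grayscale image, made translation-invariant by
    shifting to the intensity centroid and scale-invariant by dividing by
    mu00 ** (1 + (p + q) / 2)."""
    img = np.asarray(img, float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()                                  # total intensity
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    eta = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            if 2 <= p + q <= max_order:              # orders 0 and 1 are trivial
                mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
                eta[(p, q)] = mu / m00 ** (1 + (p + q) / 2)
    return eta
```

Because the centroid shift cancels any whole-pixel translation exactly, the same object placed at two positions in a larger image yields identical feature values.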
The use of meteorological radar reflectivity Z to estimate rainfall rate R is approached using a different perspective from the classical Z-R relation. Simultaneous rain measurements from different sensors are combined to construct a model that estimates the vertical air velocity by minimizing the error in reflectivity between the different sensors. This model is based on the fact that rain rate and reflectivity both depend on integrals of the rain drop size distribution (DSD), but only R depends on the vertical air velocity. This study attempts to validate the vertical air velocity estimates and quantify their effects on the rainfall rate estimation. The Disdrometer Flux Conservation (DFC) model uses measurements from disdrometers and other sensors such as vertically pointing radar profilers and scanning radars. Disdrometers measure a drop size flux Φ(D), defined as the number of drops passing a horizontal surface per unit time, per unit area, per drop size. The flux is equal to the product of the drop size distribution near the ground N_G(D) and the drop velocity near the ground v_G(D). The drop velocity is the difference between the droplet terminal velocity and the vertical component of the wind velocity, which varies with altitude. The estimates derived from the DFC model using two pairwise-selected sensors are used to study the change of reflectivity and vertical air velocity with altitude. Sensitivity tests for the DFC model are also discussed, and the outcomes are validated by comparison with independent profiler vertical velocity observations.
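The flux relation stated above, Φ(D) = N_G(D) · v_G(D) with drop velocity equal to terminal velocity minus the vertical air velocity, can be sketched directly. The exponential terminal-velocity fit used here is a common empirical formula and an assumption of this sketch, not something specified in the abstract:

```python
import numpy as np

def terminal_velocity(D):
    """Raindrop terminal fall speed (m/s) for diameter D (mm), using a
    common empirical exponential fit (an assumption of this sketch)."""
    return 9.65 - 10.3 * np.exp(-0.6 * D)

def drop_flux(N_G, D, w):
    """Drop flux Phi(D) = N_G(D) * v_G(D), where the fall speed near the
    ground is terminal velocity minus the vertical air velocity w
    (w > 0 denotes an updraft, which slows the drops)."""
    return N_G * (terminal_velocity(D) - w)
```

The point of the DFC model is visible even in this sketch: the measured flux is sensitive to w, so comparing flux-derived and radar-derived quantities constrains the vertical air velocity.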
A nonlinear mean square estimation algorithm for cross-sensor image fusion and spectral anomaly detection is described. The algorithm can be used to enhance a low resolution image with a higher resolution coregistered multispectral image, and to detect anomalies between spectral bands (features in one spectral band that do not occur in other bands). Experimental results for Landsat data are presented illustrating the spatial enhancement of thermal imagery, the detection of thermal anomalies (heat sources), and the detection of smoke plumes.