This course describes sensor and data fusion methods that improve the probability of correct target detection, classification, and identification. The methods allow information to be combined from collocated or dispersed sensors that utilize similar or different operating phenomenologies. Examples provide insight into how different phenomenology-based sensors enhance a data fusion system. After introducing the JDL data fusion model, sensor and data fusion architectures are described in terms of sensor-level, central-level, and hybrid fusion, and pixel-, feature-, and decision-level fusion. The exploration of data fusion algorithm taxonomies provides an introduction to the algorithms and methods utilized for detection, classification, identification, and state estimation and tracking, i.e., the Level 1 fusion processes. These algorithms support the higher-level data fusion processes of situation and impact assessment. Subsequent sections of the course more fully develop the Bayesian, Dempster-Shafer, and voting logic data fusion algorithms. Examples abound throughout the material to illustrate the major techniques being presented. The illustrative problems demonstrate that many of the data fusion methods can be applied to combine information from almost any grouping of sensors, as long as the input data are of the types required by the fusion algorithm.
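As a minimal illustration of the Bayesian fusion approach named above, the sketch below combines class likelihoods from two sensors with a prior via Bayes' rule, under an assumption of conditional independence between the sensors. All numerical values here are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

# Hypothetical priors and per-sensor likelihoods for three target classes.
prior = np.array([0.5, 0.3, 0.2])        # P(class) before any measurement
sensor_a = np.array([0.7, 0.2, 0.1])     # P(measurement_a | class)
sensor_b = np.array([0.6, 0.3, 0.1])     # P(measurement_b | class)

# Bayes' rule with conditionally independent sensors:
# posterior ∝ prior × likelihood_a × likelihood_b
unnormalized = prior * sensor_a * sensor_b
posterior = unnormalized / unnormalized.sum()
print(posterior)  # fused class probabilities, summing to 1
```

Because both sensors favor the first class, the fused posterior concentrates probability on it more strongly than either sensor alone would.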
This course describes target tracking and state estimation methods commonly used when multiple radar systems provide measurement and/or track data for data fusion. The process usually begins with a decision to apply a sensor-data-driven or target-track-driven approach for estimating a target's true state, followed by selection of data or track correlation and association techniques. Several techniques for data and track association are introduced, including the deterministic Nearest Neighbor (NN) and Global NN algorithms, and the probabilistic procedures of Joint Probabilistic Data Association, Deferred-Decision Multiple Hypothesis Tracking, Track Splitting, and Maximum Likelihood. Position, kinematic, and attribute estimation are introduced as complementary techniques for combining multiple measurements to improve the outcomes of the state estimation process. Subsequent sections of the course introduce radar tracking system functions and design constraints; attributes of radar detections, measurements, and tracks; state space and coordinate conversion procedures required in multisensor tracking systems; multiple-sensor registration and its impact on tracking accuracy; and the sequential probability ratio test for track initiation.
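The Global NN association idea mentioned above can be sketched as an assignment problem: pick the measurement-to-track pairing that minimizes total cost, then reject pairs that fail a gate. The positions, gate threshold, and use of plain Euclidean distance (rather than the statistical distance a real tracker would use) are all simplifying assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical predicted track positions and new measurements (x, y).
tracks = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 5.0]])
measurements = np.array([[0.5, -0.2], [9.4, 10.3], [30.0, 30.0]])

# Cost matrix: Euclidean distance from every track to every measurement.
cost = np.linalg.norm(tracks[:, None, :] - measurements[None, :, :], axis=2)

# Globally optimal one-to-one assignment (Hungarian algorithm).
row, col = linear_sum_assignment(cost)

# Simple distance gate: drop assignments that are implausibly far apart.
gate = 3.0  # hypothetical threshold
pairs = [(int(t), int(m)) for t, m in zip(row, col) if cost[t, m] <= gate]
print(pairs)  # [(0, 0), (1, 1)] — the distant measurement is rejected
```

Unlike greedy NN, which assigns each track its closest measurement independently and can produce conflicts, the global assignment minimizes the summed cost over all pairings at once.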
The next units define Kalman filtering as a special case of the Bayes filter that estimates the target's true state at the predicted time of the next observation using a linear combination of a prior state estimate and a weighted difference between an actual noisy measurement and a measurement prediction. The Kalman filter recursive equations, filtering process, filter initialization procedures, and error covariance computation are described. A discussion of the need and methods for maintaining the Kalman gain at a sufficiently large value is provided, and models for the process noise are introduced. Alternatives to the Kalman filter are noted for application to nonlinear systems. When a tracked object engages in a maneuver, it is often necessary to introduce additional kinematic models that account for the possible maneuvers. Thus, the Interacting Multiple Model (IMM) approach is discussed as a method for handling this situation. The concluding sections of the course review multiple-sensor tracking architectures, the maturity of data fusion systems, and continuing challenges in fusion system assessment.
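The update step described above, where the new estimate is the prediction plus a gain-weighted difference between the noisy measurement and the predicted measurement, can be sketched as a single predict/update cycle for a constant-velocity target in one dimension. All matrices and measurement values here are hypothetical placeholders.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                # measurement model: position only
Q = np.array([[0.01, 0.0], [0.0, 0.01]])  # process noise covariance (assumed)
R = np.array([[0.25]])                    # measurement noise covariance (assumed)

x = np.array([[0.0], [1.0]])              # prior state estimate
P = np.eye(2)                             # prior error covariance

# Predict the state to the time of the next observation.
x_pred = F @ x
P_pred = F @ P @ F.T + Q

# Update: prediction plus the Kalman-gain-weighted innovation
# (actual noisy measurement minus predicted measurement).
z = np.array([[1.2]])                     # actual noisy position measurement
S = H @ P_pred @ H.T + R                  # innovation covariance
K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
x = x_pred + K @ (z - H @ x_pred)         # updated state estimate
P = (np.eye(2) - K @ H) @ P_pred          # updated error covariance
```

The updated position lies between the predicted position and the measurement, pulled toward whichever the gain weights more heavily, and the error covariance shrinks after the measurement is incorporated.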