Model-based multisource fusion for exploitation, classification, and recognition
17 May 2006
A model-based multi-sensor fusion framework has previously been developed that supports improved target recognition by fusing target signature information obtained from sensor imagery [1], [2]. Image-based signature features, however, are not the only source of information that a target recognition system may exploit. This paper reviews the key features of the model-based fusion framework and shows how it can be expanded to support information derived from imaging sensors as well as data from other, non-imaging sources. The expanded model-based multi-source framework supports not only the combination of image data, such as Synthetic Aperture Radar (SAR) and electro-optical (EO) imagery, but also various types of non-image data that may be derived from those or other sensor measurements. The paper illustrates the flexibility of the model-based framework by describing the combination of spatial information from an imaging sensor with scattering characteristics derived from polarimetric phase history data. Multi-source fusion is achieved by relating signature features to specific structural elements of the 3-D target geometry. The 3-D model serves as a sensor-neutral, view-independent common reference for combining multi-source information.
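The abstract's central idea, using the 3-D target model as a sensor-neutral common reference, can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: all names (`ModelElement`, `register_feature`, `fuse`), the per-source confidence scores, and the noisy-OR combination rule are assumptions chosen for clarity.

```python
# Hypothetical sketch: fusing multi-source evidence on a shared 3-D target model.
# Each structural element of the geometry (e.g., a facet or part) accumulates
# evidence from multiple sources; the model itself is the view-independent,
# sensor-neutral index that lets heterogeneous features be combined.
from dataclasses import dataclass, field

@dataclass
class ModelElement:
    """A structural element of the 3-D target geometry (illustrative)."""
    name: str
    # Maps source name -> confidence in [0, 1] that this element was observed.
    evidence: dict = field(default_factory=dict)

def register_feature(element: ModelElement, source: str, confidence: float) -> None:
    """Attach a signature feature from one source, keeping the strongest report."""
    element.evidence[source] = max(confidence, element.evidence.get(source, 0.0))

def fuse(element: ModelElement) -> float:
    """Combine per-source confidences with a simple noisy-OR rule:
    the element is unsupported only if every source fails to support it."""
    miss = 1.0
    for c in element.evidence.values():
        miss *= (1.0 - c)
    return 1.0 - miss

# Example: a spatial feature from SAR imagery and a scattering feature derived
# from polarimetric phase history data, both tied to the same structural element.
turret = ModelElement("turret")
register_feature(turret, "sar_spatial", 0.7)
register_feature(turret, "polarimetric_scattering", 0.6)
score = fuse(turret)  # 1 - (0.3 * 0.4) = 0.88
```

Because each feature is registered against geometry rather than against a particular sensor's image plane, additional imaging or non-imaging sources can be added without changing the fusion step.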
© (2006) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Wayne Williams, Eric Keydel, Sean McCarty, "Model-based multisource fusion for exploitation, classification, and recognition", Proc. SPIE 6235, Signal Processing, Sensor Fusion, and Target Recognition XV, 62350X (17 May 2006); https://doi.org/10.1117/12.665309

