Learning representations for improved target identification, scene classification, and information fusion
21 May 2015
Abstract
Object representation is fundamental to Automated Target Recognition (ATR). Many ATR approaches choose a fixed basis, such as a wavelet or Fourier basis, to represent the target. Recently, advances in image and signal processing have shown that object recognition can be improved if, rather than assuming a basis, a database of training examples is used to learn a representation. We discuss learning representations using non-parametric Bayesian topic models and demonstrate how to integrate information from other sources to improve ATR. We apply the method to electro-optical (EO) and infrared (IR) information integration for vehicle target identification and show that the learned representation of the joint EO and IR information improves target identification by 4%. Furthermore, we demonstrate that we can integrate text and imagery data to direct the representation toward mission-specific tasks, improving performance by 8%. Finally, we illustrate integrating graphical models into representation learning to improve performance by 2%.
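The central idea above, learning a representation from training examples rather than assuming a fixed basis, can be sketched with an off-the-shelf topic model. This is an illustrative stand-in only: the paper uses non-parametric Bayesian topic models, whereas scikit-learn's parametric LatentDirichletAllocation is used here; the bag-of-visual-words data and all dimensions are hypothetical.

```python
# Illustrative sketch, not the paper's method: a parametric LDA topic model
# learns a low-dimensional representation from example data instead of
# projecting onto a fixed (e.g., wavelet or Fourier) basis.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# Hypothetical bag-of-visual-words counts: 100 image patches x 50 codewords
X = rng.poisson(lam=1.0, size=(100, 50))

lda = LatentDirichletAllocation(n_components=10, random_state=0)
Z = lda.fit_transform(X)  # learned 10-dim topic-mixture representation per patch

print(Z.shape)  # each row is a topic distribution that could feed a classifier
```

In a fusion setting like the one described, codeword counts from multiple modalities (e.g., EO and IR) could be concatenated into `X` so that the learned topics capture joint structure across sensors.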
© (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Arjuna Flenner, Michael Culp, Ryan McGee, Jennifer Flenner, Cristina Garcia-Cardona, "Learning representations for improved target identification, scene classification, and information fusion", Proc. SPIE 9474, Signal Processing, Sensor/Information Fusion, and Target Recognition XXIV, 94740W (21 May 2015); https://doi.org/10.1117/12.2176348
Proceedings, 16 pages