1 April 1992 Hyperspectral data fusion for target detection using neural networks
Multiple-sensor imaging systems are changing the approach to the challenging problem of automatic target recognition (ATR). This paper summarizes a research effort to demonstrate the utility of neural networks in processing hyperspectral imagery for target detection and classification. Pixel-registered imagery containing 32 spectral bands in the 2.0 to 2.5 µm range was used to train and test a backpropagation neural network for detection of camouflaged targets. An initial neural network was trained and tested using all 32 spectral bands, resulting in a probability of correct classification (Pcc) at the pixel level of 98.7 percent. Because of the high degree of correlation between features (i.e., spectral bands), the dimensionality of the feature set was reduced to 11 spectral bands using a Karhunen-Loève expansion. The neural network was reconfigured and retrained, yielding a Pcc of 99.8 percent. This second neural network was implemented in hardware on the Intel ETANN chip, a special-purpose analog neural network chip, resulting in a Pcc of 96.3 percent. A single ETANN chip is capable of classifying 400,000 pixels per second. The capability of classifying each individual pixel in a hyperspectral image in real time radically alters the possible approaches in an ATR scenario.
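The dimensionality-reduction step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it applies a Karhunen-Loève expansion (equivalently, principal component analysis) to project 32-band pixel spectra onto the top 11 eigenvectors of the band covariance matrix; the function name, array shapes, and synthetic data are assumptions for the example.

```python
import numpy as np

def kl_reduce(pixels, n_components=11):
    """Karhunen-Loeve (PCA) reduction of spectral bands.

    pixels: (n_pixels, n_bands) array of pixel spectra.
    Returns an (n_pixels, n_components) array of projected features.
    """
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)        # (n_bands, n_bands) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]           # re-sort descending by variance
    basis = eigvecs[:, order[:n_components]]    # top-variance eigenvectors
    return centered @ basis

# Synthetic stand-in for a flattened 32-band hyperspectral image
rng = np.random.default_rng(0)
cube = rng.normal(size=(1000, 32))
features = kl_reduce(cube, n_components=11)
print(features.shape)  # (1000, 11)
```

The reduced 11-dimensional feature vectors would then serve as inputs to the backpropagation network in place of the raw 32-band spectra.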
© (1992) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Joe R. Brown, Edward E. DeRouin, Hal E. Beck, and Susan J. Archer "Hyperspectral data fusion for target detection using neural networks", Proc. SPIE 1623, The 20th AIPR Workshop: Computer Vision Applications: Meeting the Challenges, (1 April 1992);
