Spectral and spatial algorithm architecture for classification of hyperspectral and LIDAR (10 May 2012)
An architecture for neural-net multi-sensor data fusion is introduced and analyzed. The architecture consists of a set of independent sensor neural nets, one for each sensor, coupled to a fusion net. The neural net of each sensor is trained, from a representative data set of that sensor, to map to a hypothesis-space output. The decision outputs from the sensor nets are then used to train the fusion net to an overall decision. Processing begins by classifying the 3D point-cloud LIDAR data into clustered objects via multi-dimensional mean-shift segmentation and classification. Similarly, the multi-band HSI data is spectrally classified by Stochastic Expectation-Maximization (SEM) into a classification map of pixel classes. For sensor fusion, the spatial and spectral detections complement each other and are fused into final detections by a cascaded neural network consisting of two levels of neural nets. The first level is the sensor level and consists of two neural nets: a spatial neural net and a spectral neural net. The second level consists of a single neural net, the fusion neural net. The system's ability to exploit sensor synergy for enhanced classification is demonstrated by applying this architecture to a November 2010 airborne collection of LIDAR and HSI data over the Gulfport, MS, area.
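The cascaded decision-level fusion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer sizes, feature dimensions, class count, and random weights are all placeholder assumptions, and the nets are shown untrained (forward pass only) to make the two-level topology — two sensor-level nets whose class-probability outputs feed a single fusion net — explicit.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Row-wise softmax, numerically stabilized."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class SensorNet:
    """One-hidden-layer net mapping sensor features to class probabilities.

    Stands in for each of the three nets in the cascade (spectral,
    spatial, and fusion); weights here are random placeholders.
    """
    def __init__(self, n_in, n_hidden, n_classes):
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_classes))

    def forward(self, x):
        h = np.tanh(x @ self.W1)
        return softmax(h @ self.W2)

N_CLASSES = 4  # assumed number of terrain/object classes

# Level 1 (sensor level): one net per sensor.
spectral_net = SensorNet(n_in=64, n_hidden=16, n_classes=N_CLASSES)  # HSI band vector
spatial_net  = SensorNet(n_in=8,  n_hidden=16, n_classes=N_CLASSES)  # LIDAR cluster features

# Level 2: fusion net consumes the concatenated sensor-level decisions.
fusion_net = SensorNet(n_in=2 * N_CLASSES, n_hidden=16, n_classes=N_CLASSES)

# Synthetic stand-ins for co-registered detections from both sensors.
hsi_pixels  = rng.normal(size=(10, 64))
lidar_feats = rng.normal(size=(10, 8))

p_spectral = spectral_net.forward(hsi_pixels)
p_spatial  = spatial_net.forward(lidar_feats)
p_fused    = fusion_net.forward(np.concatenate([p_spectral, p_spatial], axis=1))
decision   = p_fused.argmax(axis=1)  # final fused class per detection
```

In the paper's pipeline each sensor net would first be trained on its own representative data set, and the fusion net would then be trained on the sensor nets' decision outputs; the sketch above shows only the data flow of the trained cascade.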
© (2012) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Robert S. Rand and Timothy S. Khuon "Spectral and spatial algorithm architecture for classification of hyperspectral and LIDAR", Proc. SPIE 8407, Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2012, 840702 (10 May 2012);
