Neural networks for distributed sensor data fusion: the Firefly experiment (30 April 1992)
An intuitive architecture for neural net multisensor data fusion consists of a set of independent sensor neural nets, one for each sensor, coupled to a fusion net. Each sensor net is trained on a representative data set from its particular sensor to map to a hypothesis-space output. The decision outputs from the sensor nets are used to train the fusion net to an overall decision. In this paper the sensor fusion architecture is applied to an experiment involving the multisensor observation of object deployments during the recent Firefly launches. The deployments were measured simultaneously by X-band, CO2 laser, and L-band radars. The range-Doppler images from the X-band and CO2 laser radars were combined with a passive IR spectral simulation of the deployment to form the data inputs to the neural sensor fusion system. The network was trained to distinguish predeployment, deployment, and postdeployment phases of the launch based on the fusion of these sensors. The success of the system in utilizing sensor synergism for enhanced deployment detection is clearly demonstrated.
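The two-stage architecture described above can be sketched in a few lines of code. The following is a minimal, untrained illustration of the data flow only: every layer size, feature dimension, and sensor name is a hypothetical placeholder (the paper does not specify them), and the random weights stand in for the trained networks. Each per-sensor net emits a three-way hypothesis-space output (predeployment, deployment, postdeployment), and the fusion net maps the concatenation of those local decisions to the overall decision.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class TwoLayerNet:
    """Minimal feed-forward net: input -> tanh hidden layer -> softmax output.
    Weights are random placeholders; a real system would train them."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
    def forward(self, x):
        return softmax(self.W2 @ np.tanh(self.W1 @ x))

PHASES = ["predeployment", "deployment", "postdeployment"]

# One independent net per sensor; feature sizes are illustrative assumptions.
sensor_nets = {
    "xband_range_doppler":     TwoLayerNet(n_in=16, n_hidden=8, n_out=3),
    "co2_laser_range_doppler": TwoLayerNet(n_in=16, n_hidden=8, n_out=3),
    "ir_spectral_sim":         TwoLayerNet(n_in=8,  n_hidden=8, n_out=3),
}
# The fusion net consumes the concatenated decision outputs (3 per sensor).
fusion_net = TwoLayerNet(n_in=3 * len(sensor_nets), n_hidden=8, n_out=3)

def fuse(observations):
    """observations: dict mapping sensor name -> feature vector."""
    local_decisions = np.concatenate(
        [net.forward(observations[name]) for name, net in sensor_nets.items()])
    return fusion_net.forward(local_decisions)

obs = {"xband_range_doppler":     rng.normal(size=16),
       "co2_laser_range_doppler": rng.normal(size=16),
       "ir_spectral_sim":         rng.normal(size=8)}
scores = fuse(obs)
print(PHASES[int(scores.argmax())])
```

A design point worth noting: because each sensor net is trained independently on its own data and only the decision-level outputs are fused, a sensor can be added, retrained, or dropped without retraining the other sensor nets; only the fusion net sees the combined evidence.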
© 1992 Society of Photo-Optical Instrumentation Engineers (SPIE).
Robert Y. Levine and Timothy S. Khuon, "Neural networks for distributed sensor data fusion: the Firefly experiment", Proc. SPIE 1611, Sensor Fusion IV: Control Paradigms and Data Structures (30 April 1992); https://doi.org/10.1117/12.57912

