Paper | 9 August 1988
Neural Network Approach To Sensory Fusion
John C. Pearson, Jack J. Gelfand, W. E. Sullivan, Richard M. Peterson, Clay D. Spence
Abstract
We present a neural network model for sensory fusion based on the design of the visual/acoustic target localization system of the barn owl. This system adaptively fuses its separate visual and acoustic representations of object position into a single joint representation used for head orientation. The building block in this system, as in much of the brain, is the neuronal map. Neuronal maps are large arrays of locally interconnected neurons that represent information in a map-like form; that is, parameter values are systematically encoded by the position of neural activation in the array. The computational load is distributed across a hierarchy of maps, and the computation is performed in stages by transforming the representation from map to map via the geometry of the projections between the maps and the local interactions within the maps. For example, azimuthal position is computed from the frequency and binaural phase information encoded in the signals of the acoustic sensors, while elevation is computed in a separate stream using binaural intensity information. These separate streams are merged in their joint projection onto the external nucleus of the inferior colliculus, a two-dimensional array of cells that contains a map of acoustic space. This acoustic map, and the visual map of the retina, jointly project onto the optic tectum, creating a fused visual/acoustic representation of position in space that is used for object localization. In this paper we describe our mathematical model of the stage of visual/acoustic fusion in the optic tectum. The model assumes that the acoustic projection from the external nucleus onto the tectum is roughly topographic and one-to-many, while the visual projection from the retina onto the tectum is topographic and one-to-one. A simple process of self-organization alters the strengths of the acoustic connections, effectively forming a focused beam of strong connections whose inputs are coincident with the visual inputs. Computer simulations demonstrate how this mechanism can account for the existing experimental data on adaptive fusion, and they make sharp predictions for experimental tests.
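The abstract describes the fusion mechanism only qualitatively; a concrete way to see it is a small simulation. The sketch below is an illustration, not the authors' published model: it assumes one-dimensional maps, Gaussian activity bumps, a broad (one-to-many) initial acoustic weight matrix, and a Hebbian coincidence rule with divisive normalization. All names, parameter values, and the prism-like SHIFT are hypothetical choices made for the example.

import numpy as np

rng = np.random.default_rng(0)

N = 60          # cells per one-dimensional map (ICx and tectum), spanning azimuth
SIGMA_V = 1.5   # width of the sharp visual (and ICx) activity bumps
SIGMA_A = 6.0   # width of the broad, one-to-many initial acoustic projection
ETA = 0.05      # Hebbian learning rate
SHIFT = 8       # hypothetical prism-induced visual displacement, in map cells
MARGIN = 14     # keep stimuli away from the map edges

def bump(center, sigma):
    # Gaussian bump of activity across the map, centered on `center`.
    x = np.arange(N)
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Acoustic weights W[i, j]: ICx cell j -> tectal cell i.
# Initialized roughly topographic but broad (one-to-many), as the model assumes.
W = np.array([[np.exp(-0.5 * ((i - j) / SIGMA_A) ** 2) for j in range(N)]
              for i in range(N)])
W /= W.sum(axis=1, keepdims=True)

for _ in range(2000):
    pos = rng.integers(MARGIN, N - MARGIN)   # true source azimuth
    icx = bump(pos, SIGMA_V)                 # acoustic estimate in the ICx map
    visual = bump(pos + SHIFT, SIGMA_V)      # retinal input, displaced by prisms
    # Hebbian coincidence rule: strengthen acoustic synapses whose presynaptic
    # activity coincides with visually driven postsynaptic tectal activity.
    W += ETA * np.outer(visual, icx)
    W /= W.sum(axis=1, keepdims=True)        # normalization keeps weights bounded

# The strong acoustic connections now form a focused beam aligned with vision:
# tectal cell i receives its strongest acoustic input from ICx cell i - SHIFT.
trained = np.arange(MARGIN + SHIFT, N - MARGIN)  # cells that saw visual drive
peaks = W.argmax(axis=1)
print("mean learned offset (cells):", np.mean(trained - peaks[trained]))

Under these assumptions the printed offset comes out close to SHIFT: the acoustic "beam" onto each tectal cell migrates to match the displaced visual input, which is the signature of the adaptive visual/acoustic realignment the paper models.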
© 1988 Society of Photo-Optical Instrumentation Engineers (SPIE).
John C. Pearson, Jack J. Gelfand, W. E. Sullivan, Richard M. Peterson, and Clay D. Spence, "Neural Network Approach To Sensory Fusion", Proc. SPIE 0931, Sensor Fusion, (9 August 1988); https://doi.org/10.1117/12.946654
KEYWORDS: Visualization, Acoustics, Retina, Sensor fusion, Brain mapping, Head, Sensors
