9 June 1986
A Unified Computational Architecture for Preprocessing Visual Information in Space and Time.
Proceedings Volume 0595, Computer Vision for Robots; (1986) https://doi.org/10.1117/12.952268
Event: 1985 International Technical Symposium/Europe, 1985, Cannes, France
Abstract
The success of autonomous mobile robots depends on the ability to understand continuously changing scenery. Present techniques for image analysis are not always suitable because, in the sequential paradigm, computing visual functions from the absolute values of stimuli is inefficient. Important aspects of visual information are encoded in discontinuities of intensity, hence a representation in terms of relative values seems advantageous. We present the computing architecture of a massively parallel vision module which optimizes the detection of relative intensity changes in space and time. Visual information must remain constant despite variation in ambient light level or in the velocity of the target and the robot. Constancy can be achieved by normalizing the motion and lightness scales. In both cases, the basic computation involves a comparison of the center pixel with the context of surrounding values. Therefore, a similar computing architecture, composed of three functionally different and hierarchically arranged layers of overlapping operators, can be used for the two integrated parts of the module. The first part maintains high sensitivity to spatial changes by reducing noise and normalizing the lightness scale. The result is used by the second part to maintain high sensitivity to temporal discontinuities and to compute relative motion information. Simulation results show that the response of the module is proportional to the contrast of the stimulus and remains constant over the whole domain of intensity. It is also proportional to the velocity of motion confined to any small portion of the visual field. Uniform motion throughout the visual field results in a constant response, independent of velocity. Spatial and temporal intensity changes are enhanced because, computationally, the module resembles the behavior of a difference-of-Gaussians (DOG) function.
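The abstract does not give the operator's equations, but the center/surround comparison with lightness normalization it describes can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the author's implementation: it assumes the surround is a local neighborhood mean and that normalization takes the Weber-like form (center − surround) / (center + surround), which yields a response that tracks local contrast and is invariant to uniform scaling of the ambient light level, as the simulation results describe.

```python
# Hypothetical sketch of center/surround lightness normalization
# (assumed form; the paper's actual operator equations are not given
# in the abstract). Pure Python, images as nested lists.

def surround_mean(img, r, c, radius=1):
    """Mean intensity of the neighborhood around (r, c), center excluded."""
    rows, cols = len(img), len(img[0])
    vals = []
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            if dr == 0 and dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                vals.append(img[rr][cc])
    return sum(vals) / len(vals)

def normalized_response(img, r, c):
    """(center - surround) / (center + surround): a Weber-like contrast
    measure, unchanged under uniform scaling of all intensities."""
    center = img[r][c]
    surround = surround_mean(img, r, c)
    denom = center + surround
    return 0.0 if denom == 0 else (center - surround) / denom

# The same step edge at two ambient light levels (bright = 10x dim):
# the normalized response is identical, illustrating lightness constancy.
dim    = [[10, 10, 40], [10, 10, 40], [10, 10, 40]]
bright = [[100, 100, 400], [100, 100, 400], [100, 100, 400]]
r_dim = normalized_response(dim, 1, 1)
r_bright = normalized_response(bright, 1, 1)
```

Because the output depends only on the ratio of center to surround, multiplying every pixel by a constant (a change in ambient light) leaves the response untouched, while a genuine intensity discontinuity near the center produces a response proportional to its contrast.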
© (1986) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Josef Skrzypek, "A Unified Computational Architecture for Preprocessing Visual Information in Space and Time.", Proc. SPIE 0595, Computer Vision for Robots, (9 June 1986); doi: 10.1117/12.952268; https://doi.org/10.1117/12.952268
PROCEEDINGS
6 PAGES