Visually guided touching and manual tracking
11 March 1993
Abstract
Animate vision depends on an ability to choose a region of the visual environment for task-specific processing. This processing may involve extraction of image features for object classification or identification, or it may involve extraction of viewpoint parameters, such as position, scale, and orientation, for guiding movement. It is the role of selective attention to choose the region to be processed in a task-dependent way. This paper describes a real-time implementation of a vision-robotics system that uses the location information provided by the attention mechanism to guide eye movements and arm movements in touching and manual tracking behaviors. The approach makes use of a 3-D retinocentric coordinate frame for representing position information, and differential kinematics for relating the eye and arm motor systems to this retinocentric sensory frame.
© (1993) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Peter A. Sandon, "Visually guided touching and manual tracking", Proc. SPIE 1964, Applications of Artificial Intelligence 1993: Machine Vision and Robotics, (11 March 1993); doi: 10.1117/12.141774; https://doi.org/10.1117/12.141774
PROCEEDINGS
12 PAGES
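The abstract's core idea, using differential kinematics to turn a position error expressed in a sensory frame into motor commands, can be illustrated with a sketch. The following is a minimal, hypothetical example (not the paper's implementation): a planar two-link arm whose hand is driven toward a target by inverting the arm Jacobian, i.e. resolved-rate control. The link lengths, gain, and target are arbitrary, and the error is computed in a plain Cartesian frame rather than the paper's 3-D retinocentric frame.

```python
import math

# Hypothetical 2-link planar arm illustrating differential kinematics:
# a position error in a sensory frame is mapped to joint velocities
# through the (inverse) arm Jacobian. All constants are arbitrary.
L1, L2 = 1.0, 0.8  # link lengths

def forward(q1, q2):
    """Hand position for joint angles q1, q2 (forward kinematics)."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def jacobian(q1, q2):
    """2x2 Jacobian of hand position with respect to joint angles."""
    j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
    j12 = -L2 * math.sin(q1 + q2)
    j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    j22 = L2 * math.cos(q1 + q2)
    return (j11, j12), (j21, j22)

def step(q1, q2, target, gain=0.5):
    """One differential-kinematics update: dq = gain * J^{-1} (target - hand)."""
    x, y = forward(q1, q2)
    ex, ey = target[0] - x, target[1] - y
    (a, b), (c, d) = jacobian(q1, q2)
    det = a * d - b * c  # assumes the arm is away from a singularity
    dq1 = gain * ( d * ex - b * ey) / det
    dq2 = gain * (-c * ex + a * ey) / det
    return q1 + dq1, q2 + dq2

# Drive the hand toward a fixed target, as in a touching behavior;
# replacing the fixed target with a moving one gives manual tracking.
q1, q2 = 0.3, 0.9
target = (1.2, 0.6)
for _ in range(50):
    q1, q2 = step(q1, q2, target)
```

In the paper's setting the error would instead be the attended target's position in the 3-D retinocentric frame, and analogous Jacobian relations would drive both the eye and arm motor systems.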