A cognitive approach to vision for a mobile robot
29 May 2013
Abstract
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, whose goal is to construct a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. Distance, shape, texture and motion information are computed in a small region around that point and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate: if a patch of wall is seen, it is hypothesized to be part of a larger wall and the entire wall is created in the virtual world; if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The input from the real camera is then compared with the input from a virtual camera in the virtual world using local Gaussians, creating an error mask that indicates the main differences between them. This mask is used to select the next points to focus on. This approach permits us to apply computationally expensive algorithms to small regions, generating very accurate models. It is also task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera.
We describe experiments using both static and moving objects.
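The real-versus-virtual comparison described above can be illustrated with a minimal sketch. This is not the paper's implementation: the window size, sigma, and function names are illustrative assumptions, and the "local Gaussians" comparison is approximated here as the absolute difference of Gaussian-smoothed images, with the next fixation chosen at the point of maximum discrepancy.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=2.0):
    # Separable 2D Gaussian kernel, normalized to sum to 1.
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def local_gaussian_error(real, virtual, size=7, sigma=2.0):
    # Smooth both images with a local Gaussian window, then take the
    # absolute difference as the error mask (illustrative assumption).
    k = gaussian_kernel(size, sigma)
    pad = size // 2

    def smooth(img):
        padded = np.pad(img.astype(float), pad, mode="edge")
        h, w = img.shape
        out = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
        return out

    return np.abs(smooth(real) - smooth(virtual))

def next_fixation(error_mask):
    # Select the pixel of maximal real/virtual discrepancy as the
    # next point for the camera to fixate on.
    return np.unravel_index(np.argmax(error_mask), error_mask.shape)
```

For example, if the real camera sees a bright object that is absent from the virtual world, the error mask peaks on that object and the next fixation lands inside it, driving the system to model that region.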
© (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
D. Paul Benjamin, Christopher Funk, Damian Lyons, "A cognitive approach to vision for a mobile robot", Proc. SPIE 8756, Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2013, 87560I (29 May 2013); https://doi.org/10.1117/12.2018856