Scene understanding and intelligent tactical behavior of mobile robots with perception system based on network-symbolic models
24 October 2005
Abstract
Intelligent tactical behaviors of robots and UGVs cannot be achieved without a perception system similar to human vision. The traditional linear bottom-up "segmentation-grouping-learning-recognition" approach to image processing and analysis cannot reliably separate an object from its background or clutter, while human vision solves this problem unambiguously. The nature of the informational processes in the visual system does not allow them to be separated from the informational processes in the top-level knowledge system. Image/video analysis based on the Network-Symbolic approach combines recursive hierarchical bottom-up and top-down processes. Instead of precise computations of 3-dimensional models, a Network-Symbolic system converts image information into an "understandable" Network-Symbolic format similar to relational knowledge models. The logic of visual scenes can be captured in Network-Symbolic models and used for reliable disambiguation of visual information, including object detection and identification. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic models, the derived structure, not the primary view, is the subject of recognition. Such recognition is unaffected by local changes and by the object's appearance across a set of similar views, so a robot can better interpret images and video for intelligent tactical behavior.
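The key idea above, matching an object's derived relational structure rather than its raw view, can be illustrated with a minimal sketch. This is not code from the paper: the scene-graph encoding, part categories, and `relational_signature` helper are hypothetical, chosen only to show how a structure-based match stays stable while appearance and viewpoint vary.

```python
# Illustrative sketch (not from the paper): a scene encoded as a small
# relational graph. Recognition compares relations between part
# categories, ignoring appearance attributes such as color or position.

def relational_signature(parts, relations):
    """Appearance-independent signature of a scene graph.

    parts:     {part_id: category}
    relations: [(part_id_a, relation_name, part_id_b), ...]
    Part ids are replaced by categories and edges are sorted, so the
    signature depends only on the derived structure.
    """
    edges = sorted((parts[a], rel, parts[b]) for a, rel, b in relations)
    return tuple(edges)

# Two views of the same vehicle: part ids and appearances differ,
# but the categories and their spatial relations are preserved.
view1_parts = {"p1": "wheel", "p2": "wheel", "p3": "body"}
view1_rels = [("p1", "below", "p3"), ("p2", "below", "p3")]

view2_parts = {"a": "body", "b": "wheel", "c": "wheel"}
view2_rels = [("b", "below", "a"), ("c", "below", "a")]

# Stored relational model of the object.
model_parts = {"w1": "wheel", "w2": "wheel", "chassis": "body"}
model_rels = [("w1", "below", "chassis"), ("w2", "below", "chassis")]

model_sig = relational_signature(model_parts, model_rels)
assert relational_signature(view1_parts, view1_rels) == model_sig
assert relational_signature(view2_parts, view2_rels) == model_sig
```

A direct pixel- or template-level match would treat the two views as different inputs; the relational signature abstracts both to the same structure, which is the behavior the Network-Symbolic approach attributes to structure-based recognition.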
© (2005) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Gary Kuvich, "Scene understanding and intelligent tactical behavior of mobile robots with perception system based on network-symbolic models", Proc. SPIE 6006, Intelligent Robots and Computer Vision XXIII: Algorithms, Techniques, and Active Vision, 60060T (24 October 2005); doi: 10.1117/12.630297; https://doi.org/10.1117/12.630297
Proceedings, 13 pages