Active vision and image/video understanding systems for UGV based on network-symbolic models
2 September 2004
Abstract
Vision evolved as a sensory system for reaching, grasping, and other motor activities. In advanced creatures, it has become a vital component of situation awareness, navigation, and planning systems. Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. It is hard to split such a system apart. A biologically inspired Network-Symbolic representation, in which systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible approach to natural processing of visual information. It converts visual information into relational Network-Symbolic models, avoiding artificial precise computation of 3-dimensional models. The logic of visual scenes can be captured in such models and used to disambiguate visual information. Network-Symbolic transformations derive abstract structures, which allows invariant recognition of an object as an exemplar of a class. Active vision helps create unambiguous network-symbolic models. This approach is consistent with NIST RCS. A UGV equipped with such smart vision will be able to plan paths and navigate in a real environment, perceive and understand complex real-world situations, and act accordingly.
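The relational Network-Symbolic model described above can be illustrated with a minimal sketch: objects become symbolic nodes, spatial relations become labeled edges, and "invariant recognition of an object as an exemplar of a class" becomes matching an abstract relational pattern against the scene graph, with no precise 3-D reconstruction. All names below (`SceneGraph`, `matches_class`, the variable convention `?x`) are hypothetical illustrations, not the author's actual implementation.

```python
class SceneGraph:
    """A relational scene model: nodes are objects, edges are
    (subject, relation, object) triples -- a stand-in for the paper's
    Network-Symbolic representation."""

    def __init__(self):
        self.nodes = set()
        self.edges = set()

    def add(self, subject, relation, obj):
        self.nodes.update((subject, obj))
        self.edges.add((subject, relation, obj))

    def matches_class(self, pattern):
        """Invariant recognition sketch: does the scene contain the abstract
        relational structure 'pattern' under some binding of variables?
        Variables are strings starting with '?'."""
        def unify(edges, binding):
            if not edges:
                return True
            (s, r, o), rest = edges[0], edges[1:]
            for (gs, gr, go) in self.edges:
                if gr != r:
                    continue
                b = dict(binding)
                ok = True
                for var, val in ((s, gs), (o, go)):
                    if var.startswith("?"):
                        if b.get(var, val) != val:
                            ok = False
                            break
                        b[var] = val
                    elif var != val:
                        ok = False
                        break
                if ok and unify(rest, b):
                    return True
            return False
        return unify(list(pattern), {})


# Two views of "a cup on a table" differ metrically but share this structure:
scene = SceneGraph()
scene.add("cup1", "on", "table1")
scene.add("table1", "in", "room")

# Abstract class model: some object resting on some support.
cup_on_support = [("?x", "on", "?y")]
print(scene.matches_class(cup_on_support))  # -> True
```

The point of the sketch is that recognition operates on relational structure rather than on metric 3-D data: any scene containing an `on` relation matches the class pattern, regardless of pose, scale, or viewpoint.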
© (2004) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Gary Kuvich, "Active vision and image/video understanding systems for UGV based on network-symbolic models", Proc. SPIE 5422, Unmanned Ground Vehicle Technology VI, (2 September 2004); https://doi.org/10.1117/12.541266
Proceedings paper, 12 pages.