We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment,
including people, and uses the model to process perceptual information and to plan its movements. This paper describes
the structure of this architecture.
The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library
for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The
RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This
Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the
degree of detail required for the task.
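One way to picture this selective-rendering decision is as a small policy function. The rule below is purely hypothetical (the paper does not specify its criteria); it assumes that task relevance and distance drive the choice:

```python
def choose_render_detail(task_relevant: bool, distance_m: float,
                         near_m: float = 1.0) -> str:
    """Hypothetical level-of-detail policy, NOT the system's actual rule.

    Task-relevant objects get full physics in the virtual world, nearby
    background objects get a coarse collision proxy, and distant
    irrelevant objects are omitted from the PhysX scene entirely.
    """
    if task_relevant:
        return "full"    # detailed mesh and dynamics rendered into PhysX
    if distance_m < near_m:
        return "coarse"  # bounding-box collision proxy only
    return "omit"        # not rendered into the simulation
```

In this sketch, a cup the robot must grasp would be rendered in full, while a chair across the room would be omitted until it becomes relevant.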
As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference
component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and
notifies Soar/RS of significant differences, e.g. a new object appearing or an object changing direction.
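A minimal sketch of this comparison, assuming grayscale frames and a simple area-based significance criterion (the paper's actual Match-Mediated Difference test is not specified here):

```python
import numpy as np

def significant_difference(real_frame: np.ndarray,
                           virtual_frame: np.ndarray,
                           pixel_thresh: float = 25.0,
                           area_frac_thresh: float = 0.01) -> bool:
    """Illustrative difference monitor, assuming same-size grayscale frames.

    Compares a real camera frame against the corresponding virtual-camera
    render; returns True when the fraction of disagreeing pixels is large
    enough to warrant notifying Soar/RS.
    """
    diff = np.abs(real_frame.astype(np.float32) -
                  virtual_frame.astype(np.float32))
    mismatched = diff > pixel_thresh   # per-pixel disagreement mask
    return mismatched.mean() > area_frac_thresh
```

A new object entering the real camera's view, absent from the virtual render, would produce a contiguous mismatched region and trip this test.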
Soar/RS can then run PhysX much faster than real time and search among possible future world paths to plan the robot's
actions. We report experimental results in indoor environments.
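The faster-than-real-time planning search can be sketched as a rollout loop over candidate action sequences. Here a toy `step` function stands in for a PhysX simulation step, and the exhaustive search over a short horizon is an assumption for illustration, not the system's actual planner:

```python
import itertools
import numpy as np

def plan_by_rollout(state, goal, actions, horizon, step_fn):
    """Illustrative sketch: simulate each candidate action sequence
    (each step_fn call stands in for a faster-than-real-time PhysX step)
    and return the sequence whose final state lands closest to the goal."""
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        s = np.array(state, dtype=float)
        for a in seq:
            s = step_fn(s, a)              # advance the simulated world
        cost = np.linalg.norm(s - goal)    # distance of final state to goal
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq

# toy dynamics: each action is a displacement applied for one step
step = lambda s, a: s + np.array(a, dtype=float)
```

Because the rollouts run in simulation, many future world paths can be evaluated in the time it takes the physical robot to execute a single step.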