Our long-term goal is to develop autonomous robotic systems that have the cognitive abilities of humans, including
communication, coordination, adaptation to novel situations, and learning through experience. Our approach rests on the
recent integration of the Soar cognitive architecture with both virtual and physical robotic systems. Soar has been used to
develop a wide variety of knowledge-rich agents for complex virtual environments, including distributed training
environments and interactive computer games. For development and testing in robotic virtual environments, Soar
interfaces with a variety of robotic simulators and a simple mobile robot. We have recently made significant extensions to
Soar that add new memories and new non-symbolic reasoning to Soar's original symbolic processing, which should
significantly improve Soar's ability to control robots. These extensions include episodic memory, semantic memory,
reinforcement learning, and mental imagery. Episodic memory and semantic memory support the learning and recalling
of prior events and situations, as well as facts about the world. Reinforcement learning enables the system to tune its
procedural knowledge: knowledge about how to do things. Mental imagery supports the diagrammatic
and visual representations that are critical for spatial reasoning. We speculate on the future of unmanned systems
and the need for cognitive robotics to support dynamic instruction and taskability.
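The role of reinforcement learning described above, tuning numeric preferences over procedural knowledge, can be illustrated with a minimal sketch. This is not Soar's actual API; the operator names, reward values, and update rule below are assumptions chosen purely for illustration:

```python
import random

# Illustrative sketch (not Soar's API): reinforcement learning as tuning
# numeric preferences over "operators" (units of procedural knowledge).
# The agent learns which of two hypothetical operators yields more reward.

def q_update(q, op, reward, alpha=0.1):
    """Nudge the stored preference for `op` toward the observed reward."""
    q[op] = q[op] + alpha * (reward - q[op])
    return q

def select_operator(q, epsilon, rng):
    """Epsilon-greedy choice: mostly exploit the highest preference."""
    if rng.random() < epsilon:
        return rng.choice(list(q))
    return max(q, key=q.get)

rng = random.Random(0)
q = {"turn-left": 0.0, "turn-right": 0.0}
rewards = {"turn-left": 0.2, "turn-right": 1.0}  # hypothetical environment

for _ in range(200):
    op = select_operator(q, epsilon=0.1, rng=rng)
    q = q_update(q, op, rewards[op])

# After training, the preference for the more rewarding operator dominates,
# so the agent's procedural knowledge has been tuned by experience.
```

The same idea, experience incrementally adjusting numeric preferences that bias operator selection, is what reinforcement learning contributes in the architecture described above.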
Robotic systems deployed in space must exhibit flexibility. In particular, an intelligent robotic agent should not have to be reprogrammed for each of the various tasks it may face over its lifetime. However, pre-programming knowledge for all of the tasks that may be needed is extremely difficult. A powerful alternative is the notion of an instructible agent, one that can receive task-level instructions and advice from a human advisor. Such an agent must do more than simply memorize the instructions it is given (that would amount to programming). Rather, after mapping instructions into task constructs that it can reason with, it must determine each instruction's proper scope of applicability. In this paper, we examine the characteristics of instruction, and the characteristics of agents, that affect learning from instruction. We find that in addition to myriad linguistic concerns, both the situatedness of the instructions (their placement within the ongoing execution of tasks) and the prior domain knowledge of the agent shape what can be learned.
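Determining an instruction's scope of applicability, rather than memorizing it verbatim, can be sketched as generalizing the instruction's constants into typed variables, in the spirit of explanation-based generalization. The function, token format, and domain types below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical sketch: a grounded instruction ("move block-A to table-1")
# is variabilized using the agent's prior domain knowledge (object types),
# yielding a general rule that applies beyond the situation in which the
# instruction was given.

def variabilize(instruction, domain_types):
    """Replace each known constant with a typed variable, widening scope."""
    rule = []
    for token in instruction:
        t = domain_types.get(token)
        rule.append(f"?{t}" if t else token)
    return tuple(rule)

# Assumed prior domain knowledge: the types of the objects mentioned.
types = {"block-A": "block", "table-1": "surface"}

rule = variabilize(("move", "block-A", "to", "table-1"), types)
# rule == ("move", "?block", "to", "?surface")
```

Note how the result depends on the agent's prior domain knowledge: without the type of `block-A`, the constant could not be generalized, which is one concrete sense in which prior knowledge shapes what can be learned from an instruction.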