Today’s robots require a great deal of control and supervision, and are unable to intelligently respond to unanticipated
and novel situations. Interactions between an operator and even a single robot take place exclusively at a very low,
detailed level, in part because no contextual information about a situation is conveyed or utilized to make the interaction
more effective and less time consuming. Moreover, the robot control and sensing systems do not learn from experience
and, therefore, do not become better with time or apply previous knowledge to new situations.
With multi-robot teams, human operators must manage inter-robot interactions in addition to the low-level details of
navigation and sensor management they already handle when operating single robots. To make the most use
of robots in combat environments, it will be necessary to have the capability to assign them new missions (including
providing them context information), and to have them report information about the environment they encounter as they
proceed with their mission.
The Cognitive Patterns Knowledge Generation (CPKG) system can connect to various knowledge-based models, to multiple
sensors, and to a human operator. The CPKG system comprises three major internal components: Pattern Generation,
Perception/Action, and Adaptation. Together, these enable it to create situationally relevant abstract patterns, to
match sensory input to a suitable abstract pattern in a multilayered top-down/bottom-up fashion similar to the
mechanisms of visual perception in the brain, and to generate new abstract patterns. The CPKG allows the operator
to focus on things other than the operation of the robot(s).
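The top-down/bottom-up matching idea can be illustrated with a minimal sketch (the pattern names, feature vectors, and priors below are invented for illustration and are not taken from the CPKG system): a context-driven prior on each abstract pattern (the top-down expectation) is combined with a bottom-up similarity score against the sensory input, and the best-scoring pattern wins.

```python
import math

def cosine(a, b):
    """Bottom-up similarity between a sensory feature vector and a pattern."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def match_pattern(observation, patterns, priors):
    """Combine top-down context priors with bottom-up similarity and
    return the name of the best-matching abstract pattern."""
    scores = {name: priors[name] * cosine(observation, features)
              for name, features in patterns.items()}
    return max(scores, key=scores.get)

# Invented example: mission context makes a doorway more likely than a window.
patterns = {"doorway": [1.0, 0.2, 0.0], "window": [0.2, 1.0, 0.1]}
priors = {"doorway": 0.7, "window": 0.3}
best = match_pattern([0.9, 0.3, 0.0], patterns, priors)  # -> "doorway"
```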
Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining
security in cleared buildings, and extending the team’s reconnaissance and persistent surveillance capability. In order
for such autonomous systems to integrate with the team, we must move beyond current interaction methods based on
heads-down teleoperation, which demand intensive human attention and degrade the operator's ability to maintain local
situational awareness and ensure his or her own safety.
This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates
naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant
interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly
select the most suitable interaction method given the situational demands. For instance, the human can silently use arm
and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface
provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight
conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator’s
clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture
recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we
designed the interactions around familiar arm and hand gestures.
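The graceful-degradation behavior described above can be sketched as a simple mode-selection policy. This is a hypothetical illustration; the situational attributes and the ranking among modes are our own assumptions, not the system's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    stealth_required: bool   # must the operator stay silent?
    line_of_sight: bool      # are the robots within view?
    hands_free: bool         # can the operator gesture?

def select_mode(s: Situation) -> str:
    """Pick the most suitable of the partially redundant interaction modes."""
    if not s.line_of_sight:
        return "tablet"   # overhead map supports beyond-line-of-sight waypoints
    if s.stealth_required:
        # Gestures are silent; fall back to the tablet if hands are occupied.
        return "gesture" if s.hands_free else "tablet"
    return "voice"        # fastest mode when noise is acceptable

mode = select_mode(Situation(stealth_required=True, line_of_sight=True,
                             hands_free=True))  # -> "gesture"
```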
One of the primary challenges facing the modern small-unit tactical team is safely and effectively searching,
exploring, clearing, and holding urbanized terrain that includes buildings, streets, and subterranean
dwellings. Buildings provide cover and concealment to an enemy and restrict the movement of forces while
diminishing their ability to engage the adversary. The use of robots has significant potential to reduce the risk to
tactical teams and to act as a dramatic force multiplier for the small unit. Despite advances in robotic mobility, sensing
capabilities, and human-robot interaction, the use of robots in room-clearing operations remains nascent.
CHAMP is a software system in development that integrates with a team of robotic platforms to enable them to
coordinate with a human operator performing a search and pursuit task. In this way, the human operator can either give
control to the robots to search autonomously, or can retain control and direct the robots where needed. CHAMP's
autonomy is built upon a combination of adversarial pursuit algorithms and dynamic function allocation strategies that
maximize the team's resources. Multi-modal interaction with CHAMP is achieved through novel gesture-recognition
capabilities that reduce the need for heads-down teleoperation. The CHAMP Coordination Algorithm handles
dynamic and limited team sizes, generates a novel map of the area, and accounts for mission goals, user
preferences, and team roles. In this paper we present results from preliminary simulation experiments showing that the
CHAMP system performs faster than traditional search and pursuit algorithms.
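As an illustration of the kind of allocation problem such a coordination algorithm must solve, the sketch below assigns a small robot team to unsearched rooms by minimizing total travel distance. This is a hypothetical stand-in; CHAMP's actual allocation strategy also weighs mission goals, user preferences, and team roles.

```python
from itertools import permutations

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def assign_rooms(robot_positions, room_positions):
    """Exhaustively search assignments of robots to rooms (feasible only
    for small, dynamic team sizes) and return the assignment with
    minimum total travel distance."""
    best, best_cost = None, float("inf")
    for perm in permutations(room_positions, len(robot_positions)):
        cost = sum(manhattan(r, t) for r, t in zip(robot_positions, perm))
        if cost < best_cost:
            best, best_cost = list(perm), cost
    return best, best_cost

robots = [(0, 0), (5, 5)]
rooms = [(1, 0), (4, 5), (9, 9)]
assignment, cost = assign_rooms(robots, rooms)  # -> [(1, 0), (4, 5)], 2
```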
Autonomous vehicles driving on off-road terrain exhibit substantial variation in mobility characteristics even when the
terrain is horizontal and qualitatively homogeneous. This paper presents a simple stochastic model for characterizing
observed variability in vehicle response to terrain and for representing transitions between homogeneous terrain with
local variability or between heterogeneous terrain types. Such a model provides a means for more realistic evaluation of
terrain parameter estimation methods through simulation. A stochastic terrain model in which friction angle and soil
cohesion are represented by Gaussian random variables qualitatively represents observed variability in traction vs. slip
characteristics measured experimentally. The stochastic terrain model is used to evaluate a terrain parameter estimation
method in which terrain forces are first estimated independently of any terrain model; parameters of a terrain model,
such as soil cohesion, friction angle, and stress distribution parameters, are then determined from the estimated
vehicle-terrain forces. Simulation results show that the drawbar pull vs. slip characteristics resulting from terrain
parameter estimation fall within the statistical bounds established by the stochastic terrain model.
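A minimal sketch of this pipeline, assuming a Mohr-Coulomb soil with Janosi-Hanamoto shear (Wong's average-thrust formula) and illustrative parameter values not taken from the paper: cohesion and friction angle are drawn from Gaussians, noisy thrust "measurements" are generated, and (c, tan phi) are recovered by linear least squares, which works because thrust is linear in both parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative nominal soil statistics (assumed values, not from the paper)
c_mean, c_std = 1.0e3, 0.15e3                       # cohesion [Pa]
phi_mean, phi_std = np.deg2rad(30), np.deg2rad(2)   # friction angle [rad]
K, A, L = 0.025, 0.2, 0.5   # shear modulus [m], contact area [m^2], length [m]

def shear_ratio(slip):
    """Wong's slip-dependent factor for average thrust over the contact patch."""
    return 1.0 - (K / (slip * L)) * (1.0 - np.exp(-slip * L / K))

def thrust(slip, c, phi, W):
    """Average thrust: F = (A*c + W*tan(phi)) * g(slip)."""
    return (A * c + W * np.tan(phi)) * shear_ratio(slip)

# One stochastic terrain realization
c_true = rng.normal(c_mean, c_std)
phi_true = rng.normal(phi_mean, phi_std)

# Noisy "measured" forces at two normal loads (needed to separate c and phi)
slips = np.linspace(0.05, 0.6, 12)
rows, obs = [], []
for W in (2000.0, 4000.0):                          # normal loads [N]
    F = thrust(slips, c_true, phi_true, W) + rng.normal(0.0, 20.0, slips.size)
    g = shear_ratio(slips)
    rows += [[A * gi, W * gi] for gi in g]          # F is linear in (c, tan phi)
    obs += list(F)

(c_hat, tanphi_hat), *_ = np.linalg.lstsq(np.array(rows), np.array(obs),
                                          rcond=None)
phi_hat = np.arctan(tanphi_hat)
```

Because a single normal load only identifies the lumped quantity A*c + W*tan(phi), the sketch uses two loads; the method evaluated in the paper instead determines the parameters from independently estimated vehicle-terrain forces.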