Autonomous exploration and mapping is a vital capability for future robotic systems expected to function in arbitrary
complex environments. In this paper, we describe an end-to-end robotic solution for remotely mapping buildings. In a
typical mission, an unmanned system is directed from a distance to enter an unknown building, sense its internal
structure, and, barring additional tasks, create a 2-D map of the building while in situ. This map provides a useful and
intuitive representation of the environment for the remote operator. We have integrated a robust mapping and
exploration system utilizing laser range scanners and RGB-D cameras, and we demonstrate an exploration and metacognition
algorithm on a robotic platform. The algorithm allows the robot to safely navigate the building, explore the
interior, report significant features to the operator, and generate a consistent map, all while maintaining localization.
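The abstract does not name the exploration strategy; frontier-based exploration is a common choice for this kind of mission, in which the robot repeatedly drives toward the boundary between mapped free space and unexplored space. A minimal sketch of the frontier-detection step on a hypothetical occupancy-grid encoding (not the paper's actual map format) might look like:

```python
import numpy as np

def frontier_cells(grid):
    """Return (row, col) free cells adjacent to unknown space.

    grid: 2-D int array, -1 = unknown, 0 = free, 1 = occupied.
    (Illustrative encoding; the paper does not specify its map format.)
    """
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:
                continue  # only known-free cells can be frontiers
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = np.array([
    [-1, -1, -1],
    [ 0,  0, -1],
    [ 0,  1,  0],
])
print(frontier_cells(grid))  # free cells bordering unknown space
```

An exploration loop would then select the nearest (or highest-utility) frontier as the next navigation goal and repeat until no frontiers remain.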
To support the missions and tasks of mixed robotic/human teams, future robotic systems will need to adapt to the
dynamic behavior of both teammates and opponents. One of the basic elements of this adaptation is the ability to exploit
both long- and short-term temporal data. This adaptation allows robotic systems to predict, and even influence, the
future behavior of both opponents and teammates, and affords the system the ability to adjust its own behavior to
better achieve the mission goals.
This work is a preliminary step in the effort to develop online entity behavior models through a combination of learning
techniques and observations. As knowledge is extracted from the system through sensor and temporal feedback, agents
within the multi-agent system attempt to develop and exploit a basic movement model of an opponent. For the purpose
of this work, extraction and exploitation are performed through the use of a discretized two-dimensional game. The game
consists of a predetermined number of sentries attempting to keep an unknown intruder agent from penetrating their
territory. The sentries utilize temporal data coupled with past opponent observations to hypothesize the probable
locations of the opponent and thus optimize their guarding locations.
This paper introduces a concept for integrating manned and Unmanned Aircraft Systems (UASs) into a highly
functional team through the design and implementation of 3-D distributed formation/flight control algorithms, with the
goal of acting as wingmen for a manned aircraft. This method is designed to minimize user input for team control,
dynamically modify formations as required, utilize standard operating formations to reduce pilot resistance to
integration, and support splinter groups for surveillance and/or as safeguards between potential threats and manned
vehicles. The proposed work coordinates UAS members by utilizing artificial potential functions whose values are
based on the state of the unmanned and manned assets including the desired formation, obstacles, task assignments, and
perceived intentions. The overall unmanned team geometry is controlled using weighted potential fields. Individual
UAS utilize fuzzy logic controllers for stability and navigation as well as a fuzzy reasoning engine for flight path
intention prediction. Approaches are demonstrated in simulation using the commercial simulator X-Plane and
controllers designed in Matlab/Simulink. Experiments include trail and right echelon formations as well as splinter group maneuvers.
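As a rough illustration of the artificial-potential idea, a UAS can descend the gradient of a weighted sum of an attractive potential toward its assigned formation slot and repulsive potentials around obstacles. The quadratic well, the weights, and the influence radius below are illustrative assumptions, not the paper's actual potential functions:

```python
import numpy as np

def potential_gradient(pos, slot, obstacles, w_att=1.0, w_rep=5.0, d0=2.0):
    """Gradient of a weighted potential: attraction to the assigned
    formation slot plus repulsion from obstacles within radius d0."""
    grad = w_att * (pos - slot)            # quadratic attractive well
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:                     # repulsion only inside radius d0
            grad += w_rep * (1 / d0 - 1 / d) / d**3 * diff
    return grad

pos = np.array([0.0, 0.0])
slot = np.array([5.0, 1.0])               # hypothetical trail-formation slot
obstacles = [np.array([2.5, 2.0])]
for _ in range(200):                      # gradient descent toward the slot
    pos -= 0.05 * potential_gradient(pos, slot, obstacles)
print(np.round(pos, 2))                   # settles near the slot
```

In the paper, the potential weights are additionally modulated by the manned asset's state, task assignments, and predicted intentions, which this sketch does not model.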
This paper addresses the problem of controlling and coordinating heterogeneous unmanned systems required to move as
a group while maintaining formation. We propose a strategy to coordinate groups of unmanned ground vehicles (UGVs)
with one or more unmanned aerial vehicles (UAVs). UAVs can be utilized in one of two ways: (1) as alpha robots to
guide the UGVs; and (2) as beta robots to surround the UGVs and adapt accordingly. In the first approach, the UAV
guides a swarm of UGVs controlling their overall formation. In the second approach, the UGVs guide the UAVs
controlling their formation. The unmanned systems are brought into a formation utilizing artificial potential fields
generated from normal and sigmoid functions. These functions control the overall swarm geometry. Nonlinear limiting
functions are defined to provide tighter swarm control by modifying and adjusting a set of control variables forcing the
swarm to behave according to set constraints. Formations derived are subsets of elliptical curves but can be generalized
to any curvilinear shape. Both approaches are demonstrated in simulation and experimentally. To demonstrate the
second approach in simulation, a swarm of forty UAVs is utilized in a convoy protection mission. As a convoy of UGVs
travels, UAVs dynamically and intelligently adapt their formation in order to protect the convoy of vehicles as it moves.
Experimental results are presented to demonstrate the approach using a fully autonomous group of three UGVs and a
single UAV helicopter for coordination.
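The elliptical escort geometry can be illustrated with a small sketch that computes evenly spaced slots on an ellipse around the moving convoy; the normal/sigmoid potential shaping that actually drives the UAVs to these slots, and the nonlinear limiting functions, are abstracted away here:

```python
import numpy as np

def elliptical_slots(center, a, b, n, heading=0.0):
    """n evenly spaced escort slots on an ellipse (semi-axes a, b)
    around the protected convoy, rotated to the convoy heading.
    Parameters are illustrative, not the paper's values."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    local = np.stack([a * np.cos(t), b * np.sin(t)], axis=1)
    c, s = np.cos(heading), np.sin(heading)
    rot = np.array([[c, -s], [s, c]])
    return local @ rot.T + center

# Convoy moving east; 8 escort slots on a 10 x 5 ellipse around it.
convoy = np.array([0.0, 0.0])
for step in range(3):
    convoy = convoy + np.array([1.0, 0.0])   # convoy advances one unit
    slots = elliptical_slots(convoy, 10.0, 5.0, 8)
print(np.round(slots[0], 1))                 # leading slot of the escort ring
```

Because the formation is recomputed from the convoy's current centroid and heading each step, the escort ring tracks the convoy as it moves, which is the adaptive behavior the convoy-protection simulation demonstrates at the scale of forty UAVs.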
One of the goals of the U.S. Army's Demo III robotics program is to develop individual and group behaviors that allow the robot to contribute to battlefield missions such as reconnaissance. Since experimental time on the actual robotic vehicle, referred to as the experimental unmanned ground vehicle (XUV), is divided between many organizations, it is essential that we develop a simulation tool that will allow us to develop and test behaviors in simulation before porting them to the actual vehicle. In this work, we describe a behavior development tool that incorporates robotic planning algorithms developed by the National Institute of Standards and Technology (NIST) in the Modular Semi-Automated Forces (ModSAF) battlefield simulation tool. By combining the NIST planning algorithms with ModSAF, we can exercise the actual vehicle planning algorithms in a dynamic battlefield environment with a variety of entities and conditions to evaluate the behaviors we develop.