This PDF file contains the front matter associated with SPIE Proceedings Volume 8741, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
The multi-robot patrolling task has practical relevance in surveillance, search and rescue, and security appli-
cations. In this task, a team of robots must repeatedly visit areas in the environment, minimizing the time
between visits to each. A team of robots can perform this task efficiently; however, challenges remain related
to team formation and task assignment.
This paper presents an approach for monitoring patrolling performance and dynamically adjusting the task
assignment function based on observations of teammate performance. Experimental results are presented from
realistic simulations of a cooperative patrolling scenario, using a team of UAVs.
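The patrolling objective above, minimizing the time between visits, is commonly measured as per-area idleness. A minimal sketch of that bookkeeping (illustrative only; the function and variable names are hypothetical, not the authors' implementation):

```python
def update_idleness(last_visit, now, visited):
    """Record the areas visited at time `now`, then return each area's
    idleness (elapsed time since its last visit)."""
    for area in visited:
        last_visit[area] = now
    return {area: now - t for area, t in last_visit.items()}

# Worst-case idleness is the quantity a patrolling policy seeks to minimize.
last_visit = {"A": 0.0, "B": 0.0, "C": 0.0}
idleness = update_idleness(last_visit, 10.0, visited={"A"})
worst_idleness = max(idleness.values())
```

A task-assignment function can then rank unvisited areas by idleness when dispatching teammates.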
There have been several major advances in autonomous navigation for unmanned ground vehicles in controlled urban environments in recent years. However, off-road environments still pose several perception and classification challenges. This paper addresses two of these challenges: detection and classification of vegetation vs. man-made objects. In order for a vehicle or remote operator to traverse cross-country terrain, automated decisions must be made about obstacles in the vehicle's path. The most common obstacle is vegetation, but some vegetation may be traversable, depending on the size of the vehicle and the type of vegetation. Man-made objects, however, should generally be detected and avoided in navigation. We present recent research towards the goal of vegetation and man-made object detection in the visible spectrum. First, we look at a state-of-the-art approach to image segmentation and image saliency using natural scene statistics. Then we apply recent work in multi-class image labeling to several images taken from a small unmanned ground vehicle (UGV). This work attempts to highlight the recent advances and the challenges that lie ahead in the ultimate goal of vegetation and man-made object detection and classification in the visible spectrum from a UGV.
We present a robust method for landing zone selection using obstacle detection to be used for UAV emergency landings. The method is simple enough to allow real-time implementation on a UAV system. The method is able to detect objects in the presence of camera movement and motion parallax. Using the detected obstacles we select a safe landing zone for the UAV. The motion and structure detection uses background estimation of stabilized video. The background variation is measured and used to enhance the moving objects if necessary. In the motion and structure map a distance transform is calculated to find a suitable location for landing.
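The final step described above, running a distance transform over the obstacle map to pick the landing spot farthest from any detected obstacle, can be sketched as follows (an illustrative stand-in using a BFS distance transform; the function name and grid encoding are assumptions, not the authors' code):

```python
from collections import deque

def select_landing_zone(obstacle_map):
    """obstacle_map: list of lists of bools (True = obstacle detected).
    A multi-source BFS from all obstacle cells gives each free cell its
    distance to the nearest obstacle; the farthest cell is the safest
    candidate landing zone."""
    rows, cols = len(obstacle_map), len(obstacle_map[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if obstacle_map[r][c]:
                dist[r][c] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return max(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: dist[rc[0]][rc[1]])

grid = [[False] * 5 for _ in range(5)]
grid[0] = [True] * 5                  # obstacles along the top edge
safe = select_landing_zone(grid)      # farthest cell lies on the bottom row
```

In the paper's pipeline the input grid would be the motion-and-structure map built from the stabilized video rather than a hand-made boolean array.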
We have developed software that allows a micro-robot to localize itself at a 1 Hz rate using only onboard hardware. The
Surveyor SRV-1 robot and its Blackfin processors were used to perform FAST feature detection on images. Good
features selected from these images were then described using the SURF descriptor algorithm. An onboard Gumstix then
correlated the features reported by the two processors and used GTSAM to develop an estimate of robot localization and
landmark positions. Localization errors in this system were on the same order of magnitude as the size of the robot itself,
giving the robot the potential to autonomously operate in a real-world environment.
Tactical situational awareness in unstructured and mixed indoor/outdoor scenarios is needed for urban combat as well as rescue operations. Two of the key functionalities needed by robot systems to function in an unknown environment are the ability to build a map of the environment and to determine their position within that map. In this paper, we present a strategy to build dense maps and to automatically close loops from 3D point clouds; this has been integrated into a mapping system dubbed OmniMapper. We present both the underlying system and experimental results from a variety of environments, including office buildings, military training facilities, and large-scale mixed indoor and outdoor environments.
In search and surveillance operations, deploying a team of mobile agents provides a robust solution with multiple
advantages over a single agent, including greater efficiency and reduced exploration time. This paper addresses the challenge
of identifying a target in a given environment when using a team of mobile agents by proposing a novel method of
mapping and movement of agent teams in a cooperative manner. The approach consists of two parts. First, the region is
partitioned into a hexagonal beehive structure in order to provide equidistant movements in every direction and to allow
for more natural and flexible environment mapping. Additionally, in search environments that are partitioned into
hexagons, mobile agents have an efficient travel path while performing searches due to this partitioning approach.
Second, we use a team of mobile agents that move in a cooperative manner and utilize the Tabu Random algorithm to
search for the target.
Due to the ever-increasing use of robotics and Unmanned Aerial Vehicle (UAV) platforms, the field of cooperative
multi-agent search has recently developed many applications that would benefit from the approach presented
in this work, including search and rescue operations, surveillance, data collection, and border patrol. In this paper, the
increased efficiency of the Tabu Random Search algorithm in combination with hexagonal partitioning is
simulated and analyzed, and the advantages of this approach are presented and discussed.
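The equidistant-movement property of the hexagonal beehive partition described above can be made concrete with axial hex coordinates (an illustrative sketch; the coordinate convention and function names are assumptions, not taken from the paper):

```python
# Axial coordinates: every hexagonal cell (q, r) has exactly six neighbors,
# all at the same center-to-center distance. This is the uniform-movement
# property the beehive partitioning exploits.
HEX_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_neighbors(q, r):
    """The six cells adjacent to (q, r)."""
    return [(q + dq, r + dr) for dq, dr in HEX_DIRECTIONS]

def hex_distance(a, b):
    """Minimum number of hex steps between two cells in axial coordinates."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

steps_to_neighbors = [hex_distance((0, 0), n) for n in hex_neighbors(0, 0)]
```

A square grid, by contrast, has diagonal neighbors farther away than orthogonal ones, which is why hexagons give agents a more natural travel path.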
Today’s robots require a great deal of control and supervision, and are unable to intelligently respond to unanticipated
and novel situations. Interactions between an operator and even a single robot take place exclusively at a very low,
detailed level, in part because no contextual information about a situation is conveyed or utilized to make the interaction
more effective and less time consuming. Moreover, the robot control and sensing systems do not learn from experience
and, therefore, do not become better with time or apply previous knowledge to new situations.
With multi-robot teams, human operators, in addition to managing the low-level details of navigation and sensor
management while operating single robots, are also required to manage inter-robot interactions. To make the most use
of robots in combat environments, it will be necessary to have the capability to assign them new missions (including
providing them context information), and to have them report information about the environment they encounter as they
proceed with their mission.
The Cognitive Patterns Knowledge Generation system (CPKG) has the ability to connect to various knowledge-based
models, multiple sensors, and to a human operator. The CPKG system comprises three major internal components:
Pattern Generation, Perception/Action, and Adaptation, enabling it to create situationally-relevant abstract patterns,
match sensory input to a suitable abstract pattern in a multilayered top-down/bottom-up fashion similar to the
mechanisms used for visual perception in the brain, and generate new abstract patterns. The CPKG allows the operator
to focus on things other than the operation of the robot(s).
We are developing a complete, self-contained autonomous navigation system for mobile robots that learns quickly, uses commodity components, and has the added benefit of emitting no radiation signature. It builds on the
autonomous navigation technology developed by Net-Scale and New York University during the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) program and takes advantage of recent scientific advancements achieved during the DARPA Deep Learning program. In this paper we will present our approach and algorithms, show results from our vision system, discuss lessons learned from the past, and present our plans for further advancing vehicle autonomy.
Mobile robots are already widely used by first responders in both civilian and military operations. Our current goal is to provide the human team with all the information available from an unknown environment quickly and accurately. The robots also need to explore autonomously, because tele-operating more than two robots is very difficult and demands one operator per robot.
In this paper the authors describe the results of several experiments on behalf of the MAST CTA. The exploration strategies developed for the experiments use from two to nine robots which, by sharing information, are able to explore and map an unknown environment. Each robot builds a local map of the environment and transmits its measurements to a central computer, which fuses all the data into a global map. This computer, called the map coordinator, sends exploration goals to the robot teams in order to explore the environment as quickly as possible. The performance of our exploration strategies was evaluated in different scenarios and tested on two different mobile robot platforms.
Simultaneous Localization and Mapping (SLAM) is a fundamental problem for autonomous systems in GPS (Global
Positioning System) denied environments. Traditional probabilistic SLAM methods use point features as landmarks
and hold all the feature positions in their state vector in addition to the robot pose. The bottleneck of point-feature
based SLAM methods is the data association problem, which is mostly solved with a statistical measure. Data
association performance is critical for a robust SLAM method, since all the filtering strategies are applied after a
correspondence is assumed known. With point features, two different but very close landmarks in the same scene may
be confused when making the correspondence decision if only their positions and error covariance matrices are taken
into account. Instead of point features, planar features can be considered as an alternative landmark model in the SLAM
problem, providing more consistent data association. Planes contain rich information for the solution of the
data association problem and can be distinguished easily relative to point features. In addition, planar maps are very
compact, since an environment typically has only a limited number of planar structures. The planar features do not have
to be large structures like building walls or roofs; small plane segments such as billboards, traffic posts, and parts of
bridges in urban areas can also be used as landmarks. In this paper, a probabilistic plane-feature extraction method from
3D LiDAR data and a data association method based on the extracted semantic information of the planar features are
introduced. The experimental results show that the semantic data association provides very satisfactory results in
outdoor 6D-SLAM.
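The basic building block of any plane-feature extraction, fitting a plane to a patch of 3D LiDAR points, can be sketched with a least-squares fit (an illustrative sketch using SVD; the paper's probabilistic extraction method is more involved, and the function name here is an assumption):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an Nx3 array of 3D points.
    Returns (unit normal n, offset d) with the plane defined by n.x + d = 0."""
    centroid = points.mean(axis=0)
    # The singular vector for the smallest singular value of the centered
    # point cloud is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal.dot(centroid)

# Noise-free points on the plane z = 2: the recovered normal is (0, 0, +-1).
pts = np.array([[0., 0., 2.], [1., 0., 2.], [0., 1., 2.], [1., 1., 2.]])
n, d = fit_plane(pts)
```

For data association, each fitted plane would additionally carry descriptors (extent, orientation, semantics), which is what makes planes easier to distinguish than points.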
While many leader-follower technologies for robotic mules have been developed in recent years, the problem of reliably
tracking and re-acquiring a human leader through cluttered environments continues to pose a challenge to widespread
acceptance of these systems. Recent approaches to leader tracking rely on leader-worn equipment that may be damaged,
hidden from view, or lost, such as radio transmitters or special clothing, as well as specialized sensing hardware such as
high-resolution LIDAR. We present a vision-based approach for robustly tracking a leader using a simple monocular
camera. The proposed method requires no modification to the leader’s equipment, nor any specialized sensors on board
the host platform. The system learns a discriminative model of the leader’s appearance to robustly track him or her
through long occlusions, changing lighting conditions, and cluttered environments. We demonstrate the system’s
tracking capabilities on publicly available benchmark datasets, as well as in representative scenarios captured using a
small unmanned ground vehicle (SUGV).
Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining
security in cleared buildings, and extending the team’s reconnaissance and persistent surveillance capability. In order
for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down
teleoperation which require intensive human attention and affect the human operator’s ability to maintain local
situational awareness and ensure their own safety.
This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates
naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant
interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly
select the most suitable interaction method given the situational demands. For instance, the human can silently use arm
and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface
provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight
conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator’s
clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture
recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we
designed the interactions around familiar arm and hand gestures.
Thousands of small UAVs are in active use by the US military and are generally operated by trained but not necessarily skilled personnel. The user interfaces for these devices often seem to be more engineering-focused than usability-focused, which can lead to operator frustration, poor mission effectiveness, reduced situational awareness, and sometimes loss of the vehicle. In addition, coordinated control of both air and ground vehicles is a frequently desired objective, usually with the intent of increasing situational awareness for the ground vehicle. The Space and Naval Warfare Systems Center Pacific (SSCPAC) is working under a Naval Innovative Science and Engineering project to address these topics. The UAS currently targeted are the Raven/Puma/Wasp family of air vehicles as they are small, all share the same communications protocol, and are in wide-spread use. The stock ground control station (GCS) consists of a hand control unit, radio, interconnect hub, and laptop. The system has been simplified to an X-box controller, radio and a laptop, resulting in a smaller hardware footprint, but most importantly the number of personnel required to operate the system has been reduced from two to one. The stock displays, including video with text overlay on one and FalconView on the other, are replaced with a single, graphics-based, integrated user interface, providing the user with much improved situational awareness. The SSCPAC government-developed GCS (the Multi-robot Operator Control Unit) already has the ability to control ground robots and this is leveraged to realize simultaneous multi-vehicle operations including autonomous UAV over-watch for enhanced UGV situational awareness.
Currently fielded small unmanned ground vehicles (SUGVs) are operated via teleoperation. This method of operation
requires a high level of operator involvement within, or near within, line of sight of the robot. As advances are made in
autonomy algorithms, capabilities such as automated mapping can be developed to allow SUGVs to be used to provide
situational awareness with an increased standoff distance while simultaneously reducing operator involvement.
In order to realize these goals, it is paramount the data produced by the robot is not only accurate, but also presented in
an intuitive manner to the robot operator. The focus of this paper is how to effectively present map data produced by a
SUGV in order to drive the design of a future user interface. The effectiveness of several 2D and 3D mapping
capabilities was evaluated by presenting a collection of pre-recorded data sets of a SUGV mapping a building in an
urban environment to a user panel of Soldiers. The data sets were presented to each Soldier in several different formats
to evaluate multiple factors, including update frequency and presentation style. Once all of the data sets were presented,
a survey was administered. The questions in the survey were designed to gauge the overall usefulness of the mapping
algorithm presentations as an information generating tool. This paper presents the development of this test protocol
along with the results of the survey.
Providing long-distance non-line-of-sight control for unmanned ground robots has long been recognized as a problem,
considering the nature of the required high-bandwidth radio links. In the early 2000s, the DARPA Mobile Autonomous
Robot Software (MARS) program funded the Space and Naval Warfare Systems Center (SSC) Pacific to demonstrate a
capability for autonomous mobile communication relaying on a number of Pioneer laboratory robots. This effort also
resulted in the development of ad hoc networking radios and software that were later leveraged in the development of a
more practical and logistically simpler system, the Automatically Deployed Communication Relays (ADCR). Funded by
the Joint Ground Robotics Enterprise and internally by SSC Pacific, several generations of ADCR systems introduced
increasingly more capable hardware and software for automatic maintenance of communication links through
deployment of static relay nodes from mobile robots. This capability was finally tapped in 2010 to fulfill an urgent need
from theater: in 2012, 243 kits of ruggedized, robot-deployable communication relays were produced and sent to Afghanistan to
extend the range of EOD and tactical ground robots. This paper provides a summary of the evolution of the
radio relay technology at SSC Pacific, and then focuses on the latest two stages, the Manually-Deployed Communication
Relays and the latest effort to automate the deployment of these ruggedized and fielded relay nodes.
Of great interest to police and military organizations is the development of effective improvised explosive device (IED) disposal (IEDD) technology to aid in activities such as minefield clearing and bomb disposal, while minimizing risk to personnel. This paper presents new results in the research and development of a next generation mobile immersive teleoperated explosive ordnance disposal system. This system incorporates elements of 3D vision, multilateral teleoperation for high transparency haptic feedback, immersive augmented reality operator control interfaces, and a realistic hardware-in-the-loop (HIL) 3D simulation environment incorporating vehicle and manipulator dynamics for both operator training and algorithm development. In the past year, new algorithms have been developed to facilitate incorporating commercial off-the-shelf (COTS) robotic hardware into the teleoperation system. In particular, a real-time numerical inverse position kinematics algorithm that can be applied to a wide range of manipulators has been implemented, an inertial measurement unit (IMU) attitude stabilization system for manipulators has been developed and experimentally validated, and a voice-operated manipulator control system has been developed and integrated into the operator control station. The integration of these components into a vehicle simulation environment with half-car vehicle dynamics has also been successfully carried out. A physical half-car plant is currently being constructed for HIL integration with the simulation environment.
We consider a Micro-Aerial Vehicle (MAV), used as a mobile sensor node, in conjunction with static sensor nodes, in a
mission of detection and localization of a hidden Electromagnetic (EM) emitter. This paper provides algorithms for the
MAV control under the Position-Adaptive Direction Finding (PADF) concept. The MAV avoids obstructions or
locations that may disrupt the EM propagation of the emitter, hence reducing the accuracy of the receivers’ combined
emitter location estimation. Given the cross Path Loss Exponents (PLEs) between the static and mobile node, we
propose a cost function for the MAV’s position adjustments that is based on the combination of cross PLEs and
Received Signal Strength Indicators (RSSI). The mobile node adjusts current position by minimizing a quadratic cost
function such that the PLE of surrounding receivers is decreased while increasing RSSI from the mobile node to the
target, thereby, reducing the inconsistency of the environment created by echo and multipath disturbances. In the
process, the MAV finds a more uniform measuring environment that increases localization accuracy. We propose to
embed this capability and functionality into MAV control algorithms.
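The abstract does not give the exact form of the quadratic cost, but its stated structure, penalizing high path-loss exponents at the surrounding receivers while rewarding received signal strength, might be sketched as follows (a hypothetical illustration; the weights, signs, and function name are all assumptions):

```python
def padf_cost(ple_values, rssi, w_ple=1.0, w_rssi=1.0):
    """Hypothetical quadratic cost for a candidate MAV position:
    quadratic penalty on the path-loss exponents seen by the surrounding
    receivers, minus a reward for the received signal strength (dBm)
    from the mobile node. Lower cost = better position."""
    return w_ple * sum(p ** 2 for p in ple_values) - w_rssi * rssi

# Moving to a position with lower PLEs and stronger RSSI lowers the cost,
# which is the qualitative behavior the PADF adjustment seeks.
bad_position = padf_cost([3.5, 3.8], rssi=-70.0)
good_position = padf_cost([2.0, 2.1], rssi=-55.0)
```

A position controller would evaluate such a cost over candidate displacements and follow its (numerical) gradient.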
This work presents a game theory-based consensus problem for leaderless multi-agent systems in the presence of
adversarial inputs that introduce disturbances into the dynamics. Given the presence of enemy components
and the possibility of malicious cyber attacks compromising the security of networked teams, a position agreement
must be reached by the networked mobile team based on environmental changes. The problem is addressed under
a distributed decision making framework that is robust to possible cyber attacks, which has an advantage over
centralized decision making in the sense that a decision maker is not required to access information from all the
other decision makers. The proposed framework derives three tuning laws for every agent; one associated with
the cost, one associated with the controller, and one with the adversarial input.
In heterogeneous battlefield teams, the balance between team and individual objectives forms the
basis for the internal topological structure of teams. The stability of team structure is studied by
presenting a graphical coalitional game (GCG) with Positional Advantage (PA). PA is the Shapley
value strengthened by the axioms of value. The notion of team and individual objectives is
studied by defining the altruistic and competitive contributions made by an individual; the altruistic and
competitive contributions made by an agent are components of its total or marginal contribution.
Moreover, the paper examines dynamic team effects by defining three online sequential decision
games based on the marginal, competitive, and altruistic contributions of the individuals towards the
team. The stable graphs under these sequential decision games are studied and found to be
connected, complete, or trees, respectively.
In this paper, an adaptive consensus-based formation control scheme is derived for mobile robots in a pre-defined
formation when the full dynamics of the robots, which include inertia, Coriolis, and friction terms, are considered. It is
shown that dynamic uncertainties of the robots can make the overall formation unstable when a traditional consensus
scheme is utilized. In order to estimate the affine nonlinear robot dynamics, a neural network (NN) based adaptive
scheme is utilized. In addition to this adaptive feedback control input, an additional control input is introduced based on
the consensus approach to make the robots keep their desired formation. Subsequently, the outer consensus loop
is redesigned for reduced communication. Lyapunov theory is used to show the stability of the overall system.
Simulation results are included at the end.
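The consensus term underlying such a formation controller can be illustrated with a simple kinematic sketch (1-D, discrete-time, no robot dynamics or NN adaptation; gain, offsets, and names are all illustrative assumptions, not the paper's controller):

```python
def consensus_step(positions, offsets, neighbors, gain=0.2):
    """One discrete-time consensus update: each robot i moves to reduce the
    disagreement between its formation error (x_i - o_i) and that of its
    neighbors, so the team converges to the desired relative offsets."""
    new_positions = []
    for i, x in enumerate(positions):
        err = sum((positions[j] - offsets[j]) - (x - offsets[i])
                  for j in neighbors[i])
        new_positions.append(x + gain * err)
    return new_positions

# Three robots with desired slots at offsets 0, 1, 2; fully connected graph.
pos = [0.0, 5.0, 1.0]
offs = [0.0, 1.0, 2.0]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(100):
    pos = consensus_step(pos, offs, nbrs)
# At convergence the robots sit one unit apart, matching the offsets.
```

The paper's contribution is precisely what this sketch omits: compensating the full robot dynamics with an NN-based adaptive term so that this outer consensus loop remains stable.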
In this paper, we are interested in exploiting the heterogeneity of a robotic network made of ground and aerial agents to sense multiple targets in a cluttered environment. Maintaining wireless communication in this type of network is fundamentally important, especially for cooperative purposes. The proposed heterogeneous network consists of ground sensors, e.g., OctoRoACHes, and aerial routers, e.g., quadrotors. Adaptive potential field methods are used to coordinate the ground mobile sensors. Moreover, a reward function for the aerial mobile wireless routers is formulated to guarantee communication coverage among the ground sensors and a fixed base station. A sub-optimal controller is proposed based on an approximate control policy iteration technique. Simulation results of a case study are presented to illustrate the proposed methodology.
The Robotic Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move
beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model,
which moves beyond the state-of-the-art by representing the world using metric, semantic, and symbolic information. It
joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using
traditional geometric and symbolic cognitive algorithms, and using new computational nodes formed by the combination of these
disciplines. The Common World Model must understand how these objects relate to each other. Our world model
includes the concept of Self-Information about the robot. By encoding current capability, component status, task
execution state, and histories we track information which enables the robot to reason and adapt its performance using
Meta-Cognition and Machine Learning principles. The world model includes models of how aspects of the environment
behave, which enable prediction of future world states. To manage complexity, we adopted a phased implementation
approach to the world model. We discuss the design of “Phase 1” of this world model, and interfaces by tracing
perception data through the system from the source to the meta-cognitive layers provided by ACT-R and SS-RICS. We
close with lessons learned from implementation and how the design relates to Open Architecture.
How does a robot know when something goes wrong? Our research answers this question by leveraging expectations
(predictions about the immediate future) and using the mismatch between the expectations and the external world to
monitor the robot's progress. We use the cognitive architecture ACT-R (Adaptive Control of Thought - Rational) to
learn the associations between the current state of the robot and the world, the action to be performed in the world, and
the future state of the world. These associations are used to generate expectations that are then matched by the
architecture against the next state of the world. A significant mismatch between these expectations and the actual state of
the world indicates a problem, possibly resulting from unexpected consequences of the robot's actions, unforeseen
changes in the environment or unanticipated actions of other agents. When a problem is detected, the recovery model
can suggest a number of recovery options. If the situation is unknown, that is, the mismatch between expectations and
the world is novel, the robot can use a recovery solution from a set of heuristic options. When a recovery option is
successfully applied, the robot learns to associate that recovery option with the mismatch. When the same problem is
encountered later, the robot can apply the learned recovery solution rather than using the heuristics or randomly
exploring the space of recovery solutions. We present results from execution monitoring and recovery performed during
an assessment conducted at the Combined Arms Collective Training Facility (CACTF) at Fort Indiantown Gap.
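The learn-expect-detect-recover loop described above can be condensed into a small sketch (illustrative only; this is plain Python bookkeeping, not the ACT-R model, and all names are hypothetical):

```python
class ExecutionMonitor:
    """Minimal sketch of expectation-based monitoring: learn
    (state, action) -> expected next state, flag mismatches, and cache
    recovery options that worked so heuristics are needed only once."""

    def __init__(self):
        self.expectations = {}   # (state, action) -> predicted next state
        self.recoveries = {}     # mismatch signature -> learned recovery

    def learn(self, state, action, next_state):
        self.expectations[(state, action)] = next_state

    def check(self, state, action, observed):
        """True if the observation matches the expectation (or none exists)."""
        expected = self.expectations.get((state, action))
        return expected is None or expected == observed

    def recover(self, state, action, observed, heuristic):
        """Reuse a learned recovery for this mismatch, else try the heuristic
        and remember it for the next encounter."""
        key = (state, action, observed)
        option = self.recoveries.get(key, heuristic)
        self.recoveries[key] = option
        return option

monitor = ExecutionMonitor()
monitor.learn("at_door", "push", "door_open")
ok = monitor.check("at_door", "push", "door_closed")    # mismatch detected
plan = monitor.recover("at_door", "push", "door_closed", heuristic="pull")
```

On a second encounter with the same mismatch, `recover` returns the cached option rather than falling back to heuristics, mirroring the learning behavior described in the abstract.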
Terrain identification is a key enabling ability for generating terrain adaptive behaviors that assist both robot planning and motor control. This paper considers running legged robots from the RHex family, which the military plans to use in the field to assist troops in reconnaissance tasks. Important terrain adaptive behaviors include the selection of gaits, modulation of leg stiffness, and alteration of steering control laws to minimize slippage, maximize speed, and/or reduce energy consumption. These terrain adaptive behaviors can be enabled by a terrain identification methodology that combines proprioceptive sensors already available in RHex-type robots. The proposed classification approach is based on the characteristic frequency signatures of data from leg observers, which combine current sensing with a dynamic model of the leg motion. The paper analyzes the classification accuracy obtained using both a single leg and groups of legs (through a voting scheme) on different terrains such as vinyl, asphalt, grass, and pebbles. Additionally, it presents a terrain classifier that works across various gait speeds and performs almost as well as an overly specialized classifier.
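The two ingredients named in the abstract, frequency signatures of the leg-observer signals and a voting scheme across legs, can be sketched as follows (illustrative feature extraction only; the sine-wave inputs stand in for real observer data, and names are assumptions):

```python
import numpy as np

def freq_signature(signal, n_bands=4):
    """Band-averaged magnitude spectrum of one leg-observer signal, an
    illustrative stand-in for the characteristic frequency features."""
    mag = np.abs(np.fft.rfft(signal))
    return np.array([band.mean() for band in np.array_split(mag, n_bands)])

def vote(predictions):
    """Majority vote across the per-leg classifier outputs."""
    values, counts = np.unique(predictions, return_counts=True)
    return values[np.argmax(counts)]

t = np.linspace(0.0, 1.0, 256, endpoint=False)
slow = freq_signature(np.sin(2 * np.pi * 5 * t))    # energy in the lowest band
fast = freq_signature(np.sin(2 * np.pi * 60 * t))   # energy in a higher band
terrain = vote(["grass", "grass", "asphalt"])       # two legs out-vote one
```

Each leg's signature would feed a per-leg classifier; voting across legs then suppresses single-leg misclassifications, which is the accuracy gain the paper measures.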
We describe an architecture to provide online semantic labeling capabilities to field robots operating in urban environments. At the core of our system is the stacked hierarchical classifier developed by Munoz et al., which classifies regions in monocular color images using models derived from hand-labeled training data. The classifier is trained to identify buildings, several kinds of hard surfaces, grass, trees, and sky. When taking this algorithm into the real world, practical concerns with difficult and varying lighting conditions require careful control of the imaging process. First, camera exposure is controlled by software that examines all of the image's pixels, to compensate for the simplistic, poorly performing algorithm used on the camera. Second, by merging multiple images taken with different exposure times, we are able to synthesize images with higher dynamic range than the ones produced by the sensor itself. The sensor's limited dynamic range makes it difficult to properly expose areas in shadow at the same time as high-albedo surfaces that are directly illuminated by the sun. Texture is a key feature used by the classifier, and under- or over-exposed regions lacking texture are a leading cause of misclassifications. The results of the classifier are shared with higher-level elements operating in the UGV in order to perform tasks such as building identification from a distance and finding traversable surfaces.
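The exposure-merging step described above can be illustrated with a standard weighted radiance average (a minimal sketch, not the authors' pipeline; the hat-shaped weighting and all names are assumptions):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Illustrative HDR merge: divide each 8-bit image by its exposure time
    to estimate scene radiance, then average the estimates, down-weighting
    pixels that are nearly under- or over-exposed (hat weighting)."""
    acc = np.zeros_like(images[0], dtype=float)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        # Weight is 1 at mid-range (128) and falls to 0 at 0 and 255.
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0
        acc += w * (img / t)
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-6)

# A pixel saturated in the long exposure is recovered from the short one:
short = np.array([[100.0]])    # well exposed at exposure time 1.0
long_ = np.array([[255.0]])    # saturated at exposure time 4.0
radiance = merge_exposures([short, long_], [1.0, 4.0])
```

Because the saturated pixel receives zero weight, the merged radiance comes entirely from the usable short exposure, which is exactly how merging restores texture in over-exposed regions.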
For search and rescue robots and reconnaissance robots it is important to detect objects in their vicinity. We have
developed a scanning laser line striper that can produce dense 3D images using active illumination. The scanner consists
of a camera and a MEMS-micro mirror based projector. It can also detect the presence of optically difficult material like
glass and metal. The sensor can be used for autonomous operation or it can help a human operator to better remotely
control the robot. In this paper we will evaluate the performance of the scanner under outdoor illumination, i.e. from
operating in the shade to operating in full sunlight. We report the range, resolution and accuracy of the sensor and its
ability to reconstruct objects like grass, wooden blocks, wires, metal objects, electronic devices like cell phones, blank
RPG, and other inert explosive devices. Furthermore we evaluate its ability to detect the presence of glass and polished
metal objects. Lastly we report on a user study that shows a significant improvement in a grasping task. The user is
tasked with grasping a wire with the remotely controlled hand of a robot. We compare the time it takes to complete the
task using the 3D scanner with using a traditional video camera.
Seamless integration of unmanned systems and Soldiers in the operational environment requires robust
communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and
levels of communication superior to single-mode interaction, using auditory, visual, and tactile modalities. Visual
signaling using arm and hand gestures is a natural method of communication between people. Visual signals
standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for
human to robot communication. Emerging technologies using Inertial Measurement Units (IMU) enable classification of
arm and hand gestures for communication with a robot without the requirement of line-of-sight needed by computer
vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable
of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots
necessitates them having the ability to return equivalent messages. Existing visual signals from robots to humans
typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused
modality for robot to human communication. Typically used for hands-free navigation and cueing, existing tactile
display technologies are used to deliver equivalent visual signals from the U.S. Army Field Manual. This paper describes
ongoing research to collaboratively develop tactile communication methods with Soldiers, measure classification
accuracy of visual signal interfaces, and provide an integration example including two robotic platforms.
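As one illustration of IMU-based gesture classification, a nearest-centroid classifier over simple per-axis statistics can separate synthetic gestures. The signal names and feature set below are hypothetical, not the interfaces measured in this research:

```python
import numpy as np

rng = np.random.default_rng(3)

def gesture_features(accel):
    """Summarize a 3-axis accelerometer trace with per-axis mean and
    standard deviation, a common lightweight IMU feature set."""
    return np.concatenate([accel.mean(axis=0), accel.std(axis=0)])

def make_gesture(axis, n=20):
    """Synthetic gestures: oscillation concentrated on one axis plus noise."""
    a = rng.normal(0.0, 0.1, (n, 50, 3))
    a[:, :, axis] += np.sin(np.linspace(0, 2 * np.pi, 50))
    return a

# Hypothetical signal names standing in for Field Manual arm-and-hand signals.
train = {"halt": make_gesture(0), "rally": make_gesture(1)}
centroids = {g: np.mean([gesture_features(x) for x in xs], axis=0)
             for g, xs in train.items()}

def classify(accel):
    """Assign the gesture whose training centroid is nearest in feature space."""
    f = gesture_features(accel)
    return min(centroids, key=lambda g: np.linalg.norm(centroids[g] - f))

print(classify(make_gesture(1, n=1)[0]))
```

Because the features come from inertial data alone, no line of sight is needed, which is the advantage the abstract highlights over computer vision.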
The creation of dynamic manipulation behaviors for high-degree-of-freedom mobile robots will allow them to accomplish increasingly difficult tasks in the field. We are investigating how the coordinated use of the body, legs, and integrated manipulator on a mobile robot can improve the strength, velocity, and workspace when handling heavy
objects. We envision that such a capability would aid in a search and rescue scenario when clearing obstacles from a
path or searching a rubble pile quickly. Manipulating heavy objects is especially challenging because the dynamic forces
are high and a legged system must coordinate all its degrees of freedom to accomplish tasks while maintaining balance.
To accomplish these types of manipulation tasks, we use trajectory optimization techniques to generate feasible open-loop behaviors for our 28-DOF quadruped robot (BigDog) by planning trajectories in a 13-dimensional space. We apply
the Covariance Matrix Adaptation (CMA) algorithm to solve for trajectories that optimize task performance while also
obeying important constraints such as torque and velocity limits, kinematic limits, and center of pressure location. These
open-loop behaviors are then used to generate desired feed-forward body forces and footstep locations, which enable
tracking on the robot. Some hardware results for cinderblock throwing are demonstrated on the BigDog quadruped
platform augmented with a human-arm-like manipulator. The results are analogous to how a human athlete maximizes
distance in the discus event by performing a precise sequence of choreographed steps.
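The optimization scheme can be illustrated in miniature: penalize constraint violations and search over trajectory knot points with a simple evolution strategy (standing in here for full CMA; the task, limits, and dimensions are toy values, not BigDog's):

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(traj, v_limit=1.0):
    """Toy objective: reach x = 5 at the final knot while penalizing
    velocity-limit violations between successive knots."""
    task = (traj[-1] - 5.0) ** 2
    vel = np.diff(traj)
    penalty = np.sum(np.maximum(np.abs(vel) - v_limit, 0.0) ** 2)
    return task + 100.0 * penalty

# Simple (mu, lambda) evolution strategy over 8 trajectory knots.
mean, sigma = np.zeros(8), 1.0
for _ in range(300):
    pop = mean + sigma * rng.standard_normal((40, 8))
    pop[:, 0] = 0.0                      # trajectory start fixed at x = 0
    elite = pop[np.argsort([cost(p) for p in pop])[:10]]
    mean = elite.mean(axis=0)
    sigma = max(0.995 * sigma, 0.05)     # slow step-size decay
print(mean, cost(mean))
```

Full CMA additionally adapts the sampling covariance, which matters far more in the 13-dimensional, tightly constrained space the paper describes; the penalty structure, however, is the same idea.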
We document initial experiments with Canid, a freestanding, power-autonomous quadrupedal robot equipped
with a parallel actuated elastic spine. Research into robotic bounding and galloping platforms holds scientific and
engineering interest because it can both probe biological hypotheses regarding bounding and galloping mammals and
also provide the engineering community with a new class of agile, efficient and rapidly-locomoting legged robots. We
detail the design features of Canid that promote our goals of agile operation in a relatively cheap, conventionally
prototyped, commercial off-the-shelf actuated platform. We introduce new measurement methodology aimed at
capturing our robot’s “body energy” during real time operation as a means of quantifying its potential for agile behavior.
Finally, we present joint-motor, inertial, and motion-capture data taken from Canid’s initial leaps into highly energetic regimes exhibiting large accelerations, illustrating the use of this measure and suggesting the platform’s future potential for developing efficient, stable, and hence useful bounding gaits.
Autonomous systems operating in militarily relevant environments are valuable assets due to the increased situational
awareness they provide to the Warfighter. To further advance the current state of these systems, a collaborative
experiment was conducted as part of the Safe Operations of Unmanned Systems for Reconnaissance in Complex
Environments (SOURCE) Army Technology Objective (ATO). We present the findings from this large-scale experiment
which spanned several research areas, including 3D mapping and exploration, communications maintenance, and vision-based stairway detection.
For 3D mapping and exploration, we evaluated loop closure using Iterative Closest Point (ICP). To improve current
communications systems, the limitations of an existing mesh network were analyzed. Also, camera data from a
Microsoft Kinect was used to test autonomous stairway detection and modeling algorithms. This paper will detail the
experimental procedure and the preliminary results for each of these tests.
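The ICP loop-closure evaluation rests on aligning two scans by iterating nearest-neighbor matching with a closed-form rigid fit. A minimal 2D sketch of that loop (not the experiment's implementation):

```python
import numpy as np

def best_rigid(src, dst):
    """Closed-form least-squares rotation and translation (Kabsch/SVD)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def icp(src, dst, iters=20):
    """Match each source point to its nearest target point, re-fit the
    rigid transform, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        idx = np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# A scan and a rotated, translated copy of it.
theta = 0.1
R0 = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = np.random.default_rng(1).uniform(0, 10, (60, 2))
src = dst @ R0.T + np.array([0.5, -0.3])
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())       # residual after alignment
```

On real 3D map data the same loop runs over point clouds with outlier rejection, but the match-fit-repeat structure is identical.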
The Cargo UGV project was initiated in 2010 with the aim of developing and experimenting with advanced autonomous
vehicles capable of being integrated unobtrusively into manned logistics convoys. The intent was to validate two
hypotheses in complex, operationally representative environments: first, that unmanned tactical wheeled vehicles
provide a force protection advantage by creating standoff distance to warfighters during ambushes or improvised
explosive device attacks; and second, that these UGVs serve as force multipliers by enabling a single operator to control
multiple unmanned assets.
To assess whether current state-of-the-art autonomous vehicle technology was sufficiently capable to permit resupply
missions to be executed with decreased risk and reduced manpower, and to assess the effect of UGVs on customary
convoy tactics, the Marine Corps Warfighting Laboratory and the Joint Ground Robotics Enterprise sponsored Oshkosh
Defense and the National Robotics Engineering Center to equip two standard Marine Corps cargo trucks for autonomous operation.
This paper details the system architecture, hardware implementation, and software modules developed to meet the vehicle control, perception, and planning requirements imposed by this application. Additionally, the design of a
custom human machine interface and an accompanying training program are described, as is the creation of a realistic
convoy simulation environment for rapid system development.
Finally, results are conveyed from a warfighter experiment in which the effectiveness of the training program for novice
operators was assessed, and the impact of the UGVs on convoy operations was observed in a variety of scenarios via
direct comparison to a fully manned convoy.
Unmanned ground vehicles (UGVs) are becoming prolific in the heterogeneous superset of robotic platforms. The sensors which provide odometry, localization, perception, and vehicle diagnostics are fused to give the robotic platform a sense of the environment it is traversing. The automotive CAN bus has come to dominate the industry due to its fault tolerance and a message structure that allows high-priority messages to reach the desired node in a real-time environment. UGVs are being researched and produced at an accelerated rate to perform arduous, repetitive, and
dangerous missions that are associated with a military action in a protracted conflict. The technology and applications of
the research will inevitably be turned into dual-use platforms to aid civil agencies in the performance of their various
operations. Our motivation is security of the holistic system; however as subsystems are outsourced in the design, the
overall security of the system may be diminished. We will focus on the CAN bus topology and the vulnerabilities
introduced in UGVs and recognizable security vulnerabilities that are inherent in the communications architecture. We
will show how data can be extracted from an add-on CAN bus that can be customized to monitor subsystems. The
information can be altered or spoofed to force the vehicle to exhibit unwanted actions or render the UGV unusable for
the designed mission. The military relies heavily on technology to maintain information dominance, and the security of
the information introduced onto the network by UGVs must be safeguarded from vulnerabilities that can be exploited.
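The spoofing risk follows from the frame format itself: a classic CAN data frame carries only an 11-bit identifier (which doubles as arbitration priority) and up to eight data bytes, with no sender authentication. A minimal sketch using the SocketCAN frame layout (the identifier and payload here are hypothetical):

```python
import struct

CAN_FRAME_FMT = "<IB3x8s"   # SocketCAN layout: id, dlc, padding, data

def pack_frame(can_id, data):
    """Pack a classic CAN frame. The protocol carries no sender
    authentication, so any node on the bus can emit any identifier."""
    assert can_id < 0x800 and len(data) <= 8
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_frame(raw):
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, raw)
    return can_id, data[:dlc]

# Hypothetical command frame: a spoofed copy is indistinguishable from a
# legitimate one, since bus arbitration considers only the identifier.
frame = pack_frame(0x0F3, bytes([0x64, 0x00]))
cid, payload = unpack_frame(frame)
print(hex(cid), payload)
```

An attacker with physical or add-on access to the bus can therefore replay or alter frames exactly as the abstract describes, and receiving nodes have no protocol-level way to detect it.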
Over the course of the last few years, the Robot Operating System (ROS) has become a highly popular software
framework for robotics research. ROS has a very active developer community and is widely used for robotics research in
both academia and government labs. The prevalence and modularity of ROS cause many people to ask the question:
“What prevents ROS from being used in commercial or government applications?” One of the main problems that is
preventing this increased use of ROS in these applications is the question of characterizing its security (or lack thereof).
In the summer of 2012, a crowd-sourced cyber-physical security contest was launched at the cyber security conference DEF CON 20 to begin the process of characterizing the security of ROS. A small-scale, car-like robot was configured as a cyber-physical security “honeypot” running ROS. DEF CON 20 attendees were invited to find exploits and
vulnerabilities in the robot while network traffic was collected. The results of this experiment provided some interesting
insights and opened up many security questions pertaining to deployed robotic systems. The Federal Aviation
Administration is tasked with opening up the civil airspace to commercial drones by September 2015 and driverless cars
are already legal for research purposes in a number of states. Given the integration of these robotic devices into our daily
lives, the authors pose the following question: “What security exploits can a motivated person with little-to-no
experience in cyber security execute, given the wide availability of free cyber security penetration testing tools such as
Metasploit?” This research focuses on applying common, low-cost, low-overhead, cyber-attacks on a robot featuring
ROS. This work documents the effectiveness of those attacks.
A simple, quantitative measure for encapsulating the autonomous capabilities of unmanned systems (UMS) has
yet to be established. Current models for measuring a UMS’s autonomy level require extensive, operational
level testing, and provide a means for assessing the autonomy level for a specific mission/task and operational
environment. A more elegant technique for quantifying autonomy using component level testing of the robot
platform alone, outside of mission and environment contexts, is desirable. Using a high level framework for
UMS architectures, such a model for determining a level of autonomy has been developed. The model uses a
combination of developmental and component level testing for each aspect of the UMS architecture to define a
non-contextual autonomous potential (NCAP). The NCAP provides an autonomy level, ranging from fully non-autonomous to fully autonomous, in the form of a single numeric parameter describing the UMS’s performance
capabilities when operating at that level of autonomy.
This article presents a comparative study between a well-known SLAM (Simultaneous Localization and Mapping)
algorithm, called Gmapping, and a standard dead-reckoning algorithm; the study is based on experimental results of both approaches using a commercial skid-steered robot, the P3DX. Five main base-case scenarios are conducted to evaluate and test the effectiveness of both algorithms. The results show that SLAM outperformed dead reckoning in terms of map-making accuracy in all scenarios but one, since SLAM did not work well in a
rapidly changing environment. Although the main conclusion about the excellence of SLAM is not surprising, the
presented test method is valuable to professionals working in this area of mobile robots, as it is highly practical, and
provides solid and valuable results. The novelty of this study lies in its simplicity. The simple but novel test method for quantitatively comparing robot mapping algorithms using SLAM and dead reckoning, along with some applications using autonomous robots, is being patented by the authors in U.S. Patent Application Nos. 13/400,726 and
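Dead reckoning as compared here integrates velocity commands open-loop, so any heading error compounds with distance traveled. A minimal unicycle-model sketch (parameters illustrative):

```python
import math

def dead_reckon(pose, v, omega, dt):
    """Integrate one odometry step of a unicycle model (x, y, theta)."""
    x, y, th = pose
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + omega * dt)

# Drive a square: four 5 m legs with 90-degree in-place turns in between.
pose = (0.0, 0.0, 0.0)
for _ in range(4):
    for _ in range(50):               # straight leg, 0.1 s steps at 1 m/s
        pose = dead_reckon(pose, v=1.0, omega=0.0, dt=0.1)
    for _ in range(10):               # turn in place through pi/2
        pose = dead_reckon(pose, v=0.0, omega=math.pi / 2, dt=0.1)
print(pose)  # returns (approximately) to the start with perfect odometry
```

With real wheel slip, each step's v and omega are wrong by a little, and the integrated pose drifts without bound, which is exactly the error SLAM's loop closures correct.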
The limited power generation capability of a small satellite (e.g., a CubeSat) requires robust scheduling. A scheduling
approach for small satellites which considers subsystem inter-dependency (where co-operation is required, desirable or
prohibited), operational requirements, and ground communication windows is presented. The paper considers the optimal way to schedule tasks for autonomous operation (required when not in communication with ground controllers and desirable at all times during the mission). It compares a genetic algorithm-based approach, an exhaustive search-based approach, and a heuristic-based approach. Performance maximization is considered, in light of both decision-making time and reduced activity time.
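The trade-off between the compared approaches can be seen on a toy power-constrained task set: exhaustive search over orderings finds the optimum but scales factorially, while a greedy heuristic decides instantly at some cost in value. The tasks and budget below are hypothetical:

```python
from itertools import permutations

# Hypothetical tasks: (name, power_draw, value); the budget caps total draw.
TASKS = [("imaging", 3, 10), ("downlink", 4, 8), ("beacon", 1, 2), ("adcs", 2, 5)]
BUDGET = 7

def schedule_value(order, budget=BUDGET):
    """Admit tasks in the given order, skipping any that would exceed
    the remaining power budget; return the total value admitted."""
    used = value = 0
    for _, power, val in order:
        if used + power <= budget:
            used += power
            value += val
    return value

# Exhaustive: try every ordering (feasible only for small task sets).
best = max(permutations(TASKS), key=schedule_value)
# Heuristic: greedy by value per unit of power.
greedy = sorted(TASKS, key=lambda t: t[2] / t[1], reverse=True)
print(schedule_value(best), schedule_value(greedy))
```

Here the greedy heuristic leaves one unit of value on the table, illustrating why the paper weighs solution quality against decision-making time; a genetic algorithm sits between the two extremes.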
This study presents a method to classify humans and trees by extracting their geometric and statistical features from data obtained with 3D LADAR. In a wooded, GPS-denied environment, it is difficult to identify the location of unmanned ground vehicles and to properly recognize the environment in which these vehicles move. In this study, using the point cloud data obtained via 3D LADAR, a method to extract the features of humans, trees, and other objects
within an environment was implemented and verified through the processes of segmentation, feature extraction, and
classification. First, for the segmentation, the radially bounded nearest neighbor method was applied. Second, for the
feature extraction, each segmented object was divided into three parts, and then their geometrical and statistical features
were extracted. A human was divided into three parts: the head, trunk and legs. A tree was also divided into three parts:
the top, middle, and bottom. The geometric features were the variance of the x-y data about the center of each part of an object and the distances between the central points of the parts, with the part centers obtained using K-means clustering. The statistical features were the variances of the individual parts. In this study, three, six, and six features were extracted, respectively, resulting in a total of 15 features. Finally, after training an artificial neural network on the extracted data, new data were
classified. This study showed the results of an experiment that applied an algorithm proposed with a vehicle equipped
with 3D LADAR in a thickly forested area, which is a GPS-denied environment. A total of 5,158 segments were
obtained and the classification rates for human and trees were 82.9% and 87.4%, respectively.
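The radially bounded nearest neighbor step groups points whose mutual distance falls under a radius; the sketch below shows that grouping plus simple part-wise variance features (the radius, three-band split, and data are illustrative, not the study's parameters):

```python
import numpy as np

def rbnn_segment(points, radius=0.5):
    """Radially bounded nearest neighbor: merge points into one segment
    whenever they lie within `radius` of each other (O(n^2) sketch)."""
    parent = list(range(len(points)))
    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if np.linalg.norm(points[i] - points[j]) < radius:
                parent[find(i)] = find(j)
    labels = np.array([find(i) for i in range(len(points))])
    return [points[labels == k] for k in np.unique(labels)]

def part_features(segment):
    """Split a segment into three height bands (cf. head/trunk/legs) and
    return the x-y variance within each band."""
    z = segment[:, 2]
    bands = np.digitize(z, np.quantile(z, [1 / 3, 2 / 3]))
    return [segment[bands == b][:, :2].var() for b in range(3)]

rng = np.random.default_rng(2)
human = rng.normal([0, 0, 1], [0.2, 0.2, 0.5], (200, 3))  # compact cluster
tree = rng.normal([5, 5, 3], [1.0, 1.0, 2.0], (200, 3))   # tall, spread cluster
segments = rbnn_segment(np.vstack([human, tree]))
print(len(segments),
      [np.round(part_features(s), 3) for s in segments if len(s) > 30])
```

The per-band x-y variances differ sharply between the compact "human" cluster and the spread "tree" cluster, which is the kind of separation the 15-feature vector feeds to the classifier.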
Multiple industries, from defense to medical, are increasing their use of unmanned systems. Today, many of
these systems are rapidly designed, tested, and deployed without adequate security testing. To aid the quick turnaround,
commercially available subsystems and embedded components are often used. These components may introduce
security vulnerabilities particularly if the designers do not fully understand their functionality and limitations. There is a
need for thorough testing of unmanned systems for security vulnerabilities, which includes all subsystems. Using a
penetration testing framework would help find these vulnerabilities across different unmanned systems applications. The
framework should encompass all of the commonly implemented subsystems including, but not limited to, wireless
networks, CAN buses, passive and active sensors, positioning receivers, and data storage devices. Potential attacks and
vulnerabilities can be identified by looking at the unique characteristics of these subsystems. The framework will clearly
outline the attack vectors as they relate to each subsystem. If any vulnerabilities exist, a mitigation plan can be developed
prior to the completion of the design phase. Additionally, if the vulnerabilities are known in advance of deployment,
monitoring can be added to the design to alert operators of any attempted or successful attacks. This proposed
framework will help evaluate security risks quickly and consistently to ensure new unmanned systems are ready for
deployment. Verifying that a new unmanned system has passed a comprehensive security evaluation will ensure greater
confidence in its operational effectiveness.