This PDF file contains the front matter associated with SPIE Proceedings Volume 6561, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
This paper presents preliminary results on an intelligent mapping capability that generates high-confidence layouts of
building interiors from limited exterior and interior structural observables. Such a capability could provide critical
support to intelligence gathering and/or life-saving operations in the military, law enforcement, disaster response, and
commercial sectors where rapid understanding of unknown interior layouts is required to save lives, maintain covertness,
or minimize costs. The fundamental approach relies on an intelligent rule-based inferencing process which operates on
limited structural observables. The rules are based on geo-specific design practices and building codes. The mapping
capability has been demonstrated experimentally in a test-case building structure using a set of imaging sensors to scan
the environment. Data was gathered at sparse locations within the building to represent a real-world rapid exploration
scenario. Preliminary results show that this capability can successfully generate high-confidence interior layouts from
limited structural observables. It is envisioned that this intelligent mapping capability could be integrated on unmanned
ground vehicle platforms or in human-borne (soldier, SWAT, urban search-and-rescue personnel) systems.
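The rule-based inferencing step can be sketched as follows; the rule structure, dimensions, and confidence values below are hypothetical illustrations of geo-specific design-practice rules, not values from the paper.

```python
def infer_room_depth(observed_width_m, rules):
    """Return (typical depth, confidence) for an observed room width."""
    for rule in rules:
        if rule["min_width"] <= observed_width_m <= rule["max_width"]:
            return rule["typical_depth"], rule["confidence"]
    return None, 0.0  # no rule fires: leave the layout unresolved

# Hypothetical geo-specific rules: bays of a given exterior width tend
# to have a characteristic depth under local design practice.
RULES = [
    {"min_width": 3.0, "max_width": 4.5, "typical_depth": 4.0, "confidence": 0.8},
    {"min_width": 4.5, "max_width": 7.0, "typical_depth": 6.0, "confidence": 0.6},
]

print(infer_room_depth(3.6, RULES))  # (4.0, 0.8)
```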
Detecting negative obstacles (ditches, holes, wadis, and other depressions) is one of the most difficult problems in
perception for unmanned ground vehicle (UGV) off-road autonomous navigation. One reason is that the visible
portion of a negative obstacle may span only a few pixels at the stopping distance for the speeds UGV
programs aspire to operate at (up to 50 kph). The problem is further compounded when negative obstacles are
obscured by vegetation or when negative obstacles are embedded in undulating terrain. Because of the variety of
appearances of negative obstacles, a multi-cue detection approach is desired. In previous work on nighttime negative
obstacle detection, we described combining geometry-based cues from stereo range data and a thermal-signature-based
cue from thermal infrared imagery. Thermal signature is a powerful cue at night since the interiors of negative
obstacles generally remain warmer than the surrounding terrain throughout the night. In this paper, we further couple the
thermal-signature-based cue and geometry-based cues from stereo range data for nighttime negative obstacle detection.
Edge detection is used to generate closed contour candidate negative obstacle regions that are geometrically filtered to
determine if they lie within the ground plane. Cues for negative obstacles from thermal signature, geometry-based
analysis of range images, and geometry-based analysis of terrain maps are fused. The focus of this work is to increase
the range at which UGVs can reliably detect negative obstacles on cross-country terrain, thereby increasing the speed at
which UGVs can safely operate.
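A minimal sketch of the kind of multi-cue fusion described above; the linear weighting and decision threshold are illustrative assumptions, not the paper's actual fusion rule.

```python
def fuse_cues(thermal, range_geom, map_geom, w=(0.4, 0.3, 0.3), threshold=0.5):
    """Each cue is a confidence in [0, 1]; returns (fused score, is_obstacle)."""
    score = w[0] * thermal + w[1] * range_geom + w[2] * map_geom
    return score, score >= threshold

# A warm interior (strong thermal cue) with weaker geometric support
# still crosses the assumed decision threshold.
score, hit = fuse_cues(thermal=0.9, range_geom=0.4, map_geom=0.5)
print(round(score, 2), hit)  # 0.63 True
```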
This paper describes a system for automatically detecting potential targets (those that pop up or move into view)
and cueing the operator to potential threats. Detection of independently moving targets from a moving ground
vehicle is challenging due to the strong parallax effects caused by the camera motion close to the 3D structure in
the environment. We present a 3D approach for detecting and tracking such independently moving targets with
multiple monocular cameras. In our approach, we first recover the camera position and orientation by employing a
visual odometry method. Next, using multiple consecutive frames with the estimated camera poses, the structure
of the scene at the reference frame is explicitly recovered by a motion stereo approach, and corresponding optical
flow fields between the reference frame and other frames are also estimated. Third, an advanced filter is designed
by combining second order differences between 3D warping and optical flow warping to distinguish the moving
object from parallax regions. We present results of the algorithm on data collected with an eight-camera system
mounted on a vehicle under multiple scenarios that include moving and pop-up targets.
The Army Research Laboratory (ARL) has designed and fabricated a forward-looking, impulse-based, ultra-wideband
(UWB) imaging radar for detection of concealed targets. This system employs a physical array of 16 receive antennas to
provide the necessary aperture for sufficient cross-range resolution in the forward-looking geometry. Each antenna feeds
a base-band receiver/digitizer that integrates the data from a number of radar pulses before passing it on to the personal
computer (PC) based operator's console and display. The innovative ARL receiver design uses commercially available
integrated circuits to provide a low-cost, lightweight digitizing scheme with an effective sampling rate of approximately
8 GHz. The design is extensible to allow for growth in the number of channels used and improvements in integrated
circuit performance to eventually meet the expected unmanned ground vehicle combat pace. Down-range resolution is
provided by the bandwidth of the transmitted pulse which occupies 300-3000 MHz. Range coverage is designed to be 25
meters with an adjustable start point forward of the vehicle. Modeling studies have shown that a pair of transmitters
situated at the two ends of the receive array provides the best cross-range resolution. Radar data is
continuously collected so that a horizontal two-dimensional synthetic aperture is formed for 3-D image formation. This
allows focusing of the data to yield estimates of target height as well as position to tag potential obstacles as being
negative (e.g. holes, ditches) or positive (e.g. tree stumps). The forward motion also improves the cross range resolution
to targets as their aspect changes.
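The down-range resolution implied by the 300-3000 MHz pulse follows from the standard relation ΔR = c/(2B); a quick check, assuming the full occupied bandwidth is usable.

```python
C = 299_792_458.0          # speed of light, m/s
B = (3000 - 300) * 1e6     # occupied bandwidth, Hz

range_resolution = C / (2 * B)   # standard pulse-bandwidth relation
print(round(range_resolution * 100, 1), "cm")  # 5.6 cm
```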
ARL is developing the autonomous capability to directly support the Army's future requirements to employ unmanned
systems. The purpose of this paper is to document and benchmark the current ARL Collaborative Technology Alliance
(CTA) capabilities in detecting, tracking and avoiding moving humans and vehicles from a moving unmanned vehicle.
ARL and General Dynamics Robotic Systems (GDRS) conducted an experiment involving an ARL
eXperimental Unmanned Vehicle (XUV) operating in proximity to a number of stationary and moving human surrogates
(mannequins) and moving vehicles. In addition, other objects were present along the XUV's route, such as
barrels, fire hydrants, poles, cones, and other clutter.
The experiment examined the performance of seven algorithms using a series of sensor modalities to detect stationary
and moving objects. Three of the algorithms showed promise, detecting human surrogates and vehicles with
probabilities ranging from 0.64 to 0.85, while limiting probability of misclassification to 0.14 to 0.37. Moving
mannequins were detected with slightly higher probabilities than fixed mannequins. The distance from the ground truth
at the time of detection suggests that at a speed of 20 kph with a minimum distance to detection of 19.38 m, the vehicle
would have a minimum of 3.5 seconds to avoid a mannequin or vehicle if detected by one of these three algorithms.
Overall, mannequins were detected more frequently than vehicles.
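The 3.5-second margin quoted above follows directly from the detection distance and vehicle speed:

```python
def time_to_avoid(detection_distance_m, speed_kph):
    """Seconds available to react, given detection range and vehicle speed."""
    speed_ms = speed_kph / 3.6   # convert km/h to m/s
    return detection_distance_m / speed_ms

print(round(time_to_avoid(19.38, 20.0), 1))  # 3.5
```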
It is essential to ensure the safety and reliability of in-service structures such as unmanned vehicles by detecting
structural cracking, corrosion, delamination, material degradation, and other types of damage in a timely manner. An
integrated sensor network system can ultimately enable automatic inspection for such damage. Using a built-in network
of actuators and sensors, Acellent is providing tools for advanced structural diagnostics. Acellent's integrated structural
health monitoring system consists of an actuator/sensor network, supporting signal generation and data acquisition
hardware, and data processing, visualization and analysis software.
This paper describes the various features of Acellent's latest SMART sensing system. The new system is USB-based
and ultra-portable, while delivering functions such as system self-diagnosis, sensor diagnosis, through-transmission
and pulse-echo modes of operation, and temperature measurement. The performance of the new system was evaluated for
the assessment of damage in composite structures.
Traditional vision-based navigation systems often drift over time. In this paper, we propose a set of techniques
that greatly reduce long-term drift and improve robustness to many failure conditions. In our approach, two pairs
of stereo cameras are integrated to form a forward/backward multi-stereo
camera system. As a result, the Field-Of-View of the system is extended significantly to capture more
natural landmarks from the scene. This helps to increase the pose estimation accuracy as well as reduce the
failure situations. Secondly, a global landmark matching technique is used to recognize the previously visited
locations during navigation. Using the matched landmarks, a pose correction technique is used to eliminate the
accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements
from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated
with the visual odometry in an extended Kalman Filtering framework. Our system is significantly more accurate
and robust than previously published techniques (1∼5% localization error) over long-distance navigation both
indoors and outdoors. Real-world experiments on a human-worn system show that location can be estimated to
within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
Discussed is a novel method of manufacturing an Angularly Sensitive Micro-Sensor (ASMS). The process
employed utilizes excimer laser ablation to write out the microlens on the curved surface of the master lens. This
master lens element is manufactured with fused optical fibers, such that if the registration is maintained, the light
from each microlens goes via the fiber to a specific pixel in a focal plane array (FPA). Such a system allows for a
field of view greatly in excess of 180 degrees. If local imaging is required for specific tasks, the fiber can send the
angularly localized image to a pixel set. Image fusing may then be required.
Infrared and ultraviolet versions can be manufactured, and a more general application allows for a multi-spectral
sensor. Once one ASMS is constructed, an inverse mask (mould) can be created; the monolithic sphere, retaining its
registration, is covered in liquid plastic and placed into the mould, and an exact replica is re-created.
The advantage is low-cost and rapid manufacture of the ASMS.
The paper focuses on this sensor as a Task-Oriented Optical Processing (TOP) system, where the processing is
performed primarily by the optics, leaving a greatly reduced requirement for an electronic processor. This is a
critical issue for micro, insect-sized platforms where the weight budget is devoted to the energy and propulsive
systems. An important aspect of this approach is that the sensor samples amplitude and angular space rather than
amplitude and position space as conventional sensors currently do. This makes the ASMS processing paradigm
completely different from conventional image processing. For example, using several fiber/pixel elements to
form a UV polarimeter allows simple storage and processing of vector elements for navigation. The
home position may be stored as a "look-up table" reference matrix (RM). That base table can be modified to account
for the passage of time (and hence the change in solar position seen by the UV polarimeter). A second,
real-time travel matrix (TRM) is then created. Eventually, a target matrix (TAM) could also be created. Simply
driving changes in the TRM toward the RM navigates the return trip back to home base. When
the difference between the two matrices goes to a null matrix, the platform is home.
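The matrix-difference navigation described above can be sketched as follows; the matrix values are invented toy polarimeter readings, and the near-null comparison rule is an assumption.

```python
def matrix_diff_norm(trm, rm):
    """Sum of absolute element differences between the TRM and the RM."""
    return sum(abs(a - b) for row_t, row_r in zip(trm, rm)
               for a, b in zip(row_t, row_r))

RM  = [[0.2, 0.7], [0.5, 0.1]]   # home reference matrix (toy values)
TRM = [[0.2, 0.7], [0.5, 0.1]]   # current travel matrix

# The platform is "home" when the difference is (numerically) a null matrix.
at_home = matrix_diff_norm(TRM, RM) < 1e-6
print(at_home)  # True
```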
This paper presents a method to forecast terrain trafficability from visual appearance. During training, the system
identifies a set of image chips (or exemplars) that span the range of terrain appearance. Each chip is assigned a vector tag
of vehicle-terrain interaction characteristics that are obtained from simple performance models and on-board sensors, as
the vehicle traverses the terrain. The system uses the exemplars to segment images into regions, based on visual
similarity to the terrain patches observed during training, and assigns the appropriate vehicle-terrain interaction tag to
them. This methodology will therefore allow the online forecasting of vehicle performance on upcoming terrain.
Currently, the system uses a fuzzy c-means clustering algorithm for training. In this paper, we explore a number of
different features for characterizing the visual appearance of the terrain and measure their effect on the prediction of
vehicle performance.
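The fuzzy c-means membership computation at the heart of the training step can be sketched as follows; the 2-D feature vectors and cluster centers are toy values, not the terrain features explored in the paper.

```python
import numpy as np

def fcm_memberships(x, centers, m=2.0):
    """Membership of sample x in each cluster (standard FCM formula)."""
    d = np.linalg.norm(centers - x, axis=1)
    d = np.maximum(d, 1e-12)              # guard against zero distance
    inv = d ** (-2.0 / (m - 1.0))         # u_i proportional to d_i^(-2/(m-1))
    return inv / inv.sum()                # normalize so memberships sum to 1

centers = np.array([[0.2, 0.1], [0.8, 0.9]])       # toy exemplar centers
u = fcm_memberships(np.array([0.25, 0.15]), centers)
print(u.argmax())  # 0 (the sample is closest to the first exemplar)
```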
We are developing an ultra wideband (UWB) radar sensor payload for the man-portable iRobot PackBot UGV. Our goal
is to develop a sensor array that will allow the PackBot to navigate autonomously through foliage (such as tall grass)
while avoiding obstacles and building a map of the terrain. We plan to use UWB radars in conjunction with other
sensors such as LIDAR and vision. We propose an algorithm for using polarimetric (dual-polarization) radar arrays to
classify radar returns as either vertically-aligned foliage or solid objects based on their differential reflectivity, a function
of their aspect ratio. We have conducted preliminary experiments to measure the ability of UWB radars to detect solid
objects through foliage. Our initial results indicate that UWB radars are very effective at penetrating sparse foliage, but
less effective at penetrating dense foliage.
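The proposed differential-reflectivity classification might look like the following sketch; the 3 dB threshold and power values are illustrative assumptions, not measurements from the experiments.

```python
import math

def classify_return(p_vv, p_hh, zdr_threshold_db=3.0):
    """Vertically aligned foliage returns much more power in VV than HH;
    compact solid objects return similar power in both polarizations."""
    zdr_db = 10.0 * math.log10(p_vv / p_hh)   # differential reflectivity
    return "foliage" if zdr_db > zdr_threshold_db else "solid object"

print(classify_return(p_vv=8.0, p_hh=1.0))   # foliage
print(classify_return(p_vv=1.1, p_hh=1.0))   # solid object
```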
This paper introduces a positioning system for walking persons, called "Personal Dead-reckoning" (PDR) system. The
PDR system does not require GPS, beacons, or landmarks. The system is therefore useful in GPS-denied environments,
such as inside buildings, tunnels, or dense forests. Potential users of the system are military and security personnel as
well as emergency responders.
The PDR system uses a small 6-DOF inertial measurement unit (IMU) attached to the user's boot. The IMU provides
rate-of-rotation and acceleration measurements that are used in real-time to estimate the location of the user relative
to a known starting point. In order to reduce the most significant errors of this IMU-based system, caused by the
bias drift of the accelerometers, we implemented a technique known as "Zero Velocity Update" (ZUPT). With the
ZUPT technique and related signal-processing algorithms, typical errors of our system are about 2% of distance traveled.
This typical PDR system error is largely independent of the gait or speed of the user. When walking continuously for
several minutes, the error increases gradually beyond 2%. The PDR system works in both 2-dimensional (2-D) and 3-D
environments, although errors in Z-direction are usually larger than 2% of distance traveled.
Earlier versions of our system used an impractically large IMU. In the most recent version we implemented a much
smaller IMU. This paper discusses specific problems of this small IMU, our measures for eliminating these problems,
and our first experimental results with the small IMU under different conditions.
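The ZUPT idea can be illustrated with a 1-D sketch: whenever the boot-mounted IMU indicates the foot is stationary (stance phase), the integrated velocity is reset, which bounds accelerometer bias drift. The stillness threshold and sample values are assumptions for illustration.

```python
def zupt_integrate(accels, dt, still_threshold=0.05):
    """Integrate 1-D acceleration to velocity, resetting during stance."""
    v, velocities = 0.0, []
    for a in accels:
        if abs(a) < still_threshold:   # foot detected as stationary
            v = 0.0                    # zero-velocity update
        else:
            v += a * dt                # normal dead-reckoning integration
        velocities.append(v)
    return velocities

v = zupt_integrate([0.5, 0.5, 0.01, 0.01, 0.5], dt=0.1)
print([round(x, 2) for x in v])  # [0.05, 0.1, 0.0, 0.0, 0.05]
```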
An important problem in unmanned air vehicle (UAV) and UAV-mounted sensor control is the target search
problem: locating target(s) in minimum time. Current methods solve the optimization of UAV routing control
and sensor management independently. While this decoupled approach makes the target search problem
computationally tractable, it is suboptimal.
In this paper, we explore the target search and classification problems by formulating and solving a joint UAV
routing and sensor control optimization problem. The routing problem is solved on a graph using receding horizon
optimal control. The graph is dynamically adjusted based on the target probability distribution function (PDF).
The objective function for the routing optimization is the solution of a sensor control optimization problem. An
optimal sensor schedule (in the sense of maximizing the viewed target probability mass) is constructed for each
candidate flight path in the routing control problem.
The PDF of the target state is represented with a particle filter and an "occupancy map" for any undiscovered
targets. The tradeoff between searching for undiscovered targets and locating tracks is handled automatically
and dynamically by the use of an appropriate objective function. In particular, the objective function is based
on the expected amount of target probability mass to be viewed.
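The objective, the expected target probability mass viewed, can be sketched over a particle representation of the target PDF; the 1-D interval footprints and particle values below are toy assumptions.

```python
def viewed_mass(particles, weights, footprints):
    """Sum the weight of particles falling inside any sensor footprint."""
    seen = set()
    for lo, hi in footprints:
        for i, p in enumerate(particles):
            if lo <= p <= hi:
                seen.add(i)            # count each particle at most once
    return sum(weights[i] for i in seen)

# Toy 1-D target PDF as four equally weighted particles, and two
# candidate sensor footprints along a flight path.
particles = [1.0, 2.0, 5.0, 9.0]
weights = [0.25, 0.25, 0.25, 0.25]
print(viewed_mass(particles, weights, footprints=[(0.0, 3.0), (8.0, 10.0)]))  # 0.75
```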
The Department of Energy's Idaho National Laboratory (INL) has been researching autonomous unmanned
vehicle systems for over fifteen years. Areas of research have included unmanned ground and aerial vehicles used for
hazardous and remote operations as well as teamed together for advanced payloads and mission execution. Areas of
application include aerial particulate sampling, cooperative remote radiological sampling, and persistent surveillance
including real-time mosaic and geo-referenced imagery in addition to high-resolution still imagery. Both fixed-wing and
rotary airframes are used, with capabilities spanning remote control to fully autonomous operation. Patented,
INL-developed auto-steering technology is leveraged to provide autonomous parallel-path swathing with either
manned or unmanned ground vehicles. Aerial look-ahead imagery is utilized to provide a common operating picture for
the ground and air vehicles during cooperative missions. This paper will discuss the various robotic vehicles, including
sensor integration, used to achieve these missions and anticipated cost and labor savings.
The Autonomous Intelligent Systems Section at Defence R&D Canada - Suffield envisions autonomous systems
contributing to decisive operations in the urban battle space. In this vision, teams of unmanned ground, air, and
marine vehicles, and unattended ground sensors will gather and coordinate information, formulate plans, and
complete tasks. The mobility of ground-based mobile systems operating in urban settings must
increase significantly if robotic technology is to augment human efforts in militarily relevant roles and environments.
In order to achieve its objective, the Autonomous Intelligent Systems Section is pursuing research that
explores the use of intelligent mobility algorithms designed to improve robot mobility. Intelligent mobility uses
sensing and perception, control, and learning algorithms to extract measured variables from the world, control
vehicle dynamics, and learn by experience. These algorithms seek to exploit available world representations of
the environment and the inherent dexterity of the robot to allow the vehicle to interact with its surroundings
and produce locomotion in complex terrain. However, a disconnect exists between the current state-of-the-art
in perception systems and the information required for novel platforms to interact with their environment to
improve mobility in complex terrain. The primary focus of the paper is to present the research tools, topics, and
plans to address this gap in perception and control research. This research will create effective intelligence to
improve the mobility of ground-based mobile systems operating in urban settings to assist the Canadian Forces
in their future urban operations.
Unmanned vehicle systems are an attractive technology for the military, but their promise has remained
largely undelivered. Currently fielded remote-controlled UGVs and high-altitude
UAVs offer benefits based on standoff in low-complexity environments, with control
reaction-time requirements low enough to allow for teleoperation. While effective within their limited operational
niche, such systems do not match the vision of future military UxV scenarios. Such scenarios envision
unmanned vehicles operating effectively in complex environments and situations with high levels of independence
and effective coordination with other machines and humans, pursuing high-level, changing, and
sometimes conflicting goals. While these aims are clearly ambitious, they provide necessary targets
and inspiration, with the hope of fielding useful semi-autonomous unmanned systems in the near term. Autonomy
involves many fields of research, including machine vision, artificial intelligence, control theory, machine
learning, and distributed systems, all of which are intertwined and share the goal of creating more versatile,
broadly applicable algorithms. Cohort is a major Applied Research Program (ARP) led by Defence R&D
Canada (DRDC) Suffield whose aim is to develop coordinated teams of unmanned vehicles (UxVs) for
urban environments. This paper discusses the critical science being addressed by DRDC in developing
semi-autonomous systems.
The National Institute of Standards and Technology (NIST), under an interagency agreement with the United States
Department of Transportation (DOT), is supporting development of objective test and measurement procedures for
vehicle-based warning systems intended to warn an inattentive driver of imminent rear-end, road-departure and lane-change
crash scenarios. The work includes development of track and on-road test procedures, and development of an
independent measurement system, which together provide data for evaluating warning system performance. This paper
will provide an overview of DOT's Integrated Vehicle-Based Safety System (IVBSS) program along with a review of
the approach for objectively testing and measuring warning system performance.
A range image for micro UAV (unmanned air vehicle) collision avoidance is derived by processing a sequence of conventional
images from a single camera on board the UAV. The range image will warn of looming collisions immediately ahead
and also provide the 3-D situational awareness over a wide field of view needed for semi-autonomous or autonomous
operation of the UAV. This single-camera technique is potentially applicable for other robotic vehicles that may not be
large enough for two-camera stereo. The range image is generated by tracking the motion of scene detail along optic flow
lines. Performance is estimated in terms of the minimum and maximum ranges of scene detail that can be sensed as a function
of its position within the field of view.
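Under the simplifying assumption of pure forward translation, range can be recovered from the radial optic flow about the focus of expansion: a feature at radial image distance r, expanding at rate r_dot, has time-to-contact τ = r / r_dot, so range is Z = V·τ for known vehicle speed V. A sketch of that relation, not the paper's full algorithm.

```python
def range_from_flow(r_px, r_dot_px_s, speed_ms):
    """Monocular range estimate under pure forward translation."""
    tau = r_px / r_dot_px_s          # time to contact, seconds
    return speed_ms * tau            # range along the flight path, meters

# A feature 40 px from the focus of expansion, expanding at 8 px/s,
# seen from a UAV flying at 10 m/s:
print(range_from_flow(r_px=40.0, r_dot_px_s=8.0, speed_ms=10.0))  # 50.0
```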
Robots developed from the 1960s to the present have been restricted to highly structured environments such as work cells
or automated guided vehicles, primarily to avoid harmful interactions with humans. Next-generation robots must
function in unstructured environments. Such robots must be fault tolerant to sensor and manipulator failures, scalable in
number of agents, and adaptable to different robotic base platforms. The Central Arkansas Robotics Consortium has
developed a robot controller architecture, called Layered Mode Selection Logic (LMSL), which addresses all of these
concerns. The LMSL architecture is an implementation of a behavior based controller fused with a planner. The
architecture creates an abstraction layer for the robot sensors through a Fuzzy Sensor Fusion Network (FSFN), and it
creates an abstraction layer for the robot manipulators through a reactive layer. The LMSL architecture has been
implemented and tested on UALR's J5 robotics research platform. An FSFN combines acceleration and force signals for
collision detection. The output of the FSFN switches among low level behaviors to accomplish obstacle avoidance and
obstacle manipulation. Comparable results are achieved with all sensors functioning, with only the acceleration sensor
(force sensor faulted), and with only the force sensor (acceleration sensor faulted).
As technology and research advance to the era of cooperative robots, many autonomous robot team algorithms
have emerged. Shape formation is a common and critical task in many cooperative robot applications. While
theoretical studies of robot team formation have shown success, it is unclear whether such algorithms will perform
well in a real-world environment. This work examines the effect of collision avoidance schemes on an ideal circle
formation algorithm; the algorithm behaves similarly whether or not robot-to-robot communications are in place. Our findings reveal
that robots with basic collision avoidance capabilities are still able to form a circle under most conditions.
Moreover, the robot sizes, sensing ranges, and other critical physical parameters are examined to determine their
effects on the algorithm's performance.
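A minimal sketch of the kind of algorithm studied here, assuming attraction toward the nearest point on the target circle plus a basic repulsive collision-avoidance term; all gains and distances are illustrative assumptions, not the paper's algorithm:

```python
import math

def circle_formation_step(positions, center, radius, step=0.1, safe_dist=0.3):
    """One synchronous update: move each robot toward the circle while
    repelling it from any neighbor closer than the safety distance."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Attraction toward the nearest point on the target circle.
        dx, dy = x - center[0], y - center[1]
        d = math.hypot(dx, dy) or 1e-9
        gx = center[0] + radius * dx / d
        gy = center[1] + radius * dy / d
        vx, vy = gx - x, gy - y
        # Repulsion from any neighbor inside the safety distance.
        for j, (ox, oy) in enumerate(positions):
            if j != i:
                sep = math.hypot(x - ox, y - oy)
                if 0 < sep < safe_dist:
                    vx += (x - ox) / sep * (safe_dist - sep)
                    vy += (y - oy) / sep * (safe_dist - sep)
        new_positions.append((x + step * vx, y + step * vy))
    return new_positions
```

Under this sketch the repulsion term only perturbs trajectories locally, so the team still converges to the circle, consistent with the findings above.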
This paper presents a complete system for autonomous navigation of a mobile robot in urban environments. An urban
environment is defined as one having hard surfaces and comprising curbs, ramps, and obstacles. The robot is required to
traverse flat ground surfaces and ramps, but avoid curbs and obstacles. The system employs a 2-D laser rangefinder for
terrain mapping. In order to filter erroneous sensor data (mixed pixels and noise), an Extended Kalman Filter (EKF) is
used to segment the laser range data into straight-line segments and isolated points. The isolated points are then
compared with those points at their neighboring straight-line segments to detect discontinuity in received energy (called
reflectivity value in this work). The points exhibiting discontinuity of reflectivity are identified as erroneous data and
removed. A so-called "Polar Traversability Index" measure is proposed to evaluate terrain traversal property. A PTI
dictates the level of difficulty for a robot to move along the corresponding direction. It enables the robot to traverse
wheelchair ramps and avoid curbs when negotiating sidewalks in urban environments. The advantage of using the PTI over
the conventional traversability index is that the robot's yaw angle is taken into account when computing the terrain
traversal property at the corresponding direction. This allows the robot to snake up a steep ramp that may be too steep
for the robot to climb if the conventional traversability index were used. The efficacy of the PTI and of the entire system
has been verified by simulation and experiments with a real robot.
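The heading dependence that distinguishes the PTI from a conventional traversability index can be sketched as follows, assuming the effective slope scales with the cosine of the angle between the travel direction and the terrain gradient; the constants are illustrative, not the paper's:

```python
import math

def polar_traversability(slope_deg, slope_dir_rad, travel_dir_rad, max_slope_deg=20.0):
    # Slope experienced along the travel direction: climbing obliquely
    # ("snaking") reduces the effective slope the robot must overcome.
    effective = slope_deg * abs(math.cos(travel_dir_rad - slope_dir_rad))
    # < 1.0: traversable along this direction; >= 1.0: too steep.
    return effective / max_slope_deg
```

Under this model a 25-degree ramp is untraversable head-on but becomes traversable at an oblique heading, which is the behavior the PTI enables.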
The Multiple Airborne Sensor Targeting and Evaluation Rig (MASTER) is a high fidelity simulation environment in
which data fusion, tracking and sensor management algorithms developed within QinetiQ Ltd. can be demonstrated and
evaluated. In this paper we report an observer trajectory planning tool that adds considerable functionality to MASTER.
This planning tool can coordinate multiple sensor platforms in tracking highly manoeuvring targets. It does this by
applying instantaneous thrusts to each platform, the magnitude of which is chosen to gain maximum observability of
the target. We use an efficient search technique to determine the thrust that should be applied to each platform at each
time step, and the planning horizon can either be one-step (greedy) or two-step. The measure of performance used in
evaluating each potential sensor manoeuvre (thrust) is the posterior Cramér-Rao lower bound (PCRLB), which gives the
best possible (lowest mean square error) tracking performance. We exploit a recent novel approach to approximating the
PCRLB for manoeuvring target tracking (the "best-fitting Gaussian" (BFG) approach: Hernandez et al., 2005). A closed-form
expression gives the BFG approximation at each sampling time. Hence, the PCRLB can be approximated with a very
low computational overhead. As a result, the planning tool can be implemented as an aid to decision-making in real-time,
even in this time-critical airborne domain. The functionality of MASTER enables one to assess the performance of the
planning tool in a range of sensor-target scenarios, enabling one to determine the minimal sensor requirement in order to
satisfy mission requirements.
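For a linear-Gaussian system, which is what the best-fitting Gaussian approximation reduces the maneuvering-target problem to, the PCRLB follows from a cheap Fisher-information recursion. A scalar sketch with illustrative model constants:

```python
def pcrlb_bounds(j0, f, q, h, r, steps):
    """Scalar PCRLB recursion for x_{k+1} = f*x_k + w (var q),
    z_k = h*x_k + v (var r):  J_{k+1} = 1/(q + f*f/J_k) + h*h/r.
    The mean-square-error bound at step k is 1/J_k."""
    j, bounds = j0, []
    for _ in range(steps):
        j = 1.0 / (q + f * f / j) + h * h / r
        bounds.append(1.0 / j)
    return bounds
```

Because each step costs a handful of arithmetic operations, evaluating candidate thrusts over a one- or two-step horizon is cheap enough for real-time planning, as claimed above.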
Experience deploying robots for security and inspection tasks shows that often the activity of "driving" the robot
interferes with the activity of observing the sensor data (often visual) collected by the robot. It has been suggested that
the supervised autonomy paradigm can improve system performance. In this approach, some aspects of the robot's
actions are automated, particularly motion control, freeing the operator to focus on the inspection task. In this paper we
describe the laboratory-level proof-of-concept development and implementation of a semi-autonomous mode for the
ODIS robot, whereby, under the direction and supervision of an operator, the robot can self-navigate in a limited
structured environment while sending back video images for operator inspection. Laboratory results show the feasibility
of the approach.
As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the
battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will
be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future
battlefield operational scenarios involving the use of automation, including the specification of existing and proposed
technologies, will provide significant insight into potential problem areas regarding soldier workload.
The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an
Army technology objective program to analyze and evaluate the effect of automated technologies and their associated
control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior
Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive
simulations of military scenarios with various deployments of interface technologies in order to evaluate operator
effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a
configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the
physical space limitations of the display device.
This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both
systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI
tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation
environment. The paper describes the background of each system and details of the system integration approach.
Teleoperated mobile robots are beginning to be used for a variety of tasks that require movement in close quarters in the
vicinity of moving and parked vehicles, buildings and other man-made structures, and the target object for inspection or
manipulation. The robots must be close enough to deploy short-range sensors and manipulators, and must be able to
maneuver without potentially damaging collisions. Teleoperation is fatiguing and stressful even without the requirement
for close positioning. In cooperation with the TARDEC Robotic Mobility Laboratory (TRML), we are investigating
approaches to reduce workload and improve performance through augmented teleoperation.
Human-robot interfaces for teleoperation commonly provide two degrees-of-freedom (DoF) motion control with visual
feedback from an on-board egocentric camera and no supplemental distance or orientation cueing. This paper reports on
the results of preliminary experiments to assess the effects on man-machine task performance of several options for
augmented teleoperation: (a) 3 DoF motion control (rotation and omni-directional translation) versus 2 DoF control
(rotation and forward/reverse motion), (b) on-board egocentric camera versus fixed-position overwatch camera versus
dual egocentric-and-overwatch cameras, and (c) presence or absence of distance and orientation visual cueing. We
examined three dimensions of performance: completion time, spatial accuracy, and workspace area. We investigated
effects on the expected completion time and on the variance in completion time. Spatial accuracy had three components:
orientation, aimpoint, and distance. We collected performance data under different task conditions: (a) three position-and-orientation
tolerance or accuracy objectives, and (b) four travel distances between successive inspection points. We
collected data from three subjects. We analyzed the main effects and conditional interaction effects among the
teleoperation options and task conditions. We were able to draw some definitive conclusions regarding the relative
performance of design alternatives, and conditions under which their performance degraded. We made some
observations regarding operator behaviors, which suggested some potential augmented teleoperation enhancements.
The military have considerable experience of using robots for mine clearing and bomb removal. As new
technology emerges, it is necessary to investigate the possibility of expanding robot use. This study has investigated an Army
company, specialized in urban operations, while fulfilling their tasks with the support of a PackBot Scout. The robot was
integrated and deployed as an ordinary component of the company, which included modifying and retraining a number of
standard behaviors to incorporate the robot. This paper reports on the following issues: evaluation of missions where the
platform can be deployed, what technical improvements are the most desired, and what are the new risks introduced by
use of robots? Information was gathered through observation, interviews, and a questionnaire.
The results indicate that the robot is useful for reconnaissance and mapping. The users also anticipated that the robot
could be used to decrease the risks of IEDs by either triggering them or neutralising them with a disruptor. The robot was
further considered to be useful for direct combat if armed, and for placing explosive loads against, for example, a door.
Autonomous rendering of maps, acquiring images, two-way audio, and improved sensing such as IR were considered
important improvements. The robot slowing down the pace of the unit was considered to be the main risk when used in
urban operations.
The fusion of multiple behavior commands and sensor data into intelligent and cohesive robotic movement
has been the focus of robot research for many years. Sequencing low level behaviors to create high level
intelligence has also been researched extensively. Cohesive robotic movement is also dependent on other
factors, such as environment, user intent, and perception of the environment. In this paper, a method for
managing the complexity derived from the increase in sensors and perceptions is described. Our system
uses fuzzy logic and a state machine to fuse multiple behaviors into an optimal response based on the
robot's current task. The resulting fused behavior is filtered through fuzzy logic based obstacle avoidance
to create safe movement. The system also provides easy integration with any communications protocol,
plug-and-play devices, perceptions, and behaviors. Most behaviors and the obstacle avoidance parameters
are easily changed through configuration files. Combined with previous work in the area of navigation and
localization, this creates a very robust autonomy suite.
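A minimal sketch of state-driven behavior fusion with a downstream obstacle-avoidance filter; the state names, weights, and the simple weighted-average fusion are illustrative assumptions, not the system's actual fuzzy rule base:

```python
def fuse_behaviors(weights_by_state, state, behavior_outputs):
    """Blend behavior commands using fuzzy weights chosen by the
    current task state of the state machine."""
    weights = weights_by_state[state]
    total = sum(weights.get(name, 0.0) for name in behavior_outputs)
    if total == 0.0:
        return 0.0
    return sum(weights.get(name, 0.0) * cmd
               for name, cmd in behavior_outputs.items()) / total

def avoid_filter(cmd, obstacle_proximity):
    """Obstacle-avoidance filter: scale the fused command down as
    obstacle proximity (0 = clear, 1 = touching) rises."""
    return cmd * (1.0 - min(max(obstacle_proximity, 0.0), 1.0))
```

Changing a state's weight table, much like editing a configuration file, retunes the fused response without touching the individual behaviors.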
Unmanned systems simultaneously reduce risk and magnify the impact of soldier-operators. For example, in
Afghanistan UAVs routinely provide overwatch to manned units while UGVs support IED identification and
disposal roles. Expanding these roles requires greater autonomy with a coherent unmanned "system of systems"
approach that leverages one platform's strengths against the weaknesses of another. Specific collaborative
unmanned systems such as shared sensing, communication relay, and distributed computing to achieve greater
autonomy are often presented as possible solutions. By surveying currently deployed systems, this paper shows
that the spectrum of air and ground systems provides an important mixture of range, speed, payload, and endurance
with significant implications on mission structure. Rather than proposing UxV teams collaborating
towards specific autonomous capabilities, this paper proposes that basic physical and environmental constraints
will drive tactics towards a layered, unmanned battlespace that provides force protection and reconnaissance in
depth to a manned core.
Information overload and cluttered user interfaces can lead to decreased situational awareness and lowered performance
of human operators. Irrelevant data increases searching times for tasks requiring the identification of threats, causing
delayed decisions. Cognitive burden on the user increases as displays become more cluttered, which results in increased
operator stress leading to poor decision-making ability. To address this issue, we have designed an intelligent agent-based
system for the automatic de-cluttering of a representative net-centric interface designed for controlling multiple
unmanned aerial vehicles (UAVs) by a single operator. Our concept is called ARID, for Agent-based Reduction of
Information Density. The ARID hypothesis is that intelligent agents can improve operator performance by de-emphasizing
those aspects of a display that can be inferred to be less important to the mission goals.
ARID agents receive information about the world via data feeds provided by various net-centric sources. Each agent has
an understanding of the user interface symbols that are used to represent various entities, terrain features, and zones. The
agent also is provided with a mission goal which is used for inferring the relevance of a given symbol to the success of
the mission goal. First level facts, such as spatial relationships, are calculated by supporting agents and assigned a BDU
(belief/disbelief/uncertainty) value. A dynamic set of rules provides an inference mechanism by which an agent can infer
new facts from the given assertions. We have developed a Subjective Logic-based Evidential Reasoning Network that
explicitly deals with belief and uncertainty in the knowledge base, and is used to derive a relevancy belief for every UI
symbol in the map display. Subjective Logic is used to combine values when different sources provide different results
for the same symbol. User Interface agents apply the results of the relevancy beliefs and transform the display to
minimize the apparent clutter caused by less relevant elements. Two transformations, transparency and grouping, are
used in the current implementation.
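The combination of results from different sources can be illustrated with the standard cumulative-fusion (consensus) operator over (belief, disbelief, uncertainty) triples; the opinion values in the usage below are invented examples:

```python
def fuse_opinions(a, b):
    """Cumulative fusion of two subjective-logic opinions
    (belief, disbelief, uncertainty); assumes both uncertainties > 0."""
    ba, da, ua = a
    bb, db, ub = b
    k = ua + ub - ua * ub
    return ((ba * ub + bb * ua) / k,
            (da * ub + db * ua) / k,
            (ua * ub) / k)
```

Fusing two agreeing sources raises belief and lowers uncertainty, which is exactly the property needed when several agents report relevancy evidence for the same UI symbol.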
Instant Scene Modeler (iSM) is a vision system for generating calibrated photo-realistic 3D models of unknown
environments quickly using stereo image sequences. Equipped with iSM, Unmanned Ground Vehicles (UGVs) can
capture stereo images and create 3D models to be sent back to the base station, while they explore unknown
environments. Rapid access to 3D models will increase the operator situational awareness and allow better mission
planning and execution, as the models can be visualized from different views and used for relative measurements.
Current military operations of UGVs in urban warfare involve the operator hand-sketching the environment from
live video feed. iSM eliminates the need for an additional operator as the 3D model is generated automatically. The
photo-realism of the models enhances the situational awareness of the mission and the models can also be used for
change detection. iSM has been tested on our autonomous vehicle to create photo-realistic 3D models while the rover
traverses unknown environments.
Moreover, a proof-of-concept iSM payload has been mounted on an iRobot PackBot with Wayfarer technology, which is
equipped with autonomous urban reconnaissance capabilities. The Wayfarer PackBot UGV uses wheel odometry for
localization and builds 2D occupancy grid maps from a laser sensor. While the UGV is following walls and avoiding
obstacles, iSM captures and processes images to create photo-realistic 3D models. Experimental results show that iSM
can complement Wayfarer PackBot's autonomous navigation in two ways. The photo-realistic 3D models provide better
situational awareness than 2D grid maps. Moreover, iSM recovers the camera motion, also known as visual
odometry. As wheel odometry error grows over time, this can help correct the wheel odometry for better localization.
As part of the Crew integration and Automation Testbed (CAT) Advanced Technology Objective (ATO), the US Army
Tank-automotive and Armaments Research, Development, and Engineering Center (TARDEC) developed crew stations
that provided soldiers the ability to control both manned and unmanned vehicles. The crew stations were designed to
optimize soldier workload and provide the ability to conduct mission planning, route planning, reconnaissance,
surveillance, and target acquisition (RSTA), and fire control capabilities. The crew station software is fully
configurable, portable (between crew stations), and interoperable with one another. However, the software architecture
was optimized for the specific computing platform utilized by each crew station, and user interfaces were hard-coded.
Current CAT crew station capabilities are required to execute on other crew station configurations as well as handheld
devices to meet the needs of expanded soldier roles, including dismounted infantry. TARDEC is currently exploring
ways to develop a scalable software architecture that is able to adapt to the physical characteristics of differing
computing platforms and devices. In addition, based upon a soldier's role, the software must be able to adapt and
optimize the displays based upon individual soldier needs. And finally, the software must be capable of applying a
unique style to the presentation of information to the soldier. Future programs require more robust software
architectures that take these requirements into account. This paper will describe how scalable software architectures can
be designed to address each of these unique requirements.
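The adaptation to the physical space of differing display devices can be illustrated with a toy profile selector; the breakpoints and profile names are invented assumptions, not TARDEC's actual design:

```python
def select_layout(width_px, height_px):
    """Pick a presentation profile from the display's pixel dimensions,
    so the same software scales from crew station to handheld."""
    area = width_px * height_px
    if area >= 1280 * 800:
        return "full_crewstation"   # multi-pane map, video, and status
    if area >= 640 * 480:
        return "compact"            # tabbed panes, reduced symbology
    return "handheld"               # single pane, essential alerts only
```

A role- or style-aware variant would add the soldier's role and a style sheet as further inputs to the same selection step.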
Chaos is a 2-man-portable tele-operated vehicle designed for crossing rugged
terrain. Chaos is capable of crossing large piles of cinder blocks, picnic tables, and
steep hills of loose soil. These feats are accomplished through use of 4 independent
track arms, each of which can be articulated at an arbitrary angle and driven
at an arbitrary speed. These features make the vehicle extremely capable but also demand
significant skill on the part of the user. It is therefore desirable to automate the arm
angles and track speeds to ease operator burden. This paper reports on preliminary
efforts to implement 2 intelligent behaviors along these lines. The first involves
heading stabilization: A gyroscope is used to sense yaw and yaw rate, and these
are compared with the operator's commands. Deviations are then used to automatically
correct the heading. This is useful when Chaos is climbing stairs or other
bumpy terrain, which can cause the vehicle to veer off in unwanted directions. We
call the other behavior anti-rollover. In this case, the output of a gyroscope is monitored
to detect if roll or pitch thresholds are exceeded. When they are, the track
arms are automatically positioned to stabilize the vehicle and keep it right side up.
Experimental results for both algorithms are included.
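The heading-stabilization behavior can be sketched as a proportional-derivative correction on the gyro outputs, overridden by deliberate operator turn commands; the gains, threshold, and mixing convention are illustrative assumptions:

```python
def heading_correction(cmd_turn, yaw_err, yaw_rate, kp=0.8, kd=0.2):
    # A deliberate turn command from the operator overrides stabilization.
    if abs(cmd_turn) > 0.05:
        return cmd_turn
    # Otherwise counter-steer against gyro-sensed yaw error and yaw rate.
    return -kp * yaw_err - kd * yaw_rate

def track_speeds(forward, turn):
    """Mix forward and turn commands onto left/right track speeds."""
    return forward - turn, forward + turn
```

When the vehicle veers on stairs with no turn commanded, the correction differentially drives the tracks to pull the heading back.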
Serpentine robots are slender, multi-segmented vehicles designed to provide greater mobility than conventional
wheeled or tracked robots. Serpentine robots are thus ideally suited for urban search and rescue, military intelligence
gathering, and for surveillance and inspection tasks in hazardous and hard-to-reach environments. One such serpentine
robot, developed at the University of Michigan, is the "OmniTread OT-4." The OT-4 comprises seven segments, which
are linked to each other by 2-degree-of-freedom joints. The OT-4 can climb over obstacles that are much higher than the
robot itself, propel itself inside pipes of different diameters, and traverse even the most difficult terrain, such as rocks or
the rubble of a collapsed structure.
The foremost and unique design characteristic of the OT-4 is the use of pneumatic bellows to actuate the joints.
These bellows allow simultaneous control of position and stiffness for each joint. Controllable stiffness is of crucial importance
in serpentine robots, which require stiff joints to cross gaps and compliant joints to conform to rough terrain for
effective propulsion. Another unique feature of the OmniTread design is the maximal coverage of all four sides with
driven tracks. This design makes the robot indifferent to roll-overs, which happen frequently when the slender bodies
of serpentine robots travel over rugged terrain.
This paper describes the OmniTread concept as well as its latest technical features, and an extensive experimental results
section documents the abilities of the OT-4.
The use of alternative energy technology for vehicle propulsion and auxiliary power is becoming more important. Work
is being performed at Michigan Technological University's Keweenaw Research Center on an Army Research
Laboratory cooperative agreement to develop two unmanned ground vehicles for military applications. A wide range of
alternative energy technologies were investigated, and hydrogen-powered proton exchange membrane fuel cells were
identified as the most appropriate alternative energy source, owing to recent development and commercialization
that make the technology "drop-in, plug-in" ready for immediate use. We present research work on a small unmanned
ground vehicle demonstration platform where the fuel cell is the only power source. We also present research work on
the integration of a fuel cell onto a large existing platform. The dual-power capability of this vehicle can provide a
modest level of propulsion in "engine-off mode" and may also be used to power directed energy devices which have
applications in countermine and similar threat technologies.
Picture someone walking from left to right. During one step (intra-step) we treat the walker as
a simple pendulum, a model known in the literature as the rimless wheel. We analyze
this model intra-step using dynamic programming to find the optimum energy profile
based on time and energy cost. We then analyze the problem inter-step for the ideal
stepsize based on time cost alone, i.e., without foot collision (energy) losses.
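As a loose illustration of the time-versus-energy trade-off studied intra-step (the inter-step analysis above uses time cost alone), the toy search below picks a step size minimizing a weighted time plus collision-loss cost; the pendulum timing, the loss model, and the weights are all invented for illustration:

```python
import math

def best_step_size(distance, g=9.81, leg=1.0, w_time=1.0, w_energy=0.5):
    """Brute-force search over candidate step sizes for a fixed distance.

    Time per step is taken as the leg pendulum's half-period; the
    collision loss per step grows with speed squared and step angle
    squared, a toy stand-in for foot-collision energy losses."""
    period = math.pi * math.sqrt(leg / g)          # half-period of the leg
    best = None
    for step in [0.1 * k for k in range(1, 11)]:   # candidate step sizes, m
        n_steps = distance / step
        speed = step / period
        time_cost = w_time * n_steps * period
        energy_cost = w_energy * n_steps * speed ** 2 * (step / leg) ** 2
        cost = time_cost + energy_cost
        if best is None or cost < best[1]:
            best = (step, cost)
    return best[0]
```

Small steps waste time, long steps waste collision energy, so an intermediate step size wins under this toy model.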
This paper looks at the mobility issues of Unmanned Ground Vehicles (UGVs). The different mobility types and novel
developments used on UGVs are reviewed, along with the reasons they were chosen and the need for systems,
incorporated in modern commercial vehicles, that control vehicular locomotion in order to make driving
over certain ground types successful and efficient. Our current research into using innovative solutions to improve UGV
mobility by creating a dynamic reconfigurable system that can adapt to situational changes is highlighted. This concept
will give the vehicle the ability to cope with potentially any task in any condition, whether known or unknown,
ultimately creating a system that can take more risks and have fewer limitations on its uses.
Self-organizing Collaborative ISR Robotic Teams I: Joint Session with Conference 6578
We developed and demonstrated a UAV package that works in conjunction with the PackBot UGV to allow medium
range missions. Both the UAV and UGV are man portable, and the combined system can be used from unimproved
airfields such as soccer pitches. The UAV is capable of carrying up to 75 lbs of payload while weighing less than 30 lbs. This
document describes the initial proof of concept prototype, the associated ground and flight tests, and areas for further
development.
Unmanned Air Vehicles (UAVs) are expected to dramatically alter the way future battles are fought. Autonomous
collaborative operation of teams of UAVs is a key enabler for efficient and effective deployment of large numbers of
UAVs under the U.S. Army's vision for Force Transformation. Autonomous Collaborative Mission Systems (ACMS)
is an extensible architecture and collaborative behavior planning approach to achieve multi-UAV autonomous
collaborative capability. Under this architecture, a rich set of autonomous collaborative behaviors can be developed to
accomplish a wide range of missions. In this article, we present our simulation results in applying various autonomous
collaborative behaviors developed in the ACMS to an integrated convoy protection scenario using a heterogeneous team
of UAVs.
Unmanned aerial vehicles (UAVs) can be used for versatile surveillance and reconnaissance missions. If a UAV is capable of flying automatically on a predefined path, the range of possible applications widens significantly. This paper addresses the development of an integrated GPS/INS/MAG navigation system and a waypoint navigator for a small vertical take-off and landing (VTOL) unmanned four-rotor helicopter with a take-off weight below 1 kg. The core of the navigation system consists of low-cost inertial sensors which are continuously aided with GPS, magnetometer compass, and barometric height information. Because the yaw angle becomes unobservable during hovering flight, integration with a magnetic compass is mandatory. This integration must be robust with respect to errors caused by terrestrial magnetic field deviation and interference from surrounding electronic devices as well as ferrous metals. The described integration concept with a Kalman filter overcomes the problem that erroneous magnetic measurements lead to an attitude error in the roll and pitch axes. The algorithm provides long-term stable navigation information even during GPS outages, which is mandatory for the flight control of the UAV.
In the second part of the paper the guidance algorithms are discussed in detail. These algorithms allow the UAV to operate in a semi-autonomous position-hold mode as well as a fully autonomous waypoint mode. In the position-hold mode the helicopter maintains its position regardless of wind disturbances, which eases the pilot's job during hold-and-stare missions. The autonomous waypoint navigator enables flight outside the range of vision and beyond the range of the radio link. Flight test results of the implemented modes of operation are shown.
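The magnetometer-aiding step described above can be sketched as a 1-D yaw filter that integrates the gyro rate and corrects with compass headings, gating out implausible magnetic measurements. The gains, noise values, and innovation gate below are illustrative assumptions, not the filter design from the paper:

```python
import math

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + math.pi) % (2 * math.pi) - math.pi

class YawFilter:
    """1-D Kalman filter: integrates the gyro yaw rate and corrects with
    magnetometer headings, rejecting updates whose innovation exceeds
    `gate` (rad) as likely magnetic interference."""
    def __init__(self, yaw0=0.0, p0=1.0, q=1e-4, r=0.05, gate=0.5):
        self.yaw, self.p = yaw0, p0
        self.q, self.r, self.gate = q, r, gate

    def predict(self, gyro_rate, dt):
        self.yaw = wrap(self.yaw + gyro_rate * dt)  # integrate rate
        self.p += self.q * dt                       # grow uncertainty

    def update_mag(self, mag_heading):
        innov = wrap(mag_heading - self.yaw)
        if abs(innov) > self.gate:                  # disturbance: reject
            return False
        k = self.p / (self.p + self.r)              # Kalman gain
        self.yaw = wrap(self.yaw + k * innov)
        self.p *= 1.0 - k
        return True
```

Rejecting gated innovations is a crude stand-in for the robustness the paper demands; it prevents a transient magnetic disturbance from corrupting the heading estimate while the gyro integration bridges the gap.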
Self-organizing Collaborative ISR Robotic Teams II: Joint Session with Conference 6578
Reconnaissance collection in unknown or hostile environments can be a dangerous and life threatening task. To reduce
this risk, the Unmanned Systems Group at Virginia Tech has produced a fully autonomous reconnaissance system able
to provide live video reconnaissance from outside and inside unknown structures. This system consists of an
autonomous helicopter which launches a small reconnaissance pod inside a building and an operator control unit (OCU)
on a ground station. The helicopter is a modified Bergen Industrial Twin using a Rotomotion flight controller and can
fly missions of up to one half hour. The mission planning OCU can control the helicopter remotely through
teleoperation or fully autonomously by GPS waypoints. A forward facing camera and template matching aid in
navigation by identifying the target building. Once the target structure is identified, vision algorithms will center the
UAS adjacent to open windows or doorways. Tunable parameters in the vision algorithm account for varying launch
distances and opening sizes. Launch of the reconnaissance pod may be initiated remotely through a human in the loop or
autonomously. Compressed air propels the half-pound stationary pod or the larger mobile pod into the open portals.
Once inside the building, the reconnaissance pod will then transmit live video back to the helicopter. The helicopter acts
as a repeater node for increased video range and simplification of communication back to the ground station.
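The template-matching step used to identify the target building can be illustrated with a toy normalized cross-correlation search over a small grayscale grid. A fielded system would use an optimized vision library, and the image sizes and values here are invented:

```python
def ncc(patch, templ):
    """Normalized cross-correlation between two equal-size patches."""
    n = len(templ) * len(templ[0])
    mp = sum(map(sum, patch)) / n
    mt = sum(map(sum, templ)) / n
    num = den_p = den_t = 0.0
    for pr, tr in zip(patch, templ):
        for p, t in zip(pr, tr):
            num += (p - mp) * (t - mt)
            den_p += (p - mp) ** 2
            den_t += (t - mt) ** 2
    d = (den_p * den_t) ** 0.5
    return num / d if d else 0.0

def match_template(image, templ):
    """Return (row, col, score) of the best template match in `image`."""
    th, tw = len(templ), len(templ[0])
    best = (0, 0, -2.0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            s = ncc(patch, templ)
            if s > best[2]:
                best = (r, c, s)
    return best
```

Because the score is normalized, the same template can tolerate the brightness changes that come with varying launch distances and lighting, which is one reason correlation-based matching suits this kind of outdoor identification task.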
Conventional control surfaces have been used in most carbon-fiber composite, membrane-wing autonomous micro air vehicles (MAVs). In some cases, vehicle morphing is achieved using servo actuators to articulate vehicle kinematic joints or to deform crucial wing/tail surfaces. However, articulated lifting surfaces and articulated wing sections are difficult to instrument and fabricate in a repeatable fashion, and assembly is complex and time-consuming. The goal of this paper is to establish the feasibility of morphing wings on autonomous MAVs that are actuated via active materials. Active actuation is achieved via a type of piezoceramic composite called Macro Fiber Composite (MFC). This paper investigates the structural dynamics of morphing wings on MAVs that are actuated via active composites, continuing the work presented in [1] by considering structural dynamic characteristics of the morphing vehicle determined through Scanning Laser Doppler Vibrometry (SLDV).
In this paper, we present our initial findings demonstrating a cost-effective approach to Aided Target Recognition (ATR)
employing a swarm of inexpensive Unmanned Aerial Vehicles (UAVs). We call our approach Distributed ATR (DATR).
Our paper describes the utility of DATR for autonomous UAV operations, provides an overview of our methods, and presents the results of our initial simulation-based implementation and feasibility study. Our technology is aimed at small and
micro UAVs where platform restrictions allow only a modest quality camera and limited on-board computational
capabilities. It is understood that an inexpensive sensor coupled with limited processing capability would be challenged
in deriving a high probability of detection (Pd) while maintaining a low probability of false alarms (Pfa). Our hypothesis
is that an evidential reasoning approach to fusing the observations of multiple UAVs observing approximately the same
scene can raise the Pd and lower the Pfa sufficiently in order to provide a cost-effective ATR capability. This capability
can lead to practical implementations of autonomous, coordinated, multi-UAV operations.
In our system, the live video feed from a UAV is processed by a lightweight real-time ATR algorithm. This algorithm
provides a set of possible classifications for each detected object over a possibility space defined by a set of exemplars.
The classifications for each frame within a short observation interval (a few seconds) are used to generate a belief
statement. Our system considers how many frames in the observation interval support each potential classification. A
definable function transforms the observational data into a belief value. The belief value, or opinion, represents the
UAV's belief that an object of the particular class exists in the area covered during the observation interval. The opinion
is submitted as evidence in an evidential reasoning system. Opinions from observations over the same spatial area will
have similar index values in the evidence cache. The evidential reasoning system combines observations of similar
spatial indexes, discounting older observations based upon a parameterized information aging function. We employ
Subjective Logic operations in the discounting and combination of opinions. The result is the consensus opinion from all
observations that an object of a given class exists in a given region.
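The opinion representation and consensus combination described above might be sketched as follows. The triplet form and the cumulative-fusion formula follow standard Subjective Logic; the scalar aging model is a simplification of the paper's parameterized information-aging function:

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Subjective Logic opinion: belief, disbelief, uncertainty (b + d + u = 1)."""
    b: float
    d: float
    u: float

def discount(op, g):
    """Age an opinion by a scalar factor g in [0, 1]: evidence decays toward
    full uncertainty. This is a simplified aging model, not the full
    trust-discounting operator."""
    return Opinion(g * op.b, g * op.d, 1.0 - g * (op.b + op.d))

def consensus(a, b):
    """Consensus (cumulative fusion) of two independent opinions."""
    k = a.u + b.u - a.u * b.u
    return Opinion((a.b * b.u + b.b * a.u) / k,
                   (a.d * b.u + b.d * a.u) / k,
                   (a.u * b.u) / k)
```

Note that fusing two concordant opinions lowers the uncertainty component, which is exactly the effect the DATR hypothesis relies on: multiple modest-quality observations combine into a higher-confidence classification.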
This paper focuses on the automated detection and tracking of moving objects in a camera sequence provided by a small, electrically powered four-rotor helicopter in a hover-and-stare scenario. Two different algorithms for identifying independently moving areas are investigated and compared. The first approach is based on prior compensation of the camera movement by estimation of homographies; moving regions are then extracted by robust background subtraction. The second approach is based on a dense optical flow field and needs no stabilization: single points are identified that do not move consistently with the background plane, and these points are merged into objects by a cluster analysis algorithm. Furthermore, a strategy for tracking these objects over time is described, including a Kalman filter. For several reasons, not every extracted area corresponds to an independently moving object, and a heuristic rule set is used to sort out artifacts. Experimental results on in-flight images are presented and the performances of the developed algorithms are compared. Finally, first steps towards geographic localization of the tracked objects are described.
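The first approach (camera-motion compensation followed by differencing) can be sketched in miniature. Here the homography is reduced to an integer translation and the threshold is arbitrary, so this is a toy of the idea rather than the paper's robust background subtraction:

```python
def shift(frame, dx, dy, fill=0):
    """Warp a frame by an integer translation (a degenerate homography,
    sufficient for this sketch)."""
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out

def moving_mask(prev, curr, dx, dy, thresh=30):
    """Flag pixels whose intensity changed after camera-motion compensation.
    Pixels with no compensated counterpart (image border) are ignored."""
    comp = shift(prev, dx, dy, fill=None)
    return [[p is not None and abs(c - p) > thresh for c, p in zip(cr, pr)]
            for cr, pr in zip(curr, comp)]
```

As the abstract notes, differencing flags both the object's new position and the position it vacated; distinguishing such artifacts from genuine movers is what the heuristic rule set must do.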
An intelligent swarm-based guidance and path-planning algorithm for Unmanned Aerial Vehicles (UAVs) provides the ability to efficiently carry out grid surveillance, taking into account specific UAV constraints such as maximum speed, maximum flight time, and battery-recharging intervals to allow for continuous surveillance. The swarm-based flight planning is based on enhancements of distributed computing concepts that were developed for NASA's launch danger zone protection. The algorithm is a modified version of ant colony optimization theory describing ant food foraging. Ants initially follow random paths from the nest, but if food is found, the ant deposits a pheromone (modifying the local environment), which influences other ants to travel the same path. Once the food source is exhausted, the pheromone decays naturally, which causes the trail to disappear. When an ant is on an established trail, it may at any time decide to follow a new random path, allowing for new exploration. Using these concepts, our UAV system employs two unit types: the Rendezvous unit and the Patrol unit. The Rendezvous units act as pheromone deposit sites, keeping a record of trails of interest (extra pheromone that decays over time) and obstacles (no pheromone). The search area is divided into a grid, and each area unit is assigned a pheromone weight. The Patrol unit picks an area unit based on a probabilistic formula with parameters such as the relative weight of trail intensity, the area's visibility to the unit, the distance of the Patrol unit from the area, and the pheromone decay factor. Simulation of a UAV surveillance system based on this algorithm showed that it can perform independently and reliably without human intervention, and the emergent nature of the algorithm allows it to incorporate important aspects of unmanned surveillance.
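The pheromone mechanics might be sketched as below. The probabilistic weighting combining trail intensity, visibility, and distance is an illustrative stand-in for the paper's exact formula:

```python
import random

def decay(pheromone, rate=0.1):
    """Evaporate pheromone in every grid cell."""
    return {cell: tau * (1.0 - rate) for cell, tau in pheromone.items()}

def pick_area(pheromone, visibility, pos, alpha=1.0, beta=2.0, rng=random):
    """Choose the next grid cell with probability proportional to
    pheromone**alpha * (visibility / distance)**beta.  The exponents and
    the Manhattan distance are illustrative choices."""
    cells = list(pheromone)
    def dist(c):
        return max(1.0, abs(c[0] - pos[0]) + abs(c[1] - pos[1]))
    w = [pheromone[c] ** alpha * (visibility[c] / dist(c)) ** beta
         for c in cells]
    r = rng.uniform(0, sum(w))          # roulette-wheel selection
    acc = 0.0
    for c, wi in zip(cells, w):
        acc += wi
        if r <= acc:
            return c
    return cells[-1]
```

Because the draw is probabilistic rather than greedy, a Patrol unit occasionally visits low-pheromone cells, preserving the exploration behavior the ant analogy calls for.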
This paper investigates the problem of fault-tolerant cooperative control for the UAV rendezvous problem, in which multiple UAVs are required to arrive at their designated target despite the presence of a fault in the thruster of any UAV. An integrated hierarchical scheme is proposed and developed that consists of a cooperative rendezvous planning algorithm at the team level and a nonlinear fault detection and isolation (FDI) subsystem at the individual UAV's actuator/sensor level. Furthermore, a rendezvous re-planning strategy is developed that interfaces the rendezvous planning algorithm with the low-level FDI. A nonlinear geometric approach is used for the FDI subsystem, which can detect and isolate faults in various UAV actuators including thrusters and control surfaces. The developed scheme is implemented for a rendezvous scenario with three Aerosonde UAVs, a single target, and a priori known threats. Simulation results reveal the effectiveness of the proposed scheme in fulfilling the rendezvous mission objective, specified as a successful intercept of the Aerosondes at their designated target, despite severe loss of effectiveness in the Aerosonde engine thrusters.
A significant capability of unmanned aerial vehicles (UAVs) is that they can operate tirelessly and at maximum efficiency in comparison to their human pilot counterparts. However, a major factor limiting ultra-long-endurance missions is that they must land to refuel. Development effort has been directed toward allowing UAVs to automatically refuel in the air using current refueling systems and procedures. The 'hose & drogue' refueling system was targeted as it is considered the more difficult case. Recent flight trials resulted in the first-ever fully autonomous airborne refueling operation.
Development has gone into precision GPS-based navigation sensors to maneuver the aircraft into the station-keeping position and onward to dock with the refueling drogue. However, in the terminal phases of docking, GPS accuracy is at its performance limit, and disturbance factors on the flexible hose and basket are not predictable using an open-loop model. Hence there is significant uncertainty in the position of the refueling drogue relative to the aircraft, which is insufficient in practical operation to achieve a successful and safe docking.
A solution is to augment the GPS-based system with a vision-based sensor through the terminal phase to visually acquire and track the drogue in 3D space. The higher bandwidth and resolution of camera sensors give significantly better estimates of the drogue position. Disturbances in the actual drogue position caused by subtle aircraft maneuvers and wind gusts can be visually tracked and compensated for, providing an accurate estimate.
This paper discusses the issues involved in visually detecting a refueling drogue, selecting an optimum camera viewpoint, and acquiring and tracking the drogue throughout widely varying operating ranges and conditions.
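As a sketch of the kind of visual state estimation involved, a fixed-gain alpha-beta tracker can smooth and predict one image-plane coordinate of the drogue between detections. This is a generic estimator, not the system described in the paper, and the gains are assumptions:

```python
class AlphaBetaTracker:
    """Alpha-beta tracker: a lightweight, fixed-gain alternative to a
    full Kalman filter for a single coordinate of the drogue."""
    def __init__(self, x0, alpha=0.5, beta=0.1):
        self.x, self.v = x0, 0.0
        self.alpha, self.beta = alpha, beta

    def step(self, z, dt):
        pred = self.x + self.v * dt        # predict with current velocity
        r = z - pred                       # innovation vs. measurement
        self.x = pred + self.alpha * r     # correct position estimate
        self.v += self.beta * r / dt       # correct velocity estimate
        return self.x
```

Between camera frames, the `pred` term alone provides the short-horizon extrapolation needed to keep the docking controller fed at a constant rate even if a detection is momentarily lost.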
In this paper, we describe work in performance standards for urban search and rescue (USAR) robots, begun in 2004 by
the Department of Homeland Security. This program is being coordinated by the National Institute of Standards and
Technology and will result in consensus standards developed through ASTM International, under the Operational
Equipment Subcommittee of their Homeland Security Committee. A comprehensive approach to performance
requirements and standards development is being used in this project. Formal test methods designed by several
working groups in the standards task group are validated by the stakeholders. These tests are complemented by regular
exercises in which responders and robot manufacturers work together to apply robots within realistic training scenarios.
This paper recaps the most recent exercise, held at the Federal Emergency Management Agency (FEMA) Maryland Task
Force 1 training facility, at which over twenty different robots were operated by responders from various FEMA Task
Forces. The exercise included candidate standard test methods being developed for requirements in the areas of
communications, mobility, sensors, and human-system interaction for USAR robots.
Research efforts in Urban Search And Rescue (USAR) robotics have grown substantially in recent years. A
virtual USAR robotic competition was established in 2006 under the RoboCup umbrella to foster collaboration
amongst institutions and to provide benchmark test environments for system evaluation. In this paper we
describe the physics based software simulation framework that is used in this competition and the rules and
performance metrics used to determine the league's winner. The framework allows for the realistic modeling of
robots, sensors, and actuators, as well as complex, unstructured, dynamic environments. Multiple heterogeneous
agents can be concurrently placed in the simulation environment thus allowing for team or group evaluations.
The Autonomy Levels for Unmanned Systems (ALFUS) workshop series was convened to address the
autonomous nature of unmanned, robotic systems, or unmanned systems (UMS). Practitioners have
different perceptions or different expectations for these systems. The requirements on human interactions,
the types of tasks, the teaming of the UMSs and the humans, and the operating environment are just a few
of the issues that need to be clarified. Also needed is a set of definitions and a model with which the
autonomous capability of the UMS can be described. This paper reports the current results and status of the
ALFUS framework, which practitioners can apply to analyze the autonomy requirements and to evaluate
the performance of their robotic programs.
PRIDE (Prediction in Dynamic Environments) is a hierarchical, multiresolutional framework for moving-object prediction that incorporates multiple prediction algorithms into a single, unifying framework. PRIDE is based upon the 4D/RCS (Real-time Control System) reference model architecture and provides information to planners at the level of granularity that is appropriate for their planning horizon. The framework supports the prediction of the future location of moving objects at various levels of resolution, thus providing prediction information at the frequency and level of abstraction necessary for planners at different levels within the hierarchy. To date, two prediction approaches have been applied within this framework. In this paper, we provide an overview of the PRIDE framework and describe the approach that has been used to model different driver aggressivities. We then explore different aggressivity models to determine their impact on the location predictions provided through the PRIDE framework. We also describe recent efforts to implement PRIDE in USARSim, which provides high-fidelity simulation of robots and environments based on the Unreal Tournament game engine.
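A minimal stand-in for multiresolution prediction is a constant-velocity model evaluated at a different horizon and time step per planner level. PRIDE's actual prediction algorithms are richer, so this only illustrates the granularity idea:

```python
def predict(pos, vel, horizon, dt):
    """Constant-velocity predictions of an object's (x, y) position at
    resolution `dt`, out to `horizon` seconds."""
    steps = int(horizon / dt)
    return [(pos[0] + vel[0] * dt * k, pos[1] + vel[1] * dt * k)
            for k in range(1, steps + 1)]

def multiresolution(pos, vel, levels):
    """One prediction trace per planner level, given (horizon_s, dt_s)
    pairs: coarser levels look farther ahead at lower rates."""
    return {i: predict(pos, vel, h, dt) for i, (h, dt) in enumerate(levels)}
```

A low-level planner would consume the fine, short-horizon trace while a higher-level planner consumes the coarse, long-horizon one, mirroring how PRIDE serves each layer of the 4D/RCS hierarchy.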
Successful deployment of Unmanned Vehicle Systems (UVS) in military operations has increased their popularity
and utility. The ability to sustain reliable mobile ad hoc formations dramatically enhances the usefulness and performance of
UVS. Formation movement increases the amount of ground coverage in less time, decreases fuel consumption of the
individual nodes, and provides an avenue for mission expansion through cooperative maneuvers such as refueling.
In this paper, we study the wireless communication demands that arise from the formation and maintenance of UVS within the context of a mobile ad hoc network (MANET). A MANET in formation is typically characterized by tradeoffs between network congestion and the ability to maintain usable communication bandwidth. Maintenance of UVS formations requires each node in the network to be peer-aware, which places a heavy demand on inter-node communication.
To mitigate this inter-node network congestion, we introduce a time-slotted communication protocol. The protocol assigns time slots and allows the designated nodes to communicate directly with other peer nodes. This approach has been introduced within the context of the Time-Slotted Aloha protocol for station-to-station communication. The approach taken here is to embed the time-slotted reservation protocol into a standard on-demand routing protocol to also address the need to respond both reactively and proactively to formation maintenance.
The time-slotted on-demand routing protocol is shown to eliminate collisions due to route determination and, therefore,
enhance quality of service as well as ensure necessary support for formation movement. A worst-case scenario is described
and simulations performed to comparatively demonstrate the advantages of the new protocol.
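The slot-reservation idea can be sketched as a round-robin TDMA schedule. This omits the on-demand routing integration and is only meant to show why reserved slots eliminate route-determination collisions:

```python
def assign_slots(nodes, frame_len):
    """Round-robin TDMA: each node reserves one slot index per frame.
    frame_len >= len(nodes) guarantees a collision-free schedule."""
    assert frame_len >= len(nodes)
    return {n: i for i, n in enumerate(nodes)}

def may_transmit(schedule, node, slot, frame_len):
    """A node transmits only in slots matching its reserved index."""
    return slot % frame_len == schedule[node]
```

Because at most one node owns any given slot, no two peers ever contend for the channel, which is the property the simulations in the paper exploit to improve quality of service during formation movement.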
The NIST Construction Metrology and Automation Group (CMAG), in cooperation with the NIST Intelligent Systems
Division (ISD), is developing performance metrics and standard tests for the evaluation of 3D imaging systems used in
autonomous mobility applications. This work supports the broader effort to develop open, consensus-based
performance evaluation standards for a wide range of 3D imaging systems and applications through the ASTM E57
Committee on 3D Imaging Systems. This report presents initial efforts to characterize the range performance of a 3D
imaging sensor that will be used in a performance measurement system for crash prevention and safety systems. Factors
examined include range, target reflectance, target angle of incidence, and sensor azimuth.
The capacity to predict motion adequately over the time scale of a few seconds is fundamental to autonomous mobility.
Model predictive optimal control is a general formalism within which most historical approaches can be cast as special
cases. Applications continue to grow in ambition to seek higher precision of motion and/or higher vehicle speeds.
Predictions must therefore improve in fidelity and speed simultaneously.
We favor an approach to precision motion control that we call parametric optimal control. It formulates the optimal control problem as one of nonlinear programming: optimizing over a space of parameterized controls encoding all feasible motions. It enables efficient inversion of the solutions to the equations of motion for a ground vehicle. Such an inversion enables computation of precisely the command signals necessary to drive the vehicle to a goal position, heading, and curvature while following the contours of the terrain under arbitrary wheel-terrain interactions.
Dynamics inversion is so fundamental that many other mobility behaviors can be constructed from it. Fielded applications include pallet acquisition controls for factory AGVs, high-speed adaptive path following for military UGVs, compensation for wheel slip on the Mars Exploration Rovers, and full configuration-space planning in dense obstacle fields.
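A 1-D instance of the inversion idea can be sketched by parameterizing curvature linearly in arc length and solving for the parameter that achieves a terminal constraint. The unicycle model, Euler integration, and bisection search below are all simplifying assumptions, not the method of the paper:

```python
import math

def rollout(k0, k1, s_f, n=100):
    """Forward-simulate a unicycle whose curvature varies linearly with
    arc length: kappa(s) = k0 + k1 * s (a common control parameterization).
    Returns terminal (x, y, heading)."""
    x = y = th = 0.0
    ds = s_f / n
    for i in range(n):
        th += (k0 + k1 * i * ds) * ds
        x += math.cos(th) * ds
        y += math.sin(th) * ds
    return x, y, th

def solve_k1(k0, s_f, y_goal, lo=-1.0, hi=1.0, iters=60):
    """Bisect on k1 until the rollout's terminal y matches y_goal: a
    one-parameter inversion of the equations of motion."""
    f = lambda k1: rollout(k0, k1, s_f)[1] - y_goal
    assert f(lo) * f(hi) <= 0, "goal not bracketed"
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

A practical system would optimize several parameters at once (to meet position, heading, and curvature constraints simultaneously) with a proper nonlinear programming solver, but the structure, simulate forward and search over control parameters, is the same.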
This paper presents the design of a robot that can traverse land, water, and quicksand-like mud. The robot is low-cost and modular, allowing the attachment of a variety of arms suitable for many of the tasks associated with astrobiological exploration. An astrobiologist on a field study will spend most of the time walking around and exploring the site, looking for areas of interest to be tested in situ or sampled for offsite testing. A robot replicating these tasks must be able to locomote in that terrain, sense the interesting features (or provide sensing for teleoperation), and perform a variety of manipulation tasks once an area of interest is reached. The configurations for this robot include tens of modules that can achieve astrobiological tasks such as amphibious locomotion, digging, core sampling, probing, liquid sampling, and exploration. This paper also presents results from the first experiments with this platform at Lake Tyrrell, a salt lake in Australia.
4D/RCS is a hierarchical architecture designed for the control of intelligent systems. One of the main areas
that 4D/RCS has recently been applied to is the control of autonomous vehicles. To accomplish this, a hierarchical decomposition of on-road driving activities has been performed, resulting in an implementation of 4D/RCS tailored to this application. This implementation has seven layers, ranging from a journey manager, which determines the order of the places to be visited, through a destination manager, which provides turn-by-turn directions to a destination, through route segment, drive behavior, elemental maneuver, and goal path trajectory levels, and finally to servo controllers.
In this paper, we show, within the 4D/RCS architecture, how knowledge-driven top-down symbolic representations combined with low-level bottom-up tasks can synergistically provide more valuable information for on-road driving than is possible with either alone. We demonstrate these ideas using field data obtained from an Unmanned Ground Vehicle (UGV) traversing urban on-road environments.
In this paper, we present methods that significantly improve a robot's awareness of its terrain traversability conditions. Terrain traversability awareness is achieved by associating terrain image appearances from different poses and by fusing information extracted from multimodal imaging and range sensor data for localization and clustering of environment landmarks. The paper addresses the technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We first describe methods for extracting salient terrain features, based on image texture analysis techniques, for the purpose of landmark registration from two or more images taken from different via points along the robot's trajectory. Image registration is applied as a means of overlaying two or more views of the same terrain scene taken from different viewpoints: the registration geometrically aligns the salient landmarks of two images (the reference and sensed images), and a similarity matching technique is proposed for matching the salient terrain landmarks. Secondly, we present three competing terrain classifier models, based on rules, a supervised neural network, and fuzzy logic, for classifying terrain conditions under uncertainty and mapping the robot's terrain perception to apt traversability measures. Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies the terrain condition exclusively within each sub-terrain region based on spatial and textural cues.
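The fuzzy-logic classifier idea can be sketched with triangular membership functions over two normalized terrain cues. The cue names, membership shapes, and rule base below are invented for illustration, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def traversability(roughness, slope):
    """Tiny Mamdani-style rule base mapping two terrain cues in [0, 1]
    to a traversability score in [0, 1]."""
    smooth = tri(roughness, -0.5, 0.0, 0.5)
    rough = tri(roughness, 0.5, 1.0, 1.5)
    flat = tri(slope, -0.5, 0.0, 0.5)
    steep = tri(slope, 0.5, 1.0, 1.5)
    easy = min(smooth, flat)       # rule: smooth AND flat -> traversable
    hard = max(rough, steep)       # rule: rough OR steep -> not traversable
    if easy + hard == 0:
        return 0.5                 # no rule fires: indifferent
    return easy / (easy + hard)    # weighted-average defuzzification
```

The appeal of the fuzzy formulation is that sub-terrain regions with ambiguous cues receive graded, intermediate traversability measures rather than a hard accept/reject decision.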
Sensors commonly mounted on small unmanned ground vehicles (UGVs) include visible light and thermal cameras,
scanning LIDAR, and ranging sonar. Sensor data from these sensors is vital to emerging autonomous robotic behaviors.
However, sensor data from any given sensor can become noisy or erroneous under a range of conditions, reducing the
reliability of autonomous operations. We seek to increase this reliability through data fusion, which involves characterizing the strengths and weaknesses of each sensor modality and combining their data so that the fused result is more accurate than that of any single sensor. We describe data fusion efforts applied to
two autonomous behaviors: leader-follower and human presence detection. The behaviors are implemented and tested
in a variety of realistic conditions.
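One standard way to combine two sensors so the fused estimate is more accurate than either alone is inverse-variance weighting. The sketch below is a generic illustration of that principle, not the authors' fusion algorithm, and the lidar/sonar measurements and variances are invented.

```python
def fuse(z1: float, var1: float, z2: float, var2: float):
    """Inverse-variance weighted fusion of two noisy measurements.

    The fused variance is always smaller than either input variance,
    which is the sense in which fusion 'provides more accurate data'.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return z, var

# Illustrative values: a lidar range (low variance) and a sonar range.
z, var = fuse(10.2, 0.04, 9.8, 0.16)
# z is pulled toward the more reliable lidar reading, var < 0.04.
```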
The modern combat theater is being populated by an increasing number of unmanned platforms that are operated in a
wide variety of missions. These platforms, like manned platforms in joint operations, have to operate in concert as an
integrated force. The operation of large numbers of unmanned platforms in a specific mission requires the use of
autonomous platforms which can operate without tight operator control. Such a paradigm enables a single operator (or a small group of operators) to control a large force of unmanned platforms conducting a mission, similar to the HQ command support that a manned force would receive. The need to increase the autonomous capabilities of unmanned
platforms operating in an integrated force promotes the development of Semi Automated Forces (SAF) and Fully Automated Forces (FAF).
Unmanned vehicles (UxV) operate in numerous environments, with air, ground and marine representing the
majority of the implementations. All unmanned vehicles, when traversing unknown space, have similar requirements.
They must sense their environment, create a world representation, and then plan a path that safely avoids obstacles and hazards. Traditionally, each unmanned vehicle class used environment-specific assumptions to create a unique world representation tailored to its operating environment. Thus, an unmanned aerial
vehicle (UAV) used the simplest possible world representation, where all space above the ground plane was free
of obstacles. Conversely, an unmanned ground vehicle (UGV) required a world representation that was suitable
to its complex and unstructured environment.
Such a clear-cut differentiation between UAV and UGV environments is no longer valid, as UAVs have migrated down to elevations where terrestrial structures are located. Thus, the operating environment for a low-flying UAV is similar to the environments experienced by UGVs. As a result, the world representation techniques and algorithms developed for UGVs are now applicable to UAVs, since low-flying UAVs must sense and represent their world in order to avoid obstacles.
Defence R&D Canada (DRDC) conducts research and development in both the UGV and UAV fields. Researchers
have developed a platform-neutral world representation, based upon a uniform 2½-D elevation grid,
that is applicable to many UxV classes, including aerial and ground vehicles. This paper describes DRDC's
generic world representation, known as the Global Terrain map, and provides an example of unmanned ground
vehicle implementation, along with details of its applicability to aerial vehicles.
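A 2½-D elevation grid of the kind described stores a single elevation value per cell, which is what makes it platform-neutral. The class below is an illustrative sketch (the paper's Global Terrain map implementation is not shown), using a keep-the-maximum update as one plausible fusion policy for range returns.

```python
class ElevationGrid:
    """Minimal 2.5-D world representation: one elevation per grid cell."""

    def __init__(self, width: int, height: int, cell_size: float):
        self.cell_size = cell_size
        # None marks cells with no sensor returns yet (unknown space).
        self.cells = [[None] * width for _ in range(height)]

    def update(self, x: float, y: float, z: float) -> None:
        """Fuse a range return at world (x, y, z); keep the max elevation,
        so obstacles above the ground plane dominate the cell value."""
        i, j = int(y / self.cell_size), int(x / self.cell_size)
        cur = self.cells[i][j]
        self.cells[i][j] = z if cur is None else max(cur, z)

grid = ElevationGrid(10, 10, 0.5)
grid.update(1.2, 0.7, 0.3)   # ground return
grid.update(1.3, 0.6, 1.8)   # obstacle return landing in the same cell
```

Both a UGV planner (thresholding elevation against vehicle clearance) and a low-flying UAV planner (thresholding against flight altitude) can consume the same grid.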
The autonomous operations of intelligent unmanned aerial and space access vehicles demand fast online trajectory
computations, which rely heavily upon precise and expedited computation of aerodynamic coefficients. Traditional
methods use tabular data and linear interpolation, which are slow and, worse, cannot produce the smooth aerodynamic functions that trajectory computation demands. In this paper, we introduce neural-network and piecewise-smooth-function based approaches to approximate these coefficients. Although neural networks have previously been applied to aerodynamic coefficient modeling, they have not been considered for the purpose of trajectory design, which generates large amounts of data across the flight envelope. In this paper, we present an efficient approach to reduce the overwhelming data requirements so that the training and testing of the proposed solutions are more
manageable and feasible. The preliminary testing results on the six aerodynamic coefficients show that the pitching
moment coefficient Cm and the axial force coefficient Ca are the most challenging to approximate, while the other four
coefficients are easily approximated. In this paper we have focused on improving approximation models for Cm with
promising results. In the future, we will continue our research on developing models for approximating Ca.
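The idea of replacing table lookup and linear interpolation with a smooth, differentiable approximation can be sketched with an ordinary polynomial fit. The synthetic pitching-moment samples below are invented for demonstration; they are not the paper's data, and a polynomial stands in for its neural-network and piecewise-smooth models.

```python
import numpy as np

# Synthetic tabular Cm samples vs. angle of attack (illustrative only).
alpha = np.linspace(-10.0, 20.0, 16)                   # angle of attack (deg)
cm_table = 0.02 - 0.011 * alpha + 1e-4 * alpha ** 2    # table entries

# Fit one smooth global model instead of interpolating the table.
coeffs = np.polyfit(alpha, cm_table, deg=3)
cm = np.poly1d(coeffs)                                 # callable Cm(alpha)

# The smooth model can be evaluated (and differentiated) anywhere in the
# envelope, unlike a piecewise-linear table with kinks at the breakpoints.
cm_at_5 = cm(5.0)
```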
This paper describes a novel capability for predicting and diagnosing current unmanned ground vehicle (UGV) system
health and status. Prognostication is the result of a multi-step process consisting of successful novelty detection, fault
detection, fault diagnosis, and failure prognosis. UGV mobility prediction requires the fusion of both external and
internal situational awareness, resulting in a course of action that can be executed by the UGV and confirmed by its own
sensors. Our algorithms are analytical and enable both prediction and diagnostics to be performed in real time and within
the limited processor speed and memory constraints of the UGV. This paper summarizes these algorithms.
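Novelty detection, the first step of the chain above, can be sketched as flagging readings that deviate sharply from recent history, which is cheap enough for the stated processor and memory constraints. The window size, threshold, and temperature trace below are illustrative assumptions, not the authors' algorithm.

```python
import statistics

def novelty_flags(readings, window=5, k=4.0):
    """Flag readings more than k standard deviations from a trailing window.

    Window size and threshold k are illustrative tuning parameters.
    """
    flags = []
    for i, x in enumerate(readings):
        hist = readings[max(0, i - window):i]
        if len(hist) < 2:
            flags.append(False)       # not enough history to judge yet
            continue
        mu = statistics.mean(hist)
        sd = statistics.pstdev(hist) or 1e-9
        flags.append(abs(x - mu) > k * sd)
    return flags

# Hypothetical motor-temperature trace ending with a fault spike.
temps = [70, 71, 70, 72, 71, 95]
flags = novelty_flags(temps)         # only the final spike is flagged
```

A detected novelty would then be handed to the fault-detection and diagnosis stages to decide whether it indicates a genuine failure.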
This paper introduces a component-based open middleware architecture implemented by ADD (Agency for Defense Development) to accommodate the technology evolution of unmanned autonomous systems. The proposed open system architecture can be considered a standard interface that defines the messages and operations between software components at the application layer; its purpose is to ensure the portability of future technology onto multiple platforms as well as inter-operability across domains. In this architecture, the domain is defined as the space where several
different robots are operated, and each robot is defined as a subsystem within the domain. Each subsystem, i.e., robot, is
composed of several nodes, and then each node is composed of various components including node manager and
communicator. The implemented middleware uses the reference architecture from JAUS (Joint Architecture for Unmanned Systems) as guidance. Among the key achievements of this research is the development of a general node manager which
makes it possible to easily accommodate a new interface or the new core technology developed on the application layer
by providing a platform-independent communication interface between each subsystem and the components. This paper
introduces the reference architecture and middleware applied in the XAV (eXperimental Autonomous Vehicle) developed by ADD. In addition, the autonomous navigation performance and system design characteristics are briefly introduced.
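The role of a general node manager (routing messages between registered components behind a platform-independent interface) can be sketched minimally as follows. The class and component names are hypothetical and far simpler than a real JAUS-style implementation.

```python
class NodeManager:
    """Toy sketch of a node manager: components register a handler, and
    messages are routed to them by name rather than by platform-specific
    addressing, so new components can be added without changing callers."""

    def __init__(self):
        self.components = {}

    def register(self, name, handler):
        self.components[name] = handler

    def send(self, dest, message):
        if dest not in self.components:
            raise KeyError(f"unknown component: {dest}")
        return self.components[dest](message)

nm = NodeManager()
# Hypothetical component: a waypoint driver accepting (lat, lon) messages.
nm.register("waypoint_driver", lambda msg: f"driving to {msg}")
reply = nm.send("waypoint_driver", (37.4, -122.1))
```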
This paper reviews hardware and software solutions that allow for rapid prototyping of new or modified embedded
avionics sensor designs, mission payloads, and functional subassemblies. We define reconfigurable computing in the context of being able to place various PMC modules, depending upon the mission scenario, onto a base SBC (Single Board Computer). This SBC could follow either a distributed or shared memory architecture and have either two or four PPC7447A/7448 processor clusters. In certain scenarios, various combinations of boards could be combined in order to
provide a heterogeneous computing environment.
The digital, modular architecture of the iRobot PackBot EOD has been exploited to integrate an explosives vapor
detector. This expands the usefulness of the UGV from its core EOD role to checkpoint vehicle inspections and facility
clearing. From initial tests to deployment and training in Iraq and subsequent user feedback, we present the trials and
tribulations of this effort from the perspective of the engineers who traveled to Baghdad.
An interesting problem in the control of mobile robots is steering. In this paper, a mobile robot with front-wheel steering is treated. A third-order kinematic model is developed. The problem of optimally steering the robot from an
initial position and heading to a final position and heading is addressed. The performance measure is taken to be elapsed
time. Assuming a fixed speed, this corresponds to a path of minimum distance. It is found that the trajectory consists of
segments of maximum-curvature turns and segments of straight lines. The straight-line segments are singular arcs. The
problem is shown to simplify when final heading is free. Four examples are solved.
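For the free-final-heading case, the optimal path reduces to a single maximum-curvature arc followed by a straight (singular) segment. The sketch below computes the elapsed time for one left-turn instance; it is an illustrative geometry exercise, assuming the goal lies outside the turning circle (a full solver would also try the right-hand circle and take the minimum).

```python
import math

def min_time_free_heading(goal, radius, speed):
    """Elapsed time for a max-curvature left arc plus straight line from the
    origin (heading along +x) to `goal`, with final heading free.

    Assumes the goal is outside the left turning circle; names and API are
    an illustrative sketch, not the paper's formulation.
    """
    cx, cy = 0.0, radius                        # center of the left turning circle
    dx, dy = goal[0] - cx, goal[1] - cy
    dist = math.hypot(dx, dy)                   # distance from center to goal
    straight = math.sqrt(dist**2 - radius**2)   # tangent (singular-arc) length
    # Angle of the tangent point on the circle, then CCW arc swept from the
    # start point, which sits at angle -pi/2 on the circle.
    phi_t = math.atan2(dy, dx) - math.acos(radius / dist)
    turn = (phi_t + math.pi / 2) % (2 * math.pi)
    return (radius * turn + straight) / speed   # fixed speed: time ~ distance

t = min_time_free_heading((2.0, 2.0), radius=1.0, speed=1.0)
```

At fixed speed the minimum-time and minimum-distance paths coincide, so the returned time is also the path length divided by the speed.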