This paper describes the field-oriented philosophy of the Institute for Safety Security Rescue Technology (iSSRT) and summarizes the activities and lessons learned during calendar year 2005 of its two centers: the Center for Robot-Assisted Search and Rescue and the NSF Safety Security Rescue industry/university cooperative research center. In 2005, iSSRT participated in four responses (the La Conchita, CA, mudslides; Hurricane Dennis; Hurricane Katrina; and Hurricane Wilma) and conducted three field experiments (NJTF-1, Camp Hurricane, and Richmond, MO). The lessons learned covered mobility, operator control units, wireless communications, and general reliability. The work has collectively identified six emerging issues for future work. Based on these studies, a 10-hour course on rescue robotics, worth 1 continuing education unit of credit, has been created and is available. Rescue robots and sensors are available for loan upon request.
With over 100 models of unmanned vehicles now available for military and civilian safety, security, or rescue applications, it is important for agencies to establish acceptance testing. However, there appear to be no general guidelines for what constitutes a reasonable acceptance test. This paper describes i) a preliminary method for acceptance testing by a customer of the mechanical and electrical components of an unmanned ground vehicle system, ii) how it has been applied to a man-packable micro-robot, and iii) the value of testing both to ensure that the customer has a workable system and to improve design. The test method automated the operation of the robot to repeatedly exercise all aspects and combinations of components on the robot for 6 hours. The acceptance testing process uncovered many failures consistent with those shown to occur in the field, showing that testing by the user does predict failures. The process also demonstrated that testing by the manufacturer can provide important design data that can be used to identify, diagnose, and prevent long-term problems. The structured testing environment also showed that sensor systems can be used to predict errors and changes in performance, as well as to uncover unmodeled behavior in the system.
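The automated exercise described above amounts to a timed loop over every actuated subsystem. A minimal sketch in Java, assuming a hypothetical Component interface for each subsystem (not the paper's actual test harness):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

/** Minimal sketch of an automated acceptance-test loop; the Component interface is hypothetical. */
public class AcceptanceTest {

    /** One exercisable subsystem, e.g. a track motor, camera tilt, or light. */
    interface Component {
        String name();
        boolean exercise();   // run one actuation cycle; false indicates a fault
    }

    static void run(List<Component> components, Duration duration) throws InterruptedException {
        Instant end = Instant.now().plus(duration);
        int cycle = 0;
        while (Instant.now().isBefore(end)) {
            cycle++;
            for (Component c : components) {
                boolean ok = c.exercise();
                System.out.printf("cycle %d  %-12s %s%n", cycle, c.name(), ok ? "OK" : "FAULT");
            }
            Thread.sleep(1000);   // pace the cycles so the log remains readable
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // The study exercised the robot for 6 hours; real components would replace the empty list.
        run(List.of(), Duration.ofHours(6));
    }
}
```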
This paper describes an extension of scripts, which have been used to control sequences of robot behavior, to facilitate human-robot coordination. The script mechanism permits the human both to conduct expected, complementary activities with the robot and to intervene opportunistically by taking direct control. Scripts address the six major issues associated with human-robot coordination. They allow the human to visualize the robot's mental model of the situation and build a better overall understanding of the situation and of what level of autonomy or intervention is needed. The mechanism also maintains synchronization of the world and robot models so that control can be seamlessly transferred between human and robot while eliminating "coordination surprise". The extended script mechanism and its implementation in Java on an Inuktun micro-VGTV robot for the technical search task in urban search and rescue are described.
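As a rough illustration of the idea (not the paper's implementation), a script can be represented as an ordered list of steps sharing a world model, with a flag that lets the human pause the sequence and take direct control at any point:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

/** Illustrative sketch of a coordination script; class and method names are hypothetical. */
public class SearchScript {

    interface Step { void execute(WorldModel model); }

    /** Shared world model kept in sync so control can pass between human and robot. */
    static class WorldModel { /* pose, victim candidates, sensor state, ... */ }

    private final List<Step> steps;
    private final AtomicBoolean humanHasControl = new AtomicBoolean(false);

    SearchScript(List<Step> steps) { this.steps = steps; }

    /** The human opportunistically intervenes; the script pauses rather than fighting for control. */
    void requestDirectControl() { humanHasControl.set(true); }
    void returnControl()        { humanHasControl.set(false); }

    void run(WorldModel model) {
        for (Step step : steps) {
            while (humanHasControl.get()) {
                // Robot yields; the world model keeps being updated from telemetry,
                // so resuming the script causes no "coordination surprise".
                Thread.onSpinWait();
            }
            step.execute(model);
        }
    }
}
```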
Robot and sensor networks are needed for safety, security, and rescue applications such as port security and
reconnaissance during a disaster. These applications rely on real-time transmission of images, which generally saturate the
available wireless network infrastructure. Knowledge-based compression is a method for reducing the video frame
transmission rate between robots or sensors and remote operators. Because images may need to be archived as evidence
and/or distributed to multiple applications with different post processing needs, lossy compression schemes, such as MPEG,
H.26x, etc., are not acceptable. This work proposes a lossless video server system consisting of three classes of filters
(redundancy, task, and priority) which use different levels of knowledge (local sensed environment, human factors associated
with a local task, and relative global priority of a task) at the application layer of the network. It demonstrates the
redundancy and task filters for a realistic robot search scenario. The redundancy filter is shown to reduce the overall
transmission bandwidth by 24.07% to 33.42%, and, when combined with the task filter, reduces overall transmission
bandwidth by 59.08% to 67.83%. By itself, the task filter has the capability to reduce transmission bandwidth by 32.95% to
33.78%. While knowledge-based compression generally does not reach the same levels of reduction as MPEG, there are
instances where the system outperforms MPEG encoding.
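To make the redundancy filter concrete, a minimal sketch in Java: it suppresses a frame when too few pixels have changed relative to the last frame actually transmitted. The grayscale representation and thresholds are illustrative assumptions, not the paper's parameters:

```java
/** Sketch of a redundancy filter: drop frames too similar to the last one sent (thresholds are illustrative). */
public class RedundancyFilter {

    private int[] lastSent;            // grayscale pixels of the last transmitted frame
    private final double threshold;    // fraction of pixels that must change to warrant transmission

    public RedundancyFilter(double threshold) { this.threshold = threshold; }

    /** Returns true if the frame should be transmitted (losslessly) rather than suppressed. */
    public boolean shouldTransmit(int[] frame) {
        if (lastSent == null || lastSent.length != frame.length) {
            lastSent = frame.clone();
            return true;
        }
        int changed = 0;
        for (int i = 0; i < frame.length; i++) {
            if (Math.abs(frame[i] - lastSent[i]) > 10) changed++;  // per-pixel change tolerance
        }
        if ((double) changed / frame.length > threshold) {
            lastSent = frame.clone();
            return true;                // scene changed enough: send the frame
        }
        return false;                   // redundant frame: skip it, saving bandwidth
    }
}
```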
This paper summarizes a study on refueling and rearming FCS-related vehicles in the field. In keeping with the FCS philosophy, the resupply process should be unmanned. For the purposes of the
study, a resupply (RS) system is defined as an autonomous robotic platform, which interacts with a combat vehicle (CV). The purpose of the interaction is transfer of liquid fuel and/or ammunition. The RS
may be capable of providing both the fuel and the ammunition simultaneously, or there may be separate resupply vehicles, each dedicated to one consumable. The CV may be resupplied while on-station and operational or may be taken out of service and moved to a resupply point.
The study proposed a resupply system consisting of two RS vehicles (i.e., separate vehicles for fuel and ammunition) to resupply the CV. Four families of scenarios were considered: the RS moves to the CV ("door to door"), the RS and CV both move ("rendezvous"), the CV moves to the RS ("filling station"), and the CV moves to a pod dropped nearby. The "door to door" scenario was rated the most feasible, with
the rendezvous scenario a close second.
The study ascertained that RS vehicles using a robotic manipulator as the transfer mechanism are based on best engineering practices and constitute a low-risk design. The required level of autonomy to accomplish resupply is teleoperation, though a mixed-initiative approach poses relatively low risk. A teleoperated or simple mixed-initiative system can be completed in 3 years and offers significant performance benefits. Full autonomy was determined to be too high risk, but mixed-initiative work could serve as a basis for evolving to full autonomy.
The study also considered the impact of emerging technologies on resupply. The key technical risks in ascending order of investment priority are: platform design, munitions transfer mechanism, and
human-robot interaction (HRI). The platform design and munitions transfer mechanism are lower risk than HRI, which is a relatively new aspect of system design. The key enabling technologies are range
sensing and terrain reasoning. Breakthroughs in these areas would lower the risk of full autonomy modes of operation.
Marsupial robots are a type of heterogeneous mobile robot team: a mother robot transports, supports, and recovers one or more daughter robots. This paper covers the marsupial robot concept, its application to law enforcement, and recent results in collaborative teleoperation for the related task of urban search and rescue.
Since the 1995 Oklahoma City bombing and the Kobe, Japan, earthquake, robotics researchers have been considering search and rescue as a humanitarian research domain. The recent devastation in Turkey and Taiwan, combined with the new RoboCup Rescue and AAAI Urban Search and Rescue robot competitions, may encourage more research. However, roboticists generally do not have access to domain experts: the emergency workers or first responders. This paper shares our understanding of urban search and rescue, based on our active research in this area and training sessions with rescue workers from the Hillsborough County (Florida) Fire Departments. The paper is intended to be a stepping stone for roboticists entering the field.
This paper describes our progress in near-range (within 0 to 2 meters) ego-centric docking using vision under variable lighting conditions (indoors, outdoors, dusk). The docking behavior is fully autonomous and reactive: the robot directly responds to the ratio of the number of pixels of two colored fiducials without constructing an explicit model of the landmark. This is similar to visual homing in insects and has a low computational complexity of O(n^2) and a fast update rate. In order to segment the colored fiducials accurately under these variable lighting conditions, the spherical coordinate transform (SCT) color space is used, rather than RGB or HSV, in conjunction with an adaptive segmentation algorithm. Experiments were conducted with a daughter robot docking with a mother robot. Results showed that 1) vision-based docking is faster than teleoperation yet equivalent in performance and 2) adaptive segmentation is more robust under challenging lighting conditions, including outdoors.
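The reactive core of such a behavior can be sketched briefly. The mapping below from the two fiducial pixel counts to a steering command, and its gain, are illustrative assumptions rather than the paper's control law:

```java
/** Sketch of reactive pixel-ratio docking: steer from the relative sizes of two segmented fiducials. */
public class FiducialDocking {

    /**
     * @param upperPixels number of pixels segmented as the upper fiducial color
     * @param lowerPixels number of pixels segmented as the lower fiducial color
     * @return steering command in [-1, 1]; the gain and mapping are illustrative, not the paper's values
     */
    public static double steeringCommand(int upperPixels, int lowerPixels) {
        int total = upperPixels + lowerPixels;
        if (total == 0) return 0.0;                    // fiducials not visible: hold course (or trigger search)
        double ratio = (double) upperPixels / total;   // 0.5 when the fiducials appear balanced
        double error = ratio - 0.5;                    // perspective skews the ratio when off axis
        double gain = 2.0;
        return Math.max(-1.0, Math.min(1.0, gain * error));
    }
}
```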
Teams of heterogeneous mobile robots are a key aspect of future unmanned systems for operations in complex and dynamic urban environments, such as that envisioned by DARPA's Tactical Mobile Robotics program. One example of an interaction among such team members is the docking of a small robot of limited sensory and processing capability with a larger, more capable robot. Applications for such docking include the transfer of power, data, and material, as well as physically combined maneuver or manipulation. A two-robot system is considered in this paper. The smaller 'throwable' robot contains a video camera capable of imaging the larger 'packable' robot and transmitting the imagery. The packable robot can both sense the throwable robot through an onboard camera and sense itself through the throwable robot's transmitted video, and it is capable of processing imagery from either source. This paper describes recent results in the development of control and sensing strategies for automatic mid-range docking of these two robots. Decisions addressed include the selection of which robot's image sensor to use and which robot to maneuver. Initial experimental results are presented for docking using sensor data from each robot.
This paper reviews on-going collaborative efforts between the Colorado School of Mines and Clark Atlanta University in cooperative assistance for coordination and control of multiple vehicles. It reports on progress in developing an intelligent assistance agent (IAA) for aiding a human operator in diagnosing problems and generating recovery strategies in remote ground robots. The current work has focused on the identification and incorporation of categories of additional information from multiple robots and other agents. These categories are: mission-related sources, such as peer robots working nearby; facility-related sources, such as security cameras; and opportunistically available agents, such as overhead satellites or humans working in the area as part of another mission. The incorporation of additional sources of information requires enhancements to the previously developed teleVIA architecture. In particular, the teleVIA IAA must provide more strategic management support, more sophisticated viewpoint and data management and presentation, and simplified control of the additional agents for diagnosis and recovery activities. These enhancements are encapsulated in software agents within the IAA.
Occupancy grids are a common representation for mobile robot activities such as obstacle avoidance, map making, localization, and place recognition. An important issue is how to accurately update the grid with new sensor readings rapidly enough to support real-time navigation. The HIMM/VFH methodology works well for a robot navigating at high speeds, but the algorithms show poor performance at lower speeds in cluttered areas. Our approach to overcoming these deficiencies is twofold. First, Dempster-Shafer theory is used for fusion because it provides a well-understood updating scheme and has been demonstrated to have additional desirable properties. Second, the number of grid elements updated varies as a function of the robot's velocity. Experiments with Clementine, a Denning-Branch MRV4 mobile robot, demonstrate that varying the beam width with the velocity of the robot improves the updating of an occupancy grid using Dempster-Shafer theory versus that of HIMM. Furthermore, the Dempster-Shafer method tends to handle noise better and to produce smoother, more realistic maps.
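For a two-element frame {empty, occupied} plus ignorance (Θ), the per-cell Dempster's-rule update can be written compactly. The Java sketch below is illustrative; the sonar belief model that supplies the incoming masses is not shown:

```java
/** Sketch of Dempster's rule on the two-element frame {empty, occupied} used for grid updating. */
public class DempsterCell {
    double empty = 0.0, occupied = 0.0, unknown = 1.0;   // start at total ignorance

    /** Combine the cell's current belief with a new sensor reading (incoming masses must sum to 1). */
    void update(double mEmpty, double mOccupied, double mUnknown) {
        double conflict = empty * mOccupied + occupied * mEmpty;
        double norm = 1.0 - conflict;
        if (norm <= 0.0) return;                          // totally conflicting evidence: leave cell unchanged
        double e = (empty * mEmpty + empty * mUnknown + unknown * mEmpty) / norm;
        double o = (occupied * mOccupied + occupied * mUnknown + unknown * mOccupied) / norm;
        double u = (unknown * mUnknown) / norm;
        empty = e; occupied = o; unknown = u;
    }
}
```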
The Colorado School of Mines (CSM) entry placed fourth in the 1995 International Unmanned Ground Robotics Competition sponsored by the Association for Unmanned Vehicle Systems (AUVS). Clementine 2, a battery-powered children's jeep outfitted with a 100 MHz Pentium field computer, a camcorder, and a panning ultrasonic range finder, served as the platform. The objectives of the CSM team were to gain familiarity with the CSM architecture by applying it to a well-defined problem, to evaluate existing computer vision based road-following techniques, and to gain practical experience in using multiple sensing modalities. The entry used the behavioral portion of the CSM hybrid deliberative/reactive architecture, which divided robot activities into four strategic and tactical behaviors: vision-based follow-path, ultrasonic-based avoid-obstacle, pan-camera, and speed-control using inclinometers. This paper details the motivation behind the CSM entry, the approach taken, and lessons learned.
In previous work, we have developed a generate, test, and debug methodology for detecting, classifying, and responding to sensing failures in autonomous and semi-autonomous mobile robots. An important issue has arisen from these efforts: how much time is there available to classify the cause of the failure and determine an alternative sensing strategy before the robot mission must be terminated? In this paper, we consider the impact of time for teleoperation applications where a remote robot attempts to autonomously maintain sensing in the presence of failures yet has the option to contact the local for further assistance. Time limits are determined by using evidential reasoning with a novel generalization of Dempster-Shafer theory. Generalized Dempster-Shafer theory is used to estimate the time remaining until the robot behavior must be suspended because of uncertainty; this becomes the time limit on autonomous exception handling at the remote. If the remote cannot complete exception handling in this time or needs assistance, responsibility is passed to the local, while the remote assumes a `safe' state. An intelligent assistant then facilitates human intervention, either directing the remote without human assistance or coordinating data collection and presentation to the operator within time limits imposed by the mission. The impact of time on exception handling activities is demonstrated using video camera sensor data.
This paper presents a practical, applied comparison of Bayesian and Dempster-Shafer techniques useful for managing uncertainty in sensing. Three formulations of the same example are presented: a Bayesian formulation, a naive Dempster-Shafer formulation, and a Dempster-Shafer approach using a refined frame of discernment. The Bayesian formulation and the Dempster-Shafer formulation with a refined frame of discernment yield similar results; however, information content and representations differ between the two methods. Bayesian theory requires a more explicit formulation of conditioning and the prior probabilities of events. Dempster-Shafer theory embeds conditioning information into its belief function and does not rely on prior knowledge, making it appropriate for situations where it is difficult to collect or posit such probabilities, or to isolate their contribution.
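For orientation, the two updating rules being compared have the following general forms (stated here for reference; the paper's specific worked example is not reproduced):

\[
m_{1\oplus 2}(A) \;=\; \frac{\sum_{B \cap C = A} m_1(B)\, m_2(C)}{1 - \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)} \quad (A \neq \emptyset),
\qquad
P(H_j \mid e) \;=\; \frac{P(e \mid H_j)\, P(H_j)}{\sum_i P(e \mid H_i)\, P(H_i)} .
\]

Bayes' rule requires the priors \(P(H_i)\) explicitly, while Dempster's rule of combination operates directly on the evidential masses \(m_1, m_2\); this is the trade-off the abstract describes.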
This paper presents a control scheme for handling sensing failures (sensor malfunctions, significant degradations in performance due to changes in the environment, and errant expectations) in sensor fusion for autonomous mobile robots. The advantages of the exception handling mechanism are that it emphasizes a fast response to sensing failures, is able to use only a partial causal model of sensing failure, and leads to a graceful degradation of sensing if the sensing failure cannot be compensated for. The exception handling mechanism consists of two modules: error classification and error recovery. The error classification module in the exception handler attempts to classify the type and source(s) of the error using a modified generate-and-test procedure. If the source of the error is isolated, the error recovery module examines its cache of recovery schemes, which either repair or replace the current sensing configuration. If the failure is due to an error in expectation or cannot be identified, the planner is alerted. Experiments using actual sensor data collected by the CSM Mobile Robotics/Machine Perception Laboratory's Denning mobile robot demonstrate the operation of the exception handling mechanism.
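The two-module structure (classification followed by cached recovery) can be outlined as follows; the class and method names are hypothetical and the classifier is left as a placeholder:

```java
/** Sketch of the two-module exception handler described above (names are hypothetical). */
public class SensingExceptionHandler {

    enum ErrorSource { SENSOR_MALFUNCTION, ENVIRONMENT_CHANGE, ERRANT_EXPECTATION, UNKNOWN }

    interface RecoveryScheme { boolean repairOrReplace(); }
    interface SensingFailure { }
    interface Planner { void alert(SensingFailure f); }

    private final java.util.Map<ErrorSource, RecoveryScheme> recoveryCache =
            new java.util.EnumMap<>(ErrorSource.class);

    /** Generate-and-test classification followed by a cache lookup; alerts the planner when recovery fails. */
    public void handle(SensingFailure failure, Planner planner) {
        ErrorSource source = classify(failure);                       // generate-and-test over candidate causes
        RecoveryScheme scheme = recoveryCache.get(source);
        if (source == ErrorSource.ERRANT_EXPECTATION || source == ErrorSource.UNKNOWN
                || scheme == null || !scheme.repairOrReplace()) {
            planner.alert(failure);                                   // graceful degradation: defer to the planner
        }
    }

    private ErrorSource classify(SensingFailure failure) { return ErrorSource.UNKNOWN; }  // placeholder
}
```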
In this paper we present a method for ultrasonic robot localization without a priori world models, utilizing the ideas of distinctive places and open-space attraction. This method was incorporated into a move-to-station behavior, which was demonstrated on the Georgia Tech mobile robot. The key aspect of our approach was to use Dempster-Shafer theory to overcome the problem of uncertainty in the range measurements returned by the sensors. The state of the world was modeled as a two-element frame of discernment (Θ): empty and occupied. The world itself was represented as a grid, with the belief in whether a grid element was empty or occupied initially set to total ignorance (don't know) at the beginning of the robot behavior. A belief model of the range readings was used to compute the belief of points in the environment being empty, occupied, or unknown. Belief from repeated measurements updated the world map according to Dempster's rule of combination. The current belief in the empty space was used to construct a weighted centroid of the empty space (or station) after each move of the robot. By moving toward this center of mass and continually adding to the beliefs of the points in the environment, the robot iteratively moved to the center of the open space. Experiments demonstrated that the robot was able to localize itself with a repeatability of 1.5 feet in a 33-foot-square room, regardless of the starting position within the open space. This method is contrasted with a technique that did not explicitly model the belief in the range readings; that technique was unable to consistently converge on the center of the room within ten moves.
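The weighted-centroid step reduces to averaging cell positions weighted by their belief in emptiness. A brief Java sketch, with an assumed row/column grid layout and cell size (illustrative, not the paper's code):

```java
/** Sketch: weighted centroid of the believed-empty space over a belief grid (grid layout is illustrative). */
public class EmptySpaceCentroid {

    /** beliefEmpty[y][x] holds Bel(empty) for each grid cell after Dempster combination. */
    public static double[] centroid(double[][] beliefEmpty, double cellSize) {
        double sum = 0.0, cx = 0.0, cy = 0.0;
        for (int y = 0; y < beliefEmpty.length; y++) {
            for (int x = 0; x < beliefEmpty[y].length; x++) {
                double w = beliefEmpty[y][x];
                sum += w;
                cx += w * x;
                cy += w * y;
            }
        }
        if (sum == 0.0) return new double[] {0.0, 0.0};    // no evidence yet
        // Convert cell indices to metric coordinates; the robot moves toward this point after each step.
        return new double[] { (cx / sum) * cellSize, (cy / sum) * cellSize };
    }
}
```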
This paper presents a state-based control scheme for sensor fusion in autonomous mobile robots. States specify the sensing strategy for each sensor; the feedback rule to be applied to the sensors; and a set of failure conditions, which signal abnormal or inconsistent evidence. Experiments were conducted in the surveillance domain, where the robot was to determine whether three different areas in a cluttered tool room remained unchanged after each visit. The data, collected from four sensors (a Sony Hi8 color camcorder, a Pulnix black and white camera, an Inframetrics true infrared camera, and Polaroid ultrasonic transducers) and fused using the sensor fusion effects (SFX) architecture, support the claims that the state-based control scheme produces percepts that are consistent with the scene being viewed, can improve the global belief in a percept, can improve the sensing quality of the robot, and is robust under a variety of conditions.
The combination of imperfect evidence contributed by different sensors is a basic problem for sensor fusion in autonomous mobile robots. Current implementations of sensor fusion systems are restricted to fusing only certain classes of evidence because of the lack of a general framework for the combination of evidence. The authors' approach to this problem is to first develop a model of sensor fusion without committing to a particular theory of evidence, then to formulate a combination-of-evidence framework based on the requirements of the model. Their previous work has proposed such a model. This paper discusses the evidential demands of the model and one possible implementation using Dempster-Shafer theory. Three drawbacks of Dempster-Shafer theory (computational intractability, weak assumptions of statistical independence, and counterintuitive averaging of strongly biased evidence) are eliminated by applying the theory within the constraints of the model. An example based on simulated sensor data illustrates this application of Dempster-Shafer theory.
Sensor fusion in robotics, particularly for navigation of autonomous mobile robots, has typically been addressed as a “bottom-up” or data driven process. This has led to a variety of systems that, although somewhat successful, have been difficult to expand to include additional sensors or extend to other domains. The approach taken here is to specify and develop a control scheme which considers the sensor fusion process in the context of the intended actions of the robot, knowledge of the environment, and the available sensor suite.
The resulting control scheme exploits environmental knowledge in three ways in order to reduce processing. First, the control structure supports adaptation of the sensor fusion process to the environment and intended action. An appropriate set of candidate features is selected from the feature extraction library during the investigatory phase. Fusion occurs during the performatory phase in one of three global states: complete sensor fusion; fusion with the possibility of discordance and resultant recalibration of dependent perceptual sources; and fusion with the possibility of discordance and resultant suppression of discordant perceptual sources. Second, the states themselves use environmental knowledge to improve the fusion results as well as the sensing quality. Knowledge about how a sensor behaves under certain environmental conditions can lead to the exclusion of suspect readings from the fusion process. Third, the control scheme allows the system to respond to unexpected or catastrophic changes in the environment or sensors by permitting transitions between states. When an unacceptable discordance is detected between features, the investigatory phase is re-invoked and the system is reconfigured and instantiated in a new state.
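The three global states and the discordance-triggered transition can be pictured with a small state machine. This Java sketch is only an outline of the control flow; the actual feature selection and discordance measures are not reproduced here:

```java
/** Sketch of the three global fusion states and the discordance-triggered transition (names are illustrative). */
public class FusionController {

    enum State { COMPLETE_FUSION, FUSION_WITH_RECALIBRATION, FUSION_WITH_SUPPRESSION }

    private State state = State.COMPLETE_FUSION;

    /** Called each performatory cycle; unacceptable discordance re-invokes the investigatory phase. */
    State step(double discordance, double acceptableLimit) {
        if (discordance > acceptableLimit) {
            state = investigate();            // reconfigure and instantiate a new state
        }
        return state;
    }

    private State investigate() {
        // Placeholder for re-selecting candidate features and choosing the next global state.
        return State.FUSION_WITH_SUPPRESSION;
    }
}
```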