Autonomous mobile robotic teams are increasingly used to explore indoor environments. Accurately modeling the world around the robot, and describing the robot's interaction with that world, greatly increases the robot's ability to act autonomously. This paper demonstrates the ability of autonomous robotic teams to find objects of interest. A novel feature of our approach is object discovery and its use to augment the mapping and navigation process. The generated map can then be decomposed into semantic regions while also considering the distance and line of sight to anchor points. The advantage of this approach is that the robot can return a dense map of the region around an object of interest. The robustness of the approach is demonstrated on multiple platforms in indoor environments, with the objective of discovering objects of interest.
Mobile robots are already widely used by first responders in both civilian and military operations. Our current goal is to provide the human team with all the information available from an unknown environment quickly and accurately. The robots also need to explore autonomously, because tele-operating more than two robots is very difficult and demands one operator per robot.
In this paper the authors describe the results of several experiments performed on behalf of the MAST CTA. The exploration strategies developed for these experiments use from two to nine robots which, by sharing information, are able to explore and map an unknown environment. Each robot maintains a local map of the environment and transmits its measurements to a central computer, which fuses all the data into a global map. This computer, called the map coordinator, sends exploration goals to the robot teams so that the environment is explored as quickly as possible. The performance of our exploration strategies was evaluated in different scenarios and tested on two different mobile robot platforms.
Tactical situational awareness in unstructured and mixed indoor/outdoor scenarios is needed for urban combat as well as rescue operations. Two of the key functionalities needed by robot systems to function in an unknown environment are the ability to build a map of the environment and to determine their position within that map. In this paper, we present a strategy to build dense maps and to automatically close loops from 3D point clouds; this has been integrated into a mapping system dubbed OmniMapper. We present both the underlying system and experimental results from a variety of environments, including office buildings, military training facilities, and large-scale mixed indoor and outdoor environments.
The multi-robot patrolling task has practical relevance in surveillance, search and rescue, and security applications.
In this task, a team of robots must repeatedly visit areas in the environment, minimizing the time between visits
to each. A team of robots can perform this task efficiently; however, challenges remain related to team formation
and task assignment.
This paper presents an approach for monitoring patrolling performance and dynamically adjusting the task
assignment function based on observations of teammate performance. Experimental results are presented from
realistic simulations of a cooperative patrolling scenario, using a team of UAVs.
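The patrolling objective above, repeatedly visiting areas while minimizing the time between visits, is commonly formalized with a per-node idleness counter. The following is a minimal sketch under that reading; the graph, greedy policy, and tie-breaking are illustrative assumptions, not the paper's actual method:

```python
# Minimal idleness-based patrolling sketch: at each step, each robot
# greedily moves to the adjacent node with the highest idleness (time
# since last visit). Graph and policy are illustrative only.

def step(positions, idleness, neighbors):
    """Advance every robot one move and update idleness counters."""
    for i, pos in enumerate(positions):
        # Choose the adjacent node that has waited longest.
        positions[i] = max(neighbors[pos], key=lambda n: idleness[n])
    for n in idleness:
        idleness[n] += 1            # every node waits one more tick...
    for pos in positions:
        idleness[pos] = 0           # ...except freshly visited nodes

# Tiny 4-node ring patrolled by two robots.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
idleness = {n: 0 for n in neighbors}
positions = [0, 2]
for _ in range(8):
    step(positions, idleness, neighbors)
worst = max(idleness.values())      # worst-case idleness after 8 steps
```

Even this greedy policy keeps the worst-case idleness bounded on the ring; the paper's contribution is in monitoring teammate performance and reassigning tasks, which this sketch does not attempt.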
Auction-based methods are often used to perform distributed task allocation on multi-agent teams. Many
existing approaches to auctions assume fully cooperative team members. On in-situ and dynamically formed
teams, reciprocal collaboration may not always be a valid assumption.
This paper presents an approach for dynamically selecting auction partners based on observed team member
performance and shared reputation. In addition, we present the use of a shared reputation authority mechanism.
Finally, experiments are performed in simulation on multiple UAV platforms to highlight situations in which it
is better to enforce cooperation in auctions using this approach.
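One simple way to realize the partner-selection idea above is to filter bidders by a reputation score before clearing the auction. The scoring scheme, threshold, and agent names below are invented for illustration and are not the paper's actual mechanism:

```python
# Hedged sketch: admit only auction partners whose reputation (e.g. the
# fraction of tasks completed as promised) exceeds a threshold, then
# award the task to the lowest bid among trusted partners.

def award(bids, reputation, min_rep=0.6):
    """bids: {agent: cost}; reputation: {agent: score in [0, 1]}."""
    trusted = {a: c for a, c in bids.items() if reputation.get(a, 0.0) >= min_rep}
    if not trusted:                      # no trusted partner: fall back to full pool
        trusted = bids
    return min(trusted, key=trusted.get)

bids = {"uav1": 4.0, "uav2": 2.5, "uav3": 3.0}
reputation = {"uav1": 0.9, "uav2": 0.3, "uav3": 0.8}  # uav2 often defects
winner = award(bids, reputation)   # uav3 wins despite uav2's lower bid
```

A shared reputation authority, as mentioned above, would supply the `reputation` map from pooled observations rather than each agent's private history.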
Efficient and accurate 3D mapping is desirable in disaster recovery as well as urban warfare situations. The
speed with which these maps can be generated is vital to provide situational awareness in these situations. A
team of mobile robots can work together to build maps more quickly. We present an algorithm by which a team
of mobile robots can merge 2D and 3D measurements to build a 3D map, together with experiments performed
at a military test facility.
In distributed, heterogeneous, multi-agent teams, agents may have different capabilities and types of sensors.
Agents in dynamic environments will need to cooperate in real-time to perform tasks with minimal costs. Some
example scenarios include dynamic allocation of UAV and UGV robot teams to possible hurricane survivor
locations, search and rescue, and target detection.
Auction-based algorithms scale well because agents generally only need to communicate bid information. In
addition, the agents are able to perform their computations in parallel and can operate on local information.
Furthermore, it is easy to integrate humans and other vehicle types and sensor combinations into an auction
framework. However, standard auction mechanisms do not explicitly consider sensors with varying reliability.
The agents' sensor qualities should be explicitly accounted for. Consider a scenario with multiple agents, each
carrying a single sensor. The tasks in this case are to simply visit a location and detect a target. The sensors
are of varying quality, with some having a higher probability of target detection. The agents themselves may
have different capabilities, as well. The agents use knowledge of their environment to submit cost-based bids for
performing each task and an auction is used to perform the task allocation. This paper discusses techniques for
including a Bayesian formulation of target detection likelihood into this auction based framework for performing
task allocation across multi-agent heterogeneous teams. Analysis and results of experiments with multiple air
systems performing distributed target detection are also included.
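As a hedged illustration of folding detection probability into cost-based bids: with per-visit detection probability p, the expected number of visits before a detection is 1/p (geometric distribution), so one plausible bid is travel cost scaled by 1/p. This is an illustrative instance, not necessarily the paper's exact Bayesian formulation:

```python
# Sketch: bid = travel_cost / p_detect, i.e. travel cost scaled by the
# expected number of attempts (mean of a geometric distribution, 1/p)
# before the target is detected. Agents and numbers are illustrative.

def bid(travel_cost, p_detect):
    """Expected cost to achieve one detection with per-visit prob p_detect."""
    return travel_cost / p_detect

agents = {
    "uav_cheap_sensor": bid(travel_cost=5.0, p_detect=0.5),   # expected cost 10.0
    "uav_good_sensor":  bid(travel_cost=8.0, p_detect=0.9),   # expected cost ~8.9
}
winner = min(agents, key=agents.get)   # better sensor wins despite longer trip
```

The point of the scaling is visible in the example: the agent with the shorter path loses the auction because its unreliable sensor makes its expected cost to a confirmed detection higher.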
This paper describes the results of a Joint Experiment performed on behalf of the MAST CTA. The system developed for the Joint Experiment makes use of three robots which work together to explore and map an unknown environment. Each of the robots used in this experiment is equipped with a laser scanner for measuring walls and a camera for locating doorways. Information from both of these types of structures is concurrently incorporated into each robot's local map using a graph based SLAM technique.
A Distributed-Data-Fusion algorithm is used to efficiently combine local maps from each robot into a shared global map. Each robot computes a compressed local feature map and transmits it to neighboring robots, which allows each robot to merge its map with the maps of its neighbors. Each robot caches the compressed maps from its neighbors, allowing it to maintain a coherent map with a common frame of reference.
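The cache-and-merge idea can be illustrated with a toy sketch in which frame alignment is reduced to a known 2D offset per neighbor; real systems estimate these transforms, and the landmark names and offsets here are invented:

```python
# Toy sketch: each robot caches neighbors' compressed feature maps and
# merges them into its own frame via a known 2D offset per neighbor.
# Real systems estimate the transforms; offsets here are assumptions.

def merge(local_map, neighbor_maps, offsets):
    """local_map: {landmark_id: (x, y)}; neighbor_maps: {robot: map}."""
    merged = dict(local_map)
    for robot, nmap in neighbor_maps.items():
        dx, dy = offsets[robot]
        for lid, (x, y) in nmap.items():
            merged.setdefault(lid, (x + dx, y + dy))  # local view wins on conflict
    return merged

local = {"door_a": (1.0, 2.0)}
cache = {"r2": {"door_a": (0.0, 0.0), "wall_b": (3.0, 1.0)}}
global_map = merge(local, cache, offsets={"r2": (1.0, 2.0)})
```

Caching the compressed neighbor maps, as the abstract describes, means `merge` can be re-run locally at any time to keep a coherent map in a common frame even when a neighbor is temporarily out of contact.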
The robots utilize an exploration strategy to efficiently cover the unknown environment that allows collaboration over an unreliable communications channel. As each new branching point is discovered by a robot, it broadcasts the location of this point, along with the robot's path from a known landmark, to the other robots. When the next robot reaches a dead end, new branching points are allocated by auction. In the event of a communication interruption, the robot which observed a branching point will eventually explore it; the exploration therefore completes even in the face of communication failure.
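A minimal sketch of auctioning branching points to idle robots follows; the lowest travel-cost bid wins. Straight-line distance stands in for path cost here, and the robot names and coordinates are illustrative:

```python
# Sketch of auctioning branching points: each idle robot bids its travel
# cost to each open point; the lowest bidder wins. Straight-line distance
# is used for illustration; a real system would bid path cost on the map.
import math

def allocate(idle_robots, branch_points):
    """Greedily assign each open branching point to the cheapest idle robot."""
    assignments = {}
    for bp in branch_points:
        if not idle_robots:
            break
        bids = {r: math.dist(pos, bp) for r, pos in idle_robots.items()}
        winner = min(bids, key=bids.get)
        assignments[winner] = bp
        del idle_robots[winner]          # a robot takes one point at a time
    return assignments

robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0)}
points = [(9.0, 1.0), (1.0, 1.0)]
out = allocate(robots, points)           # r2 takes (9, 1); r1 takes (1, 1)
```

Because every robot keeps the broadcast branching points in a local list, an unclaimed point is not lost when communication drops: the robot that observed it still holds it and eventually explores it, which is the completeness property claimed above.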
Mobile manipulation in many respects represents the next generation of robot applications. An important part
of the design of such systems is the integration of techniques for navigation, recognition, control, and planning to
achieve a robust solution. To study this problem three different approaches to mobile manipulation have been
designed and implemented. A prototypical application that requires navigation and manipulation has been
chosen as a target for the systems. In this paper we present the basic design of the three systems and draw some
general lessons on design and implementation.
Robots have been successfully deployed within bomb squads all over the world for decades. Recent technical
improvements are increasing the prospects of achieving the same benefits for other high-risk professions. As the
number of applications increases, issues of collaboration and coordination come into question. Can several groups deploy
the same type of robot? Can they deploy the same methods? Can resources be shared? What characterizes the different
applications? What are the similarities and differences between different groups?
This paper reports on a study of four areas in which robots are already, or are about to be deployed: Military Operations
in Urban Terrain (MOUT), Military and Police Explosive Ordnance Disposal (EOD), Military Chemical Biological
Radiological Nuclear contamination control (CBRN), and Fire Fighting (FF). The aim of the study has been to achieve a
general overview across the four areas to survey and compare their similarities and differences. It has also been
investigated to what extent it is possible for them to deploy the same type of robot.
It was found that the groups share many requirements, but that they also have a few individual hard constraints. A
comparison across the groups showed the demands of man-portability, ability to access narrow premises, and ability to
handle objects of different weights to be decisive; two or three different sizes of robots will be needed to satisfy the
needs of the four areas.
The military have a considerable amount of experience from using robots for mine clearing and bomb removal. As new
technology emerges it is necessary to investigate the possibility of expanding robot use. This study has investigated an
Army company, specialized in urban operations, while fulfilling its tasks with the support of a PackBot Scout. The robot
was integrated and deployed as an ordinary component of the company, which included modifying and retraining a number of
standard behaviors to include the robot. This paper reports on the following issues: evaluation of missions where the
platform can be deployed, what technical improvements are the most desired, and what are the new risks introduced by
use of robots? Information was gathered through observation, interviews, and a questionnaire.
The results indicate that the robot is useful for reconnaissance and mapping. The users also anticipated that the robot
could be used to decrease the risks of IEDs, by either triggering them or neutralising them with a disruptor. The robot was
further considered useful for direct combat if armed, and for placing explosive charges against, for example, a door.
Autonomous rendering of maps, acquiring images, two-way audio, and improved sensing such as IR were considered
important improvements. The robot slowing down the pace of the unit was considered to be the main risk.
The excimer laser has proven to be the laser of choice in various biomedical applications for both soft and hard tissues. The excimer laser-tissue interaction is vastly different from that of other lasers due to the high energy of each photon, the short pulse duration, and the small volume of tissue affected. In addition to particle ejection, heat generation, and spectral emission, the interaction also produces acoustical disturbances both in the air and in the tissue. The plume dynamics were detected with a second laser (Nd:YAG at 532 nm) illuminating the particles and a CCD camera detecting the (90°) scattered radiation to form an image. A similar setup was used to detect the acoustical disturbances in the air, this time using the forward scattered radiation; to obtain information about the acoustical disturbances in the tissue, we designed and built an ultrasonic probe. The luminescence was measured with a time-resolved spectroscopy system. The thermal effects were measured with a thermal camera. By measuring these different effects, our understanding of the interaction is enhanced, the parameters for a specific medical laser application can be optimized for the best results, and each effect can be used as a real-time (before the next pulse) feedback control signal.
In this paper we demonstrate how principles of multiple objective decision making (MODM) can be used to analyze, design, and implement multiple-behavior-based systems. A structured methodology is achieved where each system objective, such as obstacle avoidance or convoying, is modeled as a behavior. Using MODM we formulate mechanisms for integrating such behaviors into more complex ones. A mobile robot navigation example is given where the principles of MODM are demonstrated. Simulated as well as real-world experiments show that a smooth blending of behaviors according to the principles of MODM enables coherent robot behavior.
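A common reading of MODM-style behavior fusion is a weighted combination of per-behavior utility functions over candidate actions. The sketch below is an illustration under that reading; the two behaviors, their weights, and the candidate headings are invented, and the paper's blending mechanism may differ:

```python
# Sketch: each behavior scores candidate headings in [0, 1]; the selected
# heading maximizes the weighted sum of objectives (a simple MODM
# scalarization). Behaviors, weights, and headings are illustrative.

def blend(candidates, behaviors, weights):
    """behaviors: list of functions heading -> utility in [0, 1]."""
    def utility(h):
        return sum(w * b(h) for b, w in zip(behaviors, weights))
    return max(candidates, key=utility)

def avoid(h):                     # obstacle dead ahead: penalize headings near 0
    return 0.0 if abs(h) < 10 else 1.0

def goal(h):                      # goal straight ahead: prefer headings near 0
    return 1.0 - abs(h) / 180.0

heading = blend(range(-90, 91, 15), [avoid, goal], [0.7, 0.3])
```

The blended result turns just far enough to clear the obstacle while staying as close as possible to the goal direction, which is the "smooth blending" behavior the abstract describes.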
In this paper we describe an approach to continuous scene modeling for an autonomous mobile robot navigation system operating in indoor environments. The continuous scene modeling is based on a cooperative sensor system that comprises two parts: binocular region based stereo, i.e., a passive depth extraction technique, and depth from focus, i.e., an active depth extraction technique. The region based stereo technique provides an overview of the scene aided by an a priori 3D world model. Since feature based stereo is vulnerable to occlusions, a depth from focus technique is selectively employed at locations where potential occlusions are detected in order to extract the correct depth. Scene maintenance over time is done by generation of expectation images, based on previously sensed scene objects, that are matched with images recorded by the on-robot stereo camera head. This match allows for detection of previously undetected scene objects and for updating of already known scene objects. The operation of the system is demonstrated on an indoor image sequence.
To achieve continuous operation and thus facilitate use of vision in a dynamic scenario, it is necessary to introduce a purpose for the visual processing. This provides information that may control the visual processing and thus limits the amount of resources needed to obtain the required results. A proposed architecture for vision systems is presented, along with an architecture for visual modules. This architecture enables both goal and data driven processing, with a potentially changing balance between the two modes. To illustrate the potential of the proposed architecture, a sample system for recovery of scene depth is presented, with experimental results which demonstrate a scalable performance.
Interpretation of images is a context dependent activity and, therefore, saturated with uncertainty. It is outlined how causal probabilistic networks (CPNs) together with strict and efficient Bayesian methods can be used for modeling contexts and for interpretation of findings. For illustration purposes a 2-agent system consisting of an interpreter using a CPN and a findings catcher using an image processor is designed. It is argued that the architecture should be a system of agents with instincts, each of them acting to improve their own situation. Going through an interpretation session, it is shown how the Bayesian paradigm very neatly supports the agents-with-instincts control paradigm such that the system through private benefit maximizing in an efficient way reaches its goal.
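As a toy illustration of the Bayesian interpretation step, a single finding from the image processor updates the posterior over a context hypothesis via Bayes' rule; the prior and likelihood values below are invented:

```python
# Toy Bayes update: posterior probability of a context hypothesis given
# one binary finding. Numbers are illustrative, not from the paper.

def posterior(prior, p_find_given_ctx, p_find_given_not):
    """P(context | finding) by Bayes' rule for a binary finding."""
    num = p_find_given_ctx * prior
    return num / (num + p_find_given_not * (1.0 - prior))

p = posterior(prior=0.2, p_find_given_ctx=0.9, p_find_given_not=0.1)
```

A causal probabilistic network generalizes this single update to many interdependent findings and context variables, propagating evidence through the network instead of applying Bayes' rule one finding at a time.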
Active vision is an area that has received increased attention over the past few years. At LIA/AUC, active vision is used for geometric scene modeling and interpretation. In order to pursue this research, a binocular robot camera head has been constructed. In this manuscript, the basic design of the head is outlined and the constructed prototype is described in some detail. Detailed specifications of the components are provided, together with a section on lessons learned from the construction of the prototype.