Generally, there are multiple sensor suites on existing rover platforms such as NASA's Sample Return Rover (SRR) and the Field Integrated Design and Operations (FIDO) rover at JPL. Traditionally, these sensor suites have been used in isolation for such tasks as planetary surface traversal. For example, although distant obstacle information is known from the narrow FOV navigation camera (NAVCAM) suite on SRR or FIDO, it is not explicitly used at this time for augmentation of the wide FOV hazard camera (HAZCAM) information for obstacle avoidance. This paper describes the development of advanced rover navigation techniques. These techniques include an algorithm for the generation of range maps using the fusion of information from the NAVCAMs and HAZCAMs, and an algorithm for registering range maps to an a priori model-based range map for relative rover position and orientation determination. Experimental results for each of these techniques are documented in this paper.
In this study a new Extended Information Filter (EIF) algorithm is applied to compute the heat transfer parameters of a lumped heat exchanger. Many industrial applications depend on the prediction of heat exchanger parameters. This algorithm produces an accurate prediction of the operating states and parameters. A state variable model is derived from the empirical correlation of the lumped heat exchanger. The derived system is nonlinear and stochastic. The problem of estimating the state variables and parameters is considered in the presence of random disturbance and measurement noise. The EIF is then used to produce an optimal estimate of the state and parameters of the heat exchanger. The results obtained using the EIF were compared to those obtained using the Extended Kalman Filter (EKF); the comparison shows that the estimation of dynamic nonlinear systems is best carried out using the EIF rather than the EKF.
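A minimal sketch of the information-form measurement update that distinguishes the EIF from the EKF: the filter carries the information matrix Y = P⁻¹ and information vector y = Y·x, so measurements are folded in additively. The scalar system, noise values, and measurements below are illustrative, not taken from the paper.

```python
import numpy as np

def eif_measurement_update(Y, y, z, h, H, R_inv):
    """Information-form (EIF) measurement update.

    Y : information matrix (inverse covariance)
    y : information vector (Y @ x)
    z : measurement, h : predicted measurement h(x)
    H : measurement Jacobian, R_inv : inverse measurement noise covariance
    """
    x = np.linalg.solve(Y, y)            # recover state at the linearization point
    Y_new = Y + H.T @ R_inv @ H          # information is simply added
    y_new = y + H.T @ R_inv @ (z - h + H @ x)
    return Y_new, y_new

# toy example: two noisy direct observations of a 1-D state
Y = np.array([[1.0]]); y = np.array([0.0])        # weak prior centered at 0
H = np.array([[1.0]]); R_inv = np.array([[4.0]])  # measurement noise var = 0.25
for z in (2.0, 2.2):
    x = np.linalg.solve(Y, y)
    Y, y = eif_measurement_update(Y, y, np.array([z]), H @ x, H, R_inv)
x_hat = np.linalg.solve(Y, y)[0]
```

The additive form is what makes the information filter attractive when many measurements must be fused: no Kalman gain or covariance inversion is needed at each update.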
In this paper we present a new technique for 3D free-form object recognition using neural networks, together with a novel surface representation scheme. This new scheme encodes the 3D surface information around a given surface point into a 2D image. This image is invariant to both position and orientation and is unique to that point; we therefore call it the Surface Point Signature (SPS). Using specially designed neural networks, the SPS images are used in the matching and recognition of 3D objects in a 3D-scanned scene.
Fusing information from multiple sensors is particularly difficult if the sensor systems which provide the information have very different characteristics, such as different data formats, reliabilities, signal-to-noise ratios, sampling rates, and so on. Furthermore, the information is often provided at different levels of abstraction, such as direct sensor output in contrast to expert knowledge or a priori information. We propose a new approach to sensor fusion which accounts for these problems. The basic idea is to represent the quantity to estimate as the state variable of a nonlinear dynamical system. The sensor signals act on this dynamics by specifying attractors with a limited range of influence. The dynamics relaxes into a stable state which results from the superposition of the attractors. By means of the limited attractor ranges, the dynamics automatically averages nonlinearly over corresponding sensor signals, while outliers stemming from temporarily de-calibrated or erroneous sensors are discarded. Self-calibration is achieved by also representing the sensor signals as dynamical states and specifying an attractor at the position of the fused estimate. By using the unified attractor representation, abstract information can be treated in the same way as direct sensor input. Furthermore, a mathematically well-defined and algebraically analyzable format for dynamic sensor information at various levels of abstraction is available. We verify our concept on an example of man-machine interaction: fusing visual and odometric sensor information for autonomous position estimation with acoustic guidance information for target acquisition by a mobile robot.
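The attractor dynamics described above can be sketched as relaxation of the fused estimate under forces whose range of influence is limited (here by a Gaussian window): sensors that agree pull the state toward their cluster, while a far-off outlier exerts negligible force. The window shape, gains, and sensor values are assumptions for illustration.

```python
import numpy as np

def fuse(sensors, ranges, x0=0.0, dt=0.05, steps=400):
    """Relax state x under a superposition of attractors with limited range.

    Each sensor value s defines an attractor pulling x toward s, but the
    force decays with distance (Gaussian window of width r), so outliers
    from de-calibrated sensors are automatically discarded.
    """
    x = x0
    for _ in range(steps):
        force = sum((s - x) * np.exp(-(s - x) ** 2 / (2 * r ** 2))
                    for s, r in zip(sensors, ranges))
        x += dt * force
    return x

# three agreeing sensors plus one outlier from a de-calibrated sensor
x_hat = fuse(sensors=[1.0, 1.1, 0.9, 8.0], ranges=[0.5] * 4, x0=1.05)
```

A plain average of the four readings would give 2.75; the limited-range dynamics instead settles near 1.0, the consensus of the three consistent sensors.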
This paper presents the use of Hall-Effect sensors for the creation of a low-cost, zero-friction, minimally invasive, precise positional sensor. Hall-Effect sensors are generally used for sensing the speed of geared wheels, as well as the angular position of wheels with an array of magnets mounted on them. In most cases the Hall-Effect sensor has only been used as a threshold sensing device for detecting the presence of a magnetic field.
As part of a project for the Defense Advanced Research Projects Agency, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing and testing the feasibility of a cooperative team of robotic sentry vehicles to guard a perimeter and to perform a surround task. This paper describes on-going activities in the development of these robotic sentry vehicles. To date, we have developed a robotic perimeter detection system which consists of eight 'Roving All Terrain Lunar Explorer Rovers' (RATLERs), a laptop-based base-station, and several Miniature Intrusion Detection Sensors (MIDS). A radio frequency receiver on each of the RATLER vehicles alerts the sentry vehicles of alarms from the hidden MIDS. When an alarm is received, each vehicle decides whether it should investigate the alarm based on the proximity of itself and the other vehicles to the alarm. As one vehicle attends an alarm, the other vehicles adjust their positions around the perimeter to better prepare for another alarm. For the surround task, both potential field and A* search path planners have been added to the base-station and vehicles. At the base-station, the operator specifies goal and exclusion regions on a GIS map. The path planner generates vehicle paths that are previewed by the operator. Once the operator has validated the paths, the appropriate information is downloaded to the vehicles. For the potential field path planner, the polygons and line segments that represent the obstacles and goals are downloaded to the vehicles, instead of the simulated paths. On board the vehicles, the same potential field path planner generates the path, except that it uses the true locations of the vehicle itself and the nearest neighboring vehicle. For the A* path planner, the actual path is downloaded to the vehicles because of limited on-board computational power.
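The A* search mentioned for the surround task can be sketched on a 4-connected occupancy grid with a Manhattan-distance heuristic; the grid and connectivity below are illustrative, not the GIS representation used at the base-station.

```python
import heapq

def astar(grid, start, goal):
    """A* path planner on a 4-connected occupancy grid (1 = obstacle).

    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    best_g = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if best_g.get(node, float("inf")) <= g:
            continue                                         # already expanded cheaper
        best_g[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

Because the full path is computed off-board, only the resulting cell sequence needs to be downloaded to a vehicle, which matches the abstract's point about limited on-board computation.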
In many cases cooperation between robots is implemented using explicit, perhaps complex, coordination protocols. However, research in behavior-based multirobot systems suggests that effective cooperative teams can be composed of agents using simple individual agent behaviors with limited or no communication. In this paper we propose behavioral diversity as an alternative cooperative strategy. Behavioral diversity refers to the extent to which agents use different behaviors for the various components of the task. It is not always the case, however, that diversity is advantageous. Results of experiments in robotic soccer and multirobot foraging tasks indicate that the utility of diversity depends on the task. This paper describes behaviorally diverse solutions to these tasks and provides a comparison that suggests why some tasks are suited for behavioral diversity and others are not.
We show how various levels of coordinated behavior may be achieved in a group of mobile robots by using a model of the interaction dynamics between a robot and the environment. We present augmented Markov models (AMMs) as a tool for capturing such interaction dynamics on-line and in real time, with little computational and storage overhead. We briefly describe the structure of AMMs, then demonstrate the application of the model for resolving group coordination issues arising from three sources: individual performance, group affiliation, and group performance. Corresponding respectively to these are the three experimental examples we present: fault detection, group membership based on ability and experience, and dynamic leader selection.
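The core of an AMM's on-line acquisition, growing transition statistics from a stream of behavior events with little storage overhead, might look like this minimal sketch; the state names are hypothetical, not from the paper.

```python
from collections import defaultdict

class AMM:
    """Minimal augmented-Markov-model sketch: states and transition
    probabilities are acquired on-line from an event stream."""

    def __init__(self):
        # nested counts: counts[prev][curr] = number of observed transitions
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, prev, curr):
        self.counts[prev][curr] += 1   # structure grows as new events arrive

    def p(self, prev, curr):
        """Maximum-likelihood transition probability estimate."""
        total = sum(self.counts[prev].values())
        return self.counts[prev][curr] / total if total else 0.0

amm = AMM()
events = ["search", "grab", "search", "home", "search", "grab"]
for a, b in zip(events, events[1:]):
    amm.observe(a, b)
p_grab = amm.p("search", "grab")
```

Deviations of these empirical transition probabilities from a robot's nominal model are one plausible signal for the fault-detection and performance-comparison uses the abstract describes.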
Mobile robot hardware and software is developing to the point where interesting applications for groups of such robots can be contemplated. We envision a set of mobots acting to map and perform surveillance or other tasks within an indoor environment (the Sense Net). A typical application of the Sense Net would be to detect survivors in buildings damaged by earthquake or other disaster, where human searchers would be put at risk. As a team, the Sense Net could reconnoiter a set of buildings faster, more reliably, and more comprehensively than an individual mobot. The team, for example, could dynamically form subteams to perform tasks that cannot be done by individual robots, such as measuring the range to a distant object by forming a long-baseline stereo sensor from a pair of mobots. In addition, the team could automatically reconfigure itself to handle contingencies such as disabled mobots. This paper is a report of our current progress in developing the Sense Net, after the first year of a two-year project. In our approach, each mobot has sufficient autonomy to perform several tasks, such as mapping unknown areas, navigating to specific positions, and detecting, tracking, characterizing, and classifying human and vehicular activity. We detail how some of these tasks are accomplished, and how the mobot group is tasked.
Preparation of planetary surface sites prior to a manned mission can be accomplished through the use of a robotic colony. The tasks of such a colony would include habitat deployment, setup of in-situ fuel and oxygen production plants, and beaconed road placement. The colony will have to possess a great deal of autonomy for this ambitious list. BISMARC is a behavior-based system for the control of multiple rovers on planetary surfaces. During the past few years the system has performed well in multiple cache retrieval simulations, and a certain degree of fault tolerance has been included in the design. In this paper we address the extensions to BISMARC that would be necessary for a robotic colony application. These extensions include a wider array of behaviors, better communication and mapping capabilities, and fault tolerance shared by the colony. The results of some simulations for habitat site preparation are reported.
While considerable progress has been made in recent years toward the development of multi-robot teams, much work remains to be done before these teams are used widely in real-world applications. Two particular needs toward this end are the development of mechanisms that enable robot teams to generate cooperative behaviors on their own, and the development of techniques that allow these teams to autonomously adapt their behavior over time as the environment or the robot team changes. This paper proposes the use of the Cooperative Multi-Robot Observation of Multiple Moving Targets (CMOMMT) application as a rich domain for studying the issues of multi-robot learning and adaptation. After discussing the need for learning and adaptation in multi-robot teams, this paper describes the CMOMMT application and its relevance to multi-robot learning. We discuss the results of the previously developed, hand-generated algorithm for CMOMMT and the potential for learning that was discovered from the hand-generated approach. We then describe the early work that has been done to generate multi-robot learning techniques for the CMOMMT application, as well as our ongoing research to develop approaches that give performance as good as, or better than, the hand-generated approach. The ultimate goal of this research is to develop techniques for multi-robot learning and adaptation in the CMOMMT application domain that will generalize to cooperative robot applications in other domains, thus making the practical use of multi-robot teams in a wide variety of real-world applications much closer to reality.
In the distributed object recognition problem at least two robots are placed randomly in an unknown environment. The robots have to identify the same object in the environment. We describe a solution to distributed object recognition that computes the transformation of coordinates between two robots' local coordinate frames. This transformation is then used as a translator between the robots' images. We present experimental results from an implementation of this algorithm.
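One standard way to compute such a transformation between two robots' local frames is a least-squares rigid alignment (the Kabsch/Procrustes method) over landmarks both robots can observe. The abstract does not state the paper's exact algorithm, so this 2D sketch is illustrative.

```python
import numpy as np

def rigid_transform_2d(A, B):
    """Least-squares rigid transform (R, t) with B ≈ R @ A + t,
    estimated from matched landmark columns (Kabsch/Procrustes method)."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# the same three landmarks seen in robot 1's frame (A) and robot 2's frame (B)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
theta = np.pi / 4
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
B = R_true @ A + np.array([[2.0], [1.0]])
R, t = rigid_transform_2d(A, B)
```

Once (R, t) is known, any point a robot reports in its own frame can be translated into the other robot's frame, which is exactly the "translator between the robots' images" role the abstract describes.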
Teams of heterogeneous mobile robots are a key aspect of future unmanned systems for operations in complex and dynamic urban environments, such as those envisioned by DARPA's Tactical Mobile Robotics program. One example of an interaction among such team members is the docking of a small robot of limited sensory and processing capability with a larger, more capable robot. Applications for such docking include the transfer of power, data, and material, as well as physically combined maneuver or manipulation. A two-robot system is considered in this paper. The smaller 'throwable' robot contains a video camera capable of imaging the larger 'packable' robot and transmitting the imagery. The packable robot can sense the throwable robot through an onboard camera, as well as sense itself through the throwable robot's transmitted video, and is capable of processing imagery from either source. This paper describes recent results in the development of control and sensing strategies for automatic mid-range docking of these two robots. Decisions addressed include the selection of which robot's image sensor to use and which robot to maneuver. Initial experimental results are presented for docking using sensor data from each robot.
This paper describes the design and construction of a cooperative, heterogeneous robot group comprised of one semi-autonomous aerial robot and two autonomous ground robots. The robots are designed to perform automated surveillance and reconnaissance of an urban outdoor area using onboard sensing. The ground vehicles have GPS, sonar for obstacle detection and avoidance, and a simple color-based vision system. Navigation is performed using an optimal mixture of odometry and GPS. The helicopter is equipped with a GPS/INS system, a camera, and a framegrabber. Each robot has an embedded 486 PC/104 processor running the QNX real-time operating system. Individual robot controllers are behavior-based and decentralized. We describe a control strategy and architecture that coordinates the robots with minimal top-down planning. The overall system is controlled at a high level by a single human operator using a specially designed control unit. The operator is able to task the group with a mission using a minimal amount of training. The group can re-task itself based on sensor inputs and can also be re-tasked by the operator. We describe a particular reconnaissance mission that the robots have been tested with, and lessons learned during the design and implementation. Our initial results with these experiments are encouraging given the challenging mechanics of the aerial robot. We conclude the paper with a discussion of ongoing and future work.
We investigate teams of complete autonomous agents that can collaborate toward achieving precise objectives in an adversarial dynamic environment. We have pursued two robotic soccer frameworks, emphasizing their different technical challenges. Creating effective members of a team is a challenging research problem. We first address this issue by introducing a team architecture organization which allows for a rich task decomposition between team members. The main contribution of this paper is our introduction of an action-selection algorithm that allows a teammate to anticipate the needs of other teammates. Anticipation is critical for maximizing the probability of successful collaboration in teams of agents. We show how our contribution applies to the two concrete robotic soccer frameworks and present controlled empirical results run in simulation. Anticipation was successfully used by both our CMUnited-98 simulator and CMUnited-98 small-robot teams in the RoboCup-98 competition. The two teams are RoboCup-98 world champions, each in its own league.
In the framework of our biologically inspired robotics approach, we describe a visually-guided demonstration model aircraft, the attitude of which is stabilized in yaw by means of a novel, non-emissive optical sensor having a small visual field. This aircraft incorporates a miniature scanning sensor consisting of two photoreceptors with adjacent visual axes driving a Local Motion Detector (LMD), which is made to perform low-amplitude scanning at a varying angular speed. Under these conditions, the signal output from the motion detector varies gradually with the angular position of a contrasting object placed in its visual field, actually making the complete system a non-emissive optical 'position sensor'. Its output, remarkably, (i) varies quasi-linearly with the angular position of the contrasting object, and (ii) remains largely invariant with respect to the distance to the object and its degree of contrast. We built a miniature, twin-engine, twin-propeller aircraft equipped with this visual position sensor. After incorporating the sensor into a visuomotor feedback loop enhanced by an inertial sensor, we established that the 'sighted aircraft' can fixate and track a dark edge placed in its visual field, thus opening the way for the development of visually-guided systems for controlling the attitude of micro-air vehicles, of the kind observed in insects such as hover-flies.
Constructive Biology means understanding biological mechanisms through building systems that exhibit life-like properties. Applications include learning engineering tricks from biological systems, as well as the validation of biological models. In particular, biological systems become temporally grounded in the course of development and experience. Researchers attempting to transcend mere reactivity have been inspired by the drives, motivations, homeostasis, hormonal control, and emotions of animals. In order to contextualize and modulate behavior, these ideas have been introduced into robotics and synthetic agents, while further flexibility is achieved by introducing learning. Broadening the temporal horizon further requires post-reactive techniques that address not only action in the now, although such action may perhaps be modulated by drives and affect. Support is needed for expressing and benefiting from past experiences, predictions of the future, and from interaction histories of the self with the world and with other agents. Mathematical methods provide a new way to support such grounding in the construction of post-reactive systems. Moreover, the communication of such mathematically encoded histories of experience between situated agents opens a route to narrative intelligence, analogous to communication or storytelling in societies.
This paper presents an experiment in which evolutionary algorithms are used for the development of neural controllers for salamander locomotion. The aim of the experiment is to investigate which kind of neural circuitry can produce the typical swimming and trotting gaits of the salamander, and to develop a synthetic approach to neurobiology by using genetic algorithms as a design tool. A 2D bio-mechanical simulation of the salamander's body is developed whose muscle contraction is determined by locomotion controllers simulated as continuous-time neural networks. While the connectivity of the neural circuitry underlying locomotion in the salamander has not yet been decoded, the general organization of the designed neural circuits corresponds to that hypothesized by neurobiologists for the real animal. In particular, the locomotion controllers are based on a body central pattern generator (CPG) corresponding to a lamprey-like swimming controller as developed by Ekeberg, and are extended with a limb CPG for controlling the salamander's limbs. A genetic algorithm is used to instantiate synaptic weights of the connections within the limb CPG and from the limb CPG to the body CPG, given a high-level description of the desired gaits. A set of biologically plausible controllers is thus developed which can produce neural activity and locomotion gaits very similar to those observed in the real salamander. By simply varying the external excitation applied to the network, the speed, direction and type of gait can be varied.
Emergence of stable gaits in locomotion robots is studied in this paper. A classifier system, implementing an instance-based reinforcement learning scheme, is used for sensory-motor control of an eight-legged mobile robot. An important feature of the classifier system is its ability to work with a continuous sensor space. The robot has no prior knowledge of the environment, its own internal model, or the goal coordinates. It is only assumed that the robot can acquire stable gaits by learning how to reach a light source. During the learning process the control system is self-organized by reinforcement signals: reaching the light source yields a global reward, forward motion receives a local reward, while stepping back and falling down receive a local punishment. The feasibility of the proposed self-organized system is tested in simulation and experiment. The control actions are specified at the leg level. It is shown that, as learning progresses, the number of action rules in the classifier system stabilizes at a certain level, corresponding to the acquired gait patterns.
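The reward scheme above, a global reward for reaching the light plus local rewards and punishments, can be sketched with a simple rule-strength update of the kind used in classifier systems. The numeric reward values and learning rate are assumptions; the paper's actual credit-assignment mechanism is not reproduced here.

```python
def reward(event):
    """Reinforcement signals following the paper's scheme (values assumed)."""
    return {"reached_light": 10.0,   # global reward
            "forward": 1.0,          # local reward
            "step_back": -1.0,       # local punishment
            "fall": -2.0}.get(event, 0.0)

def update_strength(strength, event, lr=0.2):
    """Move a rule's strength toward the received reinforcement."""
    return strength + lr * (reward(event) - strength)

# strength of one hypothetical action rule over a short event sequence
s = 0.0
for e in ["forward", "forward", "step_back", "forward", "reached_light"]:
    s = update_strength(s, e)
```

Rules whose strength decays below that of competitors are pruned, which is consistent with the abstract's observation that the rule count stabilizes as gaits are acquired.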
A method is presented for distributed position control of the tip of a high-degree-of-freedom tentacle. The scheme employs limited communication, which occurs only between adjacent degrees of freedom and is distributed as identical processes at each joint. An iterative approach to position control allows automatic path planning around obstacles. The resulting system allows complex tasks to be performed with limited computation and sensing resources. The algorithm was used as a basis for locomotion for a mobile robot with four tentacles.
New generations of modular and reconfigurable robotic systems with many degrees of freedom can be transformed to achieve different functions, modes of manipulation, and means of mobility, resulting in efficient multifunctional systems which adapt to complex environments. The design of modular distributed algorithms and architectures for control of these systems is particularly challenging since kinematic and dynamic performance must be maintained throughout a range of alternative physical reconfigurations. The 'Tetrobot' is a prototype modular system using parallel, variable-geometry truss-like mechanisms which can be reconfigured to create moving platforms, walking machines, manipulator arms, a pipe crawler, and other devices. Modular algorithms for distributed control and dynamic redundancy resolution of these systems will be discussed, and the principles of distributed control for modular systems generalize beyond these specific mechanisms. The resulting Tetrobot system has a range of interesting applications including space robotics, construction, mining, medical, undersea, and flexible manufacturing.
A concept of a self-repairable mechanical system composed of homogeneous mechanical units is described. We have developed a modular system capable of 'self-assembly' and self-repair. The former means a set of units can form a given shape of the system without outside help, and the latter means the system restores its original shape if an arbitrary part of the system is cut off. We show both 2D and 3D unit designs, and a distributed algorithm for the units.
The Accident Response Mobile Manipulator System (ARMMS) is a teleoperated emergency response vehicle that deploys two hydraulic manipulators, five cameras, and an array of sensors to the scene of an incident. It is operated from a remote base station that can be situated up to four kilometers away from the site. Recently, a modular telerobot control architecture called SMART was applied to ARMMS to improve the precision, safety, and operability of the manipulators on board. Using SMART, a prototype manipulator control system was developed in a couple of days, and an integrated working system was demonstrated within a couple of months. New capabilities such as camera-frame teleoperation, autonomous tool changeout and dual manipulator control have been incorporated. The final system incorporates twenty-two separate modules and implements seven different behavior modes. This paper describes the integration of SMART into the ARMMS system.
A modular reconfigurable parallel robot is designed and constructed for precision assembly and light machining tasks using a set of standard actuator components. Kinematic calibration of the reconfigurable system is necessary to enhance its positioning accuracy. A kinematic calibration method is proposed for a class of 3-legged reconfigurable parallel robots based on the local frame representation of the Product-of-Exponentials formula. In this method, both revolute and prismatic joint axes can be uniformly expressed in twist coordinates through their respective local link frames. Since these frames can be arbitrarily defined on the links, we are able to redefine a set of new local frames to describe the actual kinematics of the robot in the presence of kinematic errors. The kinematic calibration thus becomes a procedure of identifying the newly defined frames. These new frames are then used to update the nominal pose of the mobile platform. The kinematic error model of the robot is formulated based on the theory of differential geometry. The complex optimization method is employed for the identification of the kinematic parameters. A simulation example of calibrating a 3-legged modular parallel robot showed that the average positioning accuracy of the mobile platform improved significantly after calibration.
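The building block of any Product-of-Exponentials formulation is the SE(3) exponential of a joint twist scaled by the joint variable. A minimal sketch for a single revolute joint follows; the joint placement and home pose are illustrative, and the paper's local-frame calibration machinery is not reproduced.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_twist(w, v, theta):
    """SE(3) exponential of the unit twist (w, v) scaled by theta.

    Rotation block via Rodrigues' formula; translation block via the
    standard integral term G(theta) @ v.
    """
    W = hat(w)
    R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * W @ W
    G = np.eye(3) * theta + (1 - np.cos(theta)) * W + (theta - np.sin(theta)) * W @ W
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = G @ v
    return T

# one revolute joint about z through the origin; tool home pose offset along x
M = np.eye(4); M[0, 3] = 1.0
T = exp_twist(np.array([0.0, 0.0, 1.0]), np.zeros(3), np.pi / 2) @ M
tool = T[:3, 3]
```

Calibration in a PoE setting then amounts to adjusting the twist coordinates (equivalently, the local frames) so that the product of such exponentials reproduces measured platform poses.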
Metamorphic robots are an emerging field in which robots can dynamically reconfigure shape and size, not only as individual robots but also as complex structures formed by multiple robots. Such capability is highly desirable in tasks such as fire fighting, earthquake rescue, and battlefield scouting, where robots must cope with unexpected situations and obstacles and perform tasks that are difficult for fixed-shape robots. This research direction presents a number of technical research challenges. Specifically, metamorphic robots must be able to decompose and reassemble at will from a set of basic connectable modules. Such modules must be small, self-sufficient, and relatively homogeneous. In this paper, we present our approach to addressing these issues and describe the design of the CONRO modules. These modules are equipped with a low-power microprocessor, memory chips, sensors, actuators, power supplies, and miniature mechanical connectors used for communication and power sharing. We also describe a set of control mechanisms for controlling gaits and reconfigurations. We conclude the paper with a status report of the CONRO project and a list of the future work needed to fully realize the construction of the CONRO metamorphic robots.
Modular self-reconfigurable robots consist of large numbers of identical modules that possess the ability to reconfigure into different shapes as required by the task at hand. For example, such a robot could start out as a snake to traverse a narrow pipe, then re-assemble itself into a six-legged spider to move over uneven terrain, growing a pair of arms to pick up and manipulate an object at the same time. This paper examines the self-reconfiguration problem and presents a divide-and-conquer strategy to solve reconfiguration for a class of problems referred to as closed-chain reconfiguration. This class includes reconfigurable robots whose topologies are described by 1D combinatorial topology. A robot topology is first decomposed into a hierarchy of small 'substructures' belonging to a finite set. Basic reconfiguration operations between the substructures in the set are precomputed, optimized, and stored in a lookup table. The entire reconfiguration then consists of an ordered series of simple, precomputed sub-reconfigurations happening locally among the substructures.
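The lookup-table strategy can be sketched as concatenating precomputed sub-reconfigurations along a decomposition of the overall shape change; the substructure names and move primitives below are invented for illustration and are not the paper's catalog.

```python
# precomputed, optimized moves between members of a finite substructure set
# (names and moves are illustrative, not taken from the paper)
MOVES = {("snake", "loop"):   ["dock-ends"],
         ("loop", "snake"):   ["undock-ends"],
         ("snake", "spider"): ["fold-legs", "dock-hips"]}

def reconfigure(steps):
    """Concatenate precomputed sub-reconfigurations along an ordered
    decomposition of the overall shape change."""
    plan = []
    for a, b in zip(steps, steps[1:]):
        plan.extend(MOVES[(a, b)])   # each local step is a table lookup
    return plan

plan = reconfigure(["snake", "loop", "snake", "spider"])
```

Because each entry in the table is optimized once and reused, on-line planning reduces to ordering table lookups rather than searching the full configuration space.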
In this manuscript, we introduce I(CES)-Cubes, a class of 3D modular robotic systems capable of reconfiguring themselves in order to adapt to their environment. This is a bipartite system, i.e. a collection of (i) active elements capable of actuation, and (ii) passive elements acting as connectors between actuated elements. Active elements, called links, are 3-DOF manipulators that are capable of attaching/detaching themselves to/from the passive elements. The cubes can then be positioned and oriented using the links, which are independent mechatronic elements. The self-reconfiguration property enables the system to perform locomotion tasks over difficult terrain. For example, the system would be capable of moving over obstacles and climbing stairs. These tasks are performed by positioning and orienting cubes and links to form a 3D network with the required shape and position. This paper describes the design of the passive and active elements, the attachment mechanism, and several reconfiguration scenarios. Specifics of the hardware implementation and results of experiments with current prototypes are also given.
Modular robotics considers robots to be composed of distinct functional modules, which may be combined to perform tasks. Module combination allows functionality to be tailored to a task, but has a number of requirements. The notion of a 'module' must be clearly defined, and functionality specified in an unambiguous manner. The nature of the combination must be defined according to specific relationships between modules, and the consequences arising from combination - for modules and for the robot as a whole - need to be considered. The MARS model has been developed to allow reasoning about module combination, in terms of the ways in which modules may be combined, and the consequences that may arise from that combination. This paper defines a set of consequences which may arise during module combination.
A novel approach is presented to design an optimized robot manipulator based on the task description, taking into account the workspace and the dynamic properties inherent in the system, by selecting components from a library of available components. This approach requires representing a robot configuration using Denavit-Hartenberg parameters and defining the desired trajectory. A dynamic analysis package (DADS) is used to create and analyze the model automatically via an in-house developed code, which eliminates user interaction with DADS, enabling us to model any serial-link manipulator instantly. The results of the analysis are used by another program to evaluate a fitness value. This fitness value is then passed to the genetic algorithm, which is used as the optimization tool. An iteration is then carried out until the defined convergence criteria are met. The approach has been applied to the selection of geometric characteristics for the links of robotic manipulators of different configurations, with the objective being to minimize the required torque for the defined task.
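The GA-based selection loop might be sketched as follows, with a hypothetical stand-in fitness in place of the DADS torque analysis; the population size, operators, and parameter bounds are assumptions for illustration only.

```python
import random

def fitness(links):
    """Hypothetical stand-in for the DADS-based torque evaluation:
    penalize long (hence heavy) links. The real objective would come
    from dynamic simulation of the task trajectory."""
    return sum(l ** 2 for l in links)

def ga(pop_size=20, genes=3, gens=120, seed=1):
    """Minimal generational GA: truncation selection, averaging
    crossover, Gaussian mutation, clipped to assumed link-length bounds."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.1, 1.0) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                  # minimize required torque proxy
        parents = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 + rng.gauss(0.0, 0.02) for x, y in zip(a, b)]
            children.append([min(max(g, 0.1), 1.0) for g in child])
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
```

In the paper's setup, the `fitness` call would be replaced by a round trip through the automated DADS model generation and analysis, with the GA iterating until the convergence criteria are met.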
The Modular Manipulator System (MMS) is a general-purpose, user-configurable modular robotic system that provides rapid and inexpensive implementation of standard or customized manipulator geometries of arbitrary complexity, tailored to the needs of individual researchers or application engineers. Structures are configured from self-contained 1-DOF rotary or linear JOINT modules, which include an on-board control processor, power amplifier, DC servomotor, high-precision position sensor, and a fast, rigid connect/disconnect latch. The joints are connected together by passive rigid LINK tubes that define the manipulator geometry. These components are all offered in 5, 7, 10, 14 and 20 cm diameters, with power density and positional accuracy competitive with other commercial manipulators.
This paper describes the development of small mobile robots for collaborative surveillance tasks. Each of the robots, called Millibots, has only limited sensing, computation, and communication capabilities. However, by collaborating with other robots, they can still perform useful tasks. The task that we are considering is collaborative mapping and exploration inside buildings. To guarantee accessibility through narrow passageways, the robots are very small, approximately 6 by 6 by 6 cm. This size puts severe weight and power limitations on the design of the robots. To overcome these limitations, we are developing a modular system in which modules with different sensing, computation, and communication capabilities can be combined into a complete robot that is specifically designed for a given task. By making the design modular, we can avoid carrying around capabilities that are not essential for the current task. The concept of modularity also plays an important role in the design of the robot team. Here the 'modules' are the individual robots, and the design task addresses the problem of determining how many robots to use and what kind of capabilities to select on different robots such that the overall team is capable of completing its task. The paper addresses these design issues and illustrates them with the specific example of the Millibot team.
The Communications Research Laboratory has been studying the inspection technology needed for the first step of an 'Orbital Maintenance System' (OMS) that maintains space systems by inspecting satellites, re-orbiting useless satellites, and performing simple repairs of satellites in orbit. In this paper, we introduce a re-configurable modular-type manipulator for space utilization, and its control algorithm for the inspection of satellites in orbit. The manipulator system is interconnected by a joint mechanism which can be connected and disconnected by simple robotic motion and can also resist inertial loads during space operation. The modules are also specially designed for thermal, vacuum, and radiation conditions. The control processors were qualified in a piggyback flight in 2000. We have adopted a decentralized control algorithm for the redundant manipulator, which automatically adapts to manipulator reconfigurations.