As our robots develop greater range, autonomy, and sensor payloads, the desire to interact more fully with the environment has led researchers to the natural step of integrating manipulators on their machines. This paper explores the integration of locomotion and manipulation in diverse forms, defining and then testing seven general design rules. The kinematics study reports on system-level choices for integrating limbs on robot bodies, with a critique using metrics that include workspace, strength, occlusion and view factors for perception systems, and the relative proportions of the robot to its work site. The design axioms were tested with three cases of radically different mobile robots, spanning the spectrum from gantries to wheeled vehicles to free climbers in space. In all cases, the robot design is considered as an anatomical study, comparing the relative proportions of the limbs and the robot's body. For the wheeled vehicles in particular, sub-categories were considered to identify the impact of multiple arms and the interplay of manipulation and perception requirements. The study is completed with a look at actual robot prototypes built by the authors in all archetypes and sub-types. The design axioms were tested and found to be both valid and useful as governing principles in each case.
One of the main tall poles that must be overcome to develop a fully autonomous vehicle is the inability of the computer to understand its surrounding environment to the level required for the intended task. The military mission scenario requires a robot to interact with a complex, unstructured, dynamic environment. The Mobile Autonomous Robot Software Self-Composing Adaptive Programming Environment (MarsScape) perception research addresses three aspects of the problem: sensor system design, processing architectures, and algorithm enhancements. A prototype perception system has been demonstrated on robotic High Mobility Multi-purpose Wheeled Vehicle and All Terrain Vehicle testbeds. This paper addresses the tall pole of processing requirements and the performance improvements obtained with the selected MarsScape processing architecture. The processor chosen is the Motorola AltiVec G4 PowerPC (PPC) (1998 Motorola, Inc.), a highly parallelized commercial Single Instruction Multiple Data processor. Both derived perception benchmarks and actual perception subsystem code are benchmarked and compared against previous Demo II/Semi-autonomous Surrogate Vehicle processing architectures, as well as desktop Personal Computers (PCs). Performance gains are highlighted with progress to date, and lessons learned and future directions are described.
We introduce a new algorithm to cover an unknown space with a homogeneous team of circular mobile robots. Our approach builds on a previous single-robot coverage algorithm that divides the target space into cells, each of which can be covered with simple back-and-forth motions. The advantage of our method is that we plan in a two-dimensional configuration space for a team of n robots, bypassing the 2n-dimensional configuration space. The approach is semi-decentralized: robot teams cover the space independently of each other, but robots within a team communicate state and share information. An implementation of the algorithm in simulation is also detailed.
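The primitive underlying this cell decomposition is the simple back-and-forth sweep of one cell. As a minimal sketch (function name and parameters are my own, not the paper's), the waypoints for sweeping a single axis-aligned rectangular cell can be generated as:

```python
def boustrophedon_waypoints(width, height, tool_width):
    """Back-and-forth ('lawnmower') waypoints covering an axis-aligned
    rectangular cell -- the motion each cell is swept with."""
    waypoints, x, going_up = [], tool_width / 2.0, True
    while x <= width:
        # alternate sweep direction on each pass
        ys = (0.0, float(height)) if going_up else (float(height), 0.0)
        waypoints += [(x, ys[0]), (x, ys[1])]
        x += tool_width
        going_up = not going_up
    return waypoints
```

The paper's contribution, allocating cells among the n robots and coordinating teams, sits on top of this primitive and is not shown here.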
There are many methods that address navigation and path planning for mobile robots with non-holonomic motion constraints using clever techniques and exploiting application-specific data, but it is always better not to have any such constraints at all. In this document we re-examine the capabilities of some popular driving mechanisms from a different perspective and describe a method to obtain holonomic motion using those mechanisms. The main idea is not to use the center of the driving mechanism (the usual choice) as the reference point for our calculations, but to select another point whose motion in the x-y plane is not constrained in any direction, and which is also a logical and useful substitute for the center. In addition to deriving the forward and inverse kinematics equations for the new reference point, we explain how this method further simplifies the design of a controller that computes motion commands for the robot. To illustrate the ideas, we present graphs plotted using the actual parameter values for a synchronous-drive research robot.
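The reference-point idea can be made concrete for a standard unicycle-type (differential or synchro) drive model; the sketch below uses my own function names, and the paper derives its own equations for the specific mechanisms it examines. A point a distance d ahead of the drive center along the heading has an unconstrained planar velocity, and the map between body commands (v, ω) and that point's velocity is invertible whenever d ≠ 0:

```python
import math

def offset_point_velocity(v, omega, theta, d):
    """Forward map: body commands (v, omega) at heading theta ->
    planar velocity of the point a distance d ahead of the drive center."""
    xdot = v * math.cos(theta) - d * omega * math.sin(theta)
    ydot = v * math.sin(theta) + d * omega * math.cos(theta)
    return xdot, ydot

def body_commands(xdot, ydot, theta, d):
    """Inverse map: desired offset-point velocity -> (v, omega).
    Well-defined for motion in any direction as long as d != 0."""
    v = xdot * math.cos(theta) + ydot * math.sin(theta)
    omega = (-xdot * math.sin(theta) + ydot * math.cos(theta)) / d
    return v, omega
```

Because the inverse map exists for every desired (ẋ, ẏ), a controller can command the offset point directly in Cartesian space, which is the simplification the abstract alludes to.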
A sensory mapping method, called Staggered Hierarchical Mapping (SHM), and its developmental algorithm are described in this paper. SHM is a model motivated by human early visual pathways, including processing performed by the retina, the Lateral Geniculate Nucleus (LGN), and the primary visual cortex. The work reported here concerns not only the design of such a series of processors but also their autonomous development. The primary goal is to address a long-standing open problem of visual information processing: processing elements dedicated to receptive fields of different retinal positions and different scales (sizes) must function concurrently, in robotic and other applications in unstructured environments. A new Incremental Principal Component Analysis (IPCA) method is used to automatically develop orientation-sensitive and other needed filters. For fast convergence, the lateral inhibition of sensory neurons is modelled by what we call residual images. A set of staggered receptive fields models the pattern of positioning of processing cells. From sequentially sensed video frames, the proposed developmental algorithm develops a hierarchy of filters whose outputs are uncorrelated within each layer, with the scale of receptive fields increasing from lower to higher layers. To study the completeness of the representation generated by the SHM, we experimentally show that the response produced at any layer is sufficient to reconstruct the corresponding retinal image. As an application domain, we describe our preliminary experiments on autonomous navigation with the SAIL robot, and why a mapping like the SHM is needed in our next phase of work on vision-guided autonomous navigation in outdoor environments.
Reinforcement learning techniques have been successful in allowing an agent to learn a policy for achieving tasks. The overall behavior of the agent can be controlled with an appropriate reward function. However, the policy that is learned is fixed to this reward function: if the user wishes to change his or her preference about how the task is achieved, the agent must be retrained with a new reward function. We address this challenge by combining Spreading Activation Networks and Reinforcement Learning in an approach we call SAN-RL. This approach provides the agent with a causal structure, the spreading activation network, relating goals to the actions that can achieve those goals. This enables the agent to select actions relative to the goal priorities. We combine this with reinforcement learning to enable the agent to learn a policy. Together, these approaches enable the learning of configurable behaviors: policies that can be adapted to meet the current preferences. We compare the approach with Q-learning on a robot navigation task. We demonstrate that SAN-RL exhibits goal-directed behavior before learning, exploits the causal structure of the network to focus its search during learning, and yields configurable behaviors after learning.
Writing control code for mobile robots can be a very time-consuming process. Even for apparently simple tasks, it is often difficult to specify in detail how the robot should accomplish them. Robot control code is typically full of magic numbers that must be painstakingly set for each environment the robot must operate in. The idea of having a robot learn how to accomplish a task, rather than being told explicitly, is an appealing one. It seems easier and much more intuitive for the programmer to specify what the robot should be doing, and let it learn the fine details of how to do it. In this paper, we describe JAQL, a framework for efficient learning on mobile robots, and present the results of using it to learn control policies for simple tasks.
Mobile robot navigation tasks are subject to motion stochasticity arising from the robot's local controllers, which casts the navigational task into a Markov decision process framework. The MDP may, however, be intractably large; in this work we consider the prioritized package delivery problem which yields an exponentially large state space. We demonstrate that the bulk of this state space is tied to a sub-problem that is an instance of the traveling salesdroid problem and that exponential improvements in solution time for the MDP can be achieved by addressing the TSP sub-problem separately. This process produces a suboptimal solution, but we show that the degree of suboptimality can be controlled by employing more effective TSP approximators. The key contribution is the demonstration that MDP solution techniques can substantially benefit from careful application of well-understood deterministic optimization techniques.
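To illustrate the kind of deterministic sub-solver such a decomposition can call, here is a greedy nearest-neighbour tour builder, one of the simplest TSP approximators (a generic sketch for intuition, not necessarily the approximator the paper evaluates):

```python
import math

def nearest_neighbour_tour(depot, stops):
    """Greedy nearest-neighbour tour over 2-D points: a cheap TSP
    approximator of the kind the MDP decomposition can plug in for
    the delivery-order sub-problem."""
    tour, here, remaining = [depot], depot, list(stops)
    while remaining:
        # always visit the closest unvisited stop next
        nxt = min(remaining, key=lambda p: math.dist(here, p))
        remaining.remove(nxt)
        tour.append(nxt)
        here = nxt
    return tour
```

Swapping in a tighter approximator (e.g. 2-opt refinement) is exactly the knob the abstract describes for controlling the degree of suboptimality.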
We present a system architecture for robust target following with a mobile robot. The system is based on tracking multiple cues in binocular stereo images using the XVision toolkit. Fusion of complementary information in the images, including texture, color, and depth, combined with fast, optimized processing, reduces the possibility of losing the tracked object in a dynamic scene with several moving targets on intersecting paths. The presented system is capable of detecting objects obstructing its way, as well as gaps. It supports applications in more cluttered terrain, where a wheeled mobile robot cannot take the same path as a walking person. We describe the basic principles of the fast feature extraction and tracking in the luminance, chrominance, and disparity domains. The optimized tracking algorithms compensate for illumination variations and perspective distortions, as presented in our previous publications about the XVision system.
When computing optical flow using region-based matching, several problems arise. One is that the optical flow cannot be reliably computed at most locations; particularly troublesome are those with little or no texture. Some confidence measures exist for optical flow, but they either succumb to the aperture problem, or they require that the optical flow be computed, a potentially costly computation with results discarded if unreliable. Another problem is that the normal patch dissimilarity measures do not take image sampling into account, causing the optical flow not to be found in high-contrast areas. I first give an efficient algorithm for determining image locations likely to have reliably correct optical flow. I then derive a patch dissimilarity measure insensitive to image sampling, by extending the work of Birchfield and Tomasi to the computation of two-dimensional optical flow. Finally, I show how to adapt sub-pixel optical flow estimation by quadratic fitting to this new patch dissimilarity measure.
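For background, the original one-dimensional Birchfield-Tomasi measure (which the abstract extends to two-dimensional flow) compares a pixel not against a single sample but against the linearly interpolated neighbourhood of its match, so a sub-pixel sampling shift costs nothing. A minimal sketch with my own function names:

```python
def bt_dissimilarity(IL, IR, xl, xr):
    """Sampling-insensitive pixel dissimilarity in the style of
    Birchfield & Tomasi (1-D). IL, IR are 1-D intensity lists;
    xl, xr are interior integer indices."""
    def half_samples(I, x):
        # linearly interpolated intensities half a pixel to each side of x
        return 0.5 * (I[x - 1] + I[x]), 0.5 * (I[x] + I[x + 1])
    def one_sided(Ia, xa, Ib, xb):
        lo, hi = half_samples(Ib, xb)
        imin = min(lo, hi, Ib[xb])
        imax = max(lo, hi, Ib[xb])
        # zero if Ia[xa] falls inside the interpolated interval
        return max(0, Ia[xa] - imax, imin - Ia[xa])
    # symmetric: take the smaller of the two one-sided measures
    return min(one_sided(IL, xl, IR, xr), one_sided(IR, xr, IL, xl))
```

With a half-pixel-shifted signal the plain absolute difference is nonzero while this measure is zero, which is exactly the sampling insensitivity the abstract exploits.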
A method for perception-based egocentric navigation of mobile robots is described. Each robot has a local short-term memory structure called the Sensory EgoSphere (SES), which is indexed by azimuth, elevation, and time. Directional sensory processing modules write information on the SES at the location corresponding to the source direction. Each robot has a partial map of its operational area that it has received a priori. The map is populated with landmarks and is not necessarily metrically accurate. Each robot is given a goal location and a route plan. The route plan is a set of via-points that are not used directly. Instead, a robot uses each point to construct a Landmark EgoSphere (LES), a circular projection of the landmarks from the map onto an EgoSphere centered at the via-point. Under normal circumstances, the LES will be mostly unaffected by slight variations in the via-point location. Thus, the route plan is transformed into a set of via-regions, each described by an LES. A robot navigates by comparing the next LES in its route plan to the current contents of its SES. It heads toward the indicated landmarks until its SES matches the LES sufficiently to indicate that the robot is near the suggested via-point. The proposed method is particularly useful for enabling the exchange of robust route information between robots under low-data-rate communications constraints. An example of such an exchange is given.
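An azimuth-only sketch of the LES construction and SES comparison, with hypothetical landmark names and a simple bearing-tolerance test rather than the paper's actual matching criterion:

```python
import math

def landmark_egosphere(landmarks, viewpoint):
    """Project 2-D map landmarks onto azimuth bearings (degrees) as seen
    from a viewpoint: a 1-D slice of the EgoSphere idea."""
    vx, vy = viewpoint
    return sorted(
        (math.degrees(math.atan2(ly - vy, lx - vx)) % 360.0, name)
        for name, (lx, ly) in landmarks.items()
    )

def egosphere_match(les, ses, tol_deg=15.0):
    """Fraction of expected (LES) bearings with a same-named observation
    in the SES within tol_deg of circular angular distance."""
    hits = 0
    for az_l, name in les:
        for az_s, name_s in ses:
            diff = abs(az_l - az_s) % 360.0
            if name_s == name and min(diff, 360.0 - diff) <= tol_deg:
                hits += 1
                break
    return hits / len(les)
```

Because bearings change slowly with small viewpoint shifts, the match score stays high anywhere inside the via-region, which is the robustness property the abstract relies on.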
This paper describes a mobile robot system designed to explore and map an indoor area such as is encountered in urban search and rescue mock-ups. The robot uses homogeneous artificial landmarks deployed during exploration for localization as it constructs a map, determining landmark distance and bearing with groundplane calculations from a single camera and using Kalman filtering techniques to perform localization. When implemented on a Magellan II mobile robot, the localization technique correctly localized the robot while exploring and mapping.
In autonomous mobile robot applications, local obstacle avoidance methods are frequently coupled with an initialized global map in order to achieve a complete navigation scheme. Unfortunately, global map information is often unavailable for global map initialization. In these situations, the designer must develop a navigation scheme such that the mobile robot either doesn't need a global map to achieve its goal or it builds the global map autonomously. This work presents a simple algorithm for developing a quasi-free space global map. The algorithm is based on the premise that the robot will be given multiple attempts at a particular goal. During early attempts, the mobile robot explores using solely local obstacle avoidance. While exploring, the robot records where it has been and uses this information on subsequent attempts. Further, this paper also outlines the look-ahead method by which the global map is implemented. Finally, both simulated and experimental results are presented. Testing of this algorithm was performed on an outdoor mobile robot, known as Navigator, which was developed at Virginia Tech.
In this paper, a new method of initial localization for a mobile robot employing only a laser range finder is proposed. We accomplish this by matching Complete Line Segments (CLS) derived from the current range-finder scan against the a priori map. The definitions of Complete Line Segments (CLS) and Sub-Complete Line Segments (SLS), as well as their properties and decision rules, are given. We show that, in theory, initial localization can be accomplished even if only one pair of CLS is matched exactly, since a CLS represents a basic but complete feature of the environment. Compared with other approaches, this method is therefore simpler in principle and can be implemented with lower computational cost. The whole process of initial localization for a mobile robot is described in detail. Finally, promising real-world experiments involving the initial localization modules on our ATRV2 mobile robot are described.
Odometry, also referred to as dead reckoning, is one of the least expensive and most widely used methods for mobile robot localization. However, mobile robots implementing dead reckoning are plagued with inaccuracy caused by systematic and non-systematic errors. In many cases, the most dominant source of inaccuracy is systematic error. Systematic errors are caused by differences between the nominal and the actual dimensions of vehicle parameters (such as wheel radius and wheelbase measurements). Because systematic errors are inherent to the vehicle, the dead reckoning inaccuracy grows unbounded. Fortunately, it is possible to largely eliminate systematic errors by calibrating the parameters such that the differences between the nominal dimensions and the actual dimensions are minimized. This work presents a method for calibration of mobile robot parameters using an optimization engine. A cost function is developed based on the UMBmark (University of Michigan Benchmark) test pattern. This method is presented as a simple, time-efficient calibration tool for use during startup procedures of a differentially driven mobile robot. Comparisons are made between this method and an analytical calibration method developed at the University of Michigan. Results show that this tool consistently gives greater than 50% improvement in overall dead reckoning accuracy on an outdoor mobile robot, with respect to itself prior to calibration.
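The UMBmark test pattern reduces repeated square-path runs to a single scalar: the larger of the centroid offsets of the return-position errors from clockwise and counter-clockwise runs. A cost of this form is the kind of quantity an optimization engine can minimize over the vehicle parameters. A sketch with hypothetical argument names, following the published benchmark definition:

```python
import math

def umbmark_cost(errors_cw, errors_ccw):
    """UMBmark-style scalar measure of systematic odometry error.
    Each argument is a list of (x, y) return-position errors (metres)
    from repeated runs of a square path, clockwise and counter-clockwise."""
    def centroid_radius(errors):
        n = len(errors)
        xc = sum(x for x, _ in errors) / n
        yc = sum(y for _, y in errors) / n
        return math.hypot(xc, yc)
    # benchmark value: the larger of the two centroid offsets
    return max(centroid_radius(errors_cw), centroid_radius(errors_ccw))
```

Using centroids rather than individual runs averages out non-systematic noise, so the cost isolates the systematic component the calibration is meant to remove.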
Almost all robot navigation systems work indoors. Outdoor robot navigation systems offer the potential for new application areas. The biggest single obstacle to building effective robot navigation systems is the lack of accurate wide-area sensors for trackers that report the locations and orientations of objects in an environment. Active (sensor-emitter) tracking technologies require powered-device installation, limiting their use to prepared areas that are relatively free of natural or man-made interference sources. Our hybrid tracker combines rate gyros and accelerometers with a compass/tilt orientation sensor and a DGPS system. Sensor distortions, delays, and drift required compensation to achieve good results. The measurements from the sensors are fused together to compensate for each other's limitations. Analysis and experimental results demonstrate the system's effectiveness. The paper presents a field experiment for a low-cost strapdown IMU (Inertial Measurement Unit)/DGPS combination, with data processing for the determination of the 2-D components of position (trajectory), velocity, and heading. In the present approach we have neglected earth rotation and gravity variations, because of the poor gyroscope sensitivities of our low-cost ISA (Inertial Sensor Assembly) and because of the relatively small area of the trajectory. The scope of this experiment was to test the feasibility of an integrated DGPS/IMU system of this type and to develop a field evaluation procedure for such a combination.
This work describes an approach to obtaining the best estimate of the position of the outdoor robot ROJO, a low-cost lawnmower intended to perform unmanned precision-agriculture tasks such as the spraying of pesticides in horticulture. For continuous localization of ROJO, two redundant sensors have been installed onboard: a DGPS receiver with submetric precision and an odometric system. The DGPS system allows absolute positioning of the vehicle in the field, but GPS failures in signal reception due to obstacles and electrical and meteorological disturbances led us to integrate the odometric system. Thus, a robust odometer based upon magnetic strip sensors has been designed and integrated in the vehicle. These sensors continuously deliver the position of the vehicle relative to its initial position, covering the DGPS blindness periods. They give an approximate location of the vehicle in the field that can in turn be conveniently updated and corrected by the DGPS. To provide the best estimate, a fusion algorithm has been proposed and validated, wherein the best estimate is calculated as the maximum of the joint probability function obtained from the position estimates of both onboard sensors. Results are presented to show the performance of the proposed sensor fusion technique.
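If both onboard estimates are modelled as independent Gaussians (an assumption of this sketch, not stated in the abstract), the maximum of the joint probability has a closed form: the inverse-variance weighted mean. In one dimension:

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Fuse two independent 1-D Gaussian position estimates by maximizing
    their joint (product) density. The argmax is the inverse-variance
    weighted mean, and the fused variance shrinks below either input."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var
```

When the DGPS fix is lost, its variance can simply be inflated, so the fused estimate degrades gracefully toward the odometric one.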
For fast and accurate self-localization of mobile robots in complex workspaces, landmarks can be used very efficiently. In this paper, we propose a simple color landmark model for self-localization and a fast landmark detection and tracking algorithm based on the proposed landmark model. We develop a color landmark with a symmetric and repetitive structure, which shows invariant color histogram characteristics under some geometric distortions. Detection and tracking of the model are accomplished by a factored sampling technique in which color similarity is estimated by color histogram intersection. We also use the color similarity to update the color histogram of the landmark model for robust tracking under illumination change. We demonstrate the feasibility of the proposed technique through experiments in cluttered indoor environments. Experimental results show that the proposed landmark is suitable for use in cluttered environments and that the proposed method detects and tracks the landmark in cluttered scenes robustly and in near real time.
This paper presents a time-varying feedback controller for stabilization of a tracked mobile robot (TMR). Considering its nonholonomic property, we first transform the TMR kinematics into an advantageous model. Then the principle of exponential stabilization is discussed using a Lyapunov-type argument. The back-stepping technique is applied to obtain a global exponential regulator for the TMR model, and simulation results are given.
This paper addresses the tracking problem of a tracked mobile robot (TMR). After the kinematic model of the nonholonomic system has been discussed, a robust motion controller based on the backstepping technique is proposed for the TMR. Singular perturbations, which may occur in practical situations, are also taken into account in order to ensure that the tracking error of the closed-loop system converges to zero. Simulation results are provided to validate that the proposed controller ensures that the TMR asymptotically tracks the desired trajectory.
At present, spherical tank manufacture remains at the level of manual welding or semi-automation. In order to improve the quality of weld seams and guarantee the safe operation of spherical tanks, automatic welding equipment is needed urgently. An intelligent wheeled mobile robot equipped with a CCD-based visual sensor has been developed in this research to achieve better weld quality. A special mechanical structure, based on a wheeled mobile robot body, has been proposed to adhere to and move reliably and flexibly on the surface of a spherical tank. A 3-DOF welding manipulator is fixed on the robot to carry out welding tasks. A CCD sensor is used to detect the weld seam for trajectory planning of both the mobile robot and the welding torch; a control strategy for the nonholonomic system with redundant DOF has been put forward to realize accurate tracing by the weld torch, and an intelligent controller has been designed. In this paper, the mechanical structure of the robot, the principle of the CCD sensor, the tracing model for the robot and welding torch, and the intelligent controller are each presented in detail. Experiments show that this robot can carry out all-position welding tasks freely on the surface of a tank with high weld-torch tracing accuracy (up to +/- 0.5 mm).
A new navigation system is described for a mobile robot moving around in man-made environments such as hallways in a building. The system is based on a commercial three-wheel mobile platform with the addition of a Linux-based laptop computer, a Radio Frequency Identification (RFID) tag sensor, and a vision system. At critical junctions, such as the intersection of two passages, the navigation system must identify the robot's location on a given map. We propose a method using RFID tags as landmarks. Each RFID tag has a unique ID number corresponding to its location on the map. The navigation system can decide the next movement (left-turn, right-turn, and so on) toward a given goal based on this number. The navigation system can also automatically follow walls using the vision system. Since the equipment setup is very simple and the navigation system is easily combined with general mobile robot systems, our proposed technique would be useful for real-world robotic applications such as intelligent navigation for motorized wheelchairs.
Manned missions to other planetary bodies will rely heavily on robotics and automation to enhance the operational safety and capabilities of the crew. In particular, the movement and sensing capabilities of humans in spacesuits are severely constrained. Thus, an important class of robot will be those that accompany humans during extra-vehicular activity (EVA) and provide assistance -- tool transport, video documentation, sample collection, etc. In 1999, NASA engaged in a set of field tests in California called ASRO (AStronaut-ROver), in which a space-suited test subject collaborated with the tele-operated Marsokhod mobile robot, controlled by scientists at a remote location. From the lessons learned in the ASRO tests, the EVA Robotic Assistant project was started at NASA's Johnson Space Center to provide a testbed for continued research in astronaut-robot interaction and cooperation. In September 2000, NASA conducted two weeks of field tests in Arizona at three planetary surface analog sites. Three scenarios were tested requiring cooperation between a space-suited astronaut and the autonomous EVA Robotic Assistant: Power Cable Deployment, Solar Panel Deployment, and Geologist's Assistant. In this paper, we describe the ERA project in detail, and report on results from the Arizona field tests.
This paper presents a novel smooth-trajectory planning algorithm which uses the natural behavior model of a mobile robot (MR). The shortest path in the free configuration space is obtained using the visibility graph method. It is then modified according to dynamic constraints that are implicitly included in the natural behavior of the mobile robot. The modified path becomes a smooth, easily tractable, time-optimal trajectory: for every point on it, the translational/steering velocities, accelerations, and arrival times are known. Because of its simplicity and low computational time, the algorithm is applicable to real-time dynamic configuration spaces.
This paper presents a novel algorithmic architecture for the coordination and control of large-scale distributed robot teams, derived from constructs found within the human immune system. Using this as a guide, the Immunology-derived Distributed Autonomous Robotics Architecture (IDARA) distributes tasks so that broad, all-purpose actions are refined and followed by specific, mediated responses based on each unit's utility and capability to address the system's perceived need(s) in a timely manner. This method improves on initial developments in this area by including often-overlooked interactions of the innate immune system, resulting in a stronger first-order, general response mechanism. This allows for rapid reactions in dynamic environments, especially those lacking significant a priori information. As characterized via computer simulation of a self-healing mobile minefield having up to 7,500 mines and 2,750 robots, IDARA provides an efficient, communications-light, and scalable architecture that yields significant operation and performance improvements for large-scale multi-robot coordination and control.