This paper describes the conception and realization of a fast, robust, and general navigation system for a mobile (wheeled or legged) robot. A database representing a high-level map of the environment is generated and continuously updated. The first part describes the legged target vehicle and the hexapod robot being developed. The second section deals with spatial and temporal sensor fusion for dynamic environment modeling within an obstacle/free-space probabilistic classification grid. Ultrasonic sensors are used, other sensors are expected to be integrated, and a priori knowledge is taken into account. The ultrasonic sensors are controlled by the path-planning module. The third part concerns path planning; a simulation of a wheeled robot is also presented.
In the presence of obstacles of arbitrary shape, existing methods for shortest-path planning need to analyze all classes of topologically different paths, which is time-consuming. In this paper, we propose a potential diffusion network for path planning among obstacles of arbitrary shape that avoids the analysis of different topological paths. The starting and goal points of a mobile robot are treated as separate diffusion sources. For each source, we compute the potential field with a diffusion network that converges to a stable state representing the field. The sum of the two fields corresponding to the sources at the starting and goal points gives the composed potential distribution in the space. The optimal path is shown to be the iso-potential line of this field that begins at the starting point and terminates at the goal point. Since scenes in robotics are in general represented by discrete arrays of voxels, additional processing for the discrete case is also presented. Our method has been implemented and tested on complicated scenes with obstacles of arbitrary shape; the experimental results are satisfactory.
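The diffusion idea above can be illustrated with a minimal sketch: relax a grid potential seeded at the goal until it reaches a stable state, then follow the field from the start. This is a common simplification of the two-source, iso-potential formulation in the abstract (here a single goal-seeded field is ascended greedily); all names, grid sizes, and iteration counts are ours, not the paper's.

```python
import numpy as np

def diffuse(occ, source, iters=2000):
    """Relax a potential field to a stable state: the source cell is
    clamped to 1, obstacle cells to 0, and each free interior cell takes
    the mean of its four neighbours (a discrete diffusion potential)."""
    u = np.zeros(occ.shape)
    for _ in range(iters):
        u[source] = 1.0
        u[occ] = 0.0
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    u[source] = 1.0
    u[occ] = 0.0
    return u

def follow(u, start, goal, max_steps=1000):
    """Greedy ascent on the goal-seeded field from the start cell; the
    relaxed field has no spurious local maxima, so ascent ends at the goal."""
    path, cell = [start], start
    while cell != goal and len(path) < max_steps:
        r, c = cell
        cell = max(((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)),
                   key=lambda rc: u[rc])
        path.append(cell)
    return path
```

Because the relaxed field satisfies a discrete maximum principle, the only interior maximum is the source, which is what makes the greedy ascent safe.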
We examine the potential role of probabilistic analysis in the integration of sensor-based trajectory planning and motion/structure estimation. We present three formalisms illustrating this approach.
The Goal Oriented Recursive Path-Planning (GORP) algorithm described in this paper is a sensor-based technique for navigating a holonomic mobile robot in an unknown environment cluttered with convex and simple non-convex objects. The method finds the longest straight path segment with a predetermined clearance from the obstacles, in the direction that takes the robot closer to the goal or leads it around an obstacle. Unlike other methods, which try to stay as far from the obstacles as possible, this method keeps only a safe distance from the obstacle in the way. The result is a shorter path, which reduces the chance of confronting a new obstacle while keeping the path safe enough. GORP is guaranteed to find a path, if one exists, in an open area and in a convex region cluttered with convex and simple non-convex objects.
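The two ingredients named above — the longest straight segment with a fixed clearance, and a preference for headings toward the goal — can be sketched as follows. This is a rough reading of the GORP idea on a toy point-obstacle world, not the published algorithm; the world model, sampling resolution, and horizon constant are our assumptions.

```python
import math

def free_length(obstacles, x0, y0, theta, clearance, step=0.1, max_len=20.0):
    """Length of the longest straight segment from (x0, y0) in direction
    theta whose samples all keep at least `clearance` from every obstacle
    point.  `obstacles` is a set of (x, y) points -- a toy world model."""
    d = 0.0
    while d < max_len:
        x, y = x0 + d * math.cos(theta), y0 + d * math.sin(theta)
        if any(math.hypot(x - ox, y - oy) < clearance for ox, oy in obstacles):
            return d
        d += step
    return max_len

def gorp_step(obstacles, pos, goal, clearance, n_dirs=72, horizon=5.0):
    """Pick the heading closest to the goal direction whose free segment
    covers the remaining distance (capped at `horizon`); fall back to the
    heading with the longest free segment."""
    to_goal = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
    dist = math.hypot(goal[0] - pos[0], goal[1] - pos[1])
    def deviation(th):  # wrapped angular distance from the goal direction
        return abs(math.atan2(math.sin(th - to_goal), math.cos(th - to_goal)))
    cands = sorted((2 * math.pi * k / n_dirs for k in range(n_dirs)),
                   key=deviation)
    for th in cands:
        if free_length(obstacles, pos[0], pos[1], th, clearance) >= min(dist, horizon):
            return th
    return max(cands, key=lambda th: free_length(obstacles, pos[0], pos[1], th, clearance))
```

In open space the chosen heading is simply the goal direction; near an obstacle the sort order makes the robot deviate by the smallest angle that still yields a long enough clear segment.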
In real-world situations, multiple vehicles will be called upon to perform exploration, national defense, rescue, and routine operations in hazardous environments. In multi-vehicle missions, the benefits autonomy has to offer are even greater. Multi-vehicle control and monitoring with autonomous operation will become an absolute necessity, and autonomous cooperation of vehicles is the only practical alternative. Developing enabling technologies, such as autonomous control systems, can provide UAVs with the ability to fuse sensor data, cope with conflicting inputs, and make decisions. The success of UAV missions flying straight paths between waypoints can be enhanced by an on-board trainable, adaptable replanning system that can route the UAV around unexpected threats or detour for unusual observations. This paper investigates different levels of autonomy for multiple unmanned vehicles. It is shown that different types of autonomous cooperation will be needed for different types of missions.
Robots pursuing complex goals must plan paths according to several criteria of quality, including shortness, safety, speed and planning time. Many sources and kinds of knowledge, such as maps, procedures and perception, may be available or required. Both the quality criteria and sources of knowledge may vary widely over time, and in general they will interact. One approach to address this problem is to express all criteria and goals numerically in a single weighted graph, and then to search this graph to determine a path. Since this is problematic with symbolic or uncertain data and interacting criteria, we propose that what is needed instead is an integration of many kinds of planning capabilities. We describe a hybrid approach to integration, based on experiments with building simulated mobile robots using Soar, an integrated problem-solving and learning system. For flexibility, we have implemented a combination of internal planning, reactive capabilities and specialized tools. We illustrate how these components can complement each other's limitations and produce plans which integrate geometric and task knowledge.
Detecting unexpected obstacles and avoiding collisions is an important task for any autonomous mobile robot. In this paper we describe an approach using a sonar-based system that we have used in our indoor autonomous mobile system. The logical design of this system is shown, followed by a description of how it builds up knowledge of the environment. The information collected about the environment can be used for many applications, such as real-time obstacle avoidance, environment learning, and position estimation. The method builds up two kinds of maps: an occupancy grid, which contains the probability that each cell is occupied, and an orientation map, which contains the expected orientation of the surface in each cell of the occupancy grid. Methods for filtering raw sensor data before using it for map generation are shown, together with experimental results.
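A standard way to maintain the per-cell occupancy probabilities described above is a Bayesian log-odds update driven by an inverse sensor model. The sketch below is the textbook formulation, not the authors' exact filter; the sensor-model constants are illustrative.

```python
import math

class OccupancyGrid:
    """Minimal probabilistic occupancy grid: each cell's belief is kept
    as log-odds and updated with a fixed inverse sensor model."""
    L_OCC = math.log(0.7 / 0.3)   # evidence when a sonar return hits the cell
    L_FREE = math.log(0.4 / 0.6)  # evidence when the beam passes through it

    def __init__(self, width, height):
        self.logodds = [[0.0] * width for _ in range(height)]  # 0 = unknown

    def update(self, row, col, hit):
        """Accumulate evidence for one cell from one sonar reading."""
        self.logodds[row][col] += self.L_OCC if hit else self.L_FREE

    def prob(self, row, col):
        """Occupancy probability recovered from the log-odds."""
        return 1.0 - 1.0 / (1.0 + math.exp(self.logodds[row][col]))
```

Repeated hits drive a cell toward probability 1, repeated pass-throughs toward 0, and 0.5 marks cells for which no evidence has been collected yet.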
A rapid sorting technique with a wide range of applications has been developed and performance-tested. The applications include path planning for complex manipulators with many degrees of freedom and for autonomous underwater vehicles. This paper describes the results and trade-offs behind the design and implementation phases of hardware and software development. The presented sorting technique is particularly useful as a collision-checking, or conversely homing, subroutine in path planning in complex and dynamic environments.
In a mobile robotic system, complexities in positioning arise due to the motion. An adaptive position estimation scheme has been developed for an automated guided vehicle (AGV) to overcome these complexities. The scheme's purpose is to minimize the position error, the difference between the estimated position and the actual position. The method adapts the system model by incorporating a parameter vector and using a maximum-likelihood algorithm to estimate the parameters after an accurate position determination is made. A simulation of the vehicle's guidance system was developed and the estimator tested on an oval-shaped path. Upon injecting biases into the system, initial position errors were 10 centimeters or more. After the estimator converged, the maximum final errors were on the order of 1 to 2 centimeters (prior to measurement update). After each measurement update, once the estimator had converged, errors were on the order of 1 to 2 millimeters.
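The parameter-adaptation step — refining a model parameter from the residuals left after each accurate position fix — can be sketched in its simplest scalar form with recursive least squares. This is only illustrative of the idea (a constant additive bias estimated from fix residuals); the paper's maximum-likelihood estimator over a parameter vector is more general.

```python
def rls_bias(residuals, lam=1.0):
    """Recursive least-squares estimate of a constant additive bias b
    from residuals r_k = (measured fix) - (model prediction).  With a
    scalar constant-parameter model this reduces to a weighted running
    mean.  `lam` is the measurement-noise variance (illustrative)."""
    b, P = 0.0, 1e6          # estimate and its (scalar) covariance; vague prior
    for r in residuals:
        K = P / (P + lam)    # gain: trust the residual vs. the prior
        b = b + K * (r - b)  # correct the bias estimate
        P = (1 - K) * P      # shrink the uncertainty
    return b
```

With a vague prior the estimate converges to the mean residual, which is then folded back into the system model so subsequent dead-reckoned predictions carry less systematic error.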
Given three landmarks whose locations in Cartesian space are known, and a robot with the ability to detect each landmark's heading with respect to the robot's internal orientation, the task is to determine the robot's location and orientation in Cartesian space. Although methods are mentioned in the literature, none is detailed to the point where useful algorithms to compute the solution are given. This paper presents four methods: (1) Iterative search, (2) Geometric circle intersection, (3) Geometric triangulation, and (4) Newton-Raphson iterative method. All four methods are presented in detail and compared in terms of robustness and computation time. For example, circle intersection fails when all three landmarks lie on a circle, while the Newton-Raphson method fails when the initial guess of the robot's position and orientation is beyond a certain bound. The shortcomings and strengths of each method are discussed. A sensitivity analysis is performed on each method, with noise added to the landmark locations. The authors were involved in a mobile robotics project that demanded a robust three-landmark triangulation algorithm. None of the above methods alone was adequate, but an intelligent combination of them served to overcome their individual weaknesses. The implementation of this overall absolute positioning system is detailed.
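Of the four methods listed, the Newton-Raphson variant is the most compact to sketch: solve the three bearing residual equations for (x, y, theta). The implementation below uses a finite-difference Jacobian for brevity; function names and tolerances are ours, and, as the abstract notes for this method, it diverges if the initial guess is too far off.

```python
import math
import numpy as np

def bearings(pose, landmarks):
    """Bearing of each landmark relative to the robot's heading."""
    x, y, th = pose
    return np.array([math.atan2(ly - y, lx - x) - th for lx, ly in landmarks])

def wrap(a):
    """Wrap angles into (-pi, pi]."""
    return np.arctan2(np.sin(a), np.cos(a))

def triangulate_nr(landmarks, measured, guess, iters=30, h=1e-6):
    """Newton-Raphson on the three bearing residuals; the 3x3 Jacobian
    is built by finite differences and the step solved exactly."""
    pose = np.array(guess, dtype=float)
    for _ in range(iters):
        r = wrap(bearings(pose, landmarks) - measured)
        J = np.zeros((3, 3))
        for j in range(3):
            p = pose.copy()
            p[j] += h
            J[:, j] = (wrap(bearings(p, landmarks) - measured) - r) / h
        pose += np.linalg.solve(J, -r)
    return pose
```

Three landmarks give exactly three residuals for three unknowns; the `np.linalg.solve` call fails (singular Jacobian) in the same degenerate configurations that break the geometric methods.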
This work describes the implementation of some of the neural systems that will enable a mobile robot to actively explore and learn its environment visually. These systems perform the real-time extraction of robust visual features, the segmentation of landmarks from the background and from each other using binocular attentional mechanisms, the predictive binocular tracking of landmarks, and the learning and recognition of landmarks from their features. Also described are preliminary results of incorporating most of these systems into a mobile robot called MAVIN, which can demonstrate the visual exploration of simplified landmarks. Finally, we discuss plans for using similar neural strategies to extend MAVIN's capabilities by implementing a biologically plausible system for navigating through an environment that has been learned by exploration. This explorational learning consists of quantizing the environment into orientation-specific place fields generated by the view-based spatial distribution of landmarks, and associating these place fields in order to form qualitative, behavioral, spatial maps.
The capability to recognize the operating context and to assess the situation in real time is needed if a high-functionality autonomous mobile robot is to react properly and effectively to continuously changing situations and events, either external or internal, while performing its assigned tasks. A new approach and architecture for a context recognition and situation assessment module (CORSA) is presented in this paper. CORSA is a multi-level information-processing module consisting of adaptive decision and classification algorithms. It performs dynamic mapping from the data space to the context space and dynamically decides on the context class. A learning mechanism is employed to update the decision variables so as to minimize the probability of misclassification. CORSA is embedded within the Mission Manager module of the intelligent autonomous hyper-controller (IAHC) of the mobile robot. The information regarding operating context, events, and situation is then communicated to other modules of the IAHC, where it is used to: (a) select the appropriate action strategy; (b) support arbitration and conflict resolution between reflexive behaviors and reasoning-driven behaviors; (c) predict future events and situations; and (d) determine criteria and priorities for planning, replanning, and decision making.
We present in this paper a fast method to determine the position and orientation of a mobile robot. An external vision system covering the work area is used to localize the robot. A known pattern composed of coplanar points fixed on the robot is extracted from a single image and used to locate the robot in 3D. Assuming that the robot's movements are locally planar, the extracted points are projected on the pattern's plane, estimated from the previous localization. An iterative loop to match pattern points and to localize the pattern is then performed. From four identified points, a 3D localization is possible.
For the vision system of an autonomous land vehicle (ALV), the primary task is to provide a description of the world rich enough to facilitate behaviors such as road following and obstacle avoidance. In this paper, a new approach based on the pyramid data structure (PDS) is proposed for describing the vehicle's environment. On the basis of the PDS, the color image and range image are projected into a kind of grid map. Finally, we developed a method using data fusion to get a final description of the terrain.
The aim of the MITHRA project is the design of an autonomous mobile robot for surveillance and teleoperation. The project groups several European partners. The role of LIFIA was to design the man-machine interface and the navigation control system. The architecture of such a system can be described in terms of layers of control, from the effector and sensor level to the symbolic level. At the symbolic level there are two modules which communicate with each other: the supervisor and the Man-Machine Interface (MMI). This paper describes these two modules. The MMI allows a human operator to describe the robot's environment, describe the missions, and monitor their execution. For a novice operator, we defined a mission specification language which contains, in a very simple formalism, all the concepts needed to describe a mission, such as time, resources, and tactical constraints. The supervisor module plans the mission; during this phase the operator can interact with the supervisor to modify the mission and simulate its execution. During execution, the supervisor monitors the constraints and can decide to replan the mission in case of any constraint violation or execution failure. The operator need not intervene, but can stop the execution at any time to add or retract a task, or teleoperate the robot. Another function of the MMI is to replay the executed mission from its history file. We discuss the model chosen for the MMI and the role of the supervisor in the dialog between the operator and the remote system. Finally, we show examples of the working system.
A data display system for embedded real-time systems has been developed for use as an operator's user interface and debugging tool. The motivation for the development of the On-Line Data Display (ODD) has come from several sources. In particular, the design reflects the needs of researchers developing an experimental mobile robot within our laboratory. A proliferation of specialized user interfaces revealed a need for a flexible communications and graphical data display system. At the same time, the system had to be readily extensible to the arbitrary graphical display formats required for the data visualization needs of the researchers. The system defines a communication protocol transmitting 'datagrams' between tasks executing on the real-time system and virtual devices displaying the data in a meaningful way on a graphical workstation. The communication protocol multiplexes logical channels on a single data stream. The current implementation consists of a server for the Harmony real-time operating system and an application written for the Macintosh computer. Flexibility requirements resulted in a highly modular server design, and a layered, modular, object-oriented design for the Macintosh part of the system. Users assign data types to specific channels at run time. Devices are then instantiated by the user and connected to channels to receive datagrams. The current suite of device types does not provide enough functionality for most users' specialized needs; instead, the system design allows the creation of new device types with modest programming effort. The protocol, design, and use of the system are discussed.
New generations of military unmanned systems on the ground, at sea, and in the air will be driven by man-portable command units. In past efforts we implemented several prototypes of such units, providing display and capture of up to four video input channels, four color LCD screens and a larger status-display LCD screen, drive input through two joysticks, and, through software, a flexible 'virtual' driver's interface. We have also performed additional trade analyses of prototype systems incorporating force feedback and extensive image-oriented processing facilities applied to man-controlled robotic control systems. This prior work has resulted in a database of practical design guidelines and a new generation of hardened, compact robotic command center, which is being designed and built to provide more advanced video capture, display, and interfacing features, supercomputer-level computational performance, and ergonomic features for hard field use. In this paper we summarize some past work and project current performance to features likely to be common across most unmanned systems command, control, and communications subsystems of the near future.
Pose estimation, or self-location, is a fundamental requirement for a mobile robot, enabling it to navigate along a planned path or to position itself within its environment. This paper compares two 2D pose estimation algorithms: one feature-based, the other iconic. Both techniques match range data to a map to calculate x, y, and theta (the pose) of the range sensor within its 2D environment. The metrics for the comparison include accuracy, processing time, number of range data points, robustness to errors in the map, sensitivity to the initial rough pose estimate, and environmental models. The feature-based approach is better with respect to processing time and the initial estimate; the iconic method has the advantage with respect to accuracy, the number of data points, and the environment model. For the test environment used in this paper, both methods are robust to errors in the map. A hybrid approach is proposed which combines the two estimators in a way that exploits the advantages of both.
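The core of an iconic matcher — aligning raw range points directly against map points — reduces, once correspondences are fixed, to a closed-form least-squares rigid transform. The sketch below shows that inner step only (it assumes correspondences are already known, which in practice the iconic method must iterate on); names are illustrative.

```python
import math
import numpy as np

def align_2d(scan, model):
    """Closed-form least-squares rigid transform (theta, t) mapping
    `scan` points onto corresponding `model` points: minimise
    sum |R s_i + t - m_i|^2 over rotations R and translations t."""
    scan, model = np.asarray(scan, float), np.asarray(model, float)
    ms, mm = scan.mean(axis=0), model.mean(axis=0)
    s, m = scan - ms, model - mm          # centre both point sets
    # optimal rotation angle from the cross/dot correlation sums
    theta = math.atan2((s[:, 0] * m[:, 1] - s[:, 1] * m[:, 0]).sum(),
                       (s[:, 0] * m[:, 0] + s[:, 1] * m[:, 1]).sum())
    R = np.array([[math.cos(theta), -math.sin(theta)],
                  [math.sin(theta),  math.cos(theta)]])
    t = mm - R @ ms                       # translation aligning the centroids
    return theta, t
```

The feature-based alternative would first extract lines or corners from the scan and match those instead, trading the dense point match for speed and a larger basin of convergence.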
Teaching students concepts about mobile and stationary robotics is difficult because few environments are available for elegant and robust demonstrations. To aid students' understanding, robotic concepts need to be demonstrated not only in the lab but in the classroom. Since it is usually difficult to bring the equipment from the lab to the classroom, an environment for controlling and viewing the robots from the classroom is presented. The environment has several features: control from a distance, visual monitoring of actions, and feedback from different levels of the system. A complete environment incorporating these features for teleoperated control of mobile robots and robot arms is detailed.
Crustaceans have been engineered by evolution to adapt and survive in a shallow-water hydrodynamic environment. We have formed a partnership between biology and engineering to explore how crustaceans navigate in shallow waters, and we are continuing studies to reverse-engineer the biological system. In particular, we are studying the neural control mechanisms of the American lobster and applying the resultant neurophysiological models to robot control. Our robot is based on a central pattern generation model rather than a reflex chain model of the underlying mechanisms. We have formalized the basic structure of this model in terms of central pattern generators, command systems, and coordinating systems. The central pattern generator controls the motion of the leg appropriate to walking in all directions, with and without sensory feedback. The coordinating system controls the gait pattern of the legs. Proprioceptive reflexes are used to alter the path of a leg due to obstacles, and exteroceptive reflexes alter the patterns of several legs to achieve overall motions of the lobster. We have built a simulator which models the central pattern generator, coordination, and load compensation. It demonstrates basic walking motions in any direction and at multiple speeds. If additional loads are applied while walking, the neural activity is increased (recruitment) and the leg pattern slows to reflect the additional load. We are currently building an eight-legged terrestrial walking machine based on the morphology of the lobster to demonstrate the results of our simulation. In the future we will add more sensors, reflexes, and taxis to our simulator and robot. We plan to implement an underwater version of this robot.
To be useful in the real world, robots need to be able to move safely in unstructured environments and achieve their given tasks despite unexpected environmental changes or failures of some of their sensors. The variability of the world makes it impractical to develop very detailed plans of action before execution, since the world might change before execution begins and thus invalidate the plan. We propose to generate the detailed plan of actions needed to control a robot at execution time. This allows the robot to obtain up-to-date information about the environment through its sensors and to use it in deciding how to achieve the task. Our theory is based on the premise that proper application of knowledge in the integration and utilization of sensors and actuators increases the robustness of execution. In our approach we produce the detailed plan of primitive actions and execute it using an object-oriented approach, in which primitive components contain domain-specific knowledge and knowledge about the available sensors and actuators. These primitives perform signal and control processing and serve as an interface to high-level planning processes. This paper addresses the issue of what knowledge needs to be available about sensors, actuators, and processes in order to integrate their usage and control them during execution. The proposed method of execution works for any sensor or actuator on the robot, given such knowledge.
The rapid evolution of micromechanical fabrication techniques and other sensor, effector, and processing technologies will soon make it possible to employ large numbers of very inexpensive autonomous mobile robots with fairly limited sensor capabilities to perform real-world missions on the ground, in the air, in space, and underwater. One approach to such a system is to realize desired emergent collective group behaviors with simple sensor-based reactive planners. The initial thrust of this effort has been to develop generic ensemble behaviors, such as blanket, barrier, and sweep coverage, and various deployment and recovery modes, which can address a broad spectrum of generic applications, both military and civilian. However, while different applications may require similar group behaviors, the sensor, information, and communications resources available to the participating individual robots may be very different. This paper outlines the many-robot approach to real-world problem solving and discusses the various roles that different types of sensors can play in such systems. Analysis and simulation results are presented to show how useful behavioral algorithms can be designed to make use of diverse information resources, and the area search problem is analyzed to derive both system measures of effectiveness and system design considerations.
In this paper, we describe a real-time navigation assistance system dedicated to teleoperated vehicles for indoor applications. In order to obtain real-time reaction and motion continuity (no 'perception/move/step' cycle), the system is decomposed into three levels working in parallel, with short cycle times. The lowest level, the pilot, continuously computes tangential and rotational speeds for the locomotion system in order to reach a particular subgoal. This subgoal is dynamically generated by the second level, the subgoal generator, in order to avoid local obstacles while tracking milestone positions (x, y, theta). The highest level, the geometrical path planner, computes an optimal path composed of milestones to be roughly followed by the vehicle. In the first part of the paper, we present a global view of the navigation assistance system, its decomposition into three levels, and its embedding in the overall system. In the second part, we give a more detailed description of each level, and we conclude with a presentation of results obtained in a real environment.
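The pilot level described above — continuously converting the current subgoal into tangential and rotational speed commands — can be sketched with a standard proportional steering law. The gains, speed limit, and slow-down-when-misaligned term are our illustrative choices, not values from the paper.

```python
import math

def pilot(pose, subgoal, k_v=0.5, k_w=1.5, v_max=0.6):
    """One pilot cycle: tangential speed v and rotational speed w that
    steer the vehicle at `pose` = (x, y, theta) toward `subgoal` = (x, y)."""
    x, y, th = pose
    dx, dy = subgoal[0] - x, subgoal[1] - y
    dist = math.hypot(dx, dy)
    err = math.atan2(math.sin(math.atan2(dy, dx) - th),
                     math.cos(math.atan2(dy, dx) - th))  # wrapped heading error
    v = min(k_v * dist, v_max) * max(0.0, math.cos(err))  # creep when misaligned
    w = k_w * err                                         # turn toward subgoal
    return v, w
```

Because each cycle is cheap, the loop can run at the locomotion system's rate while the subgoal generator and path planner update their outputs more slowly in parallel, which is what gives the motion its continuity.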
MAIA (an acronym for Modello Avanzato di Intelligenza Artificiale) is a project aimed at the integration of several AI resources currently being developed at IRST. The overall approach to the design of intelligent artificial systems that MAIA proposes is experimental no less than theoretical, and an experimental setup (which we call the experimental platform) has consequently been defined in which a variety of mutually interacting functionalities can be tested in a common framework. The experimental platform of MAIA consists of 'brains' and 'tentacles', and it is to one of these tentacles, the Mobile Robot, that the present paper is devoted. At present, the mobile robot is equipped with two main modules: the Navigation Module, which gives the robot the capability of moving autonomously in an indoor environment, and the Speech Module, which allows the robot to communicate with humans by voice. Here the overall architecture of the system is described in detail, and the potential arising from the combined use of speech and navigation is considered.
SAL (the SmartyCat Agent Language) is a language being developed for programming SmartyCat, our mobile robot. SmartyCat's underlying software architecture is agent-based. At the lowest level, the robot sensors and actuators are controlled by agents (viz., the sensing and acting agents, respectively). SAL provides the constructs for organizing these agents into many structures. In particular, SAL supports the subsumption architecture approach. At higher levels of abstraction, SAL can be used for writing programs based on Minsky's Society of Mind paradigm. Structurally, a SAL program is a graph, where the nodes are software modules called agents, and the arcs represent abstract communication links between agents. In SAL, an agent is a CLOS object with input and output ports. Input ports are used for presenting data from the outside world (i.e., other agents) to the agent. Data are presented to the outside world by the agent through its output ports. The main body of the SAL code for the agent specifies the computation or the action performed by the agent. This paper describes how SAL is being used for implementing the agent-based SmartyCat software architecture on a Cybermotion K2A platform.
A new algorithm for the estimation of the minimum-cost path between a pair of points in 3-D space, and its proposed VLSI implementation using a 3-D Cellular Automata (CA) architecture, are presented in this paper. The proposed algorithm is guaranteed to find the minimum-cost path in 3-D space, if such a path exists. It is especially suitable for real-time 3-D applications, such as automated 3-D navigation, 3-D target tracking, and 3-D path planning.
A goal of the Surrogate Semi-Autonomous Vehicle (SSV) program is to have multiple vehicles navigate autonomously as well as cooperatively with other vehicles. In this paper, we address the steps taken to develop the vehicle hardware. The vehicle selected is the military High Mobility Multipurpose Wheeled Vehicle (HMMWV), and the image bus selected can handle greater than 30 MB per second of image data. The laser scanner we will be using has a look-ahead distance of greater than 30 m, and we have started work on the vibration isolation system and on ways to correct image sensor position instability.
The use of robotics or unmanned systems offers significant benefits to the military user by enhancing mobility, logistics, material handling, command and control, reconnaissance, and protection. The evaluation and selection process for the procurement of an unmanned robotic system involves comparison of performance and physical characteristics such as operating environment, application, payloads and performance criteria. Testing an unmanned system for operation in an unstructured environment using emerging technologies, which have not yet been fully tested, presents unique challenges for the testing community. Standard metrics, test procedures, terminologies, and methodologies simplify comparison of different systems. A procedure was developed to standardize the test and evaluation process for UGVs. This procedure breaks the UGV into three components: the platform, the payload, and the command and control link. Standardized metrics were developed for these components which permit unbiased comparison of different systems. The development of these metrics and their application will be presented.
This paper presents a model- and sensor-based path planning, navigation, and control system for an autonomous mobile robot (AMR) operating in a known laboratory environment. The goal of this research is to enable the AMR to use its on-board computer vision to: (a) locate its own position in the laboratory environment; (b) plan its path; and (c) navigate itself along the planned path. To determine the position and orientation of the AMR before and during navigation, the vision system has to first recognize and locate known landmarks, such as doors and columns, in the laboratory. Once the AMR relates its own position and orientation to the world environment, it is able to plan a path to a prescribed destination. In order to achieve on-line visual feedback, an autonomous target (landmark) acquisition, recognition, and tracking scheme is used. The AMR system is designed and developed to support flexible manufacturing in general, surveillance and transport of materials in hazardous environments, and an autonomous space robotics project funded by MRCO and the Canadian Space Program related to the Freedom Space Station.
In this paper we propose a monitoring system for the observation and navigation of mobile vehicles in factory environments. After a short overview of the complete flexible transport system, the monitoring system, consisting of a global sensor system with an integrated logical reasoning control module, is described in more detail. The aim of the global monitoring is the seamless observation of the complete factory environment to detect all dynamic changes. This global concept offers many advantages: the on-carrier sensors can be kept simple and cheap, driving paths can be modified early by the navigators, and the currently detected carrier positions can be used for control purposes. The proposed system has a multi-layer architecture consisting of a sensor layer and low-level and high-level control layers. The sensor layer is responsible for the acquisition of environment data and low-level data analysis. The task of the low-level control layer is the improvement of the analysis data, using as much a priori information as possible. The high-level control layer coordinates the different observation tasks, using control rules which are activated by the results of the low-level control layer.
Using the Robot Engine framework Denning has developed four new products in four distinctly different markets in less than three years. The Robot Engine concept reduced development time by more than half and assured a better chance of success in developing these new products. Similar to the personal computer industry, the mobile robotic industry has the potential to make it possible for a number of independent payload developers to design and sell useful devices compatible with the navigation system by utilizing the Robot Engine concept. This paper will review the basic modular mechanical, hardware, and software components, and the basic integration challenges for rapid prototyping of robotic products. Human interface, vehicle control, navigation, and sensory data fusion/arbitration will be discussed within this framework.
In this paper the different mobile robot navigation systems developed by Denning are presented. They include the Beacon Navigation System (BNS), the ultrasonic obstacle avoidance and navigation system, and the Laser Angular Navigation System (Lasernav). The infrared Beacon Navigation System was designed for navigation in long corridors (up to 500 ft.), docking procedures, and position calibration. The ultrasonic array sensors provide range information on the environment. The ultrasonic ranging system was designed as a navigation device based on natural landmarks and as a navigation aid when avoiding obstacles. The scanning laser device is a real-time embedded sensor for precisely measuring the position (+/- 1 inch) of mobile robots in large open areas (30,000 square feet). This system uses a scanning laser beam in combination with wall-mounted targets to compute the position of the robot.
A locomotion paradigm called 'sign pattern-based stereotyped motion' is described in this paper. It argues that the motion control patterns of the robot can be limited to six primitive ones: Moving-Along, Moving-Toward, Moving-for-Sighting, Following-a-Person, Moving-through-Gate, and Moving-along-Wall, and that a locomotion from the starting point to the goal can be controlled by a sequence of these patterns. This paradigm is implemented in the guide dog robot Harunobu-5 and tested in outdoor scenes.
In this paper we present the method of camera space manipulation for control of a mobile cart with an on-board robot. The objective is three-dimensional object placement. The robot-cart system is operated as a forklift. The cart has a rear wheel for steering and driving, two front wheels, and a tether allowing control from a remote computer. Two remotely placed CCTV cameras provide images for use by the control system. The method is illustrated experimentally by a box stacking task. None of the components (cameras, robot-cart, or target box) is prepositioned. 'Ring cues' are placed on both boxes in order to simplify the image processing. A sequential estimation scheme solves the placement problem. This scheme produces the control necessary to place the image of the grasped box at the relevant target image position in each of the two-dimensional camera planes. This results in a precise and robust manipulation strategy.
This paper presents the design and development of a partially intelligent mobile platform. The mobile robot derives its intelligence from sensors such as ultrasonic range finders, a mouse, and a set of microswitches. Motion is assumed to be on a planar two-dimensional surface. The platform consists of two front wheels and two rear wheels. While the front wheels are undriven, the rear wheels are individually driven by two motors to realize both forward movement and steering. Ultrasonic range finders have been employed to plan an obstacle-free path. A mouse-like arrangement is fitted on the platform's underside, with the mouse ball in contact with the surface of travel, to help the host keep track of the platform's current position. A single-chip microcontroller-based card has been designed for communication between the platform and the host IBM PC. This is part of the work carried out at the Department of Electronics, S.J. College of Engineering, Mysore, India.
To overcome some of the problems associated with the use of ultrasonic sensors for navigation purposes, we propose a measurement system composed of three ultrasonic transducers placed on a moving vehicle, one transmitting and three receiving. By triangulation, this tri-aural sensor is able to determine the position, both distance and bearing, of the objects in the field of view. In this paper, we derive a statistical test which combines consecutive sightings of the same object by the moving sensor to determine whether it is an edge, a plane, or a corner. This test is formulated as a sequential test which guarantees that the object will be recognized after the minimal number of measurements given predetermined error probabilities. We include experimental data showing the object recognition capabilities of the system.
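The triangulation step admits a closed-form sketch. Assuming a central transducer that both transmits and receives, flanked by two receivers at x = -d and x = +d (the spacing d and the coordinate layout are illustrative assumptions, not the authors' exact configuration), the three measured path lengths determine a point reflector's distance and bearing:

```python
import math

def locate_reflector(p_center, p_left, p_right, d=0.1):
    """Triangulate a point reflector from three path-length measurements.

    The central transducer both transmits and receives; two outer
    receivers sit at x = -d and x = +d (meters).  Each p_* is a total
    path length (speed of sound times time of flight).  The geometry
    and spacing are illustrative, not the paper's exact layout.
    """
    R = p_center / 2.0                      # transmitter-to-object distance
    # Path to an outer receiver: R + sqrt((x - xi)^2 + y^2) = p.
    # Squaring and substituting R^2 = x^2 + y^2 gives a linear equation in x.
    def x_from(p, xi):
        return (xi * xi + 2.0 * p * R - p * p) / (2.0 * xi)
    # Average the two outer-receiver estimates to use the redundancy.
    x = 0.5 * (x_from(p_left, -d) + x_from(p_right, d))
    y = math.sqrt(max(R * R - x * x, 0.0))
    bearing = math.degrees(math.atan2(x, y))  # 0 degrees = straight ahead
    return R, bearing
```

With noisy measurements the two outer-receiver estimates of x disagree slightly; averaging them is one simple way to exploit the redundancy before the sequential classification test is applied.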
This paper describes a mobile robot with acoustic- and vision-based intrusion detection capability. A difficult task for the robot is to distinguish between the observed angular velocities of stationary and moving objects. Techniques for extracting motion cues relevant to this task from angle-only object position measurements are presented. Effectively, the motion cues compare perceived velocity (direction and magnitude) and angular momentum with those expected of stationary objects.
Two processing methods using range data are shown to perform different navigation tasks for a mobile vehicle. The first, a low-level processing method based on mathematical morphology, computes the free space in real time and is used for collision avoidance. A parallel between this method and polar histogram techniques is drawn. The second method, based on a hierarchical segmentation technique, can extract a multiple-resolution description of the range data produced by the sensor. This segmentation is used to describe the immediate environment of the vehicle using simple geometrical primitives, refine the vehicle position estimate, and create a detailed representation of the vehicle's immediate surroundings. Experimental results using the BIRIS range sensor are shown.
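As a minimal illustration of the morphological free-space idea (a sketch, not the paper's implementation), obstacles in an occupancy grid can be dilated by a disc structuring element of the robot's radius, so the remaining cells form the free space of an equivalent point robot:

```python
def dilate(grid, radius):
    """Binary morphological dilation with a disc structuring element.

    Growing obstacles by the robot radius lets the robot be treated as
    a point for collision checks.  Grid layout and radius (in cells)
    are illustrative assumptions.
    """
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    # Precompute the disc of cell offsets within the given radius.
    offsets = [(di, dj) for di in range(-radius, radius + 1)
                        for dj in range(-radius, radius + 1)
                        if di * di + dj * dj <= radius * radius]
    for i in range(rows):
        for j in range(cols):
            if grid[i][j]:
                for di, dj in offsets:
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        out[ii][jj] = 1
    return out

def free_space(grid, radius):
    """Cells a disc-shaped robot of the given radius may occupy."""
    grown = dilate(grid, radius)
    return [[1 - v for v in row] for row in grown]
```

The complement of the dilated obstacle map is exactly the configuration space of the point robot, which is what a real-time collision-avoidance loop needs to test against.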
The realization of a small-scale optical radar as a 3D range imaging sensor for auto-vehicle guidance, based on a dual-wavelength phase comparison range finder, is described in this paper. A range accuracy of 40 mm is achieved within an operating range of 20 meters, using commercially available optoelectronic elements.
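The ambiguity-resolution arithmetic behind a dual-wavelength phase rangefinder can be sketched as follows; the two modulation frequencies below are assumed values, chosen only so that the coarse unambiguous range exceeds the 20 m operating range:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_phases(phi_fine, phi_coarse, f_fine=50e6, f_coarse=3e6):
    """Resolve range from two modulation-phase measurements.

    A phase rangefinder measures range modulo half a modulation
    wavelength: r = (phi / 2pi) * lambda/2 + N * lambda/2.  The coarse
    (low-frequency) tone gives an unambiguous but imprecise range that
    selects the integer N for the precise fine tone.  Both frequencies
    here are illustrative, not the sensor's actual values.
    """
    half_fine = (C / f_fine) / 2.0     # fine half-wavelength (precision)
    half_coarse = (C / f_coarse) / 2.0 # coarse half-wavelength (max range)
    r_coarse = (phi_coarse / (2 * math.pi)) * half_coarse
    r_fine_frac = (phi_fine / (2 * math.pi)) * half_fine
    n = round((r_coarse - r_fine_frac) / half_fine)  # ambiguity integer
    return n * half_fine + r_fine_frac
```

The coarse tone only needs to be accurate to within half a fine half-wavelength (about 1.5 m here) for the integer ambiguity to be resolved correctly, while the final precision comes entirely from the fine tone.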
Computer vision technology that automatically steers a passenger vehicle to keep it in its lane on a limited-access freeway is described. This system, LaneTrak, uses a forward looking monochrome CCD camera and a real-time image processing system on-board the vehicle. The image processing system identifies the lane boundaries based on lane markers and road edges. A pair of Kalman filters uses this information combined with vehicle dynamics sensor data to determine the vehicle angular heading error, lateral offset from the center of the road, and the lane curvature ahead. A preview proportional-integral control system determines the desired steering angle to keep the vehicle in the lane. The steering angle is then passed to another controller which activates an electric steering mechanism that positions the front wheels of the vehicle. We verified the full operation of LaneTrak with computer simulation before the system was designed and used a driving simulator to evaluate drivers' reaction to system failure. LaneTrak has been successfully demonstrated on a test track and on an unopened section of a limited-access freeway at highway speeds.
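As a much-simplified illustration of the filtering step (LaneTrak's actual filters jointly track heading error, lateral offset, and lane curvature while fusing vehicle dynamics inputs), a scalar Kalman filter for the lateral offset alone might look like this; the noise parameters and random-walk model are assumptions:

```python
def kalman_step(x, P, z, q=0.01, r=0.25):
    """One predict/update cycle of a scalar Kalman filter.

    x : current estimate of lateral offset (m); P : its variance
    z : noisy lateral-offset measurement from the image processor
    q : process noise (offset modeled as a random walk) -- assumed
    r : measurement noise variance -- assumed
    """
    # Predict: offset assumed locally constant; process noise grows P.
    P = P + q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P / (P + r)
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P
```

Repeating this cycle once per video frame drives the estimate toward the measurements at a rate set by the q/r ratio; the full system feeds the filtered state into the preview proportional-integral steering controller.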
For ten years, unstructured road following has been the subject of many studies. Road following must support the automatic navigation, at reasonable speed, of mobile robots on irregular paths and roads, with inhomogeneous surfaces and under variable lighting conditions. Civil and military applications of this technology include transportation, logistics, security, and engineering. The definition of our lane following system requires an evaluation of the existing technologies. Although the various operational systems converge on color perception and region segmentation, optimizing discrimination and stability respectively, the treatments and performances vary. In this paper, the robustness of four operational systems and two related techniques is compared according to common evaluation criteria. We identify typical situations which constitute a basis for the realization of an image database. We describe the experimental process conceived for the comparative analysis of performance. The analytical results are useful for inferring a few optimal combinations of techniques driven by the situations, and for defining the present limits of the validity of color perception.
The LaneLok function estimates the location of lane boundaries by a least squares approximation. This estimation can be affected by vehicles close to the lane markers in adjacent lanes. A hybrid method which combines the least squares approximation with median filtering for improving the estimation was developed and tested successfully on real highway scenes.
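A hedged sketch of such a hybrid estimator: median-filter the per-row marker detections to suppress outliers (for example, a vehicle overlapping a lane marker in the image), then fit a straight-line boundary by ordinary least squares. The window size and the linear lane model are illustrative assumptions, not LaneLok's actual choices:

```python
import statistics

def fit_lane(points, window=3):
    """Fit a lane boundary x = a + b*y after median filtering.

    points : list of (y_row, x_col) lane-marker detections, one per
    image row, sorted by y_row.  A sliding-window median removes
    isolated outliers before the closed-form least-squares fit.
    """
    ys = [y for y, _ in points]
    xs = [x for _, x in points]
    half = window // 2
    xf = [statistics.median(xs[max(0, i - half):i + half + 1])
          for i in range(len(xs))]
    # Closed-form least squares for the line x = a + b*y.
    n = len(ys)
    my, mx = sum(ys) / n, sum(xf) / n
    b = sum((y - my) * (x - mx) for y, x in zip(ys, xf)) / \
        sum((y - my) ** 2 for y in ys)
    a = mx - b * my
    return a, b
```

The median stage is what gives the hybrid its robustness: a single wild detection shifts a plain least-squares fit substantially, but barely moves the filtered fit.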
In a homing problem a robot is required to go to a fixed target location. In the version of the homing problem presented here, the robot uses discrepancies between landmark bearings at the current location and the target location to calculate appropriate movements. Landmarks from the current frame that have been incorrectly identified in the target frame, however, can mislead the robot, perhaps causing it to move away from the target location. This paper describes a two-frame algorithm that can detect and remove those incorrectly-identified landmarks that are inconsistent with the actual direction of the target location. This consistency-filtering algorithm is guaranteed to work as long as more than two-thirds of the landmarks are correctly identified. It is shown that no other two-frame algorithm works any better than this consistency-filtering algorithm.
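A toy illustration of the consistency idea (deliberately simpler than the paper's two-frame algorithm): when more than two-thirds of the landmark matches are correct, a robust statistic over per-landmark discrepancies is dominated by correct matches, so matches far from it can be flagged and removed. Here the signed bearing discrepancy between frames is used as a crude consistency proxy; the threshold is an assumed value:

```python
import math
import statistics

def filter_landmarks(current, target, tol=0.3):
    """Flag landmark matches inconsistent with the majority.

    current, target : bearings (radians) of matched landmarks in the
    current and target frames.  Returns the indices of landmarks kept.
    This is a rough proxy for the paper's consistency filtering, not
    its actual geometric test.
    """
    def ang_diff(a, b):
        # Signed angular difference wrapped to (-pi, pi].
        d = (a - b) % (2 * math.pi)
        return d - 2 * math.pi if d > math.pi else d
    diffs = [ang_diff(c, t) for c, t in zip(current, target)]
    med = statistics.median(diffs)  # robust if > 2/3 matches are correct
    return [i for i, d in enumerate(diffs) if abs(ang_diff(d, med)) < tol]
```

The two-thirds condition in the abstract is exactly what makes a median-style statistic trustworthy here: a minority of misidentified landmarks cannot move it far from the value implied by the correct matches.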
This paper describes the design and implementation of an integrated system for combining obstacle avoidance, path planning, landmark detection and position triangulation. Such an integrated system allows the robot to move from place to place in an environment, avoiding obstacles and planning its way out of traps, while maintaining its position and orientation using distinctive landmarks. The task the robot performs is to search a 22 m × 22 m arena for 10 distinctive objects, visiting each object in turn. This same task was recently performed by a dozen different robots at a competition in which the robot described in this paper finished first.
In this paper we propose and demonstrate a simple method for navigation in a large unstructured environment that contains featureless objects, using 'isolated' landmarks in the navigator's view. The map-maker and the navigator are implemented using an IBM 7575 SCARA robot arm, PIPE (Pipelined Image Processing Engine), and two cameras. The navigational environment consists of a flat plane with spherical objects randomly distributed on it. First, the map-maker model observes the environment and, given a starting position and a goal position, generates a 'custom map' that describes how to get from the starting position to the goal position. The accuracy and efficiency of the directional instructions are then demonstrated by the navigator following the commands in the custom map. This is a first step towards our eventual goal, which is to develop a full vocabulary that can qualitatively describe the navigational environment to guide the navigator with efficiency and accuracy.
Autonomous cross-country navigation is essential for outdoor robots moving about in unstructured environments. Most existing systems use range sensors to determine the shape of the terrain, plan a trajectory that avoids obstacles, and then drive the trajectory. Performance has been limited by the range and accuracy of sensors, insufficient vehicle-terrain interaction models, and the limited availability of high-speed computers. As these elements improve, higher-speed navigation on rougher terrain becomes possible. We have developed a software system for autonomous navigation that provides greater capability. The perception system supports a large braking distance by fusing multiple range images to build a map of the terrain in front of the vehicle. The system identifies range shadows and interpolates undersampled regions to account for rough terrain effects. The motion planner reduces computational complexity by investigating a minimum number of trajectories. Speeds along the trajectory are set to provide dynamic stability. The entire system was tested in simulation, and a subset of the capability was demonstrated on a real vehicle. Results to date include a continuous 5.1 kilometer run across moderate terrain with obstacles. This paper begins with the applications, prior work, limitations, and current paradigms for autonomous cross-country navigation, and then describes our contribution to the area.
Simulation is used in robot navigation for testing control algorithms such as obstacle avoidance and path planning. Simulation is also being used to generate expectations that guide sensory processing for robots operating in the real world. In this paper, we present the tradeoffs in designing environment models for outdoor environments, which differ significantly from indoor environment models. Outdoor environments are inherently unstructured due to changing lighting conditions, variation in the form of objects, and the dynamic nature of the environment. We present the design tradeoffs involved in building models of such environments from the perspective of simulating passive sensors like vision. We point out that a powerful approach to outdoor navigation is to have a detailed model of the environment that can provide expectations both in terms of the spatial location of scene entities and the operators suitable for detecting these entities in an image.
Autonomous cross-country navigation requires planning algorithms that support rapid traversal of challenging terrain while maintaining vehicle safety. The central theme of the work is the implementation of a planning system which solves this problem. The planning system uses a recursive trajectory generation algorithm, which generates spatial trajectories and then heuristically modifies them to achieve safe paths around obstacles. Velocities along the spatial trajectory are then set to ensure a dynamically stable traversal. Ongoing results are presented from the system as implemented on the NAVLAB II, an autonomous off-road vehicle at Carnegie Mellon University. The planning system was successful in achieving 5.1 km autonomous test runs with obstacle avoidance on rugged natural terrain at speeds averaging 1.8 m/s. Runs of up to 0.3 km at 4.5 m/s were achieved when only checking for obstacles.
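The velocity-setting step admits a simple sketch: cap the speed at each point of the planned spatial trajectory so that the lateral acceleration v^2 |kappa| stays within a stability limit. The acceleration limit and speed cap below are illustrative values, not NAVLAB II parameters:

```python
import math

def stable_speeds(curvatures, a_lat_max=2.0, v_max=4.5):
    """Cap speed along a trajectory for dynamic stability.

    curvatures : signed curvature kappa (1/m) at each trajectory point.
    Lateral acceleration is v^2 * |kappa|, so the stable speed at a
    point is sqrt(a_lat_max / |kappa|), clipped to a global cap.
    Both limits are assumed values for illustration.
    """
    speeds = []
    for k in curvatures:
        if abs(k) < 1e-9:
            speeds.append(v_max)  # straight segment: full speed
        else:
            speeds.append(min(v_max, math.sqrt(a_lat_max / abs(k))))
    return speeds
```

A fuller implementation would also smooth the resulting profile so commanded accelerations between points stay within longitudinal limits; this sketch shows only the per-point lateral constraint.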
Terrain map building is an essential component of planning for an autonomous navigator. External terrain must be represented in a manner that is convenient for the path planning subsystem to use and that is useful for fine-tuning the position estimate. This work is concerned with the solution of the particular problems encountered when attempting high-speed navigation of an autonomous vehicle on rough terrain. These problems include the requisite longer downrange field of view, the range shadow problem, the image fusion problem, and the motion of the vehicle during image digitization. Experimental results have been obtained for an all-terrain autonomous vehicle testbed, the Navlab II. The perception system was successful in supporting runs of 6.7 kilometers at speeds averaging 1.8 meters/second while checking for obstacles, 5.1 kilometers at 1.8 meters/second while avoiding obstacles, and 0.3 kilometers at 4.5 meters/second while checking for obstacles.
Tracked mobile robots are well suited for operation in uneven and obstacle-ridden environments. However, the great majority of current research in motion planning and obstacle avoidance involves either wheeled or legged mobile systems. In our research we have been developing tracked mobile-manipulator systems for applications in hazardous environments. A Simulation, Animation, Visualization and Interactive Control environment has been developed for the design and operation of integrated robotic manipulator systems. It is characterized by three unique and useful features: (1) Simulation and Visualization of Robot Motion, (2) Virtual and Real World Transformations, and (3) Sensor-based Manipulation and Reactive Obstacle Negotiating Strategies. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.
Multi-degree-of-freedom (MDOF) vehicles have many potential advantages over conventional (i.e., 2-DOF) vehicles. For example, MDOF vehicles can travel sideways and they can negotiate tight turns more easily. In addition, some MDOF designs provide better payload capability, better traction, and improved static and dynamic stability. However, MDOF vehicles with more than three degrees of freedom are difficult to control because of their overconstrained nature. These difficulties translate into severe wheel slippage or jerky motion under certain driving conditions. In the past, these problems limited the use of MDOF vehicles to applications where the vehicle would follow a guide-wire, which would correct wheel slippage and control errors. By contrast, autonomous or semi-autonomous mobile robots usually rely on dead-reckoning between periodic absolute position updates, and their performance is diminished by excessive wheel slippage. This paper introduces a new concept in the kinematic design of MDOF vehicles, based on the provision of a compliant linkage between drive wheels or drive axles. Simulation results indicate that compliant linkage overcomes the control problems found in conventional MDOF vehicles and reduces wheel slippage to a level equal to or below that found on a comparable 2-DOF vehicle.
The control of the hopping height in a one-legged machine is studied. The general aim is to decrease sensitivity of the hopping height to drifts in the machine's parameters. The proposed approach combines a near-inverse controller which uses height feedback, with a recursive least-squares parameter estimator which continually tunes the controller. The paper presents the mechanical design of the hopping leg, as well as simulations of its performance in tracking a piecewise-constant height reference. These simulations suggest that the resulting algorithm is both computationally feasible and robust to disturbances and drifts.
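The recursive least-squares component can be sketched in scalar form; the linear model y = theta * phi, the forgetting factor, and the initial covariance are illustrative choices, not the paper's actual parameterization:

```python
def rls_update(theta, P, phi, y, lam=0.98):
    """One step of scalar recursive least squares with forgetting.

    Model: y = theta * phi + noise.  A forgetting factor lam < 1
    discounts old data so the estimate can track slow parameter drift,
    which is what lets the estimator continually retune a near-inverse
    controller.  All values here are illustrative.
    """
    K = P * phi / (lam + phi * P * phi)    # estimator gain
    theta = theta + K * (y - theta * phi)  # correct by prediction error
    P = (P - K * phi * P) / lam            # covariance update
    return theta, P
```

In the hopping-machine setting, phi would be a regressor built from measured hop data and theta a drifting machine parameter; after each hop, the freshly updated theta re-parameterizes the inverse height controller.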
Robots operating in unknown environments need to acquire knowledge about the environment to facilitate navigation and other tasks. Control procedures that enable a robot to acquire environment knowledge are referred to as mapping or terrain-acquisition procedures. All motion control procedures are influenced by the underlying models; the significant models in this area are the robot model, the sensor model, and the environment model. In this paper we present a mapping algorithm that uses a sensor model based on a proximity detector. The sensor is capable of detecting the presence or absence of objects within an annular ring of radius R to R + (Delta) surrounding the robot. The sensor provides no exact range information but does indicate the angular region of contact with an obstacle. The robot is modelled as a disc of radius r (r
For a mobile robot to travel as rapidly as possible through a large-scale environment, it must rely upon stored knowledge of that portion of the environment that is occluded or otherwise beyond sensor range. Yet the low mechanical accuracy and consequent positioning errors characteristic of mobile robots make it difficult in practice for a mobile robot to acquire an accurate metrical description of large-scale space or accurately track a trajectory expressed in a global coordinate frame. Hence, methods for generating optimal feedforward control are not practical for high-speed control of an autonomous mobile robot. This paper addresses the problem of acquiring and applying approximate information about environment geometry for high-speed control of a mobile robot under limited positioning accuracy. The robot combines relatively accurate information about the layout of nearby obstacles from current sensor readings with approximate knowledge of the relative position of learned safe trajectory targets to travel to a goal at high speed. This method enables the robot to anticipate obstacle geometry beyond sensor range, yet can cope with substantial position estimation errors, disturbances, and minor changes in the environment.
Certainty grids have been shown to be an effective method of generating accurate map data from incomplete sensor data. This is put to the test as we use a robot-based certainty grid to maintain map information generated from eight fixed sonars and compare three robot navigators. Because of the low resolution and single sensor type, the certainty grid includes a variety of averaging and weighting techniques to improve sonar accuracy and reduce noise. The navigators are constrained by two design requirements: they should not use domain-specific knowledge, and the navigators and mapper must be independent. Navigation decisions are based solely on the internal map. Each navigator uses a weighting function to determine a potential for each grid element and navigates by minimizing the potential over the robot's immediate surroundings. Local route selection is performed in real time while traveling, as the local navigator continuously re-evaluates the path with new information from the certainty grid. The navigators differ in their methods of global route selection. One uses intermediate destinations and backtracking to handle dead ends. The other two incorporate dead end information directly into local route selection, one with intermediate destinations and the other without them.
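A minimal sketch of the two ingredients (the update weight and the potential function are assumptions, and none of the three compared navigators is reproduced exactly): sonar evidence is averaged into per-cell certainties, and the local navigator moves to the neighboring cell that minimizes an obstacle-plus-goal-distance potential:

```python
import math

def update_cell(c, hit, w=0.2):
    """Fold one sonar observation into a cell certainty in [0, 1].

    Exponential weighted averaging is one simple instance of the
    averaging/weighting techniques mentioned; the weight w is assumed.
    """
    return (1.0 - w) * c + w * (1.0 if hit else 0.0)

def next_step(grid, pos, goal, k_obs=5.0):
    """Greedy local navigation over a certainty grid.

    Each neighbor's potential is its obstacle certainty (scaled by
    k_obs) plus its Euclidean distance to the goal; the robot moves to
    the minimum.  k_obs and the potential form are illustrative.
    """
    rows, cols = len(grid), len(grid[0])
    best, best_u = pos, float("inf")
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            i, j = pos[0] + di, pos[1] + dj
            if 0 <= i < rows and 0 <= j < cols:
                u = k_obs * grid[i][j] + math.hypot(goal[0] - i, goal[1] - j)
                if u < best_u:
                    best, best_u = (i, j), u
    return best
```

Purely local minimization like this can stall in dead ends, which is precisely the failure mode the three global route-selection strategies (backtracking, intermediate destinations, and direct dead-end incorporation) are designed to resolve.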