The majority of existing robotic systems can be decomposed into five distinct subsystems: locomotion, control/man-machine interface (MMI), sensors, power source, and manipulator. When designing robotic vehicles, there are two main requirements: first, to design for the environment and, second, for the task. The environment can be correlated with known missions, as can be seen by analyzing existing mobile robots. Ground mobile systems are generally wheeled, tracked, or legged. More recently, underwater vehicles have gained greater attention; for example, Jason Jr. made history by surveying the sunken luxury liner Titanic. The next big surge of robotic vehicles will be in space, evolving as a result of NASA's commitment to the Space Station. The foreseeable robots will interface with current systems as well as operate as standalone, free-flying systems. A space robotic vehicle is similar to its underwater counterpart, with very few differences; their commonality includes missions and degrees of freedom. The issues of stability and communication are inherent in both systems and environments.
Choosing the best power and propulsion systems for mobile robotic land vehicle applications requires consideration of the available technologies. The electric power requirements for onboard electronic and auxiliary equipment include 110/220 volt, 60 Hz ac power as well as low-voltage dc power. Weight and power are saved by either direct dc power distribution or high-frequency (20 kHz) ac power distribution. Vehicle control functions are performed electronically, but steering, braking, and traction power may be distributed electrically, mechanically, or by fluid (hydraulic) means. Electric drive is practical, even for small vehicles, provided that advanced electric motors are used. Such electric motors have demonstrated power densities of 3.1 kilowatts per kilogram with devices in the 15 kilowatt range. Electric motors have lower torque but higher power density than hydraulic or mechanical transmission systems. Power density being comparable, electric drives were selected to best meet the other requirements for robotic vehicles. Two robotic vehicle propulsion system designs are described to illustrate the implementation of electric drive over a vehicle size range of 250-7500 kilograms.
SURVEYOR is a mobile tracked vehicle designed for operation in hazardous environments. This paper describes the design and implementation of a microprocessor-based remote control system developed for this vehicle. The main objective in designing the remote control system was to allow the vehicle to be controlled from a number of joysticks and switches mounted on a supervisory console. Both the supervisory console and the vehicle were to be located within an industrial environment, up to one quarter mile apart. Audio and stereoscopic video returned to the supervisory station would allow an operator to "teleposition" himself in the vehicle's environment. An emphasis was placed on finding a solution which was robust, low-power, and expandable, while still being relatively inexpensive to develop. Future expansions might include increased control information to be sent to and interpreted by the vehicle, increased digital data to be sent from the vehicle, and provisions for autonomous operation. The solution meets all of these objectives well. An 8-bit, single-board microcomputer with a 16-channel analog-to-digital converter, over 100 input/output latches, and a current draw of only 100 milliamps was selected as the heart of the control system onboard the vehicle. In addition to the single-board microcomputer, a custom-designed signal conditioning and control interface was required to house active filters, relays, and power transistors. Off-the-shelf variable-speed motor controllers and radio-frequency modems were used to complete the vehicle electronics package. Another 8-bit microcomputer in the supervisory console interprets the joystick and switch settings at the console, sends this information to the vehicle, and overlays the digital data received from the vehicle onto the video images received from the vehicle.
The development of the Odetics walking machine technology is presented, from the original concept aimed at feasibility demonstration through advanced designs with specific mission applications. The high power efficiency and high strength-to-weight ratio features of the original leg designs are presented along with the hierarchical control concepts. The evolutionary development of improved gait control for faster, smoother walking, and the demands imposed by uneven terrain and stair climbing, are discussed. Sensor integration for motion control and vision for teleoperation are covered, as is operator control station design. Specific walker design concepts to accomplish nuclear power plant maintenance and a Mars Rover mission are presented. The nuclear power plant design integrates a six-degree-of-freedom manipulator arm onto an improved design walker with a fiber-optic link to the operator control station. The Mars Rover mission concept is aimed at maximum packaging density, light weight, and high mobility on steep and soft terrain while minimizing power consumption.
A power electronics system for a mobile robot with ±12 volt batteries was developed, starting with a basic 3-wheel chassis with on-off drive motors, pneumatic tires, and a manually steered nosewheel. Power steering was provided by installing a suitable motor, gearbox, roller chain, and sprockets. A steering servo was realized by adding a differential power amplifier and a feedback potentiometer. Using an input pulse train of variable duty cycle, an independently controllable power module for each of the two drive-wheel motors was designed for proportional no-load speed response. Circuit details and test results are presented.
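The duty-cycle-to-speed relationship behind such proportional drive modules can be sketched as follows. The 12 volt battery voltage matches the system described above, but the motor speed constant is an illustrative assumption, not a figure from the paper:

```python
# Sketch of proportional no-load speed control via PWM duty cycle.
# V_BATTERY matches the +/-12 V system above; K_V is an assumed,
# illustrative motor constant (rad/s per volt of average drive voltage).

V_BATTERY = 12.0
K_V = 20.0

def no_load_speed(duty_cycle):
    """Average motor voltage scales with duty cycle; no-load speed
    scales with average voltage. Duty cycle is clamped to [0, 1]."""
    duty_cycle = max(0.0, min(1.0, duty_cycle))
    avg_voltage = duty_cycle * V_BATTERY
    return K_V * avg_voltage  # rad/s

# Each drive wheel gets its own power module, so unequal duty cycles
# produce unequal wheel speeds (differential steering assistance).
left = no_load_speed(0.75)
right = no_load_speed(0.50)
```

A separate module per drive wheel mirrors the paper's design, where each motor is controlled independently by its own pulse train.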
Imagine, if you will, that all human beings suddenly disappeared. Literally. Needless to say, some catastrophic events would occur: automobiles and airplanes would crash, machinery would grind to a halt, meals would burn, etc. Some things, however, would continue in their present state for a while. Automatic devices would operate on their own, and power supplies would continue to provide energy for lights and other devices. At some point, maybe years down the road, all man-made devices and systems would halt. Now, one may argue that some low-power devices, such as a microchip in a hand calculator somewhere or a space probe, may continue for many years, but in an engineering sense there will be a point when man-made activity is essentially zero. Now, imagine that instead of performing this "thought experiment" at the present time, it were to take place 100 years in the future. This stretches the imagination a bit, because now we must conjure up a picture of two things: (1) the instant disappearance of humans, plus (2) what technology will be like one hundred years from now. It is likely that things would not grind to a halt as quickly as they would today. In fact, one might imagine that the degree of automation would be such that many things would continue unaltered in the slightest, and might possibly even be improved by the removal of humans. It is quite possible that mail would continue to be delivered, "read", and responded to; business meetings held, decisions made, actions taken; products manufactured, commercialized, sold, and brought in for repair; etc. Is this perhaps the subconscious goal of mankind? To build a "world" where humans are expendable?
In this paper, theoretical foundations of planning processes are outlined in a form applicable to the design and control of autonomous mobile robots. Planning/control is shown to be a unified recursive operation of decision making applied to a nested hierarchy of knowledge representation. The core of the theory is based upon methods developed in the areas of Post production systems, the theory of coding, and the team theory of decentralized stochastic control. A class of autonomous control systems for robots is defined, and a problem of information representation is addressed for this class. A phenomenon of nesting is analyzed, and the minimum ε-entropy rule is determined for arranging efficient design and control procedures for systems of intelligent control. A concept of a nested hierarchical knowledge-based controller is employed in this paper which enables minimum-time control using nested dynamic programming. An application of this concept is unfolded for a system of knowledge-based control of an autonomous mobile robot. Key words: Autonomous Control Systems, Decision Making, Production Systems, Decentralized Stochastic Control, Dynamic Programming, Hierarchical Control, Knowledge-Based Controllers, ε-Entropy, Planning, Navigation, Guidance, Prediction, Contingencies, Mobile Robots.
In this paper we describe the subsumption architecture, a design methodology for multiprocessor control systems. The architecture is based on a new decomposition of problems into task-achieving behaviors. This decomposition allows us to incrementally construct more competent robots by adding new behaviors. We demonstrate this by showing how two different control systems can be built on top of the same set of core behaviors. We have implemented both systems and have run them on a real mobile robot. The details and performance of each will be discussed.
Very few existing robotic vehicle systems today have the ability to plan a path and then execute it. Subsystems that solve parts of this problem do exist: automatic route planners can plan a path for a theoretical vehicle, and vehicle control systems such as Computer-Aided Remote Driving (CARD) can drive a vehicle with limited operator intervention. The combination of route planning with the vehicle control system is the concept of Navigation Fusion. Work is under way at the Jet Propulsion Laboratory to fuse the functions of a route planner and a planetary rover prototype which is piloted using CARD. In the CARD system, the operator chooses a path for the vehicle using wide-baseline stereo images returned from cameras on the vehicle. The robotic vehicle then executes that path under computer control without operator intervention. The route is planned on a Symbolics LISP machine and is then displayed to the operator as a desired path. The concept of Navigation Fusion involves displaying this desired path in perspective on the operator's 3-dimensional display. It requires knowledge of the relationship between the images returned from the vehicle and any information about the movement area (or map). The operator will modify this path as necessary to avoid obstacles. The vehicle will then be instructed to execute the path. The fusion of global route planning and local path execution allows more intelligent planning and execution of robotic vehicle movements.
Reliability and robustness are of primary importance in a mobile robot control system. Widely varying operating environments place such stringent demands on the navigation system that no single controller, no matter how sophisticated, is adequate for sustained operation. Consequently, sustained operation in the face of changing environmental conditions may be possible only by running several different controllers concurrently. We propose an arbitration mechanism which accepts inputs from multiple controllers and then generates a single optimal result. This arbiter draws upon a knowledge base which defines the limits within which each controller's algorithm will give adequate performance.
An approach is presented that combines dynamical models of 3D motion with geometric models of the scene and the laws of perspective projection to estimate all motion parameters necessary to control a mobile robot vehicle. The approach is demonstrated by autonomous control of a jet-propelled air-cushion vehicle navigating through a technical environment with three degrees of motion freedom and performing a rendezvous maneuver with a passive partner. Features of the partner and of other objects in the scene, the 3D shapes of which are known, are searched for and then tracked by the processors of a multi-microprocessor system. A sequential Kalman filter formulation is used to detect and cope with variable feature visibility due to occlusion and motion while determining the complete relative motion state without inversion of the projection equations. A scheme is developed for always selecting for tracking those features which yield the best state estimate, the quality of which is demonstrated by physical docking with a static partner. The system operates at a 0.13 second cycle time, half of which is spent on I/O operations. Experimental results are given.
This report describes range data based obstacle avoidance techniques developed for use on an autonomous road-following robot vehicle. The purpose of these techniques is to detect and locate obstacles present in a road environment for navigation of a robot vehicle equipped with an active laser-based range sensor. Techniques are presented for obstacle detection, obstacle location, and coordinate transformations needed in the construction of Scene Models (symbolic structures representing the 3-D obstacle boundaries used by the vehicle's Navigator for path planning). These techniques have been successfully tested on an outdoor robotic vehicle, the Autonomous Land Vehicle (ALV), at speeds up to 3.5 km/hour.
An improved obstacle avoidance algorithm for the reflexive pilot in the FMC autonomous land vehicle is presented. To perform the obstacle avoidance operation, the special structure of the local map derived from sonic sensor information is used to find a collision-free path. This is done by first reducing the vehicle to a line model to obtain an equivalent problem with enlarged obstacles, and then dealing with the free space directly.
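The enlargement idea can be illustrated with a simplified point-model variant: growing each obstacle by the vehicle's half-width lets the planner test a bare path segment against the enlarged obstacles. Circular obstacles and the helper geometry here are illustrative assumptions, not the paper's local-map structure:

```python
# Sketch: configuration-space obstacle enlargement. Obstacles are
# assumed circular (center, radius); the vehicle body is absorbed into
# the obstacles by inflating each radius by the vehicle half-width.
import math

def seg_point_dist(p, a, b):
    """Distance from point p to the segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def path_is_free(start, goal, obstacles, half_width):
    """A straight path is collision-free if it clears every obstacle
    enlarged by the vehicle half-width."""
    return all(seg_point_dist(c, start, goal) > r + half_width
               for c, r in obstacles)
```

The same test applied per edge of a polyline path gives a complete collision check for a candidate route.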
Recently, an approach to the problem of robot navigation in an unexplored terrain was developed by Iyengar et al. which involves concepts of learning. The method proposed involves representing the terrain as a spatial graph which is updated as the robot undertakes a number of goal-directed traversals. The current paper focuses on the implementation of the spatial graph data structure and the parallel algorithms needed to maintain it. A modified adjacency list data structure is proposed for representing the spatial graph of the terrain. The model of computation used is the multiple-instruction-stream, multiple-data-stream shared-memory model (MIMD-SMM). The parallel algorithms required to implement the navigation technique (described in the previously mentioned paper) using the modified adjacency list data structure are presented, and the time and space complexities of the various algorithms are discussed. The algorithms presented provide the basis for an efficient navigation system for an autonomous robot working in an unexplored terrain.
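The paper's modified adjacency list and its parallel MIMD-SMM algorithms are not reproduced here; the following is only a minimal, sequential sketch of the underlying idea of a spatial graph stored as adjacency lists with vertex coordinates, updated as traversals discover connectivity:

```python
# Minimal sequential sketch of a spatial graph as an adjacency list.
# Vertex names, the coordinate store, and the (neighbor, cost) edge
# format are illustrative assumptions, not the paper's structure.

class SpatialGraph:
    def __init__(self):
        self.coords = {}   # vertex -> (x, y) terrain coordinates
        self.adj = {}      # vertex -> list of (neighbor, cost)

    def add_vertex(self, v, xy):
        self.coords[v] = xy
        self.adj.setdefault(v, [])

    def add_edge(self, u, v, cost):
        # Undirected terrain connectivity: record in both lists.
        self.adj[u].append((v, cost))
        self.adj[v].append((u, cost))

    def neighbors(self, v):
        return self.adj.get(v, [])
```

Each goal-directed traversal would call `add_vertex`/`add_edge` for newly observed free-space connections, incrementally refining the terrain model.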
The primary vision task in road-following for a mobile robot is to provide a description of the road environment, including possible obstacles on the road. Techniques are presented for road segmentation and obstacle detection based on color video data. Using constraints on road characteristics in the image space and in 3D color space, the road is extracted and represented by its edges. Assuming vehicle movement, obstacles are detected at a distance and an obstacle avoidance mode is entered.
The Autonomous Land Vehicle (ALV) must be able to navigate over a wide variety of terrain, using its visual sensors to recognize objects and correct its path. It can rely on three important resources for information about its environment: a time-of-flight range sensor, an inertial guidance system that provides accurate dead-reckoning information, and a knowledge base that specifies the locations, shapes, and appearances of objects in the world. This paper addresses the problems in integrating these resources into a closed-loop control system that can adaptively correct the ALV's route through the environment.
This paper describes a real-time road following and road junction detection vision system for autonomous vehicles. Vision-guided road following requires extracting road boundaries from images in real-time to guide the navigation of autonomous vehicles on the roadway. We use a histogram-based pixel classification algorithm to classify road and non-road regions in the image. The most likely road region is selected, and a polygonal representation of the detected road region boundary is used as the input to a geometric reasoning module that performs model-based reasoning to accurately identify consistent road segments and road junctions. In this module, local geometric supports for each road edge segment are collected and recorded, and a global consistency check is performed to obtain a consistent interpretation of the raw data. Limited cases of incorrect image segmentation due to shadows or unusual road conditions can be detected and corrected based on the road model. Similarly, road junctions can be detected using the same principle. The real-time road following vision system has been implemented on a high-speed image processor connected to a host computer. We have tested our road following vision system and vehicle control system on a gravel road.
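A minimal version of histogram-based road/non-road pixel classification might look like the sketch below. The bin count, Laplace smoothing, and the single-channel intensity feature are assumptions for illustration; the actual system classifies in color space:

```python
# Sketch: histogram-based pixel classification. Histograms are built
# from labeled training pixels (values 0..255), then each test pixel is
# labeled by comparing the two class likelihoods for its bin.

def build_histogram(samples, bins=32):
    """Normalized histogram with Laplace smoothing (assumed choices)."""
    counts = [0] * bins
    for s in samples:
        counts[min(s * bins // 256, bins - 1)] += 1
    total = len(samples) + bins
    return [(c + 1) / total for c in counts]

def classify_pixel(value, road_hist, nonroad_hist, bins=32):
    """Label a pixel road (True) or non-road by likelihood comparison."""
    i = min(value * bins // 256, bins - 1)
    return road_hist[i] > nonroad_hist[i]
```

Running `classify_pixel` over a whole image yields the binary road mask from which the most likely road region and its polygonal boundary are then extracted.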
An efficient method for guiding high-speed land vehicles along roadways by computer vision has been developed and demonstrated with image sequence processing hardware in a real-time simulation loop. The approach is tailored to a well-structured highway environment with good lane markings. Contour correlation and high-order world models are the basic elements of the method, realised in a special multi-microprocessor (onboard) computer system. Perspective projection and dynamical models (Kalman filter) are used in an integrated approach for the design of the visual feedback control system. By determining road curvature explicitly from the visual input, previously encountered steady-state errors in curves are eliminated. The performance of the system will be demonstrated by a video film. The operation of the image sequence processing system has been tested on a typical Autobahn scene at velocities up to 100 km/h.
This paper describes a road following system for an Autonomous Land Vehicle. Range data is used as the sensor input. The system is divided into two parts: low-level data-driven analysis followed by high-level model-directed search. The sequence of steps performed in order to detect 3-D road boundaries is as follows: range data is first converted from spherical into Cartesian coordinates. A quadric (or planar) surface is then fitted to the neighborhood of each range pixel using a least-squares method. Based on this fit, minimum and maximum principal surface curvatures are computed at each point to detect edges. Next, 3-D local line segments are extracted using Hough transform techniques. Finally, model-directed reasoning is applied to detect the road boundaries.
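The first step of the pipeline, converting range pixels from spherical to Cartesian coordinates, can be sketched as follows. The axis conventions are assumed, since the abstract does not specify them:

```python
# Sketch: spherical-to-Cartesian conversion of a range pixel.
# Convention assumed here: x forward, y left, z up; azimuth measured
# in the x-y plane, elevation measured up from that plane (radians).
import math

def spherical_to_cartesian(r, azimuth, elevation):
    """Convert one range measurement into sensor-frame coordinates."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z
```

Applying this per pixel produces the 3-D point neighborhood to which the quadric or planar surface is subsequently fitted.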
The use of range imagery and new processing techniques for multi-class invariant object recognition is presented here. Techniques investigated involve different hierarchical processors, with emphasis given to processors using correlation filters and histogram processing. The use of range imagery is shown to significantly improve classification in many cases, especially with severe distortions in object orientation. Histogram operations are appropriate for providing aspect angle estimates for aircraft objects. Initial experimental results are shown for each technique and with several databases.
For a patrol robot, it is important to select optimal viewpoints and arrange corresponding navigation routes which are as short as possible. In general, this problem is NP-hard. This paper describes two heuristic approaches for planning viewpoints in two dimensions. The first, an O(N² log N) approach, is based on the static partition of a weakly simple polygon into star-shaped polygons. The second, also O(N² log N), is characterized by the sequential selection of viewpoints; this results in covering the edges of a weakly simple polygon with star-shaped polygons and may require fewer viewpoints than the first approach. For a patrol robot, it is necessary to reorder the sequence of the viewpoints and arrange the navigation route accordingly. By decomposing the free space into simpler components and connecting viewpoints with pass-ways, the problem of arranging the navigation route can be reduced to the well-known NP-hard Traveling Salesman problem. Thus, it can be solved by one of the known approximation algorithms, such as Christofides' heuristic, in O(N³) time.
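Once viewpoints are chosen, ordering them is the Traveling Salesman step. Christofides' algorithm itself (matching plus Euler tour) is more involved than fits here; as a stand-in, a simple nearest-neighbor ordering illustrates the idea of approximating the tour, though without the 1.5-approximation guarantee:

```python
# Sketch: greedy nearest-neighbor tour over viewpoint coordinates.
# This is an illustrative substitute for Christofides' heuristic,
# not the algorithm the paper invokes.
import math

def route_length(order, pts):
    """Closed-tour length for a visiting order over points."""
    legs = sum(math.dist(pts[order[i]], pts[order[i + 1]])
               for i in range(len(order) - 1))
    return legs + math.dist(pts[order[-1]], pts[order[0]])

def nearest_neighbor_tour(pts):
    """Start at viewpoint 0; repeatedly visit the closest unvisited one."""
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

For small viewpoint counts the greedy tour is often adequate; Christofides' heuristic would replace `nearest_neighbor_tour` when the worst-case bound matters.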
This paper presents a recent development of the dynamic omnidirectional vision navigation technique, which uses a fish-eye lens to map an entire scene of the surroundings. A beacon recognition approach based on the omni-vision characteristics is described. The navigation software integrating beacon recognition with robot positioning is provided for practical application. The results of preliminary experiments are presented. The development of the method will lead to a practical navigation module for mobile robots.
One of the most significant issues hindering the development of autonomous mobile robotic systems today is the lack of appropriate sensors for collecting high-resolution geometric information describing the robot's surroundings. Scanning laser rangefinders have found use in this regard, primarily in outdoor scenarios, and numerous configurations are being developed by several companies. The physical size, power consumption, and high initial cost of these prototype units make them somewhat impractical for near-term utilization on the smaller platforms typical of indoor applications. Simpler and much less expensive ultrasonic ranging systems have been employed with some limited success, principally for purposes of collision avoidance. Such systems suffer markedly, however, from numerous problems associated with the temperature dependence of the speed of sound in air, extremely slow wave propagation velocities, specular reflection of the emitted wavefront, and especially poor angular resolution due to beam divergence. This article discusses the design and testing of a low-cost, programmable near-infrared proximity detector intended specifically to address some of those problems when used for navigational purposes in a complementary fashion with acoustical ranging systems. The experimental prototype was installed on a mobile robot, and both sonar and near-infrared data were captured on various scans as the robot was moved through a room full of obstacles. By combining the best features from each sensor, a representation can be built that is more accurate than if either sensor were used alone. A second representation, based on the curvature primal sketch, can then be obtained from this refined map. This allows for matching between views, and also for creating an interface to a higher level program, such as a path planner.
Mobile robots may use naturally occurring or man-made landmarks to navigate over different types of terrain. Many navigational techniques require the simultaneous detection of two or more landmarks in order to update deduced reckoning position estimates, requiring a high density of identifiable reference points. Analytical methods and formulae are presented which permit navigation using a single landmark, given sensor-derived azimuth angle, elevation angle, or range information. Experimental results using an infrared beacon are discussed, together with predictions of the sensor resolution required for a specified positional error range. The results indicate that the need for identifiable landmarks may be greatly reduced, while the use of local reference frames could facilitate symbolic processing of path data.
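With a range measurement, single-landmark position recovery reduces to simple trigonometry. This sketch assumes the absolute (world-frame) bearing to the landmark is available, e.g. from the sensed azimuth plus a deduced-reckoning heading; the paper's formulae for the azimuth/elevation-only cases are not reproduced:

```python
# Sketch: recover robot position from one landmark of known map
# coordinates, given the measured range and the world-frame bearing
# from robot to landmark (heading assumed known from dead reckoning).
import math

def robot_position(landmark, bearing, distance):
    """Walk back along the bearing from the landmark to the robot."""
    lx, ly = landmark
    return (lx - distance * math.cos(bearing),
            ly - distance * math.sin(bearing))
```

The predicted positional error then follows directly from the sensor's range and angular resolution, which is the trade-off the paper quantifies.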
The VISIONS research environment at the University of Massachusetts provides an integrated system for the interpretation of visual data. To provide a testbed for many of the algorithms developed within this framework, a mobile robot has been acquired. A multi-level representation and the accompanying architecture used to support multi-sensor navigation (predominantly visual) are described. A hybrid vertex-graph free-space representation (meadow map), based upon the decomposition of free space into convex regions and suitable for use in both indoor and limited outdoor navigation, is discussed. Of particular interest is the capability to handle multiple terrain types. A hierarchical path planner that utilizes the data available in the above representational scheme is described. An overview of the UMass mobile robot architecture (AuRA) is presented.
The vision group at the University of Oulu is investigating problems in 3-D world modeling and visual indoor navigation. This paper describes a test vehicle developed for the research. The three wheeled vehicle is propelled by a pulse-width controlled DC motor and steered by a stepping motor. The vehicle is equipped with primary and secondary sensors. Visual input consisting of a three-camera stereo system is used as the primary sensor for navigation and secondary sensors provide information about the internal state of the vehicle.
The knowledge-based control of autonomous vehicles allows efficient hierarchical structures that utilize linguistic sensory data at various levels of resolution and exactness. This is mainly due to the fact that the control is based on a collection of rules, rather than an analytical controller. Each rule in the controller prescribes the control for a specific situation. In this paper an experimental method of control rule derivation is described. A two stage identification process is employed. First, the structure of the mobile robot is determined and the state variables are defined. Then the relations that are essential for the derivation of optimal control rules are found by experimentation. A minimum-time control algorithm for mobile robot position control is also included in the paper.
Recent work on autonomous navigation at Carnegie Mellon spans the range from hardware improvements to computational speed to new perception algorithms to systems issues. We have a new vehicle, the Navlab, that has room for onboard researchers and computers, and that carries a full suite of sensors. We have ported several of our algorithms to the Warp, an experimental supercomputer capable of performing 100 million floating point operations per second. Our perception now uses adaptive color classification for road tracking, and scanning laser rangefinder data for obstacle detection. We have completed the first system that uses the CMU Blackboard for scheduling, geometric transformations, and inter- and intra-machine communications.
One crucial component of a control system for autonomous vehicle guidance is real-time image analysis. This part of the system is burdened by the maximum flow of information. To meet the high demands on computation power, a combination of knowledge-based scene analysis and special hardware has been developed. The use of knowledge-based image analysis supports real-time processing not by schematically evaluating all parts of the image, but by evaluating only those which contain relevant information. This is due to the fact that in many practical problems the relevant information is very unevenly distributed over the image. Prior knowledge of the problem or the aim of the mission, together with expectations or predictions about the scene, substantially reduces the amount of information to be processed. The operations during such an analysis may be divided into two classes: simple processes (e.g., filters, correlation, contour processing, and simple search strategies) and complex search and control strategies. This classification supplied the concept for a special hardware design. The complex tasks are performed by a universal 80286 processor, while the remaining tasks are executed by a special coprocessor (including image memory). This combination permits the use of filter masks with arbitrary geometry together with a powerful search strategy. A number of these basic modules may be configured into a multiprocessor system. The universal processor is programmed in a high-level language. To support the coprocessor, a set of software tools has been built. These tools permit interactive graphical manipulation of filter masks, generation of simple search strategies, and non-real-time simulation. The actual data structures that control the function of the coprocessor are also generated by this software package. The system is used within our autonomous vehicle project. One set of algorithms tracks the border lines of the road even if they are broken or obscured by dirt.
Shadows of bridges crossing the road are also tolerated. Another algorithm tracks prominent points on other objects (e.g., vehicles) to collect possible candidate obstacles during the real-time run. A complete image analysis for the relevant features is performed in one video cycle (16.6 ms).
In this paper, we present a method for the absolute localization of a mobile robot moving through a flat environment cluttered with cylindrical obstacles of arbitrary cross-section. The robot holds in memory a list of point coordinates used as a map of the environment and is provided with a laser rangefinder giving range distances to the obstacles in several known directions. The obstacles themselves are used as position references, which eliminates the need for landmarks. First, we explain the method, which resembles Hough methods, deducing the absolute position and attitude of the robot by matching the range measurements to the environment map. The quality of the match between the two lists of points is evaluated through an accumulation test. The speed and precision of the method are studied in terms of the number of range measurements. Finally, we show that the use of relative localization, estimated by odometry, speeds up the algorithm considerably while simultaneously reducing the number of range measurements; the localization can then be computed without stopping the robot. The absolute localization, computed periodically, allows resetting of the position estimated by odometry, and together the two systems form a means of localization allowing the robot to be guided along any trajectory.
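The accumulation test can be illustrated with a brute-force pose search: each candidate pose transforms the sensed points into the map frame and is scored by how many land near a map point, and the highest-scoring pose wins. The candidate set and tolerance are assumptions; the paper's Hough-like method is considerably more refined:

```python
# Sketch: accumulation-based matching of range-derived points to a
# point map. Candidate poses, tolerance, and the exhaustive search
# are illustrative simplifications.
import math

def match_score(points, map_pts, tol=0.3):
    """Count measured points that land near some map point."""
    return sum(any(math.dist(p, m) <= tol for m in map_pts)
               for p in points)

def localize(sensed_xy, map_pts, candidates):
    """For each candidate pose (x, y, theta), transform sensor-frame
    points into the map frame and keep the best-scoring pose."""
    best, best_score = None, -1
    for (x, y, th) in candidates:
        c, s = math.cos(th), math.sin(th)
        world = [(x + c * px - s * py, y + s * px + c * py)
                 for px, py in sensed_xy]
        score = match_score(world, map_pts)
        if score > best_score:
            best, best_score = (x, y, th), score
    return best
```

Seeding `candidates` from the odometry estimate, as the paper suggests, shrinks the search to a small neighborhood and is what makes on-the-move localization feasible.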
The primary goal of this study is to analyze the efficiency of optimal control policies in an AGV (Automatically Guided Vehicle) setting. As AGVs assume more of the workload on a state-of-the-art factory floor, a variety of non-obstructive means of communicating with them will be needed to issue complex guidance commands. One such means, an optical link, is considered in this work. A line array of lights is planned to deliver coded control commands to the AGV during its operation on the shop floor. The objective is to position the robotic vehicle properly in order to recover the coded information from the light sources. The direct and inverse kinematics problems are solved first. Because the control is redundant, a number of optimality criteria are introduced to express preferences. A comparative study to classify the efficiency of each of these criteria is presented. Multiple parameters governing the motion are chosen toward an optimum trajectory. The way in which the optimum path changes as the weighting factors of the objective function vary is another point of study covered in this report.
In this note we describe an architecture that integrates low-level processing of MMW radar data, intermediate-level processing based on contextual heuristics, and high-level processing driven by domain knowledge. The overall system objectives are to identify road-like structures in the illuminated scene and to match them to a world knowledge model to complete the estimated view of local terrain. The proposed system has applications in autonomous vehicle navigation and emergency landing situations.
This paper addresses the critical issues of sensor data acquisition and analysis, using multiple Obstacle Avoidance (OA) sensors, for Autonomous Underwater Vehicles (AUVs). An AUV research and development testbed is currently being engineered at Martin Marietta Baltimore Aerospace, and the pertinent research on OA sensing applicable to such a testbed is presented. The complexities of relevant and conventional navigation systems for undersea vehicles are also discussed. Using a multitude of sensors and the Zonal-Spot Environmental Analysis (ZSEA) sensing technique, spatial scenarios are characterized by amalgamating the sensor information into a description of the external world, which is in turn preserved in a world-model database. The ZSEA sensing strategy performs sensor-level processing to recognize obstacles with quantified levels of certainty, based on a priori semantic data, and thereby provides safer and more efficient path-planning capabilities to the AUV. Furthermore, the ZSEA strategy mitigates sensory deprivation and paves the way for incremental enhancement of the control process at the higher levels without modification of the lower servocontrol levels.
This paper presents a system which moves a robot through Cartesian space in the presence of objects that are also moving, along predictable trajectories. The method calculates a piecewise-linear path with piecewise-constant velocities. This is done by assuming a straight-line path from start to goal, and recursively planning smaller subpaths if collisions are detected. Two methods for generating subgoals are presented and compared: the first is a blind method employing little information about the locations of the objects at collision time; the second uses Khatib's artificial potential functions to locate the subgoal. The algorithm is currently implemented in two dimensions and represents an arbitrary number of objects by their bounding circles; the robot is represented by a point. The algorithm guarantees that all movement of the robot takes place in a specified workspace. In addition, the robot is required to stay on a specified time schedule, within a certain tolerance. Obstacle trajectories are constrained to be representable as quadratic parametric equations in time. The paper points out some natural extensions to this work, including extension to three dimensions, fine-motion planning with complex shapes, and use of the system with objects following unknown trajectories.
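The recursive subdivision scheme this abstract outlines can be sketched in a few lines. The sketch below is a simplified stand-in, not the paper's system: obstacles are static bounding circles, collision checking is done by sampling, and the subgoal is placed by a plain perpendicular offset rather than Khatib's potential functions. All names and constants are illustrative assumptions.

```python
import math

def first_collision(a, b, obstacles, steps=100):
    """Sample the segment a-b; return (collision point, circle) or None."""
    for i in range(steps + 1):
        t = i / steps
        x, y = a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])
        for (cx, cy, r) in obstacles:
            if math.hypot(x - cx, y - cy) < r:
                return (x, y), (cx, cy, r)
    return None

def plan(start, goal, obstacles, depth=0, max_depth=8):
    """Recursive straight-line planner (simplified, static-obstacle sketch):
    try the direct segment; if it enters a bounding circle, place a subgoal
    beside that circle and plan the two subsegments recursively."""
    hit = first_collision(start, goal, obstacles)
    if hit is None or depth >= max_depth:
        return [start, goal]
    _, (cx, cy, r) = hit
    # offset the subgoal perpendicular to the segment, clear of the circle
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    L = math.hypot(dx, dy) or 1e-9
    sub = (cx - dy / L * r * 1.5, cy + dx / L * r * 1.5)
    return plan(start, sub, obstacles, depth + 1)[:-1] + \
           plan(sub, goal, obstacles, depth + 1)

path = plan((0.0, 0.0), (10.0, 0.0), [(5.0, 0.0, 1.0)])
```

For a single circle straddling the direct route, the planner returns a three-waypoint detour; the blind and potential-function subgoal rules the abstract compares would replace the perpendicular-offset line.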
The objectives of this work are to investigate the security requirements and strategies for intelligent mobile robots, to perform tests that determine the strengths and weaknesses of testbed platforms, and to develop counter-strategies that improve the security of those platforms. This research also discusses the implications of these results for large-scale ongoing efforts in mobile robotics. Potential security threats range from accidental intrusion into the device's hardware or software by untrained personnel to deliberate "spoofing" of sensor suites by unauthorized users or enemies.
Exploration of the planets has been a long-standing mission of the Jet Propulsion Laboratory (JPL). With the Presidentially appointed National Commission on Space proposing the eventual colonization and exploitation of the planets as national policy, the idea of using a planetary rover to explore Mars is gaining interest. Plans are presented here for a potential Mars sample-return mission. Major emphasis is placed on the application of work currently under way at JPL in Computer Aided Remote Driving and Navigation Fusion technologies.
This paper describes the design of an autonomous vehicle and the lessons learned during the implementation of that complex robot. The major problems encountered for which solutions were found include sensor-processing bandwidth limitations, coordination of the interactions between major subsystems, sensor data fusion, and system knowledge representation. Problems that remain unresolved include managing system complexity, the lack of powerful system monitoring and debugging tools, the exploratory implementation of a complex system, and safety and testing issues. Many of these problems arose from working with underdeveloped and continuously evolving technology and will probably be resolved as the technological resources mature and stabilize. Others, unfortunately, will continue to plague developers throughout the evolution of autonomous system technology.
A domestic mobile robot, a lawn mower with an automatic operation mode, has been built at the Center for Robotics Research, University of Cincinnati. The robot lawn mower completes its work automatically using a region-filling operation, a new kind of path planning for mobile robots. Strategies for region-filling path planning have been developed for partly known or unknown environments. An advanced omnidirectional navigation system and a multisensor-based control system are also used in automatic operation. Research on the robot lawn mower, and especially on region-filling path planning, is significant for industrial and agricultural applications.
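To make the notion of region filling concrete, here is a minimal back-and-forth (boustrophedon) sweep over a grid map. This is only one simple instance of region-filling coverage, not the strategies developed in the paper, and the grid representation is an assumption made for illustration.

```python
def region_fill(grid):
    """Boustrophedon coverage sketch: visit the free cells of each grid row,
    alternating the sweep direction on successive rows.
    grid[r][c] == 0 marks mowable ground, 1 an obstacle to skip."""
    path = []
    for r, row in enumerate(grid):
        cols = range(len(row)) if r % 2 == 0 else range(len(row) - 1, -1, -1)
        path.extend((r, c) for c in cols if row[c] == 0)
    return path

field = [[0, 0, 0],
         [0, 1, 0],
         [0, 0, 0]]
route = region_fill(field)
```

In practice a skipped obstacle cell splits the sweep and requires a detour plan to reconnect the two sides, which is where the partly-known and unknown-environment strategies the abstract mentions come in.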
Until the late 1950s, graphical methods were used almost exclusively in the kinematic analysis and synthesis of mechanisms. With the increasing availability of high-speed digital computers came a major reawakening of interest in a field considered by many to be mature and dormant. Today, the emergence of advanced robotic systems has heightened interest in applications of kinematic analysis and synthesis. This paper discusses the application of kinematics to the path-planning model of a three-wheeled, self-steering mobile robot. The problem is formulated through a kinematic analysis of the path curvature and the synthesis of a set of equivalent linkages. The equivalent linkages provide a direct relationship between the path and the robot's steering control mechanism, expressed through the kinematic parameters of displacement, velocity, and acceleration. The mathematical model is presented in a generalized format with the robot represented as a rigid body. Examples are shown which solve the problem of moving the robot through a crowded environment and arriving at its destination in a predetermined orientation.