We present a parallel processing approach to path planning in unknown terrains which combines map-based and sensor-based techniques into a real-time capable navigation system. The method is based on massively parallel computations in a grid of simple processing elements denoted as cells. In the course of a relaxation process, a potential distribution is created in the grid which exhibits a monotonic slope from a start cell to the cell corresponding to the robot's goal position. A shortest path is determined by means of a gradient descent criterion which settles on the steepest descent in the potential distribution. Like high-level path planning algorithms, our approach is capable of planning shortest paths through an arbitrarily cluttered large-scale terrain on the basis of its current internal map. Sequentially implemented, its complexity is on the order of efficient classical path planning algorithms. Unlike these algorithms, however, the method is also highly responsive to new obstacles encountered in the terrain. By continuing the planning process during the robot's locomotion, information about previously unknown obstacles immediately affects further path planning without a need to interrupt the ongoing planning process. New obstacles cause distortions of the potential distribution which let the robot find proper detours. By ensuring a monotonic slope in the overall distribution, we avoid local-minimum effects which may trap a robot in the proximity of an obstacle configuration before it has reached its goal. Until the recent past, research on path planning in the presence of obstacles could be assigned to two major categories: map-based high-level planning approaches and sensor-based low-level control approaches. In work such as [12], path planning is treated as a high-level planning task.
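A minimal sequential sketch of the relaxation and gradient-descent scheme described above (an illustrative Python rendering of the cellular idea, not the paper's massively parallel implementation; all function names and the 4-connected unit-cost grid are our assumptions):

```python
from collections import deque

def relax_potential(grid, goal):
    """Wavefront relaxation: each free cell receives the hop count to the goal.

    grid: 2-D list, 0 = free, 1 = obstacle.  The resulting potential has a
    monotonic slope toward the goal, so steepest descent cannot be trapped
    in a local minimum.
    """
    rows, cols = len(grid), len(grid[0])
    pot = [[float("inf")] * cols for _ in range(rows)]
    gr, gc = goal
    pot[gr][gc] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0
                    and pot[nr][nc] > pot[r][c] + 1):
                pot[nr][nc] = pot[r][c] + 1
                queue.append((nr, nc))
    return pot

def descend(pot, start):
    """Follow the steepest descent in the potential from start to the goal.

    Assumes the start cell is connected to the goal (finite potential).
    """
    path = [start]
    r, c = start
    while pot[r][c] > 0:
        neighbors = [(r + dr, c + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < len(pot) and 0 <= c + dc < len(pot[0])]
        r, c = min(neighbors, key=lambda rc: pot[rc[0]][rc[1]])
        path.append((r, c))
    return path
```

Because the potential is a wavefront distance from the goal, every reachable free cell has a neighbor with strictly lower potential; a newly sensed obstacle only requires re-relaxing the affected cells, which is what lets planning continue during locomotion.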
Assuming that an (accurate) precompiled map of the terrain is available, high-level path planners provide paths which guarantee collision-free locomotion through an arbitrary ensemble of known obstacles. An essential advantage of high-level path planning is the property of completeness: path planning does not terminate until a satisfactory path is found, provided one exists. To base a navigation system which is supposed to guide a robot through an unknown terrain solely on map-based high-level planning is not advisable, however, for obvious reasons. By referring to a precompiled map, such a system would have severe problems coping with situations where the real world has changed and deviates from the robot's map. This goes back to a basic architectural principle which clearly separates the planning task from the execution of a plan and provides only limited feedback from execution to planning. The interaction with the real world is the concern of the low-level control part of the system, whose task is limited to the execution of elementary operations guiding the robot along the precisely specified path. The performance of the overall navigation system is bound by
Autonomous navigation capabilities need to be developed if mobile robots are to be used effectively in complex environments. This paper summarizes the further development and implementation of a system for model-based autonomous navigation of mobile robots within complex structures. The developed system architecture addresses the complete navigation problem, from high-level path planning using facility data to lower-level autonomous movement through uncertain local environments. A 3D laser imager is used as the primary sensor; it returns real-time detailed range and reflectance images. The system was developed for the application of navigating a mobile teleoperated robot to a remote work site in a nuclear power plant. An overhead gantry and physical models of nuclear power plant structures are used as a testbed for 3D navigation algorithms. The navigation architecture, algorithms, and test results are presented.
A technique is developed whereby a mobile robot equipped with sonar sensors autonomously explores a hallway environment and, during exploration, dynamically builds two types of maps: a graph of places defined by distinctive sonar events, and a grid map from dead-reckoning data that is accurate in the neighborhood of a place. With both maps available, the robot can quickly plan a path between arbitrary locations and then define a sequence of behaviors that will move the robot along the selected path. Robust performance is achieved by dividing the computational processes into two parallel operations. Time-critical low-level behaviors, like driving and steering in the exploratory mode, are controlled by an onboard computer that uses sonar data as input to simple subsumption-based algorithms. Higher-level, more computationally intense, and less time-critical activities, like place designation, map making, display generation, and path planning, are performed in parallel on a remote computer that fetches sonar data and issues high-level commands via a radio link. An approach to integrating this work with efforts in planning and navigation to form a larger activity in intelligent vehicle research is discussed in a companion paper.
The obstacles are treated as stationary and provide the input to a path planning algorithm. It is assumed that the vehicle is equipped with a TV camera which scans the navigational space and provides information about the obstacles. The sensing system samples the observation space at predetermined time intervals. Due to the relative motion of the camera system with respect to the environment, a two-dimensional image flow is created on the image plane. The appearance of an obstacle can be recovered by means of an optical-flow analysis of the regions corresponding to surfaces having different depths.
An algorithm for modifying a stored nominal robot trajectory path is presented. This method uses data from an arbitrary collection of "one-dimensional" sensors; such a sensor returns a scalar position along a fixed vector in space. Examples of such sensors are fixed or robot-mounted proximity probes, linear contact probes, robotic touch sensing, and CCD cameras in conjunction with structured light. The method is not suitable for real-time guidance; all the data must be provided at once. The method reassigns the locations of the taught points of the path; the spatial resolution of the taught points and the sensing locations must be sufficiently fine to capture any expected shape variation. No physical model of the part or process is required, allowing simple and rapid application. The action of this algorithm is quite robust and allows for noise reduction through averaging of all sensor data. Two heuristic assumptions guide the use of this method: taught points are more affected by nearby sensors than by those further away, and taught points are moved along the general direction of the sensor vectors. Several parameters allow adjustment of the adaptive action. The algorithm uses a closed-form calculation.
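The two heuristics can be illustrated with a Gaussian-weighted displacement rule (a hypothetical sketch, assuming unit sensor direction vectors; the paper's actual closed-form calculation and parameters may differ):

```python
import math

def adjust_taught_points(points, sensors, sigma=1.0):
    """Shift taught path points toward sensed shape deviations.

    points:  list of (x, y) taught positions.
    sensors: list of (x, y, ux, uy, offset) -- a sensing location, the unit
             direction of its measurement vector, and the scalar deviation
             it reported.
    sigma:   tuning parameter controlling the spatial extent of influence.
    """
    adjusted = []
    for px, py in points:
        dx_total = dy_total = 0.0
        for sx, sy, ux, uy, offset in sensors:
            # Gaussian weight: nearby sensors dominate (first heuristic).
            w = math.exp(-((px - sx) ** 2 + (py - sy) ** 2) / (2 * sigma ** 2))
            # Move along the sensor's measurement direction (second heuristic).
            dx_total += w * offset * ux
            dy_total += w * offset * uy
        adjusted.append((px + dx_total, py + dy_total))
    return adjusted
```

With this weighting, a taught point close to a sensor absorbs nearly the full reported deviation, while distant taught points are left essentially untouched.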
This paper introduces HIMM (histogramic in-motion mapping), a new method for real-time map building with a mobile robot in motion. HIMM represents data in a two-dimensional array (called a histogram grid) that is updated through rapid, continuous sampling of the onboard range sensors during motion. Rapid in-motion sampling results in a statistical map representation that is well suited to modeling inaccurate and noisy range-sensor data. HIMM is an integral part of an obstacle avoidance algorithm and allows the robot to immediately use the mapped information in real-time obstacle avoidance. The benefits of this integrated approach are twofold: (1) quick, accurate mapping, and (2) safe navigation of the robot toward a given target. HIMM has been implemented and tested on a mobile robot. Its dual functionality was demonstrated through numerous tests in which maps of unknown obstacle courses were created while the robot simultaneously performed real-time obstacle avoidance maneuvers at speeds of up to 0.78 m/sec.
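A sketch of a HIMM-style histogram-grid update for a single range reading (illustrative only; the increment/decrement constants, the certainty range, and the line-stepping are our assumptions, not necessarily the published values):

```python
def himm_update(grid, robot, hit, inc=3, dec=1, cmax=15):
    """One histogram-grid update from a single range reading.

    Cells on the straight line from the robot to the sensed obstacle cell
    are decremented (the beam passed through them, so they are likely free);
    the cell at the range reading itself is incremented.  Certainty values
    are clamped to [0, cmax].  Rapid repeated updates during motion build a
    statistical map that tolerates noisy sonar data.
    """
    (r0, c0), (r1, c1) = robot, hit
    steps = max(abs(r1 - r0), abs(c1 - c0))
    for i in range(1, steps):          # interior cells of the beam: free
        r = r0 + round(i * (r1 - r0) / steps)
        c = c0 + round(i * (c1 - c0) / steps)
        grid[r][c] = max(0, grid[r][c] - dec)
    grid[r1][c1] = min(cmax, grid[r1][c1] + inc)
    return grid
```

Because each reading only touches the cells along one beam, the update is cheap enough to run at the sensor rate while the robot is moving, which is what couples the mapping directly to obstacle avoidance.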
In this paper we describe a method of path planning that integrates terrain classification (by means of fractals), the certainty-grid method of spatial representation, Kehtarnavaz-Griswold collision zones, Dubois-Prade fuzzy temporal and spatial knowledge, and non-point-sized qualitative navigational planning. An initially planned ("end-to-end") path is piecewise modified to accommodate known and inferred moving obstacles, and includes attention to time-varying multiple subgoals which may influence a section of path at a time after the robot has begun traversing that planned path.
The automation of rotorcraft low-altitude flight presents challenging problems in control, imaging sensors, and image understanding. A critical element in this problem is the ability to detect and locate obstacles in a helicopter's intended flight path and, using on-board sensors, modify the nominal vehicle trajectory to avoid contact with the detected obstacles. This paper describes the validation facility being used at Ames to test vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight, and presents some early validation results.
An algorithm working on monocular gray-scale image sequences for object detection, combined with a road tracker, is presented. This algorithm, appropriate for the real-time demands of an autonomous car driving at speeds over 40 km/h, may be used for triggering obstacle avoidance maneuvers such as coming to a safe stop automatically in front of an obstacle or following another car. Moving and static objects have been detected in real-world experiments on various types of roads, even under unfavorable weather conditions.
Object recognition is necessary for any mobile robot operating autonomously in the real world. This paper discusses an object classifier based on a 2-D object model. Obstacle candidates are tracked and analyzed; false alarms generated by the object detector are recognized and rejected. The methods have been implemented on a multi-processor system and tested in real-world experiments. They work reliably under favorable conditions, but problems sometimes occur, e.g., when objects contain many features (edges) or move in front of a structured background.
Trap recovery is required for autonomous mobile robots that sense and avoid local obstacles as they maneuver along a goal-oriented path. We have developed a heuristic search-based recovery algorithm that is invoked immediately whenever a potential trap situation is identified. This algorithm generates intermediate short-distance target points (via points) that are used as temporary goals to guide the robot out of the trap situation and allow it to reach a final destination without "stopping-then-thinking" whenever traps are encountered. Real-time navigation is achieved by reacting quickly to the local sensor-detected feedback information and employing on-line reasoning and planning to provide trap recovery.
For most autonomous land vehicle tasks, creating the terrain representation is the greatest part of the problem. For example, once a road following system represents the terrain presented to it as road and non-road, it is relatively easy to plan a path through the terrain. However, off-road navigation does not have the luxury of such a compact representation. An off-road planner needs a detailed map of the terrain and needs an efficient way of querying that terrain map. We have implemented a system that satisfies these two constraints for off-road navigation. We first build a Cartesian elevation map from a series of laser range finder images. This map is a complete but intractable representation of the terrain. We use the map to build a hierarchical representation of the terrain that we call a "terrain pyramid." Each cell at a level of the terrain pyramid holds the maximum and minimum elevation of the four cells in the level below it. We also build pyramids for various features in the Cartesian map, such as terrain discontinuity and slope. The terrain pyramids are shipped to a planner module. We provide the planner module with calls to find the minimum and maximum values of a feature over any rectangle in the terrain. With these calls, taking advantage of the hierarchical representation of the terrain, the planner can efficiently determine a safe path through the terrain.
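The pyramid construction and the rectangle query can be sketched as follows (an illustrative rendering assuming a square map whose side is a power of two; only the max query is shown, the min query being symmetric):

```python
def build_pyramid(elev):
    """Build a min/max pyramid; level 0 is the raw elevation map (2^k square).

    Each higher-level cell stores the (min, max) elevation of its four
    children, as described for the terrain pyramid.
    """
    levels = [[[(v, v) for v in row] for row in elev]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        nxt = []
        for r in range(n):
            row = []
            for c in range(n):
                kids = [prev[2*r][2*c], prev[2*r][2*c + 1],
                        prev[2*r + 1][2*c], prev[2*r + 1][2*c + 1]]
                row.append((min(k[0] for k in kids), max(k[1] for k in kids)))
            nxt.append(row)
        levels.append(nxt)
    return levels

def range_max(levels, r0, c0, r1, c1):
    """Maximum elevation over the half-open cell rectangle [r0,r1) x [c0,c1)."""
    def visit(level, r, c):
        size = 2 ** level                      # extent of this pyramid cell
        lo_r, lo_c = r * size, c * size
        hi_r, hi_c = lo_r + size, lo_c + size
        if hi_r <= r0 or lo_r >= r1 or hi_c <= c0 or lo_c >= c1:
            return float("-inf")               # disjoint from the query
        if r0 <= lo_r and hi_r <= r1 and c0 <= lo_c and hi_c <= c1:
            return levels[level][r][c][1]      # fully contained: cached max
        return max(visit(level - 1, 2*r + dr, 2*c + dc)
                   for dr in (0, 1) for dc in (0, 1))
    return visit(len(levels) - 1, 0, 0)
```

Large rectangles are answered almost entirely from coarse levels, which is what makes the planner's min/max feature queries cheap.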
Autonomous road following is a domain which spans a range of complexity, from poorly defined, unmarked dirt roads to well defined, well marked, highly structured highways. The YARF system (for Yet Another Road Follower) is designed to operate in the middle of this range of complexity, driving on urban streets. Our research program has focused on the use of feature- and situation-specific segmentation techniques driven by an explicit model of the appearance and geometry of the road features in the environment. We report results in robust detection of white and yellow painted stripes, fitting a road model to detected feature locations to determine vehicle position and local road geometry, and automatic location of road features in an initial image. We also describe our planned extensions to include intersection navigation.
SCARF is a color vision system that can detect roads in difficult situations. The results of this system are used to drive a robot vehicle, the Navlab, on a variety of roads in many different weather conditions. Specifically, SCARF has recognized roads that have degraded surfaces and edges, with no lane markings, in difficult shadow conditions. Also, it can recognize intersections with or without predictions from the navigation system. This is the first system able to detect intersections in color images without a priori knowledge of the intersection shape and location. SCARF uses Bayesian classification, a standard pattern recognition technique, to determine a road-surface likelihood for each pixel in a reduced color image. It then evaluates a number of road and intersection candidates by matching an ideal road-surface probability image with the results from the Bayesian classification. The best matching candidate is passed to a simple path planning system which navigates the robot vehicle on the road or intersection. This paper describes the SCARF system in detail, presents some results on a variety of images, and discusses the Navlab test runs using SCARF.
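The per-pixel Bayesian classification step can be sketched as follows (a simplified stand-in with independent Gaussian color channels and equal priors as assumptions; SCARF's actual class models are learned from previously classified images):

```python
import math

def road_likelihood(pixel, road_model, offroad_model, prior_road=0.5):
    """Posterior probability that a pixel belongs to the road class.

    pixel: a color tuple, one value per channel.
    Each model is a list of (mean, variance) per color channel; channels
    are treated as independent Gaussians, and Bayes' rule combines the
    class likelihoods with the priors.
    """
    def gauss(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    p_road = prior_road
    p_off = 1.0 - prior_road
    for x, (mu_r, var_r), (mu_o, var_o) in zip(pixel, road_model, offroad_model):
        p_road *= gauss(x, mu_r, var_r)
        p_off *= gauss(x, mu_o, var_o)
    return p_road / (p_road + p_off)
```

Applying this to every pixel yields the road-surface likelihood image that the candidate road and intersection shapes are then matched against.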
Navigation of a mobile robot in the presence of dynamic obstacles requires fast data collection, route monitoring, and route replanning. A method for route replanning in the presence of dynamic obstacles, and an extendible autonomous vehicle navigation architecture based on this method, is presented. At the center of our architecture is the Dynamic World Modeling subsystem, which maintains an internal representation of the environment. It continuously assimilates time-varying stereo-based range data to build an accurate model of the world. The primary user of this information, the Navigator subsystem, plans a path for the vehicle to follow and ensures that this path remains valid by computing detours when environmental changes block the planned path. A preliminary version of our autonomous navigation system, utilizing a near real-time stereo range detection module, is operational in our laboratory. It is used to navigate a Denning MIRV-2 mobile platform and executes in a distributed fashion on a network of Sun workstations.
HelpMate is a mobile robotic materials transport system that performs fetch-and-carry tasks at Danbury Hospital. It autonomously navigates along the main arteries of the hospital, crossing between buildings via interconnecting hallways, and uses radio to communicate with the elevator. HelpMate uses sensor information to follow straight paths down hallways and to successfully reach its destination. HelpMate also has the ability to sense obstacles in its path and plot a course around them. A two-part vision system is employed on HelpMate to aid in navigation and obstacle avoidance. One part determines the vehicle's orientation and position in the hallway by looking at the ceiling lights. The other part uses structured light, projected forward into the immediate path of the robot, to detect obstacles.
In motion vision, the problem is to find, from a sequence of time-varying images, the relative rotational and translational velocities between a viewer and an environment, as well as the shape of objects in the environment. This paper introduces a direct method called fixation for solving the general motion vision problem. This method results in a constraint equation between the translational and rotational velocities that, in combination with the Brightness-Change Constraint Equation, solves the general motion vision problem: arbitrary motion relative to an arbitrary environment. Avoiding correspondence and optical flow has been the motivation behind the direct methods, because both solving the correspondence problem and computing the optical flow reliably have proven to be rather difficult and computationally expensive. Recently, direct motion vision methods which use the image brightness information, such as temporal and spatial brightness gradients, directly have used the Brightness-Change Constraint Equation for solving the motion vision problem in special cases such as known depth, pure translation, known rotation, pure rotation, and planar world. In contrast to those solutions, the fixation method does not put such severe restrictions on the motion or the environment.
Typical stereo algorithms produce sparse depth maps. The depth points often lie on object boundaries, and thus obstacles cannot be distinguished from holes. Surface reconstruction algorithms work poorly with such sparse and irregularly spaced data points. We present an algorithm which uses intensity images and a sparse depth map to produce a depth map which can be used for navigation. Our approach is based on an intensity-image segmentation algorithm followed by local surface fitting. The algorithm is fast and thus suitable for high-speed navigation. Results using laboratory images are presented.
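The local surface-fitting step can be illustrated by fitting a plane to the sparse depth points that fall inside one intensity segment (a planar model is our simplifying assumption; the paper's surface model may be more general):

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to sparse depth samples.

    After segmentation, the sparse stereo depth points inside one segment
    are fit with a plane, giving a depth estimate for every pixel of that
    segment.  Solved via the 3x3 normal equations with Gauss-Jordan
    elimination (partial pivoting).
    """
    # Accumulate the normal equations (A^T A | A^T z) as an augmented matrix.
    m = [[0.0] * 4 for _ in range(3)]
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                m[i][j] += row[i] * row[j]
            m[i][3] += row[i] * z
    # Gauss-Jordan elimination with partial pivoting.
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[piv] = m[piv], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [mr - f * mi for mr, mi in zip(m[r], m[i])]
    return tuple(m[i][3] / m[i][i] for i in range(3))
```

Evaluating the fitted plane at every pixel of the segment densifies the sparse stereo data, which is what makes the result usable for navigation.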
The LaneLok algorithms use a monochrome CCD video camera to detect lane markers on the road. These algorithms also estimate the lane boundary position, the vehicle heading error, the lateral offset, and the look-ahead road curvature. The current LaneLok algorithms can identify lane markers and some obstacles only up to 40 feet. The lane control and collision warning functions of advanced automotive systems need a range of at least 100 feet for robust performance. A new algorithm was developed for extending this range to 100 feet using data from a single camera. Image data from the extended range was centered around the vertical vanishing-point coordinate and enlarged using image processing techniques. A template matching algorithm was used to estimate the lane boundaries of the near range (40 feet) and of the enlarged image. The two lane boundaries were then concatenated by a polynomial fit at the end points, thus extending the look-ahead range to 100 feet. For real-time implementation, new edge detectors and electronic hardware were developed to save processing time.
This short paper reexamines a direct-methods algorithm, suggested by Negahdaripour and Horn, that finds the focus of expansion (FOE) from translational motion imagery. We propose a practical extension of the algorithm, characterize it specifically for a VLSI implementation, discuss when it will converge to the correct solution, and demonstrate it on both synthetic and natural images. The method does not require a solution to the correspondence problem or involve estimation of optical flow.
Any movement of a mobile robot increases the uncertainty in the precise position of the robot. Sensor measurements of the environment gathered on board the mobile robot help in reducing this uncertainty. The uncertainty in robot position may be embedded in a probability density function which we refer to as the robot location density. This paper presents methods of expressing this density in terms of sensor measurement characteristics. Vision and range data provide information about the position of the robot. We show how this information may be incorporated into the location density. The probability mass in a region corresponds to the likelihood of the robot being in that region. Sensor data can affect this likelihood. For example, an ultrasonic range system may provide information that the robot is close to a region boundary. The probability mass in the inner regions would thus be decreased when this range data is used to update the location density. Motion of the robot also influences the a posteriori location density. A collision event, or the absence of this event, both provide information about robot position. We show how the location density may be used for collision avoidance and robot position estimation (or robot self-location). Optimal methods are presented to determine the maximum likelihood estimate of the robot location in a known polygonal environment using range data. The methods may be used with or without compass information.
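A discrete version of the location-density update can be sketched as a Bayes rule over candidate positions (illustrative only; the single known wall, the forward-looking sensor, and the Gaussian range noise are our assumptions, not the paper's full polygonal model):

```python
import math

def update_location_density(density, positions, wall_x, measured, sigma=0.1):
    """Bayes update of a discrete robot-location density from one range reading.

    density:   prior probability per candidate position (sums to 1).
    positions: candidate (x, y) robot positions, one per density entry.
    wall_x:    x-coordinate of a known wall; a forward-looking range sensor
               should read (wall_x - x) from position (x, y).
    measured:  the actual range reading; sensor noise is modeled as Gaussian.
    """
    posterior = []
    for p, (x, y) in zip(density, positions):
        expected = wall_x - x
        lik = math.exp(-((measured - expected) ** 2) / (2 * sigma ** 2))
        posterior.append(p * lik)
    total = sum(posterior)
    return [p / total for p in posterior] if total > 0 else density
```

Each reading multiplies the prior by a measurement likelihood and renormalizes, so probability mass drains from positions inconsistent with the sensed range, exactly the effect described for the inner regions above.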
A sequence of image frames, processed digitally, was used to detect, classify, locate, and track moving objects and to evaluate their orientation and size. The workspace area is limited and constrained to be perpendicular to the camera axis. The calibration of real locations and real lengths relative to those in the image plane is done taking these constraints into account. No a priori knowledge about the path of motion, texture, or the stationary and non-stationary components of the analyzed image is assumed. A sequence of processing stages is developed to extract the moving objects, separate them, and code, thin, and fill the disconnections of the extracted moving-object boundary. Recognition and classification of the object is performed using a Walsh-Hadamard transformation of its detected boundary; thus the amount of computation required for this purpose is reduced.
This paper presents a system which tracks multiple targets. The tracking of moving targets such as vehicles is emphasized as a possible application. The proposed system is comprised of three sequential modules: Background Identification (BI), Target Identification (TI), and Target Tracking (TT). BI detects the static background pixels from the sequential video inputs. Then TI identifies the targets based on the known background (from BI) and the current input. The key element is TT, which uses a histogram-correlation tracking algorithm. This tracking module dynamically tracks multiple targets detected by the TI module. Target information provided by the background and target identification modules is employed in target tracking. Specifically, the histogram information of each target is used to estimate the target's new location, and a template correlation technique is used to pinpoint the location of each target.
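The histogram step of TT can be sketched as follows (a simplified grayscale version; the window size, bin count, and exhaustive search are our assumptions, and the template-correlation refinement stage is omitted):

```python
def histogram(window, bins=8, vmax=256):
    """Intensity histogram of a rectangular image window."""
    h = [0] * bins
    for row in window:
        for v in row:
            h[v * bins // vmax] += 1
    return h

def track_target(frame, target_hist, win, step=1):
    """Estimate a target's new location by histogram correlation.

    Slides a win x win window over the frame and scores each location by
    histogram intersection with the target's stored histogram; the best-
    scoring window corner is returned as the estimated location, which a
    template-correlation pass could then refine.
    """
    best, best_score = None, -1
    rows, cols = len(frame), len(frame[0])
    for r in range(0, rows - win + 1, step):
        for c in range(0, cols - win + 1, step):
            window = [row[c:c + win] for row in frame[r:r + win]]
            h = histogram(window)
            score = sum(min(a, b) for a, b in zip(h, target_hist))
            if score > best_score:
                best, best_score = (r, c), score
    return best
```

Histogram intersection is cheap and insensitive to small shape changes, which is why it serves as the coarse localization stage before correlation pinpoints the target.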
This paper presents a decision-theoretic method of establishing the position of a mobile robot in a known environment. Boundaries of regions of varying light intensities may be extracted from visual data gathered by the rotation of a camera about the robot position. Some of these boundaries will represent corners of the region. The identification of these corners may be further enhanced using range data. The methods in this paper rely on the probability of viewing corners and on the probability density functions of the measured view angle of corners or the separation angle between corners. View angles are used when compass knowledge is available; otherwise corner separation angles are used. The probability density functions of these corner angles are derived from the region geometry and prior knowledge (if any) of the robot position. The known environment is decomposed into visibility regions for sets of corners. The probability of viewing a set of corners depends on the likelihood of the robot being positioned in one of these visibility regions. An optimal decision procedure is established to identify the corner set (or visibility region) based on visual data from a circular scan about the robot position. With this information, a least squares estimate of the robot position is derived.
The development of vehicular systems with automated control implies the creation of autonomous behavior which is directed by intelligent processors. For intelligent vehicle systems, the goal-directed autonomy includes convoy following, road following, and obstacle avoidance. The first major step towards automated convoy following involves the successful execution of vehicle tracking. Vehicle-tracking problems consist of the automatic steering and speed control of an unmanned vehicle which is following the motion of another vehicle traveling ahead of it. In this paper, the experimental results of several vehicle-tracking tests are presented. The methodology discussed merges intelligent real-time control with techniques developed to solve problems in the system. Real-time system definitions are given, as well as performance results of controlling an actual autonomous vehicle in real time at speeds ranging up to 20 mph.
A simulator is designed for developing and testing navigation and control strategies for mobile robots; it provides an animation of a robot and its environment on a computer screen. The environment is two-dimensional, consisting of walls and objects which can be either stationary or moving. Sensors for non-contact range measurement can be attached to the robot or placed in stationary positions in its environment. The operation of the simulator has been verified in tests on a reactive distributed control system for a mobile robot and on a more common model-based navigation approach. Implementation of the simulator is made more flexible by using an object-oriented approach.
The human eye is potentially an ideal input device for teleoperator control applications, due to the innate ability of the oculomotor system to rapidly shift the line of gaze in response to changing visual stimuli. An electro-optic system has been developed to automatically compute the eye position of a teleoperator and feed back onto the video display a reticle slaved to his eye movements. This eye-slaved pointing system allows an operator with minimal hand/eye coordination skills to designate targets within RPV imagery.
The goal of a mobile robot is not only to perform a large variety of tasks but also to adapt its behavior to the dynamics of the environment. Such a robot needs both reflexive and planning capabilities. Moreover, we wish a mobile robot to be able to coordinate its activity with other mobile robots. This paper is intended to show that the integration of all these capabilities (reflexivity, planning, and coordination) is feasible; in order to achieve this goal, a new robot architecture is proposed. This architecture is composed of three levels: functional, control, and planner. Furthermore, this architecture provides important features such as progressive and programmable reactivity, robustness, additivity, and versatility. This paper emphasizes the control level and how it is used by the planner level. In order to demonstrate the feasibility of such an approach, a complete implementation of this architecture on our mobile robot, including experiments, is described.
Mobile robots usually suffer from a problem of continuous localization for position-control feedback (indeed, dead reckoning is often unreliable because of slippage and drift). One solution to this problem is to involve dedicated environment perception in the motion process. There is also a need for a very precise, flexible, and computationally inexpensive path execution control algorithm. It must produce natural trajectories, not only straight lines and circles joined by stop points, and must take into account all the physical constraints on speed and acceleration. All these requirements are handled by the "pilot" we present here.
In this paper we propose a new method for extracting vanishing points from real scenes which exhibits linear computational complexity and good precision. This technique is very useful for camera calibration in photogrammetry, the recovery of the motion of rigid objects, and the reconstruction of 3D scenes. The linear computational complexity is due to the introduction of the polar space, which permits the selection of the segments that converge to the same vanishing point before computing the vanishing point itself. Extensive experimentation on real images shows that vanishing points can be identified and located even in cluttered images.
Optical flow and its first-order spatial derivatives can be computed by solving an overconstrained system of linear algebraic equations, assuming that (i) optical flow varies linearly over a small region of the image plane, and (ii) the image brightness is stationary over time. Experimental results on sequences of synthetic and natural images show that the obtained optical flow is usually very similar to the true two-dimensional motion field.
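In the simplest instance of this formulation, keeping only the constant term of the linear flow model, the overconstrained system reduces to 2x2 normal equations (an illustrative sketch; the full system described above also estimates the first-order spatial derivatives of the flow):

```python
def flow_least_squares(Ix, Iy, It):
    """Least-squares flow (u, v) for one image patch from brightness constancy.

    Each pixel contributes one equation Ix*u + Iy*v + It = 0; stacking them
    over a small region gives an overconstrained linear system in (u, v),
    solved here via the 2x2 normal equations.
    """
    a = sum(ix * ix for ix in Ix)
    b = sum(ix * iy for ix, iy in zip(Ix, Iy))
    d = sum(iy * iy for iy in Iy)
    p = -sum(ix * it for ix, it in zip(Ix, It))
    q = -sum(iy * it for iy, it in zip(Iy, It))
    det = a * d - b * b
    if det == 0:
        raise ValueError("aperture problem: gradients do not constrain the flow")
    return ((d * p - b * q) / det, (a * q - b * p) / det)
```

The singular case is exactly the aperture problem: when all gradients in the patch point the same way, the normal matrix is rank-deficient and only the flow component along the gradient is recoverable.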
This paper is concerned with the evaluation of motion parameters by observing four features in camera-acquired images. The motion parameters of a moving object are studied using a static camera. The principles of distance invariance between rigid points and angular invariance between fixed lines are used in the method.
We describe an efficient system for recovering the 3-D motion and structure of entire polyhedra from an evolving image sequence. The suggested technique utilizes the image flow velocities in order to recover the 3-D parameters. We develop a method for estimating the image flow velocities and an algorithm for computing the 3-D parameters given two successive image frames. The solution is then improved by using a large number of image frames and exploiting the temporal coherence of 3-D motion. We use the ordinary differential equations which describe the parametric evolution in terms of the current motion/structure and the measurements in the image plane. The extended Kalman filter is then used to update the solution. The process is started by segmenting the entire scene into a number of planar surfaces and then applying the above technique to each surface; probable inconsistencies between surfaces are then resolved.
A method is introduced for implementing a 3-D vision system considering manufacturing cost constraints, stringent reliability requirements, and limited processing power. It entails transducer characterization and includes a scanning algorithm and low-complexity processing algorithms providing position, shape, and orientation information about an unknown surface. A highly modular approach subdivides the problem into separable and sequential processing modules, gradually modifying a "data matrix" obtained from a scanning algorithm. Some models considered for planar and curved surface analysis are described. Theoretical and experimental background validating the implemented algorithms is presented.
The paper explores the collision recognition of two objects in both crisscross and revolution motions. A mathematical model has been established based on continuation theory. Objects of any shape may be regarded as being built of many 3-simplexes or their convex hulls. Therefore, the collision problem of two objects in motion can be reduced to the collision of two corresponding 3-simplexes on the two respective objects. Thus, an optimized algorithm is developed for collision avoidance which is suitable for computer control and eliminates the need for vision aid. With this algorithm, computation time has been reduced significantly. This algorithm is applicable to the path planning of mobile robots, and is also applicable to collision avoidance by anthropomorphic arms grasping two complicated-shaped objects. The algorithm is realized in the LISP language on a VAX8350 minicomputer.
This paper discusses the integrated use of dynamic systems theory, neural networks, and evolutionary programming methods as a means of coping with complexity in automatic plan generation. Plan elements representing actions are mapped into phase space and are examined for stability by searching for and identifying any "attractors." A knowledge-based system for doing this is described. As a means of coping with unexpected environments, modifications of plans are made by the planning system using an evolutionary programming method coupled with a neural network approach.
An autonomous mobile robot must be able to combine uncertain sensory information with prior knowledge of the world. Moreover, these operations have to be performed fast enough to react to changes in the world. This paper presents a model-driven system for a real-time indoor mobile robot. As the robot is constantly in motion, information from an Environment Model is used to anticipate information-rich features and to direct selective sensing. Uncertain sensor information is combined with prior World Model knowledge to reduce uncertainty, and the remaining uncertainty is directly represented by flexible ranges of values. We present a hall-following robot based on this system which exhibits real-time navigation performance. It does this despite primitive and relatively slow sensing, motor control, and communications capabilities. This system combines sensing, action, and cognition, which are the major building blocks for any autonomous system.
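The idea of representing uncertainty by flexible ranges and reducing it by combining sensing with model knowledge can be illustrated by interval intersection. This is a hedged sketch of the general principle, not the paper's actual fusion mechanism; the numbers are invented.

```python
def fuse_ranges(sensor, model):
    """Combine an uncertain sensor range with a model-predicted range.

    Both inputs are (low, high) intervals; the fused estimate is their
    intersection, which can only shrink the uncertainty. Returns None
    when the two ranges are inconsistent.
    """
    lo = max(sensor[0], model[0])
    hi = min(sensor[1], model[1])
    return (lo, hi) if lo <= hi else None

# A sonar places a wall at 2.0-3.5 m; the Environment Model predicts 3.0-4.0 m.
fused = fuse_ranges((2.0, 3.5), (3.0, 4.0))
```

The intersection (3.0, 3.5) is tighter than either input, while an empty intersection flags a model/world discrepancy that the system must resolve.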
This paper describes a scheme for plan-behavior interaction in a reactive robot navigating an indoor environment. The robot uses very high-level plans called "scripts" which resemble the symbolic sequence of instructions that a person would give when asked for directions. The robot is able to execute scripts because of a close correspondence between the script instructions and four basic abilities: hallway traversing (e.g., wall following), landmark recognition, place recognition, and junction branch selection. The ability invoked in an instruction can often be achieved by more than one of the behaviors associated with that ability. For example, 'move down a hallway' may be implemented using one of the following behaviors: follow wall, follow midline, or follow target. Thus there is a competition among the movement behaviors in executing the script instruction. Which movement behavior is actually activated at run-time is determined by its priority, which is dynamically adjusted depending on the mission goals and the environmental and sensor conditions. Our robot architecture also allows subsumption of the "deliberate" behaviors invoked through a script by the activation of more reactive ones, such as collision avoidance, in response to changes in the environment. For execution, each instruction in a script must be matched to a triple (initiation event, behavior, termination event). The sequence required by the script is achieved by making the termination event for one instruction
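The run-time competition among movement behaviors can be sketched as picking the highest-priority behavior whose sensing preconditions currently hold. The behavior names follow the abstract; the priority values and the availability flags are illustrative assumptions, not the paper's actual arbitration rules.

```python
def select_behavior(priorities, sensor_ok):
    """Pick the runnable behavior with the highest dynamic priority.

    priorities: behavior name -> current (dynamically adjusted) priority
    sensor_ok:  behavior name -> whether its sensing precondition holds
    Returns None when no behavior is runnable.
    """
    runnable = {b: p for b, p in priorities.items() if sensor_ok.get(b, False)}
    if not runnable:
        return None
    return max(runnable, key=runnable.get)

priorities = {"follow_wall": 0.6, "follow_midline": 0.8, "follow_target": 0.9}
sensor_ok = {"follow_wall": True, "follow_midline": True, "follow_target": False}
# Target tracking is unavailable, so the midline follower wins despite
# the target follower's higher nominal priority.
chosen = select_behavior(priorities, sensor_ok)
```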
In this paper we formalize and implement a model of topological visual navigation in two-dimensional spaces. Unlike much of traditional quantitative visual navigation, the emphasis throughout is on the methods and efficiency of qualitative visual descriptions of objects and environments, and on the methods and efficiency of direction-giving by means of visual landmarks. We formalize three domains (the world itself, the map-maker's view of it, and the navigator's experience of it) and the concepts of custom maps and landmarks. We specify, for a simplified navigator (the "level helicopter"), the several ways in which visual landmarks can be chosen, depending on which of several costs (sensor, distance, or communication) should be minimized. We show that paths minimizing one measure can make others arbitrarily complex; the algorithm for selecting the path is based on a form of Dijkstra's algorithm and therefore automatically generates intelligent navigator overshooting and backtracking. We implement, using an arm-held camera, such a navigator and detail its basic seek-and-adjust behaviors as it follows visual highways (or departs from them) to reach a goal. Seeking is based on topology and adjusting is based on symmetry; there are essentially no quantitative measures. We describe under what circumstances its environment is visually difficult and perceptively shadowed, and describe how errors in path-following impact landmark selection. Since visual landmark selection and direction-giving are in general NP-complete and rely on the nearly intractable
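The core of the path-selection idea — Dijkstra's algorithm over a landmark graph with a selectable edge cost (sensor, distance, or communication) — can be sketched as below. The graph and cost values are invented for illustration; only the algorithmic pattern reflects the abstract.

```python
import heapq

def shortest_path(graph, start, goal, cost_key):
    # graph: node -> list of (neighbor, {cost_key: value, ...})
    # Standard Dijkstra, parameterized by which cost measure to minimize.
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, costs in graph.get(u, []):
            nd = d + costs[cost_key]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path))

graph = {
    "A": [("B", {"distance": 1, "sensor": 5}), ("C", {"distance": 4, "sensor": 1})],
    "B": [("D", {"distance": 1, "sensor": 5})],
    "C": [("D", {"distance": 1, "sensor": 1})],
}
```

Minimizing distance routes A-B-D, while minimizing sensor cost routes A-C-D, illustrating how optimizing one measure can change the path entirely.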
This paper describes the structure, implementation, and operation of a real-time mobile robot controller which integrates capabilities such as position estimation, path specification and tracking, human interfaces, fast communication, and multiple-client support. The benefits of such high-level capabilities in a low-level controller were shown by its implementation for the Navlab autonomous vehicle. In addition, performance results from the positioning and tracking systems are reported and analyzed.
In this paper we address the problem of operator/system communication. In particular, we discuss the issue of efficient and adaptive transmission mechanisms over possible physical links. We develop a tool for making decisions regarding the flow of control sequences and data from and to the operator. The issue of compression is discussed in detail; a decision box and an optimizing tool for finding the appropriate thresholds for a decision are developed. Physical parameters such as the data rate, bandwidth of the communication medium, distance between the operator and the system, baud rate, levels of discretization, signal-to-noise ratio, and propagation speed of the signal are taken into consideration in developing our decision system. Theoretical analysis is performed to develop mathematical models for the optimization algorithm. Simulation models are also developed for testing both the optimization and the decision tool box.
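One such decision — whether compressing a payload before transmission pays off given the link parameters — can be sketched from the physical quantities the abstract lists. The delay model (serialization plus propagation) and all parameter values are illustrative assumptions, not the paper's actual model.

```python
def transmit_time(bits, bandwidth_bps, distance_m, prop_speed_mps):
    # Serialization delay plus one-way propagation delay.
    return bits / bandwidth_bps + distance_m / prop_speed_mps

def should_compress(bits, ratio, compress_time_s,
                    bandwidth_bps, distance_m, prop_speed_mps=3e8):
    """Compress iff compressing and sending the smaller payload
    beats sending the raw payload directly."""
    raw = transmit_time(bits, bandwidth_bps, distance_m, prop_speed_mps)
    compressed = compress_time_s + transmit_time(
        bits / ratio, bandwidth_bps, distance_m, prop_speed_mps)
    return compressed < raw

# 1 Mbit over a slow 9600 bps link: 4:1 compression costing 0.5 s pays off;
# 1 kbit over a fast 1 Mbps link: it does not.
```

The break-even payload size for given link parameters is exactly the kind of threshold the paper's optimizing tool would compute.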
We present a simulation-based model of sonar range sensing for robot navigation that accounts for multiple reflections of the sonar signal between transmission and reception. This gives more realistic results than previous models. The approach is to follow sample signal rays from the transmitter, through reflections off walls and diffraction at corners, to their possible return to the receiver. Parameters of the model are frequency, minimum and maximum range, and signal detection threshold (relative to emitted signal strength after linear gain compensation). The results of Kuc and Siegel are employed to determine transducer impulse response and the strength of corner signals due to diffraction relative to reflected signals.
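The ray-following step can be sketched in a deliberately simplified 2-D setting: a ray bounces specularly inside an axis-aligned rectangular room, losing a fixed fraction of its strength per reflection, until it falls below the detection threshold. The geometry, the per-bounce loss model, and all coefficients are illustrative assumptions; the paper's model additionally handles diffraction at corners and the Kuc and Siegel transducer response.

```python
def trace_ray(x, y, dx, dy, room_w, room_h,
              reflect_coeff=0.5, threshold=0.1, max_bounces=50):
    """Return (path_length, bounces) until the signal fades below threshold."""
    strength, length, bounces = 1.0, 0.0, 0
    while strength >= threshold and bounces < max_bounces:
        # Parametric distance to the next wall hit along direction (dx, dy).
        tx = ((room_w - x) / dx) if dx > 0 else (-x / dx) if dx < 0 else float("inf")
        ty = ((room_h - y) / dy) if dy > 0 else (-y / dy) if dy < 0 else float("inf")
        t = min(tx, ty)
        x, y, length = x + t * dx, y + t * dy, length + t
        if tx <= ty:
            dx = -dx  # specular reflection off a vertical wall
        if ty <= tx:
            dy = -dy  # specular reflection off a horizontal wall
        strength *= reflect_coeff
        bounces += 1
    return length, bounces

# A horizontal ray launched from (1, 1) inside a 4 m x 4 m room
# survives four reflections before dropping below threshold.
```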
This paper discusses the design of a four-legged robot with a laser mounted on each leg that provides a real-time, high-definition range image of the ground surrounding each leg. A means of terrain classification using fractals and methods for the automatic extraction of information from the range images, having implications for footfall placement, are provided.
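A standard way to classify terrain roughness with fractals is the box-counting dimension, sketched below for a binary grid of occupied cells; rougher terrain tends toward a higher estimated dimension. This is a generic illustration of the technique, not the paper's specific classifier.

```python
import math

def box_count(cells, size):
    # Number of size x size boxes containing at least one occupied cell.
    return len({(x // size, y // size) for x, y in cells})

def fractal_dimension(cells, sizes=(1, 2, 4, 8)):
    # Box-counting dimension: slope of log N(s) versus log(1/s),
    # estimated by ordinary least squares over the given box sizes.
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(cells, s)) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A completely filled 16 x 16 patch is a plain 2-D region, so its
# box-counting dimension comes out as 2.
flat = [(x, y) for x in range(16) for y in range(16)]
```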
To increase our understanding of planetary-rover-class vehicles, Martin Marietta has embarked on the design and construction of a wheeled planetary rover testbed. The vehicle uses wheels for mobility and complements the research being performed with our legged planetary rover, the Walking Beam. Designed for simplicity and reliability, the Wheeled Planetary Rover testbed (WPR) is approximately the same size as the Walking Beam (152 cm long). A four-wheel, all-wheel-drive system provides the WPR with a representative mobility capability that will allow for realistic planetary mission simulations.
A program of research is presented that embraces teleoperator and automatic navigational control of freely flying satellite robots. Current research goals include developing visual operator interfaces for improved vehicle teleoperation, studying the effects of different visual interface system designs, and achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies with neutral-buoyancy experiments using a space robot simulator vehicle that is currently under development. Visual-interface design variables include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and provision of vehicle-fixed visual cues or markers in the remote scene for improved sensing of vehicle position, orientation, and motion. Autonomous-control work concentrates on developing the neutral-buoyancy vehicle and the vision-based sensing systems that will support automatic vehicle control experiments.
The development of control architectures for mobile systems is typically a task undertaken with each new application. These architectures address different operational needs and tend to be difficult to adapt to more than the problem at hand. The development of a flexible and extendible control system with evolutionary growth potential for use on mobile robots will help alleviate these problems and, if made widely available, will promote standardization and compatibility among systems throughout the industry. The Modular Robotic Architecture (MRA) is a generic control system that meets the above needs by providing developers with a standard set of software and hardware tools that can be used to design modular robots (MODBOTs) with nearly unlimited growth potential. The MODBOT itself is a generic creature that must be customized by the developer for a particular application. The MRA facilitates customization of the MODBOT by providing sensor, actuator, and processing modules that can be configured in almost any manner as demanded by the application. The Mobile Security Robot (MOSER) is an instance of a MODBOT that is being developed using the MRA. Figure 1. Remote platform module configuration of the Mobile Security Robot (MOSER).
Robotic vehicles for lunar and Mars exploration will carry an array of complex instruments requiring real-time data interpretation and fusion. The system described here uses hierarchical multiresolution analysis of visible and multispectral images to extract information on mineral composition, texture and object shape. This information is used to characterize the site geology and choose interesting samples for acquisition. Neural networks are employed for many data analysis steps. A decision tree progressively integrates information from multiple instruments and performs goal-driven decision making. The system is designed to incorporate more instruments and data types as they become available.
Key system and supporting technology issues associated with the design and development of effective general-purpose unmanned mobile robots for operation in unstructured outdoor environments are examined within the context of an advanced telerobotic mobile system developed by the Naval Ocean Systems Center (NOSC): the Unmanned Ground Vehicle Program Teleoperated Vehicle (TOV).
The identification and sensing of nongeometric properties of the surface which affect the safety and hazard avoidance capabilities of planetary rovers are addressed. Topics discussed include terrain features of the lunar and Martian surfaces, rover mobility, and sensors for extracting information on the nongeometric hazards facing a rover. Subsurface radar, multispectral imaging, and a ground truth sensor are considered to be the most effective sensors for determining surface properties and detecting nongeometric hazards.