As part of the Raptor system developed for DARPA's PerceptOR program, three path planning methods have been integrated in the framework of a command-arbitration-based architecture. These methods combine reactive and deliberative elements, performing path planning within different planning horizons. Short-range path planning (<10 m) is done by a module called OAradials. OAradials is purely reactive, evaluating arcs corresponding to possible steering commands for the proximity of discrete obstacles, abrupt elevation changes, and unsafe slope conditions. Medium-range path planning (<30 m) is performed by a module called Biased Random Trees - Follow Path (BRT-FP). Based on LaValle and Kuffner's rapidly-exploring random trees (RRT) planning algorithm, BRT-FP continuously evaluates the local terrain map in order to generate a good path that advances the robot towards the next intermediate waypoint in a user-specified plan. A pure-pursuit control algorithm generates candidate steering commands intended to keep the robot on the generated path. Long-range path planning is done by the Dynamic Planner (DPlanner) using Stentz's D* algorithm. Use of D* allows efficient exploitation of prior terrain data and dynamic replanning as terrain is explored. Outputs from DPlanner generate intermediate goal points that are fed to the BRT-FP planner. A command-level arbitration scheme selects steering commands based on the weighted sum of the steering preferences generated by the OAradials and BRT-FP path planning behaviors. This system has been implemented on an ATV platform actuated for autonomous operation, and tested on realistic cross-country terrain in the context of the PerceptOR program.
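The pure-pursuit steering law and the command-level weighted-sum arbitration can be sketched as follows. This is a minimal illustration: the function names, the candidate-command representation, and the behavior weights are assumptions for the sketch, not Raptor's actual interfaces.

```python
import math

def pure_pursuit_curvature(robot_pose, goal_point):
    """Standard pure-pursuit geometry: the curvature of the arc taking the
    robot to a goal point is kappa = 2 * y_local / L^2, where y_local is the
    goal's lateral offset in the robot frame and L its distance."""
    x, y, heading = robot_pose
    dx, dy = goal_point[0] - x, goal_point[1] - y
    y_local = -math.sin(heading) * dx + math.cos(heading) * dy  # goal in robot frame
    L = math.hypot(dx, dy)
    return 2.0 * y_local / (L * L)

def arbitrate(preferences, weights):
    """Command-level arbitration: each behavior votes over the same list of
    candidate steering commands; return the index of the candidate with the
    highest weighted sum of votes."""
    n = len(next(iter(preferences.values())))
    combined = [0.0] * n
    for name, votes in preferences.items():
        for i, v in enumerate(votes):
            combined[i] += weights[name] * v
    return max(range(n), key=combined.__getitem__)
```

For example, a goal one meter ahead and one meter to the left of a robot at the origin yields curvature 1.0 (a left turn of radius 1 m), and the arbiter simply picks the candidate whose weighted vote total is largest.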
Autonomous and semi-autonomous ground robots exploring urban environments need the ability to detect various types of fences that are obstacles to mobility. Visual detection of wire fences is challenging due to the small size of the wire forming the fence and the presence of multiple unknown natural and/or man-made backgrounds visible through the structure of the fence. A deformable template-based algorithm has been developed to visually identify the periodic structure of chain link fences in typical outdoor scenes. The algorithm extracts edge points from the image using the Prewitt gradient operator and a histogram-based thresholding method. The fence is modeled as two sets of regularly spaced parallel lines. Each set of lines is parameterized by orientation, line spacing, and the location of the left-most line within a specified Region Of Interest. A search in this parameter space finds the template which minimizes an energy function based on the proximity of lines in the deformed template to edge points in the image. The algorithm performs well even in the presence of clutter edges from background textures in the scene. Modification of the template to account for the effects of perspective distortion when viewing fences from off-normal angles is discussed.
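The core of the template search can be sketched as a brute-force energy minimization over one set of parallel lines; the full algorithm fits two such sets. The function names and the grid-search strategy are illustrative assumptions for this sketch.

```python
import math

def line_set_energy(points, theta, spacing, offset):
    """Sum over edge points of the distance to the nearest line in a set of
    parallel lines whose normal direction is theta, with the given line
    spacing and offset of the first line."""
    e = 0.0
    for (x, y) in points:
        # project the point onto the normal, then fold into one period
        r = (x * math.cos(theta) + y * math.sin(theta) - offset) % spacing
        e += min(r, spacing - r)
    return e

def fit_line_set(points, thetas, spacings, offsets):
    """Exhaustive search of the (orientation, spacing, offset) grid for the
    template minimizing the energy; returns (energy, theta, spacing, offset)."""
    best = None
    for th in thetas:
        for s in spacings:
            for d in offsets:
                e = line_set_energy(points, th, s, d)
                if best is None or e < best[0]:
                    best = (e, th, s, d)
    return best
```

Edge points lying exactly on vertical lines spaced 5 units apart, for instance, give zero energy for the template (theta=0, spacing=5, offset=0), so the search recovers those parameters.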
Experiments with the LOIS (Likelihood Of Image Shape) lane detector have demonstrated that the use of a deformable template approach allows robust detection of lane boundaries in visual images. The same algorithm has been applied to detect pavement edges in millimeter-wave radar images. In addition to ground vehicle applications involving lane sensing, the algorithm is applicable to aircraft applications for tracking runways in either visual or radar data. Previous work on LOIS has focused on the problem of detecting lane edges in individual frames. This paper describes extensions to the LOIS algorithm which allow it to smoothly track lane edges through maneuvers such as lane changes.
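The abstract does not detail the tracking extension itself. As a generic, hypothetical illustration of smoothing a lane parameter (such as lateral offset) across frames rather than detecting it independently in each one, an alpha-beta filter might look like this; it is not the mechanism LOIS actually uses.

```python
def alpha_beta_track(measurements, alpha=0.5, beta=0.1, dt=1.0):
    """Smooth a scalar lane parameter over a frame sequence with an
    alpha-beta filter: predict with a constant-velocity model, then
    correct by fractions alpha (position) and beta (velocity) of the
    innovation. Returns the filtered sequence."""
    x, v = measurements[0], 0.0
    out = [x]
    for z in measurements[1:]:
        x_pred = x + v * dt        # constant-velocity prediction
        r = z - x_pred             # innovation: measurement minus prediction
        x = x_pred + alpha * r
        v = v + (beta / dt) * r
        out.append(x)
    return out
```

A constant input passes through unchanged, while a sudden jump in the measured offset (as during a lane change) is followed smoothly instead of instantaneously.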
The problem of determining the offset to lane markings is an important one in designing vision-based automotive safety systems that operate in structured road environments. The lane offset information is critical for lateral control of the automobile. In this paper, we investigate the use of this information for an autonomous robot's lane-keeping task. We employ a deformable template-based algorithm for determining the location of lane markings in visual images taken from a side-looking camera. The matching criterion is a modification of the standard signal-to-noise ratio (SNR) based matched filtering criterion. A Karhunen-Loève (KL)-type color transformation projects the RGB channels of the given image onto a composite color channel in order to suppress some of the noise. The standard perspective transformation maps the offset information from image coordinates onto ground coordinates. The resulting algorithm, named STARLITE, is robust to shadows, specular reflections, road cracks, etc. Experimental results are provided to illustrate the performance of STARLITE and to compare it to the AURORA algorithm and the SNR-based matched filter.
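The image-to-ground mapping can be sketched with the standard flat-ground inverse perspective for a pinhole camera mounted parallel to the ground plane. The camera parameters (focal length `f` in pixels, height `h`, principal column `u0`, horizon row `v_horizon`) and the function name are assumptions for this sketch, not STARLITE's actual calibration.

```python
def image_to_ground(u, v, f, h, u0, v_horizon):
    """Flat-ground inverse perspective: a pixel (u, v) below the horizon row
    maps to ground range Z = h*f / (v - v_horizon) and lateral offset
    X = Z * (u - u0) / f, for a level pinhole camera at height h."""
    assert v > v_horizon, "pixel must lie below the horizon"
    Z = h * f / (v - v_horizon)   # forward range on the ground plane (m)
    X = Z * (u - u0) / f          # lateral offset from the optical axis (m)
    return X, Z
```

With f = 500 px, a camera 1 m above the ground, and a horizon at row 240, a pixel at row 340 lies 5 m away on the ground, and 100 columns right of center corresponds to a 1 m lateral offset there.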
The goal of ARPA's Unmanned Ground Vehicle project is to demonstrate the use of small teams of cooperating autonomous robots (2-4 vehicles) to carry out military tasks in an outdoor environment. The role of the University of Michigan within the project focuses on aspects of mission planning, assimilation of information provided by multiple agents, and the interaction between planning and perception. The two aspects of this work related to sensor fusion are planning observation points to maximally reduce hypothesis uncertainty, and information sharing in multivehicle scenarios to reduce the amount of perception required. Observation point planning combines the system's current knowledge about an object with the uncertainty model used to characterize observations for data fusion in order to select optimal points for additional observations. Information sharing selects those detected features in the environment which are predicted to be most useful to other cooperating vehicles in the future, adding them to the multiagent system's model of the environment while ignoring less useful features.
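The observation-point-planning idea can be illustrated with a minimal one-dimensional Gaussian sketch: fuse the prior variance on an object hypothesis with the measurement variance predicted for each candidate viewpoint, and pick the viewpoint with the smallest posterior variance. The scalar model and names are illustrative assumptions, not the project's actual formulation.

```python
def best_observation_point(prior_var, candidates):
    """candidates maps viewpoint name -> predicted measurement variance from
    that viewpoint. After fusing one Gaussian measurement, the posterior
    variance is 1 / (1/prior + 1/meas); return the viewpoint minimizing it."""
    def posterior(meas_var):
        return 1.0 / (1.0 / prior_var + 1.0 / meas_var)
    return min(candidates, key=lambda name: posterior(candidates[name]))
```

Given a prior variance of 4.0 and viewpoints predicting measurement variances of 1.0 and 2.0, the first yields posterior variance 0.8 versus 1.33, so it is selected.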
Autonomous road following is a domain which spans a range of complexity from poorly defined, unmarked dirt roads to well defined, well marked, highly structured highways. The YARF system (for Yet Another Road Follower) is designed to operate in the middle of this range of complexity, driving on urban streets. Our research program has focused on the use of feature- and situation-specific segmentation techniques driven by an explicit model of the appearance and geometry of the road features in the environment. We report results in robust detection of white and yellow painted stripes, fitting a road model to detected feature locations to determine vehicle position and local road geometry, and automatic location of road features in an initial image. We also describe our planned extensions to include intersection navigation.
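Fitting a road model to detected stripe locations can be illustrated with a minimal least-squares sketch. The linear model x = a + b*y (lateral offset a, heading slope b, in ground coordinates) is a simplification assumed for this sketch, not YARF's actual road geometry model.

```python
def fit_centerline(points):
    """Least-squares fit of x = a + b*y to detected stripe points (x, y) in
    ground coordinates. a approximates the vehicle's lateral offset from the
    stripe and b the relative heading; uses the closed-form normal equations."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    b = (n * sxy - sy * sx) / (n * syy - sy * sy)
    a = (sx - b * sy) / n
    return a, b
```

Stripe points lying on x = 1 + 0.1*y recover offset 1.0 and slope 0.1 exactly; with noisy detections the same fit gives the least-squares estimate of both quantities.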