4D/RCS is a hierarchical architecture designed for the control of intelligent systems. One of the main areas to which 4D/RCS has recently been applied is the control of autonomous vehicles. To accomplish this, a hierarchical decomposition of on-road driving activities has been performed, resulting in an implementation of 4D/RCS tailored to this application. The implementation has seven layers, ranging from a journey manager, which determines the order of the destinations to be visited, through a destination manager, which provides turn-by-turn directions to each destination, through route segment, drive behavior, elemental maneuver, and goal path trajectory layers, down finally to the servo controllers.
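To make the layered decomposition concrete, the sketch below represents the seven levels as a simple ordered structure in Python. The level names come from the text above; the one-line responsibilities attached to the lower levels, and the plan() trace function, are illustrative assumptions rather than definitions taken from the 4D/RCS specification.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RcsLevel:
    """One level of the seven-layer 4D/RCS on-road driving hierarchy."""
    name: str
    responsibility: str  # paraphrased / assumed, not quoted from the architecture

# Levels listed from the top of the hierarchy down to the servo controllers.
HIERARCHY: List[RcsLevel] = [
    RcsLevel("journey manager",      "orders the destinations to be visited"),
    RcsLevel("destination manager",  "provides turn-by-turn directions to a destination"),
    RcsLevel("route segment",        "plans travel along one segment of the route"),
    RcsLevel("drive behavior",       "selects the driving behavior for the segment"),
    RcsLevel("elemental maneuver",   "decomposes a behavior into elementary maneuvers"),
    RcsLevel("goal path trajectory", "generates the trajectory to the next goal point"),
    RcsLevel("servo",                "closes the low-level actuator control loops"),
]

def plan(goal: str) -> None:
    """Trace how a top-level goal is successively refined down the hierarchy."""
    for level in HIERARCHY:
        print(f"{level.name:>20}: refines '{goal}' -- {level.responsibility}")

if __name__ == "__main__":
    plan("drive to the next destination")
```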
In this paper, we show, within the 4D/RCS architecture, how knowledge-driven, top-down symbolic representations combined with low-level, bottom-up tasks can synergistically provide more valuable information for on-road driving than either approach can alone. We demonstrate these ideas using field data obtained from an Unmanned Ground Vehicle (UGV) traversing urban on-road environments.
This paper describes and evaluates a vision system that accurately segments unstructured, non-homogeneous roads of arbitrary shape under various lighting conditions. The road following algorithm is based on segmenting the road from the background using color models. Data are collected from a video camera mounted on a moving vehicle. In each frame, color models of the road and of the background are constructed and used to calculate the probability that each pixel belongs to the road class. Temporal fusion of these road probabilities helps to stabilize the models, resulting in a probability map that can be thresholded to separate road from non-road areas.
Performance evaluation follows the approach described in Hong et al. [1]. We evaluate the algorithm's performance against annotated frames of video data, which allows us to compute false positive and false negative ratios. False positives are non-road areas in the image that the system classified as road, while false negatives are road areas classified as non-road. We use the sum of false positives and false negatives as an overall classification error computed for each frame of the video sequence, and from the per-frame errors we compute statistics over the whole sequence. The per-frame classification error allows us to compare the performance of several algorithms on the same frame, while the classification statistics allow us to analyze the overall performance of individual algorithms.
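As a concrete illustration of the segmentation step described above, the following sketch builds per-frame color models for the road and the background, converts them into a per-pixel road probability, and temporally fuses the result. The use of a single Gaussian per class, the equal-prior Bayes combination, and the exponential smoothing constant are assumptions made to keep the example short; they are not necessarily the models used by the actual system, and all function names are hypothetical.

```python
import numpy as np

def fit_color_model(pixels):
    """Fit a single Gaussian (mean, covariance) in RGB space to an (N, 3) pixel sample.
    Assumption: the system's color models may be richer (e.g. mixtures or histograms)."""
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularize for invertibility
    return mean, cov

def class_likelihood(frame, mean, cov):
    """Per-pixel Gaussian likelihood for an (H, W, 3) frame under one color model."""
    diff = frame.reshape(-1, 3).astype(np.float64) - mean
    inv = np.linalg.inv(cov)
    mahal = np.einsum("ij,jk,ik->i", diff, inv, diff)             # squared Mahalanobis distance
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** 3 * np.linalg.det(cov))
    return (norm * np.exp(-0.5 * mahal)).reshape(frame.shape[:2])

def road_probability(frame, road_pixels, background_pixels):
    """P(road | pixel) from the two per-frame color models (equal priors assumed)."""
    p_road = class_likelihood(frame, *fit_color_model(road_pixels))
    p_bg = class_likelihood(frame, *fit_color_model(background_pixels))
    return p_road / (p_road + p_bg + 1e-12)

def fuse_temporally(previous_map, current_map, alpha=0.7):
    """Exponentially smooth the road-probability map across frames to stabilize it."""
    if previous_map is None:
        return current_map
    return alpha * previous_map + (1.0 - alpha) * current_map

# Thresholding the fused map yields the road / non-road segmentation:
#   road_mask = fused_map > 0.5
```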
We describe a methodology for evaluating algorithms to provide quantitative information about how well road detection and road following algorithms perform. The approach relies on generating a set of standard data sets annotated with ground truth. We evaluate the algorithms used to detect roads by comparing the output of the algorithms with ground truth, which we obtain by having humans annotate the data sets used to test the algorithms. Ground truth annotations are acquired from more than one person to reduce systematic errors. Results are quantified by looking at false positive and false negative regions of the image sequences when compared with the ground truth. We describe the evaluation of a number of variants of a road detection system based on neural networks.
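A minimal sketch of the error measure described above is given below, assuming that the algorithm output and the human-annotated ground truth are boolean masks of the same size (True = road) and that the false positive and false negative counts are normalized by the number of pixels in the frame; the evaluation described in the text may normalize differently.

```python
import numpy as np

def frame_classification_error(predicted, ground_truth):
    """Per-frame false positives, false negatives, and their sum, as pixel fractions."""
    fp = np.logical_and(predicted, ~ground_truth).sum()   # non-road classified as road
    fn = np.logical_and(~predicted, ground_truth).sum()   # road classified as non-road
    total = predicted.size
    return {"false_positive": fp / total,
            "false_negative": fn / total,
            "error": (fp + fn) / total}

def sequence_statistics(per_frame_errors):
    """Summary statistics of the per-frame classification error over a video sequence."""
    errors = np.asarray(per_frame_errors, dtype=np.float64)
    return {"mean": errors.mean(), "std": errors.std(),
            "min": errors.min(), "max": errors.max()}
```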