This paper presents an algorithm for solving three challenges of autonomous navigation: sensor signal processing, sensor integration, and path-finding. The algorithm organizes these challenges into three steps. The first step converts the raw data from each sensor into a form suitable for real-time processing, with an emphasis on image processing. In the second step, the processed data from all sensors are integrated into a single map. Using this map as input, the third step calculates a goal and finds a suitable path from the robot to the goal. The algorithm executes these steps in order and repeats them indefinitely. The robotic platform designed for testing the algorithm is a six-wheel, mid-wheel-drive system with differential steering. The robot, called Anassa II, has an electric wheelchair base and a custom-built top, and it is designed to compete in the Intelligent Ground Vehicle Competition (IGVC). The sensors consist of a laser scanner, a video camera, a Differential Global Positioning System (DGPS) receiver, a digital compass, and two wheel encoders. Since many intelligent vehicles carry similar sensors, the approach presented here is general enough for many types of autonomous mobile robots.
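Although the paper's implementation is not reproduced here, the sense-integrate-plan cycle can be sketched compactly. The Python fragment below is a minimal, hypothetical illustration of the three steps using laser data alone: the grid resolution, map size, breadth-first planner, and all function names are assumptions chosen for brevity, not the paper's actual algorithm, which also fuses camera, DGPS, compass, and encoder data.

```python
import numpy as np
from collections import deque

CELL = 0.25   # grid resolution in metres (assumed value)
N = 80        # 80 x 80 cells: a 20 m x 20 m robot-centred map (assumed)

def process_laser(ranges, angles):
    """Step 1: convert raw polar laser returns to Cartesian points (robot frame)."""
    return np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))

def integrate(grid, points):
    """Step 2: fuse processed sensor points into a single occupancy grid."""
    ij = np.floor(points / CELL).astype(int) + N // 2
    keep = (ij >= 0).all(axis=1) & (ij < N).all(axis=1)
    grid[ij[keep, 1], ij[keep, 0]] = 1
    return grid

def plan(grid, start, goal):
    """Step 3: breadth-first search over free cells from start to goal."""
    prev, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        for dc, dr in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dc, cell[1] + dr)
            if (0 <= nxt[0] < N and 0 <= nxt[1] < N
                    and nxt not in prev and not grid[nxt[1], nxt[0]]):
                prev[nxt] = cell
                queue.append(nxt)
    return []                                  # no path found

# One pass of the cycle on synthetic data; the real system repeats it indefinitely.
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)   # 180-degree laser sweep
ranges = np.full_like(angles, 8.0)
ranges[85:96] = 3.0                                # obstacle straight ahead
grid = integrate(np.zeros((N, N), dtype=np.uint8), process_laser(ranges, angles))
path = plan(grid, (N // 2, N // 2), (N // 2 + 30, N // 2))  # goal 7.5 m ahead
print(f"planned path of {len(path)} cells")
```

A production planner would replace the breadth-first search with a cost-aware method, but the three-step structure of the loop stays the same.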
This paper presents a method for integrating non-stereoscopic vision information with laser distance measurements for Autonomous Ground Robotic Vehicles (AGRVs). The method assumes a horizontally mounted Laser Measurement System (LMS) sweeping 180 degrees in front of the vehicle, from right to left, once per second, and a video camera mounted five feet above the ground, pointing forward and downward at 45 degrees to the horizontal. The LMS gives highly accurate obstacle position measurements in a two-dimensional plane, whereas the vision system gives limited and less accurate information on obstacle positions in three dimensions. The vision system can also detect ground markings by their contrast. Many AGRVs have similar sensors in similar arrangements, and the method presented here is general enough for many types of distance sensors, cameras, and lenses. Because the data from these two sensors arrive in radically different formats, an AGRV needs a scheme to combine them into a common format so that the data can be compared and correlated. A successful integration method allows the AGRV to make smart path-finding navigation decisions; integrating these two sensors is one of the central challenges for AGRVs that use this approach. The method presented in this paper employs a geometrical approach to combine the two data sets in real time. Tests, performed both in simulation and on an actual AGRV, show excellent results.
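To make the geometrical idea concrete, the sketch below places an LMS return and a camera pixel in one robot-centred frame, using the five-foot camera height and 45-degree tilt stated above. It is an illustrative flat-ground back-projection under assumed pinhole intrinsics (F, CX, CY) and hypothetical function names, not the paper's exact method.

```python
import numpy as np

H = 1.524                        # camera height: five feet, in metres
TILT = np.deg2rad(45.0)          # downward pitch from horizontal
F, CX, CY = 500.0, 320.0, 240.0  # assumed pinhole intrinsics (pixels)

def laser_to_robot(rng, bearing):
    """LMS return (range in metres, bearing in radians; 0 = straight ahead,
    left positive) to (x forward, y left) in the robot frame."""
    return rng * np.cos(bearing), rng * np.sin(bearing)

def pixel_to_robot(u, v):
    """Back-project pixel (u, v) onto the ground plane z = 0 (flat-ground
    assumption) and return (x forward, y left) in the robot frame,
    or None if the ray does not reach the ground."""
    a = (u - CX) / F   # ray components in the camera frame
    b = (v - CY) / F
    # Camera axes in robot coordinates for a pitch of TILT below horizontal:
    # x_cam (image right) = (0, -1, 0), y_cam (image down) = (-sin, 0, -cos),
    # z_cam (optical axis) = (cos, 0, -sin).
    dx = np.cos(TILT) - b * np.sin(TILT)
    dy = -a
    dz = -(b * np.cos(TILT) + np.sin(TILT))
    if dz >= 0:        # ray points at or above the horizon
        return None
    s = H / -dz        # scale that brings the ray down to the ground plane
    return s * dx, s * dy

# Both sensors now report in one robot-centred frame and can be correlated.
print(laser_to_robot(3.0, np.deg2rad(10)))  # obstacle seen by the LMS
print(pixel_to_robot(320, 300))             # ground marking seen by the camera
```

Once both data sets share this frame, laser obstacles and camera-detected markings can be compared cell by cell in a common map, which is what enables the path-finding decisions described above.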