This paper addresses the design, design method, test platform, and test results of an algorithm used in
autonomous navigation for intelligent vehicles. The Bluefield State College (BSC) team created this algorithm for its
2009 Intelligent Ground Vehicle Competition (IGVC) robot called Anassa V. The BSC robotics team comprises
undergraduate computer science, engineering technology, and marketing students, along with one robotics faculty advisor. The team
has participated in IGVC since the year 2000. A major part of the design process that the BSC team uses each year for
IGVC is a fully documented "Post-IGVC Analysis." Over the nine years since 2000, the lessons the students learned
from these analyses have resulted in an ever-improving, highly successful autonomous algorithm. The algorithm
employed in Anassa V is a culmination of past successes and new ideas, resulting in Anassa V earning several excellent
IGVC 2009 performance awards, including third place overall. The paper will discuss all aspects of the design of this
autonomous robotic system, beginning with the design process and ending with test results for both simulation and real-world environments.
This paper presents an algorithm for solving three challenges of autonomous navigation: sensor signal processing, sensor integration, and path-finding. The algorithm organizes these challenges into three steps. The first step converts the raw data from each sensor into a form suitable for real-time processing, with an emphasis on image processing. In the second step, the processed data from all sensors are integrated into a single map. Using this map as input, the third step calculates a goal and finds a suitable path from the robot to that goal. The algorithm completes these steps in order and repeats them indefinitely. The robotic platform designed for testing the algorithm is a six-wheel, mid-wheel-drive system using differential steering. The robot, called Anassa II, has an electric wheelchair base and a custom-built top, and it is designed to participate in the Intelligent Ground Vehicle Competition (IGVC). The sensors consist of a laser scanner, a video camera, a Differential Global Positioning System (DGPS) receiver, a digital compass, and two wheel encoders. Since many intelligent vehicles carry similar sensors, the approach presented here is general enough for many types of autonomous mobile robots.
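The three-step loop above can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's implementation: the occupancy-grid map, the laser-only step 1, and the greedy single-step planner are all simplifying assumptions introduced here for clarity.

```python
import math

def process_sensors(raw_scan):
    """Step 1: convert raw laser ranges (bearing, distance) to Cartesian points."""
    points = []
    for angle_deg, dist in raw_scan:
        if dist is not None:  # None models "no return" from the scanner
            rad = math.radians(angle_deg)
            points.append((dist * math.cos(rad), dist * math.sin(rad)))
    return points

def integrate_map(points, size=10, cell=1.0):
    """Step 2: fuse processed sensor points into a single occupancy grid.
    The robot sits at row 0, middle column; each cell is `cell` metres square."""
    grid = [[0] * size for _ in range(size)]
    for x, y in points:
        col, row = int(x / cell) + size // 2, int(y / cell)
        if 0 <= row < size and 0 <= col < size:
            grid[row][col] = 1  # mark cell as occupied
    return grid

def plan_step(grid, robot, goal):
    """Step 3: greedy one-cell move toward the goal, avoiding occupied cells.
    (A real planner would search the whole grid; this keeps the sketch short.)"""
    best, best_d = robot, float("inf")
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        r, c = robot[0] + dr, robot[1] + dc
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
            d = abs(goal[0] - r) + abs(goal[1] - c)  # Manhattan distance
            if d < best_d:
                best, best_d = (r, c), d
    return best

# One iteration of the repeat-forever loop:
scan = [(90, 3.0), (45, None)]   # (bearing in degrees, range in metres)
pts = process_sensors(scan)
grid = integrate_map(pts)
robot, goal = (0, 5), (9, 5)
next_cell = plan_step(grid, robot, goal)
```

In the full algorithm these three calls repeat indefinitely, with the camera and the remaining sensors feeding step 1 alongside the laser.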
This paper presents a method to integrate non-stereoscopic vision information with laser distance measurements for Autonomous Ground Robotic Vehicles (AGRV). The method assumes a horizontally mounted Laser Measurement System (LMS) sweeping 180 degrees across the front, from right to left, once per second, and a video camera mounted five feet high, pointed forward and tilted down 45 degrees from the horizontal. The LMS gives highly accurate obstacle position measurements in a two-dimensional plane, whereas the vision system gives limited and less accurate information on obstacle positions in three dimensions. The vision system can also detect contrasts such as ground markings. Many AGRVs have similar sensors in similar arrangements, and the method presented here is general enough for many types of distance sensors, cameras, and lenses. Since the data from these two sensors arrive in radically different formats, AGRVs need a scheme to combine them into a common format so that the data can be compared and correlated. A successful integration method allows the AGRV to make smart path-finding navigation decisions; integrating these two sensors is one of the central challenges for AGRVs that use this approach. The method presented in this paper employs a geometrical approach to combine the two data sets in real time. Tests, performed both in simulation and on an actual AGRV, show excellent results.
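The geometric idea behind such an integration can be sketched as follows: the LMS polar readings and the camera pixels are both mapped into a common robot-frame coordinate system on the ground plane. This is only an illustrative sketch under stated assumptions, not the paper's method: the field of view, image resolution, bearing convention, and flat-ground pinhole projection are all assumptions introduced here (only the five-foot mount height and 45-degree tilt come from the abstract).

```python
import math

CAM_HEIGHT_M = 1.524             # 5 ft camera mount (from the abstract)
CAM_TILT_DEG = 45.0              # downward tilt from horizontal (from the abstract)
VFOV_DEG, HFOV_DEG = 40.0, 60.0  # assumed lens field of view
IMG_W, IMG_H = 640, 480          # assumed image resolution

def lms_to_xy(bearing_deg, range_m):
    """Convert one LMS polar reading to Cartesian robot-frame coordinates.
    Assumed convention: bearing 0 = right, 90 = straight ahead, 180 = left."""
    rad = math.radians(bearing_deg)
    return (range_m * math.cos(rad), range_m * math.sin(rad))

def pixel_to_ground(u, v):
    """Project image pixel (u, v) onto the ground plane by intersecting the
    pixel's viewing ray with the flat ground (z = 0) below the camera."""
    # Angle below the horizon for this pixel row (linear small-FOV approximation)
    pitch = CAM_TILT_DEG + (v - IMG_H / 2) / IMG_H * VFOV_DEG
    if pitch <= 0:
        return None  # pixel looks at or above the horizon: no ground intersection
    forward = CAM_HEIGHT_M / math.tan(math.radians(pitch))
    # Lateral offset from the pixel column
    yaw = (u - IMG_W / 2) / IMG_W * HFOV_DEG
    lateral = forward * math.tan(math.radians(yaw))
    return (lateral, forward)  # robot-frame (x right, y forward), metres
```

Once both sensors report in the same ground-plane coordinates, an obstacle seen by the camera can be checked against the laser scan (and vice versa), which is the correlation the integration step needs.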