This paper describes the approach to autonomous navigation used by "Q," an unmanned ground vehicle designed by the Trinity College Robot Study Team to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2011 competition, Q's intelligence was upgraded in several areas, resulting in a more robust decision-making process and a more reliable system. In 2010-2011, Q's software was modified to operate in a modular, parallel
manner, with all subtasks (including motor control, data acquisition from sensors, image processing, and intelligence)
running simultaneously in separate software processes using the National Instruments (NI) LabVIEW programming
language. This eliminated processor bottlenecks and increased flexibility in the software architecture.
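The subtasks themselves were written as parallel LabVIEW loops, which cannot be reproduced directly in text; the sketch below is only a rough Python analogue of the same modular structure, with hypothetical process names, message formats, and loop rates, showing each subsystem running in its own process and exchanging data through queues.

import multiprocessing as mp
import time

def sensor_acquisition(sensor_q):
    # Publish the latest sensor readings (placeholder values; loop rate is illustrative).
    while True:
        sensor_q.put({"time": time.time(), "lidar": [], "compass": 0.0})
        time.sleep(0.05)

def image_processing(vision_q):
    # Publish white-line and obstacle detections from the cameras (stubbed out).
    while True:
        vision_q.put({"white_lines": [], "obstacles": []})
        time.sleep(0.05)

def intelligence(sensor_q, vision_q, cmd_q):
    # Fuse the most recent sensor and vision messages into a drive command.
    while True:
        sensors = sensor_q.get()
        vision = vision_q.get()
        cmd_q.put({"left": 0.5, "right": 0.5})  # placeholder decision based on sensors/vision

def motor_control(cmd_q):
    # Apply each drive command as it arrives.
    while True:
        cmd = cmd_q.get()  # a real implementation would write cmd to the motor controller

if __name__ == "__main__":
    sensor_q, vision_q, cmd_q = mp.Queue(), mp.Queue(), mp.Queue()
    subtasks = [mp.Process(target=sensor_acquisition, args=(sensor_q,)),
                mp.Process(target=image_processing, args=(vision_q,)),
                mp.Process(target=intelligence, args=(sensor_q, vision_q, cmd_q)),
                mp.Process(target=motor_control, args=(cmd_q,))]
    for p in subtasks:
        p.start()
    for p in subtasks:
        p.join()
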
Though overall throughput was increased, the long runtime of the image processing process (150 ms) reduced the precision of Q's real-time decisions. Q had slow reaction times to obstacles detected only by its cameras, such as white lines, and was limited
to slow speeds on the course. To address this issue, the image processing software was simplified and pipelined to increase throughput and minimize the robot's reaction times (see the pipelining sketch following this section). The vision software was also
modified to detect differences in the texture of the ground, so that specific surfaces (such as ramps and sand pits) could
be identified. While previous iterations of Q failed to detect white lines that were not on a grassy surface, this new
software allowed Q to dynamically alter its image processing state so that appropriate thresholds could be applied to
detect white lines in changing conditions. To maintain an acceptable heading, a path history algorithm was used to handle local obstacle fields, and GPS waypoints were added to provide a global target heading. These
modifications resulted in Q placing 5th in the autonomous challenge and 4th in the navigation challenge at IGVC.
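The pipelining referenced above can be illustrated with a short sketch. The stage names, data formats, and timing below are hypothetical (the actual vision code was written in LabVIEW); the point is only that frame capture and frame processing overlap, so throughput is set by the slower stage rather than by the sum of both stages.

import queue
import threading
import time

frame_q = queue.Queue(maxsize=1)   # single-slot hand-off between the two stages

def capture_stage():
    # Stage 1: grab frames continuously (a placeholder dict stands in for the camera frame).
    while True:
        frame = {"time": time.time(), "pixels": None}
        try:
            frame_q.put(frame, timeout=0.1)
        except queue.Full:
            pass                    # drop the frame if processing is still busy
        time.sleep(0.03)

def processing_stage():
    # Stage 2: apply surface-appropriate thresholds and report white-line detections (stubbed).
    while True:
        frame = frame_q.get()
        detections = []             # thresholding and line extraction would go here
        # publish `detections` to the intelligence process

stages = [threading.Thread(target=capture_stage, daemon=True),
          threading.Thread(target=processing_stage, daemon=True)]
for t in stages:
    t.start()
time.sleep(1.0)                     # let the pipeline run briefly in this sketch

In this arrangement the single-slot queue keeps the processing stage working on the newest available frame, which is what keeps reaction time low even when processing is slower than capture.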