Position estimation and driving of an autonomous vehicle by monocular vision
9 April 2007
Automatic adaptive real-time tracking for target recognition provided autonomous control of a scale-model electric truck. The two-wheel-drive truck was modified as an autonomous rover test-bed for vision-based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent, relying only on the confidence error of the target recognition algorithm; others exploit the combination of motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests in the JPL simulated Mars yard are presented. Recognition error was often situation dependent. In the rover case, the background was in motion and could be characterized to provide visual cues on rover travel, such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may serve as landmarks, or waypoints, for such estimates. As objects are approached, their scale increases and their orientation may change; particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm successfully provided visual odometry in the simulated Mars environment.
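The abstract notes that a single calibrated camera suffices for range estimation because an approached landmark's apparent scale grows in a predictable way. A minimal sketch of that idea under a standard pinhole-camera assumption follows; the function name, focal length, and landmark size are illustrative, not taken from the paper.

```python
# Minimal sketch of monocular range estimation from apparent scale,
# assuming a pinhole camera model (Z = f * W / w). The numbers below
# are illustrative assumptions, not values from the paper.

def range_from_scale(focal_px: float, landmark_width_m: float,
                     width_px: float) -> float:
    """Estimate distance to a landmark of known physical width
    from its apparent width in pixels."""
    if width_px <= 0:
        raise ValueError("apparent width must be positive")
    return focal_px * landmark_width_m / width_px

# Example: a 0.5 m rock spanning 40 px with an 800 px focal length
# lies at 800 * 0.5 / 40 = 10 m.
print(range_from_scale(800.0, 0.5, 40.0))  # → 10.0
```

In practice the apparent width would come from the tracker's bounding box, and (as the abstract notes for rough terrain) unpredictable orientation changes would make such single-frame estimates noisy, motivating the confidence-based and motion-based filtering the authors describe.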
© (2007) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Jay C. Hanan, Pavan Kayathi, Casey L. Hughlett, "Position estimation and driving of an autonomous vehicle by monocular vision", Proc. SPIE 6574, Optical Pattern Recognition XVIII, 65740K (9 April 2007); https://doi.org/10.1117/12.729607
