The two most popular approaches among current researchers for tackling the problem of autonomous mobile robot map building and navigation are: (1) model-based methods, which construct 3D models of the environment, and (2) view-based, or topological, methods, in which the environment is represented in some non-topographical manner that the robot can typically relate directly to its sensors. Model-based methods have been shown to be good for local navigation, yet unreliable and computationally intensive for map building. View-based methods have been shown to be good for map building, but poor at local navigation.
This paper presents the results of recent research yielding a solution which utilises the benefits of each of these two methods to overcome the inadequacies of the other. The result is a new approach to map building and path planning, one which is robust and extensible. In particular, the solution is largely unaffected by the size or complexity of the environment being learnt, and it thus forms a key milestone in the evolution of AMR autonomy. It is shown that an AMR can successfully use this method to navigate around an unknown environment. Learning is accomplished by segmenting the environment into discrete 'locations', based upon visual similarity. Navigation is achieved by storing minimal distance-to-visual-feature information at all visually distinct locations. No self-consistent geometric map is stored; nevertheless, the stored geometric information can be used for navigation between visually distinct locations and for local navigation tasks such as obstacle avoidance.
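The segmentation of the environment into visually distinct 'locations' can be sketched as below. This is a minimal illustration only: the choice of descriptor, the cosine-similarity measure, and the `sim_threshold` parameter are assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

def segment_locations(descriptors, sim_threshold=0.9):
    """Segment a sequence of visual descriptors into discrete 'locations'.

    A new location is opened whenever the cosine similarity between the
    current frame's descriptor and the representative descriptor of the
    current location falls below sim_threshold.
    Returns a list of (representative_descriptor, member_frame_indices).
    """
    locations = []
    rep, members = None, []
    for i, d in enumerate(descriptors):
        d = np.asarray(d, dtype=float)
        if rep is None:
            # First frame starts the first location.
            rep, members = d, [i]
            continue
        sim = (d @ rep) / (np.linalg.norm(d) * np.linalg.norm(rep))
        if sim >= sim_threshold:
            # Visually similar: same location.
            members.append(i)
        else:
            # Visually distinct: close current location, open a new one.
            locations.append((rep, members))
            rep, members = d, [i]
    if rep is not None:
        locations.append((rep, members))
    return locations
```

For example, a descriptor stream that drifts gradually yields one location, while an abrupt change in appearance starts a new one; each stored representative could then be paired with the minimal distance-to-feature information used for navigation.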