In this paper, a new method is proposed to control a vision-based robot in large navigation spaces. In such spaces, the visual features observed by an on-board camera can change drastically, or even disappear completely, between the initial image seen at the beginning of a task and the final image seen at the desired position of the robot. These features are therefore not sufficient to control the entire motion of the robotic system from beginning to end. The problem requires a more complete definition and representation of the navigation space. This can be achieved by a topological representation, in which the environment is defined directly in the sensor space by a database of images. In our approach, this database is acquired during an offline learning step. An image retrieval method then indexes and matches a request image, given by the camera, to the closest view within the database. In this way, an image path linking the initial and desired images is extracted from the database, providing enough information to control the robot. This paper focuses on the closed-loop control law that drives the robot to its desired position along this image path. The proposed method requires neither a global reconstruction nor a temporal planning step. Furthermore, the robot is not obliged to converge directly onto each image waypoint but automatically chooses a better trajectory. The visual servoing control law is designed with specific features that ensure the robot navigates within the visibility path. Simulation results demonstrate the effectiveness of this method for controlling the motion of a camera in three-dimensional environments (a free-flying camera, or a camera moving on a plane).
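The retrieval and image-path steps described above can be sketched in a minimal form. The following is an illustrative assumption, not the paper's implementation: it stands in for the indexing step with a nearest-neighbour search over hypothetical global image descriptors, and for the path-extraction step with a breadth-first search over a visibility graph whose edges connect database images sharing enough features. All names, descriptors, and distance choices here are placeholders.

```python
# Hedged sketch of the topological pipeline: (1) match a request image
# to the closest database view, (2) extract an image path through the
# visibility graph linking the initial and desired views.
# Descriptors and the L1 distance are illustrative assumptions only.
from collections import deque


def retrieve_closest(request_descriptor, database):
    """Return the database view whose (hypothetical) global descriptor
    is nearest to the request image, under an L1 distance."""
    return min(
        database,
        key=lambda view: sum(
            abs(a - b) for a, b in zip(database[view], request_descriptor)
        ),
    )


def image_path(adjacency, start, goal):
    """Breadth-first search over the visibility graph: edges connect
    database images that share enough visual features. Returns the
    sequence of views linking start to goal, or None if disconnected."""
    queue, parent = deque([start]), {start: None}
    while queue:
        view = queue.popleft()
        if view == goal:
            path = []
            while view is not None:
                path.append(view)
                view = parent[view]
            return path[::-1]  # reverse: start -> goal
        for neighbour in adjacency[view]:
            if neighbour not in parent:
                parent[neighbour] = view
                queue.append(neighbour)
    return None
```

A control loop would then servo toward successive views of the returned path, free to cut across waypoints rather than converging on each one, as the abstract notes.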