The autonomous navigation of a mobile vehicle can be described as the task of moving itself through the environment via a series of positions, based on information gathered by its sensors. To accomplish this task, the vehicle has to cope with two main subtasks: obstacle avoidance and self-localization. The latter refers to the ability to determine its position and orientation with respect to the environment. This work describes a simple but efficient method that performs pose estimation for a mobile vehicle based on visual information from artificial landmarks, using a sequence of frames from an uncalibrated camera. The landmark is segmented from the image sequence, and the vehicle's localization is computed using the landmark's geometric properties and the vehicle's motion vector. The methodology can easily be extended to other types of mobile agents. One of its key advantages is that it is computationally efficient, making it suitable for real-time navigation. Experiments conducted with a Nomad 200 mobile robot equipped with a color camera system have shown the method to be repeatable and very robust to noise. Visual measurements were compared with readings from other on-board sensors, such as ultrasound, with excellent consistency.
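As a hypothetical illustration of how a landmark's geometric properties can yield a pose estimate, the sketch below assumes a pinhole camera model and a landmark of known physical width segmented from the image; it is not the paper's actual algorithm, and all names and parameters are invented for this example.

```python
import math

def landmark_pose(image_cx, landmark_px, known_width_m, pixel_width, focal_px):
    """Estimate range and bearing to a landmark of known physical width
    from its segmented position in the image (pinhole-camera sketch).

    image_cx     -- horizontal image centre, in pixels
    landmark_px  -- horizontal pixel coordinate of the landmark centre
    known_width_m -- physical width of the landmark, in metres
    pixel_width  -- apparent width of the landmark in the image, in pixels
    focal_px     -- camera focal length expressed in pixels
    """
    # Range from apparent size: Z = f * W / w (similar triangles)
    distance_m = focal_px * known_width_m / pixel_width
    # Bearing from horizontal offset of the landmark from the image centre
    bearing_rad = math.atan2(landmark_px - image_cx, focal_px)
    return distance_m, bearing_rad

# Example: a 0.3 m wide landmark appearing 50 px wide, dead ahead,
# with an assumed focal length of 500 px
d, b = landmark_pose(320.0, 320.0, 0.3, 50.0, 500.0)
print(d, b)  # 3.0 metres away, bearing 0 rad
```

In practice such per-frame estimates would be fused with the vehicle's motion vector (e.g. odometry) over the frame sequence, which is how the abstract describes the localization being computed.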