A technique for dynamic position correction using image features as virtual beacons is described. An algorithm that acquires new features, computes robot position correction vectors from tracked features, and maintains feature reliability statistics is detailed. The algorithm minimizes the use of matching to reduce computational expense and increase robustness. The principal inputs to the algorithm are the relative bearings observed between feature pairs. Unlike stereo-vision techniques, it does not compute explicit feature range estimates. Unlike the bulk of vision-based navigation methods, this technique derives an accurate position estimate by integrating a large number of correction vectors obtained from low-level analysis of many images. A control architecture for an autonomous mobile robot that makes use of this positioning technique is discussed. The general navigation problem of positioning, model building, path finding, and path execution is decomposed into local and global navigation. Local navigation is independent of high-level representations; it is concerned with the immediately perceivable environment and handles the bulk of the real-time constraints. Methods for coupling local and global navigation are explored. Simulation results showing the behavior of such a control system are presented. The motivation behind this research is the belief that a substantial subset of the navigation problem can be solved using only information obtained during early vision processing. This technique is expected to be more computationally tractable than methods based on optical-flow-field determination and more accurate than landmark-based navigation methods.
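To give a concrete flavor of bearing-only position correction, the following is a minimal, hypothetical sketch, not the algorithm detailed in the paper: it assumes feature map positions are already known (whereas the actual method acquires features online and tracks their reliability), omits heading correction, and uses a simple numerical-gradient update as the correction vector. The residuals are the differences between predicted and observed subtended angles over feature pairs, so no range estimates are ever computed; all names and parameters here are illustrative assumptions.

```python
import math

def subtended_angle(p, a, b):
    """Angle at point p between the bearings to features a and b,
    wrapped to (-pi, pi]. This is the range-free measurement: the
    relative bearing between a feature pair."""
    ang = (math.atan2(a[1] - p[1], a[0] - p[0])
           - math.atan2(b[1] - p[1], b[0] - p[0]))
    return math.atan2(math.sin(ang), math.cos(ang))

def correction_vector(est, pairs, step=0.2, eps=1e-4):
    """One aggregated position correction: a forward-difference
    gradient step on the sum of squared subtended-angle residuals
    over all feature pairs (an assumed update rule, for illustration)."""
    def cost(p):
        return sum((subtended_angle(p, a, b) - meas) ** 2
                   for a, b, meas in pairs)
    c0 = cost(est)
    gx = (cost((est[0] + eps, est[1])) - c0) / eps
    gy = (cost((est[0], est[1] + eps)) - c0) / eps
    return (-step * gx, -step * gy)

# Toy scene: true robot at the origin, four features at known positions.
features = [(5.0, 0.0), (0.0, 5.0), (-4.0, 3.0), (3.0, -4.0)]
true_pos = (0.0, 0.0)
pairs = [(features[i], features[j],
          subtended_angle(true_pos, features[i], features[j]))
         for i in range(len(features)) for j in range(i + 1, len(features))]

est = (1.0, -0.8)  # dead-reckoned estimate with accumulated drift
for _ in range(500):
    dx, dy = correction_vector(est, pairs)
    est = (est[0] + dx, est[1] + dy)
print(est)
```

Repeatedly applying small corrections pulls the estimate back toward the true position; the paper's point is that accuracy comes from integrating many such weak corrections over many images rather than from any single precise measurement.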