To use a world model, a mobile robot must be able to determine its own position in the world. To support truly autonomous navigation, I present MARVEL, a system that builds and maintains its own models of world locations and uses these models to recognize its world position from stereo vision input. MARVEL is designed to be robust with respect to input errors and to respond to a gradually changing world by updating its world-location models. In over 1000 recognition tests using real-world data, MARVEL yielded a false-negative rate under 10% with zero false positives. MARVEL is also designed to fit into a larger world-modeling system currently under development.