The VISIONS research environment at the University of Massachusetts provides an integrated system for the interpretation of visual data. To provide a testbed for many of the algorithms developed within this framework, a mobile robot has been acquired. A multi-level representation and the accompanying architecture used to support multi-sensor navigation (predominantly visual) are described. A hybrid vertex-graph free-space representation (meadow map), based upon the decomposition of free space into convex regions and suitable for use in both indoor and limited outdoor navigation, is discussed. Of particular interest is the capability to handle multiple terrain types. A hierarchical path planner that utilizes the data available in the above representational scheme is described. An overview of the UMASS mobile robot architecture (AuRA) is presented.
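To give a concrete sense of the meadow-map idea, the sketch below models free space as a graph of convex regions whose connecting borders carry distances scaled by terrain type, and plans a path with Dijkstra's algorithm. This is a minimal illustration under assumed data structures and cost values; the region names, terrain multipliers, and `plan_path` helper are hypothetical and not taken from the paper.

```python
import heapq

# Illustrative terrain cost multipliers (assumed, not from the paper).
TERRAIN_COST = {"floor": 1.0, "grass": 2.0, "gravel": 3.0}

# Meadow-map sketch: each convex region maps to a list of
# (neighboring region, border-to-border distance, terrain of that crossing).
MEADOW_MAP = {
    "A": [("B", 4.0, "floor"), ("C", 3.0, "grass")],
    "B": [("A", 4.0, "floor"), ("D", 5.0, "floor")],
    "C": [("A", 3.0, "grass"), ("D", 2.0, "gravel")],
    "D": [("B", 5.0, "floor"), ("C", 2.0, "gravel")],
}

def plan_path(graph, start, goal):
    """Dijkstra search over convex regions, weighting edges by terrain."""
    frontier = [(0.0, start, [start])]  # (accumulated cost, region, path)
    visited = set()
    while frontier:
        cost, region, path = heapq.heappop(frontier)
        if region == goal:
            return cost, path
        if region in visited:
            continue
        visited.add(region)
        for nbr, dist, terrain in graph[region]:
            if nbr not in visited:
                heapq.heappush(
                    frontier,
                    (cost + dist * TERRAIN_COST[terrain], nbr, path + [nbr]),
                )
    return float("inf"), []

cost, path = plan_path(MEADOW_MAP, "A", "D")
print(path, cost)
```

With the assumed costs above, the planner prefers the longer all-floor route A→B→D over the geometrically shorter but gravel-crossing route A→C→D, which is the kind of terrain-aware trade-off the multiple-terrain capability implies.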