The University of Massachusetts Mobile Perception Laboratory (MPL) is an autonomous outdoor vehicle, similar to CMU's NAVLAB II, that was built by UMass as an experimental testbed for high-level computer vision. Our goal in developing MPL is to integrate many of the vision algorithms developed over the past decade, at UMass and elsewhere, into a system capable of useful, goal-oriented autonomous navigation in real-world scenarios. To accomplish such high-level tasks, MPL must acquire many types of information about its environment, not all of it image related. Rather than performing sensor-level fusion of the image data, we have focused on the types of information required and the representations needed to express them. The problem, as we see it, is to integrate the information needed for a specific task (using task-specific and general constraints) by combining the appropriate representations at the appropriate time, whether they are derived from different sensors or from different interpretation techniques applied to a single sensor. We refer to this task as information fusion, rather than sensor fusion.
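The distinction between sensor fusion and information fusion can be illustrated with a minimal sketch: representations are indexed by the *type* of information they carry rather than by the sensor that produced them, so a task can request, say, a depth estimate without caring whether it came from stereo, a laser rangefinder, or a motion algorithm. The class and field names below (`Representation`, `InformationStore`, a scalar `confidence`) are our own illustrative assumptions, not part of the MPL system.

```python
from dataclasses import dataclass


@dataclass
class Representation:
    """One piece of derived information about the environment."""
    info_type: str    # kind of information, e.g. "depth" or "object_label"
    source: str       # sensor or interpretation technique that produced it
    value: object     # the representation itself (scalar, map, model, ...)
    confidence: float # hypothetical scalar confidence in [0, 1]


class InformationStore:
    """Toy store that groups representations by information type,
    not by originating sensor (information fusion, not sensor fusion)."""

    def __init__(self):
        self._by_type = {}

    def post(self, rep):
        # Any sensor or algorithm may contribute a representation.
        self._by_type.setdefault(rep.info_type, []).append(rep)

    def fuse(self, info_type):
        """Return the highest-confidence representation of the requested
        information type, regardless of which source produced it."""
        reps = self._by_type.get(info_type, [])
        return max(reps, key=lambda r: r.confidence) if reps else None


# Two sources contribute the same *type* of information:
store = InformationStore()
store.post(Representation("depth", "stereo", 4.2, confidence=0.6))
store.post(Representation("depth", "laser", 4.0, confidence=0.9))
best = store.fuse("depth")  # task asks for depth, not for a sensor
```

A real system would of course combine estimates (and apply task-specific constraints) rather than simply pick the most confident one; the point here is only that the index is the information type, not the sensor.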