The topic of this conference is how various sensors can be used together to support robot mobility and related tasks. The support needed - what the sensors must provide - is an understanding of the layout of the environment, the nature of its (other) mobile elements, and so on, sufficient for the robot at least to navigate and avoid collisions. This "understanding" may take the form of an explicit, symbolic representation (model) whose symbols can be manipulated by a planner and eventually used to direct robot motion. We demonstrate, however, that it is possible, and sometimes desirable, to bypass this representation phase and allow the sensors to influence robot behavior directly (i.e., to make the "understanding" a procedural one), and we show that this approach yields effective robot motion. We present results obtained on a real robot that procedurally integrates odometry, sonar, and vision - fusing not only different sensors but also data from the same sensors over time - for real-time navigation and exploration.
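
The idea of sensors directly influencing behavior, with no intermediate symbolic model, can be illustrated by a minimal reactive controller. The sketch below is purely illustrative - the function, thresholds, and sonar format are assumptions, not the system described in this paper - but it shows the essential structure: raw range readings map procedurally to motion commands.

```python
# Hypothetical sketch of procedural "understanding": sonar readings
# map directly to motion commands with no symbolic world model.
# All names, units, and thresholds here are illustrative assumptions.

def reactive_step(sonar, goal_heading, safe_dist=0.5):
    """Return (forward_speed, turn_rate) from raw sonar ranges.

    sonar: dict mapping beam heading (radians, robot frame) -> range (m)
    goal_heading: desired heading toward the goal (radians)
    """
    # Repulsion: each reading inside the safety distance steers the
    # robot away from that beam, weighted by how deep the intrusion is.
    turn = 0.0
    for heading, dist in sonar.items():
        if dist < safe_dist:
            turn -= heading * (safe_dist - dist) / safe_dist
    # Attraction: bias the turn toward the goal heading.
    turn += 0.5 * goal_heading
    # Slow down as the nearest obstacle approaches; stop when blocked.
    nearest = min(sonar.values())
    speed = max(0.0, min(1.0, (nearest - 0.2) / safe_dist))
    return speed, turn

# Obstacle close on one side, goal straight ahead: the robot keeps
# moving (speed > 0) and turns away from the obstacle (turn > 0).
speed, turn = reactive_step({-0.5: 0.3, 0.0: 2.0, 0.5: 2.0},
                            goal_heading=0.0)
```

Note that nothing in this loop builds or consults a map; the "understanding" of the environment exists only in the mapping from readings to commands, which is exactly the procedural stance the paper argues for.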