NASA scenarios for lunar and planetary missions include robotic vehicles that function in both teleoperated and semi-autonomous modes. Under teleoperation, on-board stereo cameras may provide 3-D scene information to human operators via stereographic displays; likewise, under semi-autonomy, machine stereo vision may provide 3-D information for obstacle avoidance. In the past, the slow speed of machine stereo vision systems has posed a hurdle to the semi-autonomous scenario; however, recent work at JPL and other laboratories has produced stereo vision systems with high reliability and near real-time performance for low-resolution image pairs. In particular, JPL has taken a significant step by achieving the first autonomous, cross-country robotic traverses (of up to 100 meters) to use stereo vision, with all computing on-board the vehicle. This paper describes the stereo vision system, including the underlying statistical model and the details of the implementation. The statistical and algorithmic aspects employ random field models of the disparity map, Bayesian formulations of single-scale matching, and area-based image comparisons. The implementation builds Laplacian image pyramids and produces disparity maps from the 60×64 level of the pyramids at rates as fast as two seconds per image pair. All vision processing is done on one 68020 augmented with Datacube image processing boards. The author argues that the overall approach provides a unifying paradigm for practical, domain-independent stereo ranging.
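To make two of the operations named above concrete, the following is a minimal sketch, not the on-board implementation: it builds a Laplacian image pyramid and computes a disparity map by area-based (sum-of-squared-differences) matching at the coarse 60×64 pyramid level. The language (Python/NumPy/SciPy), window size, search range, pyramid depth, smoothing parameters, and synthetic image pair are all illustrative assumptions, and the paper's Bayesian random-field formulation of matching is omitted.

```python
import numpy as np
from scipy import ndimage


def laplacian_pyramid(image, levels=3):
    """Band-pass (Laplacian) pyramid: a list of band-pass images plus a
    final low-pass residual; each level is half the previous resolution."""
    pyramid = []
    current = image.astype(np.float64)
    for _ in range(levels):
        blurred = ndimage.gaussian_filter(current, sigma=1.0)
        pyramid.append(current - blurred)   # band-pass residual at this scale
        current = blurred[::2, ::2]         # downsample by 2 for next level
    pyramid.append(current)                 # low-pass residual
    return pyramid


def ssd_disparity(left, right, max_disp=8, win=3):
    """Area-based matching: for each left-image pixel, choose the horizontal
    shift d that minimizes the sum of squared differences over a
    (2*win+1)^2 window. Assumes rectified images (epipolar lines = rows)."""
    INVALID = 1e12  # large finite cost marking columns with no valid match
    rows, cols = left.shape
    best_cost = np.full((rows, cols), np.inf)
    disparity = np.zeros((rows, cols), dtype=np.int32)
    for d in range(max_disp + 1):
        diff = np.full((rows, cols), INVALID)
        diff[:, d:] = (left[:, d:] - right[:, :cols - d]) ** 2
        cost = ndimage.uniform_filter(diff, size=2 * win + 1)  # windowed SSD
        better = cost < best_cost
        best_cost[better] = cost[better]
        disparity[better] = d
    return disparity


# Synthetic rectified pair: the right image is the left shifted by 12 pixels,
# i.e. a disparity of 3 at the quarter-resolution (60x64) pyramid level.
rng = np.random.default_rng(0)
left_full = ndimage.gaussian_filter(rng.standard_normal((240, 256)), 2.0)
right_full = np.roll(left_full, -12, axis=1)

left_l2 = laplacian_pyramid(left_full)[2]    # 60x64 band-pass level
right_l2 = laplacian_pyramid(right_full)[2]
disp = ssd_disparity(left_l2, right_l2)
print(disp.shape, np.median(disp[10:-10, 15:-15]))  # expect (60, 64) and ~3.0
```

Matching on band-pass (Laplacian) levels rather than raw intensities, as in the paper, also makes the area-based comparison insensitive to brightness offsets between the two cameras.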