25 October 2004 Bayesian stereo: 3D vision designed for sensor fusion
Proceedings Volume 5608, Intelligent Robots and Computer Vision XXII: Algorithms, Techniques, and Active Vision; (2004); doi: 10.1117/12.571537
Event: Optics East, 2004, Philadelphia, Pennsylvania, United States
Classical stereo algorithms attempt to reconstruct 3D models of a scene by matching points between two images. Finding matching points is an important part of this process, and point matches are most commonly chosen as the minimum of an error function based on color or local texture. Here we motivate a probabilistic approach to this point-matching problem, and provide an experimental design for empirically measuring the color-matching error distribution for corresponding points. We use this empirically measured distribution as a prior in a Bayesian scene reconstruction example, and show that better 3D reconstructions result from not committing to a specific pixel match early in visual processing. This allows a calibrated stereo camera to be treated as a probabilistic volume sensor, which makes it easier to integrate with scene-structure measurements from other kinds of sensors.
© (2004) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
John Larson, Robert B. Pless, "Bayesian stereo: 3D vision designed for sensor fusion", Proc. SPIE 5608, Intelligent Robots and Computer Vision XXII: Algorithms, Techniques, and Active Vision, (25 October 2004); doi: 10.1117/12.571537; https://doi.org/10.1117/12.571537
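The core idea in the abstract, keeping a probability distribution over candidate matches instead of committing to the single lowest-error pixel, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the Gaussian likelihood, the `sigma` value, and the function names are assumptions standing in for the empirically measured color-matching error distribution the paper describes.

```python
import numpy as np

def disparity_posterior(left_row, right_row, x, max_disp, sigma=10.0):
    """For pixel x in the left scanline, return a probability distribution
    over candidate disparities d = 0..max_disp-1, rather than committing to
    the single argmin match. The likelihood here is a Gaussian on the
    intensity difference; in the paper, sigma would come from the empirically
    measured matching-error distribution (the value used here is a placeholder).
    """
    d = np.arange(max_disp)
    xs = x - d                      # candidate right-image positions
    valid = xs >= 0
    err = np.full(max_disp, np.inf)
    err[valid] = (left_row[x] - right_row[xs[valid]]) ** 2
    like = np.exp(-err / (2.0 * sigma ** 2))
    return like / like.sum()

def fuse(p_stereo, p_other):
    """Bayesian fusion with an independent sensor's distribution over the
    same disparity/depth bins: multiply pointwise and renormalize. This is
    what lets the stereo camera act as a probabilistic volume sensor."""
    p = p_stereo * p_other
    return p / p.sum()
```

For example, with scanlines `left_row = [10, 20, 30, 40, 50]` and `right_row = [20, 30, 40, 50, 60]` (a true disparity of 1), `disparity_posterior(left_row, right_row, x=3, max_disp=3)` peaks at d = 1 but retains nonzero mass at the other disparities, which is exactly what downstream fusion with another depth sensor exploits.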
Keywords: 3D modeling, error analysis, 3D vision, sensor fusion