Temporally consistent virtual camera generation from stereo image sequences
21 May 2004
The recent emergence of auto-stereoscopic 3D viewing technologies has increased demand for 3D video content. A range of glasses-free multi-viewer screens has been developed that requires as many as nine views to be generated for each frame of video. This presents difficulties in both view generation and transmission bandwidth. This paper examines the use of stereo video capture as a means of generating multiple scene views via disparity analysis. A machine learning approach is applied to learn relationships between disparity-generated depth information and the source footage, and to generate depth information in a temporally smooth manner for both left- and right-eye image sequences. A view morphing approach to multiple-view rendering is described that provides an excellent 3D effect on a range of glasses-free displays while remaining robust to inaccurate stereo disparity calculations.
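The view morphing idea described above can be illustrated with a minimal sketch: given rectified left and right images and a per-pixel disparity map, a virtual view at an interpolation position alpha (0 = left camera, 1 = right camera) is produced by shifting pixels along the scanline in proportion to disparity and blending contributions from both source images. This is a hypothetical simplification for illustration only, not the paper's actual renderer (which additionally enforces temporal smoothness via machine learning); the function name and handling of holes are assumptions.

```python
import numpy as np

def morph_view(left, right, disparity, alpha):
    """Synthesize a virtual camera view between left (alpha=0) and
    right (alpha=1) by forward-warping pixels along each scanline.

    left, right : (H, W) grayscale images
    disparity   : (H, W) disparity of the left image w.r.t. the right
    alpha       : virtual camera position in [0, 1]
    """
    h, w = disparity.shape
    out = np.zeros((h, w), dtype=np.float64)
    weight = np.zeros((h, w), dtype=np.float64)
    cols = np.arange(w)
    for y in range(h):
        # Warp left-image pixels toward the virtual viewpoint.
        x_l = np.clip(np.round(cols - alpha * disparity[y]).astype(int), 0, w - 1)
        np.add.at(out[y], x_l, left[y])
        np.add.at(weight[y], x_l, 1.0)
        # Warp right-image pixels the remaining distance.
        x_r = np.clip(np.round(cols + (1.0 - alpha) * disparity[y]).astype(int), 0, w - 1)
        np.add.at(out[y], x_r, right[y])
        np.add.at(weight[y], x_r, 1.0)
    # Average overlapping contributions; unfilled pixels stay zero (holes).
    return out / np.maximum(weight, 1e-9)
```

Because the virtual views are interpolated rather than extrapolated, small disparity errors shift a pixel only by a fraction of the erroneous disparity, which is one intuition behind the robustness claimed above.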
© 2004 Society of Photo-Optical Instrumentation Engineers (SPIE).
Simon R. Fox, Julien Flack, Juliang Shao, and Phil Harman, "Temporally consistent virtual camera generation from stereo image sequences", Proc. SPIE 5291, Stereoscopic Displays and Virtual Reality Systems XI (21 May 2004); https://doi.org/10.1117/12.527895