Depth-based 2D-3D combined scene images for 3D multiview displays (19 January 2009)
Due to the limited display capacity of multiview/automultiscopic 3D displays (and other 3D display methods which recreate lightfields), regions and objects at greater depths from the zero disparity plane appear aliased. One solution to this, namely prefiltering, renders the scene visually very blurry. An alternative approach is proposed in this paper, wherein regions at large depths are identified in each view. The 3D scene points corresponding to these regions are rendered as 2D only. The rest of the scene still retains parallax (and hence depth perception). The advantages are that both aliasing and blur are removed, and the resolution of such regions is greatly improved. The combination of 2D and 3D visual cues still makes the scene look realistic, and the relative depth information between objects in the scene is preserved. Our method can prove particularly useful for 3D video conferencing applications, where the people in the conference are shown as 3D objects while the background is displayed as a 2D object with high spatial resolution.
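The core idea of the paper, identifying large-depth regions in each view and replacing them with a single flat (zero-disparity) layer, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the function name, the simple depth threshold, and the choice of the central view as the 2D layer are assumptions for the example.

```python
import numpy as np

def combine_2d_3d(views, depths, zdp=0.0, max_depth=1.0):
    """Composite 2D and 3D content per view (illustrative sketch).

    views  : list of per-view images (H x W arrays)
    depths : list of per-view depth maps (H x W arrays)
    zdp    : depth of the zero-disparity plane
    max_depth : depth band around the ZDP that the display can
                reproduce without aliasing (assumed threshold)

    Pixels whose depth lies outside the band are copied from one
    reference view into every view, so they carry no disparity and
    appear as a sharp 2D layer; pixels inside the band keep their
    per-view parallax and remain 3D.
    """
    ref = views[len(views) // 2]  # central view used as the 2D layer
    out = []
    for view, depth in zip(views, depths):
        far = np.abs(depth - zdp) > max_depth  # regions too deep for 3D
        combined = view.copy()
        combined[far] = ref[far]               # flatten them to the ZDP
        out.append(combined)
    return out
```

In practice the far-region mask would be derived per view from the rendered or estimated depth map, and the 2D background layer could be rendered once at full spatial resolution, which is the source of the resolution gain the paper describes.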
© 2009 Society of Photo-Optical Instrumentation Engineers (SPIE).
Vikas Ramachandra, Matthias Zwicker, Truong Q. Nguyen, "Depth-based 2D-3D combined scene images for 3D multiview displays", Proc. SPIE 7257, Visual Communications and Image Processing 2009, 72570I (19 January 2009); https://doi.org/10.1117/12.805449