Computational stereoscopic zoom (3 April 2012)
Abstract
Optical zoom lenses mounted on a stereo color camera magnify the left and right two-dimensional (2-D) images by increasing the focal length. However, without adjusting the baseline distance, optical zoom distorts three-dimensional (3-D) perception, because it magnifies the projected 2-D images rather than the original 3-D object. We propose a computational approach to stereoscopic zoom that magnifies stereo images without 3-D distortion. We computationally manipulate the baseline distance and convergence angle between the left and right images by synthesizing novel-view stereo images based on depth information. We propose a volume-predicted bidirectional occlusion inpainting method for novel view synthesis. The original color image is warped to the novel view determined by the adjusted baseline and convergence angle. The rear volume of each foreground object is predicted, and the foreground portion of each occlusion region is identified. We then apply our inpainting method to fill in the foreground and background regions, respectively. Experimental results show that the proposed inpainting method removes the cardboard effect, which significantly degrades the perceptual quality of synthesized novel-view images but has not previously been addressed in the literature. Finally, the 3-D object presented by the stereo images is magnified by the proposed stereoscopic zoom method without 3-D distortion.
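To illustrate why zooming without a baseline adjustment distorts depth, consider a simple rectified pinhole stereo model (a sketch of the underlying geometry, not the authors' implementation; the numbers below are hypothetical): the disparity of a point at depth Z is d = f * B / Z, so multiplying the focal length f by a zoom factor m scales every disparity by m, while a compensating virtual baseline B / m restores the original disparities.

```python
# Pinhole-stereo disparity sketch (rectified cameras, hypothetical values):
# d = f * B / Z, with f in pixels, baseline B and depth Z in meters.
def disparity(f, B, Z):
    return f * B / Z

f, B, Z = 1000.0, 0.065, 2.0   # focal length, baseline, point depth (assumed)
m = 2.0                        # zoom magnification

d0 = disparity(f, B, Z)              # original disparity
d_zoom = disparity(m * f, B, Z)      # optical zoom alone scales disparity by m
B_adj = B / m                        # compensating virtual baseline
d_fixed = disparity(m * f, B_adj, Z) # zoom + adjusted baseline

assert abs(d_zoom - m * d0) < 1e-9   # zoom distorts: disparities doubled
assert abs(d_fixed - d0) < 1e-9      # adjusted baseline restores disparities
```

Synthesizing the views for such a shifted virtual baseline is what exposes occlusion regions, which the paper's inpainting method then fills.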
© 2012 Society of Photo-Optical Instrumentation Engineers (SPIE)
Seungkyu Lee, Hwasup Lim, James D. Kim, and Chang-Yeong Kim, "Computational stereoscopic zoom," Optical Engineering 51(3), 037008 (3 April 2012). https://doi.org/10.1117/1.OE.51.3.037008
JOURNAL ARTICLE
8 PAGES
