Optical zoom lenses mounted on a stereo color camera magnify the left and right two-dimensional (2-D) images by increasing the focal length. However, without adjusting the baseline distance, optical zoom distorts three-dimensional (3-D) perception because it magnifies the projected 2-D images rather than the original 3-D object. We propose a computational approach to stereoscopic zoom that magnifies stereo images without 3-D distortion. We computationally manipulate the baseline distance and convergence angle between the left and right images by synthesizing novel-view stereo images based on depth information. For novel view synthesis, we propose a volume-predicted bidirectional occlusion inpainting method. The original color image is warped to the novel view determined by the adjusted baseline and convergence angle. The rear volume of each foreground object is predicted, and the foreground portion of each occlusion region is identified. Our inpainting method then fills the foreground and background regions separately. Experimental results show that the proposed inpainting method removes the cardboard effect, which significantly degrades the perceptual quality of synthesized novel-view images but has not previously been addressed in the literature. Finally, the 3-D object presented by the stereo images is magnified by the proposed stereoscopic zoom method without 3-D distortion.
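The baseline manipulation described above can be illustrated with a minimal sketch. For a rectified stereo pair, disparity is d = f·B/Z, so scaling the baseline by s shifts each pixel horizontally by d·(s − 1). The function below is a hypothetical simplification (pure horizontal shift, no convergence change, no z-buffering), not the authors' actual synthesis pipeline; pixels left unfilled after the forward warp are the occlusion holes the abstract's inpainting method would fill.

```python
import numpy as np

def warp_to_novel_view(img, depth, f, baseline_scale):
    """Forward-warp a left image to a novel view whose baseline is scaled
    by `baseline_scale` (illustrative sketch only: horizontal shift from a
    unit-baseline disparity d = f / Z, occlusions left as holes)."""
    h, w = depth.shape
    out = np.zeros_like(img)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = f / depth[y, x]                      # disparity, unit baseline
            shift = int(round(d * (baseline_scale - 1.0)))
            nx = x + shift                           # pixel's novel-view column
            if 0 <= nx < w:
                out[y, nx] = img[y, x]
                filled[y, nx] = True
    return out, filled  # unfilled pixels are occlusion regions to inpaint
```

For a fronto-parallel plane at constant depth the warp reduces to a uniform horizontal translation, while depth discontinuities produce the unfilled occlusion holes that motivate the bidirectional inpainting step.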