Commercially available time-of-flight cameras illuminate the scene with amplitude-modulated infrared light and detect the reflections to provide per-pixel depth maps in real time. These cameras, however, suffer from an inherent problem called phase wrapping, which arises from the modular ambiguity in the phase-delay measurement. As a result, the measured distance to a scene point becomes much shorter than the actual distance whenever the point lies beyond a certain maximum range. Multifrequency phase unwrapping methods recover the actual distance values by exploiting the consistency of the disambiguated depth values across depth maps of the same scene acquired at different modulation frequencies. For robust and accurate estimation against noise, a cost function is built that evolves over time to enforce both interframe depth consistency and intraframe depth continuity. As demonstrated in experiments with real scenes, the proposed method correctly disambiguates the depth measurements, extending the maximum range imposed by the modulation frequency.
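The modular ambiguity and its multifrequency resolution can be sketched as follows. This is a minimal two-frequency illustration with hypothetical names (`max_range`, `unwrap_two_freq`) and a brute-force wrap-count search; it is not the cost-function method described above, which additionally enforces temporal consistency and spatial continuity.

```python
C = 299792458.0  # speed of light (m/s)

def max_range(freq_hz):
    # Unambiguous range of an AMCW ToF camera: c / (2 f).
    # Beyond this, the measured distance wraps around to near zero.
    return C / (2.0 * freq_hz)

def unwrap_two_freq(d1, d2, f1, f2, max_wraps=5):
    """Brute-force search over integer wrap counts (n1, n2): the pair
    whose candidate distances d_i + n_i * max_range(f_i) agree best
    yields the disambiguated distance."""
    r1, r2 = max_range(f1), max_range(f2)
    best, best_err = None, float("inf")
    for n1 in range(max_wraps):
        for n2 in range(max_wraps):
            c1 = d1 + n1 * r1
            c2 = d2 + n2 * r2
            err = abs(c1 - c2)
            if err < best_err:
                best_err, best = err, 0.5 * (c1 + c2)
    return best
```

For example, at 20 MHz the unambiguous range is about 7.49 m, so a point at 10 m is measured at roughly 2.51 m; a second measurement at 30 MHz resolves the ambiguity because only the true distance is consistent with both wrapped readings.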
Optical zoom lenses mounted on a stereo color camera magnify the left and right two-dimensional (2-D) images by increasing the focal length. Without adjusting the baseline distance, however, optical zoom distorts three-dimensional (3-D) perception, because it magnifies the projected 2-D images rather than the original 3-D object. We propose a computational approach to stereoscopic zoom that magnifies stereo images without 3-D distortion. We computationally manipulate the baseline distance and convergence angle between the left and right images by synthesizing novel-view stereo images based on depth information. For novel view synthesis, we propose a volume-predicted bidirectional occlusion inpainting method. The original color image is warped to the novel view determined by the adjusted baseline and convergence angle. The rear volume of each foreground object is predicted, and the foreground portion of each occlusion region is identified. We then apply our inpainting method to fill in the foreground and background regions separately. Experimental results show that the proposed inpainting method removes the cardboard effect, which significantly degrades the perceptual quality of synthesized novel-view images but has not previously been addressed in the literature. Finally, the 3-D object presented by the stereo images is magnified by the proposed stereoscopic zoom method without 3-D distortion.
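The baseline-manipulation step can be illustrated with a toy forward-warping sketch. The single-row setting, the function name `forward_warp_row`, and the z-buffer rule are simplifying assumptions; convergence-angle adjustment and the proposed inpainting are omitted, so occlusion regions simply remain as holes.

```python
def forward_warp_row(colors, disparities, baseline_scale):
    """Forward-warp one image row to a virtual view whose baseline is
    baseline_scale times the original. Each pixel shifts by its
    disparity scaled by the baseline change; unfilled pixels (None)
    mark occlusion holes that inpainting would fill afterwards."""
    w = len(colors)
    out = [None] * w
    depth_buf = [float("-inf")] * w
    for x in range(w):
        nx = int(round(x + (baseline_scale - 1.0) * disparities[x]))
        if 0 <= nx < w and disparities[x] > depth_buf[nx]:
            # Larger disparity = closer surface wins (z-buffering).
            depth_buf[nx] = disparities[x]
            out[nx] = colors[x]
    return out
```

Doubling the baseline shifts a foreground pixel by its full disparity while background pixels stay put, which both creates the enlarged stereo separation and exposes the occlusion regions the abstract's inpainting method addresses.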
Recently, a Time-of-Flight 2D/3D image sensor has been developed that captures a perfectly aligned
pair of color and depth images. To increase the sensitivity to infrared light, the sensor electrically combines
multiple adjacent pixels into a single depth pixel at the expense of depth image resolution. To restore the resolution,
we propose a depth image super-resolution method that uses a high-resolution color image aligned with an input
depth image. In the first part of our method, the input depth image is interpolated into the scale of the color
image, and our discrete optimization converts the interpolated depth image into a high-resolution disparity image,
whose discontinuities precisely coincide with object boundaries. Subsequently, a discontinuity-preserving filter is
applied to the interpolated depth image, where the discontinuities are cloned from the high-resolution disparity
image. Meanwhile, our unique way of enforcing the depth reconstruction constraint gives a high-resolution depth
image that is perfectly consistent with the original input depth image. We show the effectiveness of the proposed
method both quantitatively and qualitatively, comparing it with two existing methods. The
experimental results demonstrate that the proposed method gives sharp high-resolution depth images with less
error than the two methods for scale factors of 2, 4, and 8.
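As a rough illustration of color-guided depth upsampling, the sketch below implements generic joint bilateral upsampling: each high-resolution depth value averages low-resolution depth samples, weighted by spatial distance and by color similarity in the aligned high-resolution image so that depth discontinuities follow color edges. This is not the proposed discrete-optimization pipeline with the depth reconstruction constraint; all names and parameters are assumptions.

```python
import math

def joint_bilateral_upsample(depth_lr, color_hr, scale,
                             sigma_s=1.0, sigma_r=20.0, radius=2):
    """Upsample depth_lr to the resolution of color_hr (grayscale guide).
    Weights combine spatial proximity in the low-res grid with color
    similarity between the target pixel and each sample's location."""
    h, w = len(color_hr), len(color_hr[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            cy, cx = y / scale, x / scale  # position in low-res grid
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ly, lx = int(round(cy)) + dy, int(round(cx)) + dx
                    if not (0 <= ly < len(depth_lr) and 0 <= lx < len(depth_lr[0])):
                        continue
                    ws = math.exp(-((ly - cy) ** 2 + (lx - cx) ** 2)
                                  / (2 * sigma_s ** 2))
                    # Color difference to the sample's high-res location.
                    dc = color_hr[y][x] - color_hr[min(ly * scale, h - 1)][min(lx * scale, w - 1)]
                    wr = math.exp(-(dc ** 2) / (2 * sigma_r ** 2))
                    acc += ws * wr * depth_lr[ly][lx]
                    wsum += ws * wr
            out[y][x] = acc / wsum if wsum > 0 else depth_lr[int(cy)][int(cx)]
    return out
```

On a depth step edge aligned with a color edge, the color term suppresses samples from the other side, so the upsampled edge stays sharp instead of being blurred as plain interpolation would do.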
This paper presents a novel Time-of-Flight (ToF) depth denoising algorithm based on parametric noise modeling.
A ToF depth image contains spatially varying noise whose level depends on the IR intensity at each pixel. Assuming
the depth noise is additive white Gaussian, its standard deviation can be modeled as a power function
of the IR intensity. Meanwhile, the nonlocal means filter is widely used as an edge-preserving denoising method
for removing additive Gaussian noise. To remove the spatially varying depth noise, we propose an adaptive nonlocal
means filter. According to the estimated noise level, the search window and weighting coefficient are adaptively
determined at each pixel so that pixels with large noise variance are strongly filtered and pixels with small
noise variance are weakly filtered. Experimental results demonstrate that the proposed algorithm provides good
denoising performance while preserving details and edges, compared with standard nonlocal means filtering.
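The adaptive idea can be sketched in one dimension as follows: both the weighting bandwidth and the search radius grow with the per-sample noise estimate, so noisier samples are averaged more aggressively. The coefficients and the 1-D setting are illustrative assumptions, not the paper's parameter choices.

```python
import math

def nlm_1d(signal, noise_sigma, patch=2, base_search=5):
    """Adaptive nonlocal means on a 1-D signal. noise_sigma holds the
    estimated noise standard deviation at each sample (e.g. from a
    power-function model of IR intensity)."""
    n = len(signal)
    out = [0.0] * n
    for i in range(n):
        h = max(noise_sigma[i], 1e-6)                    # bandwidth ~ estimated sigma
        search = base_search + int(10 * noise_sigma[i])  # wider window when noisier
        acc, wsum = 0.0, 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # Mean squared distance between the patches around i and j.
            d = 0.0
            for k in range(-patch, patch + 1):
                pi = min(max(i + k, 0), n - 1)
                pj = min(max(j + k, 0), n - 1)
                d += (signal[pi] - signal[pj]) ** 2
            d /= (2 * patch + 1)
            w = math.exp(-d / (2 * h * h))
            acc += w * signal[j]
            wsum += w
        out[i] = acc / wsum
    return out
```

Because dissimilar patches receive exponentially small weights regardless of the bandwidth, edges contribute little to the average and are preserved, while flat noisy regions are smoothed in proportion to their estimated noise level.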