Real-time computation of depth from defocus
19 January 1996
Abstract
A new range-sensing method based on depth from defocus is described. It projects an illumination pattern to give texture to the object surface. The image of the scene is then split into two images with different focus settings, which are sensed simultaneously. The contrast maps of the two images are computed and compared pixel by pixel to produce a dense depth map. The illumination pattern and the focus operator that extracts the contrast maps are designed to achieve the finest spatial resolution of the computed depth map and to maximize the response of the focus operator. Because the algorithm uses only local operations such as convolution and lookup tables, the depth map can be computed rapidly on data-flow image processing hardware. Because the sensor projects the illumination pattern and detects the two differently focused images from exactly the same direction, it avoids the shadowing and occlusion problems of triangulation-based methods and stereo. Its speed and accuracy are demonstrated using a prototype system, which generates 512 by 480 range maps at 30 frames/sec with a depth resolution of 0.3% relative to the object distance. The proposed sensor is composed of off-the-shelf components and outperforms commercial range sensors through its ability to produce complete three-dimensional shape information at video rate.
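The core per-pixel comparison described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a plain 3x3 Laplacian as the focus operator (the paper designs a tuned operator matched to the illumination pattern) and returns a normalized contrast ratio rather than the calibrated metric depth that the real system obtains from a lookup table.

```python
import numpy as np

def focus_measure(img):
    """Contrast map via a 3x3 Laplacian focus operator (illustrative choice)."""
    p = np.pad(img, 1, mode="edge")
    lap = (4 * p[1:-1, 1:-1]
           - p[:-2, 1:-1] - p[2:, 1:-1]
           - p[1:-1, :-2] - p[1:-1, 2:])
    return np.abs(lap)

def depth_from_defocus(near_img, far_img, eps=1e-9):
    """Pixel-wise comparison of the two contrast maps.

    Returns a value in [-1, 1] per pixel; in the actual sensor this ratio
    would index a calibrated lookup table mapping it to metric depth.
    """
    c_near = focus_measure(near_img)
    c_far = focus_measure(far_img)
    return (c_near - c_far) / (c_near + c_far + eps)
```

Both operations are local (a small convolution and an element-wise division), which is why the computation maps naturally onto pipelined data-flow image processing hardware at video rate.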
© (1996) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Masahiro Watanabe, Shree K. Nayar, Minori N. Noguchi, "Real-time computation of depth from defocus", Proc. SPIE 2599, Three-Dimensional and Unconventional Imaging for Industrial Inspection and Metrology, (19 January 1996); doi: 10.1117/12.230388; https://doi.org/10.1117/12.230388
PROCEEDINGS
12 PAGES

