7 March 2014 Depth from defocus using the mean spectral ratio
Proceedings Volume 9023, Digital Photography X; 90230H (2014)
Event: IS&T/SPIE Electronic Imaging, 2014, San Francisco, California, United States
Depth from defocus aims to estimate scene depth from two or more photos captured with differing camera parameters, such as lens aperture or focus, by characterizing the difference in image blur. In the absence of noise, the ratio of Fourier transforms of two corresponding image patches captured under differing focus conditions reduces to the ratio of the optical transfer functions, since the contribution from the scene cancels. For a focus or aperture bracket, the shape of this spectral ratio depends on object depth. Imaging noise complicates matters, introducing biases that vary with object texture and making extraction of a reliable depth value from the spectral ratio difficult. We propose taking the mean of the complex-valued spectral ratio over an image tile as a depth measure. This has the advantage of cancelling much of the effect of noise and significantly reducing depth bias compared to characterizing only the modulus of the spectral ratio. The method is fast to calculate and does not require assuming any shape for the optical transfer function, such as a Gaussian approximation. Experiments with real-world photographic imaging geometries show our method produces depth maps with greater tolerance to varying object texture than several previous depth from defocus methods.
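The core computation described in the abstract, forming the spectral ratio of two corresponding patches and averaging the complex values over a tile, can be sketched as follows. This is an illustrative NumPy sketch based only on the abstract, not the authors' implementation; the windowing and the small stabilizing constant `eps` in the division are assumptions to keep the ratio well behaved where the spectrum is weak.

```python
import numpy as np

def mean_spectral_ratio(patch_a, patch_b, eps=1e-8):
    """Return the mean of the complex spectral ratio F(a)/F(b) over a tile.

    Illustrative sketch: patch_a and patch_b are corresponding image
    patches captured with differing focus or aperture settings. The
    windowing and eps regularization are assumptions, not from the paper.
    """
    h, w = patch_a.shape
    # 2-D Hann window to reduce spectral leakage at patch edges (assumption)
    window = np.outer(np.hanning(h), np.hanning(w))
    fa = np.fft.fft2(patch_a * window)
    fb = np.fft.fft2(patch_b * window)
    # Stabilized complex division: fa/fb computed as fa * conj(fb) / |fb|^2
    ratio = fa * np.conj(fb) / (np.abs(fb) ** 2 + eps)
    # Averaging the complex values lets texture-dependent noise terms
    # largely cancel, unlike averaging the modulus alone
    return np.mean(ratio)
```

For identical patches the spectral ratio is unity at every frequency, so the mean is approximately 1; for a focus bracket, the mean varies with the blur difference and hence with object depth.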
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
David Morgan-Mar and Matthew R. Arnison "Depth from defocus using the mean spectral ratio", Proc. SPIE 9023, Digital Photography X, 90230H (7 March 2014);
