Depth from defocus aims to estimate scene depth from two or more photos captured with differing camera parameters,
such as lens aperture or focus, by characterizing the difference in image blur. In the absence of noise, the ratio of Fourier
transforms of two corresponding image patches captured under differing focus conditions reduces to the ratio of the
optical transfer functions, since the contribution from the scene cancels. For a focus or aperture bracket, the shape of this
spectral ratio depends on object depth. Imaging noise complicates matters, introducing biases that vary with object
texture and making it difficult to extract a reliable depth value from the spectral ratio. We propose taking the mean of the
complex-valued spectral ratio over an image tile as a depth measure. This has the advantage of cancelling much of the
effect of noise, and it significantly reduces depth bias compared to characterizing only the modulus of the spectral ratio.
The method is fast to compute and requires no assumed shape for the optical transfer function, such as a
Gaussian approximation. Experiments with real-world photographic imaging geometries show that our method produces
depth maps with greater tolerance to varying object texture than several previous depth-from-defocus methods.
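The core measure described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: the window choice, tile size, and the regularization constant `eps` are assumptions, and the mapping from the mean ratio to metric depth (which depends on the calibrated optical transfer functions of the bracket) is omitted.

```python
import numpy as np

def mean_spectral_ratio(tile_a, tile_b, eps=1e-8):
    """Mean of the complex spectral ratio between two co-located image
    tiles captured with different focus or aperture settings.

    Illustrative sketch only: windowing and eps are assumed choices,
    and calibration from the ratio to depth is not shown.
    """
    # Window the tiles to reduce spectral leakage (an assumed choice).
    wy = np.hanning(tile_a.shape[0])[:, None]
    wx = np.hanning(tile_a.shape[1])[None, :]
    fa = np.fft.fft2(tile_a * wy * wx)
    fb = np.fft.fft2(tile_b * wy * wx)
    # Complex ratio fa/fb, written with the conjugate so that eps can
    # guard against division by near-zero spectral coefficients.
    ratio = fa * np.conj(fb) / (np.abs(fb) ** 2 + eps)
    # Averaging the complex values lets zero-mean noise contributions
    # cancel, whereas averaging only the modulus is biased upward by
    # noise; the scene content has already cancelled in the ratio.
    return ratio.mean()
```

As a sanity check, two identical tiles give a mean ratio near 1, and scaling one tile scales the ratio accordingly; in a real focus or aperture bracket the complex mean instead reflects the depth-dependent ratio of the two optical transfer functions.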