Conventional auto-focus techniques in movable-lens camera systems use a measure of image sharpness to determine the
lens position that brings the scene into focus. This paper presents a novel wavelet-domain approach to determine the
position of best focus. In contrast to current techniques, the proposed algorithm estimates the level of blur in the captured
image at each lens position. Image blur is quantified by fitting a Generalized Gaussian Density (GGD) curve to a high-pass
version of the image using second-order statistics. The system then moves the lens to the position that yields the
lowest measure of image blur. The algorithm overcomes shortcomings of sharpness-based approaches, namely the need for
large band-pass filters, sensitivity to image noise, and the need for calibration under different imaging
conditions. Because the proposed blur metric is insensitive to noise, the algorithm works with a short filter and requires no
parameter tuning. Furthermore, the algorithm can be simplified to use a single high-pass filter to reduce complexity.
These advantages, along with the optimization presented in the paper, make the proposed algorithm very attractive for
hardware implementation on cell phones. Experiments show that the algorithm performs well in the presence of noise and
under changes in image resolution and data scaling.
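The core idea can be sketched as follows. This is a generic sketch, assuming a standard moment-matching GGD estimator and a simple [-1, 1] high-pass filter; the paper's exact estimator and filters may differ. For a GGD, the ratio E|x| / sqrt(E[x^2]) is a monotonic function of the shape parameter beta: sharp images yield heavy-tailed high-pass coefficients (small beta), while blur drives them toward Gaussian (beta near 2), so the fitted beta can serve directly as the blur measure.

```python
import math
import numpy as np

def ggd_shape(coeffs):
    """Estimate the GGD shape parameter beta by moment matching:
    for a GGD, E|x| / sqrt(E[x^2]) = Gamma(2/b) / sqrt(Gamma(1/b) * Gamma(3/b)),
    which increases monotonically in b, so we can invert it by bisection."""
    c = np.asarray(coeffs, dtype=float).ravel()
    r = np.mean(np.abs(c)) / math.sqrt(np.mean(c * c))
    ratio = lambda b: math.gamma(2 / b) / math.sqrt(math.gamma(1 / b) * math.gamma(3 / b))
    lo, hi = 0.1, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ratio(mid) < r:
            lo = mid          # ratio too small -> need larger beta
        else:
            hi = mid
    return 0.5 * (lo + hi)

def blur_metric(image):
    """High-pass the image with a short [-1, 1] filter and fit a GGD.
    Blurrier images give more Gaussian-like coefficients (larger beta)."""
    img = np.asarray(image, dtype=float)
    hp = np.diff(img, axis=1)
    return ggd_shape(hp)
```

An auto-focus loop would then evaluate `blur_metric` at each lens position and move the lens to the position that minimizes it.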
Stereoscopic cameras consist of two camera modules that are, in principle, mounted parallel to each other at a fixed distance
along a single plane. In practice, manufacturing and assembly tolerances can cause mismatches in
the relative orientation of the modules. One solution to this problem is to design sensors that image a larger field-of-view
than is necessary to meet system specifications. This requires the computation of the sensor oversize needed to
compensate for the various types of mismatch. This work presents a mathematical framework to determine these
oversize values for mismatch along each of the six degrees of freedom. One module is considered as the reference and
the extreme rays of the field-of-view of the second sensor are traced in order to derive equations for the required
horizontal and vertical oversize. As a further application, by modeling user hand-shake as the displacement of the sensor
from its intended position, these deterministic equations can be used to estimate the sensor oversize required to
stabilize images captured on cell phones.
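To make the ray-tracing idea concrete, the sketch below estimates the horizontal oversize needed for one type of mismatch, a pure yaw rotation, under a pinhole camera model. The function name and the restriction to a single degree of freedom are illustrative assumptions, not the paper's full six-degree-of-freedom framework.

```python
import math

def yaw_oversize_pixels(width_px, hfov_deg, yaw_deg):
    """Extra horizontal pixels needed (per side) so that after a pure yaw
    mismatch of +/- yaw_deg the nominal field-of-view is still covered,
    assuming a pinhole camera model.

    Tracing the extreme ray: the nominal edge ray at half-FOV phi must still
    land on the sensor after the module rotates by theta, so the sensor must
    extend to the projection of the (phi + theta) ray."""
    phi = math.radians(hfov_deg / 2)
    theta = math.radians(yaw_deg)
    f_px = (width_px / 2) / math.tan(phi)   # focal length in pixel units
    return f_px * (math.tan(phi + theta) - math.tan(phi))
```

For example, a 1920-pixel-wide sensor with a 60-degree horizontal field of view needs roughly 39 extra pixels per side to tolerate a 1-degree yaw mismatch under this model.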
Hyperspectral images are acquired incrementally in a “push-broom” fashion by on-board sensors. Because these images are very large, buffering an entire image before compression requires substantial memory and introduces latency. Incremental compression schemes work on small chunks of raw data as soon as they are acquired, reducing buffer memory requirements. However, incremental processing leads to large variations in quality across the reconstructed image, a problem that calls for carefully designed rate control algorithms. We propose two such “leaky bucket” rate control algorithms for incrementally compressing hyperspectral images with the JPEG2000 compression engine: the Multi-Layer Sliding Window Rate Controller (M-SWRC) and the Multi-Layer Extended Sliding Window Rate Controller (M-EWRC). Both schemes perform rate control at the fine granularity afforded by JPEG2000 bitstreams. The proposed algorithms have low memory requirements because they buffer compressed bitstreams rather than raw image data. Our schemes enable SNR scalability through the use of quality layers in the codestream and produce JPEG2000-compliant multi-layer codestreams at a fraction of the memory used by conventional schemes. Experiments show that the proposed schemes significantly reduce quality variation with no loss in overall mean PSNR.
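The flavor of a sliding-window leaky-bucket controller can be sketched as follows. This is a simplified illustration in terms of generic per-chunk byte budgets; the actual M-SWRC/M-EWRC operate on JPEG2000 codestreams with quality layers, and the class and method names here are assumptions for the sketch.

```python
from collections import deque

class SlidingWindowRateController:
    """Illustrative leaky-bucket rate controller: each incoming chunk is
    granted a byte budget chosen so that the average rate over a sliding
    window of recent chunks tracks the target rate."""

    def __init__(self, target_bytes_per_chunk, window=8):
        self.target = target_bytes_per_chunk
        self.recent = deque(maxlen=window)   # actual sizes of past chunks

    def budget(self):
        """Byte budget for the next chunk: whatever is left after bringing
        the window (past chunks plus the new one) up to the target average."""
        used = sum(self.recent)
        total = self.target * (len(self.recent) + 1)
        return max(0, total - used)

    def commit(self, actual_bytes):
        """Record the size actually spent on the chunk just coded."""
        self.recent.append(actual_bytes)
```

A chunk that overspends its budget shrinks the budgets of the chunks that follow it within the window, which is what smooths quality variation across the image while keeping only compressed sizes, not raw data, in memory.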