A new spatial-domain Blur Equalization Technique (BET) is presented. BET is based on Depth-from-Defocus
(DFD) technique. It relies on equalizing the <i>blur</i> or <i>defocus</i> of two different images recorded with different
camera parameters. BET also facilitates modeling images locally with higher-order polynomials and lower series-truncation errors. Its accuracy is further enhanced by discarding pixels with a low Signal-to-Noise Ratio, via thresholding of the image Laplacians, and by relying more on the sharper of the two blurred images in estimating
the blur parameters. BET is found to be superior to some of the best comparable DFD techniques in a large
number of both simulation and actual experiments. Actual experiments used a large variety of objects including
very low contrast digital camera test charts located at many different distances. In autofocusing experiments,
BET gave an RMS error of 1.2% in lens position.
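The core blur-equalization idea can be illustrated with a toy sketch: if the two recorded images are blurred versions of the same scene, blurring the sharper one with trial Gaussians until it matches the more defocused one recovers the relative blur. This is only a 1-D illustration under an assumed Gaussian PSF, not the paper's actual BET estimator; the signal, sigmas, and search grid are made up.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def equalize_blur(sharper, blurrier, sigmas):
    """Blur the sharper signal with trial Gaussians and return the
    sigma that best equalizes it with the more-blurred signal."""
    errs = [np.sum((gaussian_filter1d(sharper, s) - blurrier) ** 2)
            for s in sigmas]
    return sigmas[int(np.argmin(errs))]

rng = np.random.default_rng(0)
scene = rng.standard_normal(512)
i1 = gaussian_filter1d(scene, 1.0)   # sharper image
i2 = gaussian_filter1d(scene, 2.0)   # more defocused image

# Gaussian blurs compose in quadrature, so the relative blur should
# satisfy sigma_rel^2 = 2.0^2 - 1.0^2, i.e. sigma_rel = sqrt(3).
grid = np.arange(0.5, 3.0, 0.05)
sigma_rel = equalize_blur(i1, i2, grid)
```

The recovered relative blur, combined with a calibrated blur-vs-distance model, is what a DFD method would then map to depth.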
In this paper, several binary-mask-based Depth From Defocus (DFD) algorithms are proposed to improve autofocusing performance and robustness. A binary mask is defined by thresholding the image Laplacian to remove unreliable points with a low Signal-to-Noise Ratio (SNR). Three different DFD schemes (with/without spatial integration and with/without squaring) are investigated and evaluated, both through simulation and actual experiments. The actual experiments use a large variety of objects including very low contrast Ogata test charts. Experimental results show that the autofocusing RMS step error is less than 2.6 lens steps, which corresponds to 1.73%. Although our discussion in this paper is focused mainly on the spatial-domain method STM1, the technique should be of general value for different approaches such as STM2 and other spatial-domain algorithms.
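The binary mask described above can be sketched in a few lines: compute the image Laplacian and keep only pixels whose magnitude clears a threshold. The threshold value and the tiny test image here are made up for illustration.

```python
import numpy as np
from scipy.ndimage import laplace

def snr_mask(image, threshold):
    """Binary mask keeping pixels whose Laplacian magnitude exceeds a
    threshold; flat, low-SNR regions are discarded from DFD estimation."""
    return np.abs(laplace(image.astype(float))) > threshold

img = np.zeros((8, 8))
img[3:5, 3:5] = 10.0            # a small bright patch with strong edges
mask = snr_mask(img, 1.0)       # True near the patch, False in flat areas
```

Only pixels where the mask is True would then contribute to the blur-parameter estimate.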
Real-time and accurate autofocusing of stationary and moving objects is an important problem in modern digital cameras. Depth From Defocus (DFD) is a technique for autofocusing that needs only two or three images recorded with different camera parameters. In practice, many factors affect the performance of DFD algorithms, such as <i>nonlinear sensor response, lens vignetting, and magnification variation</i>. In this paper, we present calibration methods and algorithms for these three factors. Their correctness and their effects on DFD performance have been investigated experimentally.
A new technique is proposed for calibrating a 3D modeling system with variable zoom based on multi-view stereo image analysis. The 3D modeling system uses a stereo camera with a variable zoom setting and a turntable for rotating an object. Given an object whose complete 3D model (mesh and texture map) needs to be generated, the object is placed on the turntable and stereo images of the object are captured from multiple views by rotating the turntable. Partial 3D models generated from different views are integrated to obtain a complete 3D model of the object. Changing the zoom to accommodate objects of different sizes and at different distances from the stereo camera changes several internal camera parameters such as focal length and image center. The parameters of the rotation axis of the turntable also change. We present camera calibration techniques for estimating the camera parameters and the rotation axis for different zoom settings. The Perspective Projection Matrices (PPM) of the cameras are calibrated at a selected set of zoom settings. The PPM is decomposed into intrinsic parameters, orientation angles, and translation vectors. Camera parameters at an arbitrary intermediate zoom setting are estimated from the nearest calibrated zoom positions through interpolation. A performance evaluation of this technique is presented with experimental results. We also present a refinement technique for stereo rectification that improves partial shape recovery, and the rotation axis for multi-view capture at different zoom settings is estimated without further calibration. Complete 3D models obtained with our techniques are presented.
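The interpolation step described above can be sketched as follows: intrinsic parameters calibrated at a few zoom settings are interpolated to an arbitrary intermediate zoom. This is a minimal sketch assuming simple piecewise-linear interpolation; the zoom values and parameter numbers are made up, and the paper's decomposition into orientation angles and translations is not reproduced here.

```python
import numpy as np

def interp_intrinsics(zoom, zoom_cal, params_cal):
    """Interpolate each intrinsic parameter (columns of params_cal)
    at an arbitrary zoom from the calibrated zoom settings."""
    cols = np.asarray(params_cal, dtype=float).T
    return np.array([np.interp(zoom, zoom_cal, p) for p in cols])

zoom_cal = [1.0, 2.0, 4.0]                # calibrated zoom settings
# rows: (focal length in px, cx, cy) at each calibrated zoom (made-up values)
params = [[ 800.0, 320.0, 240.0],
          [1600.0, 322.0, 238.0],
          [3200.0, 326.0, 236.0]]

f, cx, cy = interp_intrinsics(3.0, zoom_cal, params)
```

At zoom 3.0, halfway between the calibrated settings 2.0 and 4.0, each parameter lands halfway between its two calibrated values.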
With the development of computer technology and CCD sensors, 3D sensing with sinusoidal structured illumination is widely used. To obtain the shape information of a 3D object, a procedure called spatial modulation-demodulation is performed. In most practical methods, such as PMP, FTP, SPD, or MMP, sinusoidal structured illumination is utilized as the carrier of the spatial modulation, and its quality plays an important role in the measurement result. However, it is very difficult to produce a perfect sinusoidal grating. The idea of the Error Diffusion algorithm is used here for the first time to produce a better sinusoidal illumination structure. 1D and 2D sinusoidal modules are designed. A computer simulation system based on the PMP model is established to apply Error Diffusion modulation to the structured light field. Computer simulations have verified the efficiency of the Error Diffusion grating.
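The error-diffusion idea is easy to sketch in 1-D: each sample of the ideal sinusoidal fringe is quantized to the projector's binary levels, and the quantization error is pushed onto the next sample so the local mean of the binary pattern tracks the sinusoid. This is a generic 1-D error-diffusion sketch, not the paper's 2D module; the fringe parameters are made up.

```python
import numpy as np

def error_diffusion_1d(signal):
    """Quantize each sample to 0/1, diffusing the quantization error
    forward so the local average follows the input signal."""
    out = np.zeros_like(signal)
    err = 0.0
    for i, v in enumerate(signal):
        v = v + err
        out[i] = 1.0 if v >= 0.5 else 0.0
        err = v - out[i]            # carry the residual to the next sample
    return out

x = np.linspace(0, 4 * np.pi, 400)
fringe = 0.5 + 0.5 * np.sin(x)      # ideal sinusoidal fringe in [0, 1]
binary = error_diffusion_1d(fringe) # binary grating approximating it
```

After optical defocus (a low-pass filter), such a binary grating reproduces the sinusoid more faithfully than simple thresholding would.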
In industrial applications, single-point laser range sensing systems based on triangulation are often used. Many algorithms exist to determine the center position of the imaged laser spot, but there are basic physical limits on the accuracy of the center position introduced by laser speckle in the image. In this paper we conduct a computer simulation of the intensity distribution and center of gravity of the image speckle pattern produced under completely coherent illumination, and give direct results for several relevant factors: the observation aperture of the imaging system, the surface roughness, the number of microtopology units included in the point spread function, and the width of the illumination area. The results confirm the dependence of the distance uncertainty on the observation aperture of the imaging system. A comparison between the simulation and earlier theoretical results is also given.
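The center-of-gravity estimate whose accuracy speckle limits can be sketched in 1-D: the spot position is the intensity-weighted mean pixel index. This is the standard centroid estimator, shown here on a made-up noise-free Gaussian spot; in the speckled case the same estimator is applied, and speckle perturbs the result.

```python
import numpy as np

def centroid_1d(profile):
    """Intensity-weighted center of gravity of a 1-D spot profile, as
    used to locate the laser spot in triangulation sensors."""
    idx = np.arange(len(profile), dtype=float)
    return np.sum(idx * profile) / np.sum(profile)

x = np.arange(64, dtype=float)
spot = np.exp(-0.5 * ((x - 20.3) / 3.0) ** 2)   # ideal spot centered at 20.3
pos = centroid_1d(spot)
```

Multiplying speckle noise onto `spot` and re-running `centroid_1d` over many realizations is exactly the kind of Monte-Carlo experiment that reveals the aperture-dependent position uncertainty discussed above.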