A new technique is proposed for calibrating a 3D modeling system with variable zoom based on multi-view stereo image analysis. The 3D modeling system uses a stereo camera with a variable zoom setting and a turntable for rotating an object. Given an object whose complete 3D model (mesh and texture map) needs to be generated, the object is placed on the turntable and stereo images of the object are captured from multiple views by rotating the turntable. Partial 3D models generated from different views are integrated to obtain a complete 3D model of the object. Changing the zoom to accommodate objects of different sizes and at different distances from the stereo camera changes several internal camera parameters, such as the focal length and image center; the parameters of the turntable's rotation axis also change. We present camera calibration techniques for estimating the camera parameters and the rotation axis at different zoom settings. The Perspective Projection Matrices (PPMs) of the cameras are calibrated at a selected set of zoom settings. Each PPM is decomposed into intrinsic parameters, orientation angles, and a translation vector. Camera parameters at an arbitrary intermediate zoom setting are estimated from the nearest calibrated zoom positions through interpolation. A performance evaluation of this technique is presented with experimental results. We also present a refinement technique for stereo rectification that improves partial shape recovery, and the multi-view rotation axis at different zoom settings is estimated without further calibration. Complete 3D models obtained with our techniques are presented.
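The PPM decomposition and interpolation steps can be sketched as follows. This is a minimal numerical illustration, not the paper's implementation: the RQ factorization built from NumPy's QR is a standard way to split P = K [R | t], and the linear interpolation between the two nearest calibrated zoom settings is an assumed scheme (the function names and parameters are hypothetical).

```python
import numpy as np

def decompose_ppm(P):
    """Decompose a 3x4 perspective projection matrix P = K [R | t] into
    intrinsics K (upper triangular), rotation R, and translation t,
    using an RQ factorization built from numpy's QR."""
    M = P[:, :3]
    Q_, R_ = np.linalg.qr(np.flipud(M).T)
    K = np.flipud(np.fliplr(R_.T))    # upper-triangular factor
    R = np.flipud(Q_.T)               # orthogonal factor
    S = np.diag(np.sign(np.diag(K)))  # force positive diagonal (focal lengths)
    K, R = K @ S, S @ R
    t = np.linalg.solve(K, P[:, 3])
    return K / K[2, 2], R, t

def interpolate_intrinsics(zoom, z0, z1, K0, K1):
    """Estimate intrinsics at an intermediate zoom setting by linear
    interpolation between the two nearest calibrated settings z0 <= zoom <= z1."""
    w = (zoom - z0) / (z1 - z0)
    return (1.0 - w) * K0 + w * K1
```

Decomposing first and interpolating each parameter separately (rather than interpolating the PPM entries directly) keeps the interpolated intrinsics physically meaningful.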
We present the development of a parallel-axis stereoscopic imaging camera (PASIC) for teleoperation and depth measurement. To reduce the vergence-focus decoupling problem, which can cause cybersickness, we design a mechanism that controls the vergence and the focus of the camera simultaneously. We analyze the relationship between the parallel and orthogonal motions of the camera lens system with respect to the image plane. Under the condition that the lateral disparity in the stereo images remains constant, the parallel motion causing vergence and the vertical motion causing focus exhibit a linear relationship. The camera is calibrated to a reference distance, and the depth to an object is computed by triangulation together with a calibration lookup table. We analyze the accuracy of depth measurement, taking radial distortion into account, and refine the calibration table accordingly.
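The triangulation and lookup-table steps can be sketched as follows. This is a schematic reading, not the paper's code: the standard parallel-axis relation Z = fB/d is used, and the linearly interpolated correction table is an assumed form of the refinement (function names are hypothetical).

```python
import numpy as np

def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Parallel-axis stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_mm / disparity_px

def corrected_depth(z_raw, table):
    """Refine a raw triangulated depth using a calibration lookup table of
    (raw_depth, true_depth) pairs, linearly interpolating between entries."""
    raw, true = zip(*sorted(table))
    return float(np.interp(z_raw, raw, true))
```

For example, with a focal length of 800 px, a 60 mm baseline, and a 4 px disparity, `depth_from_disparity(800, 60.0, 4.0)` gives 12000 mm; the table (built at calibration time against known distances) then absorbs residual errors such as those from radial distortion.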
A new technique is introduced for the registration and integration of multiple partial 3D models of an object. The technique exploits the epipolar constraint of the multiple-view geometry. Partial 3D shapes of an object from multiple viewing directions are obtained using a digital vision system based on parallel-axis stereo. The vision system is calibrated to obtain an initial transformation matrix for both the stereo imaging geometry and the multiple-view geometry. A multi-resolution stereo matching approach is used for partial 3D shape recovery. The partial 3D shapes are registered approximately using the initial transformation matrix, which is then refined by iteratively minimizing the registration error. At this step, a modified Iterative Closest Point (ICP) algorithm is used to match corresponding points in two different views: a given point in one view is projected into the other view using the transformation matrix, and a search is made for the closest point in the other view that lies on the epipolar line. A similar idea is used during the partial-model integration step to obtain improved results. Partial models are represented as linked lists of segments and integrated segment by segment. Experimental results are presented to show the effectiveness of the new technique.
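Two pieces of the modified ICP loop can be sketched as follows: the epipolar-constrained correspondence search, and the SVD (Kabsch) solution for the rigid transform that each iteration refines. The band width, fundamental matrix, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def epipolar_closest(F, x1, pts2, band_px=2.0):
    """Among candidate points pts2 (N x 2) in the second view, return the one
    closest to the epipolar line l = F @ [x1, 1] of point x1, restricting the
    search to a narrow band around the line (hypothetical band width)."""
    l = F @ np.array([x1[0], x1[1], 1.0])
    d = np.abs(pts2 @ l[:2] + l[2]) / np.hypot(l[0], l[1])
    cand = np.flatnonzero(d <= band_px)
    if cand.size == 0:
        return None
    return pts2[cand[np.argmin(d[cand])]]

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ src @ R.T + t:
    the SVD (Kabsch) solution used inside each ICP iteration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs
```

Restricting the closest-point search to the epipolar band prunes false matches that a plain nearest-neighbor ICP would accept.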
New algorithms are presented for automatically acquiring the complete 3D model of single and multiple objects using rotational stereo. The object is placed on a rotation stage, and stereo images for several viewing directions are taken by rotating the object by known angles. Partial 3D shapes and the corresponding texture maps are obtained using rotational stereo and shape from focus. First, for each view, shape from focus is used to obtain a rough 3D shape and the corresponding focused image. The rough 3D shape and focused images are then used in rotational stereo to obtain a more accurate measurement of the 3D shape. The rotation axis is calibrated using three fixed points on a planar object and refined during surface integration. The complete 3D model is reconstructed by integrating the partial 3D shapes and the corresponding texture maps of the object from multiple views. New algorithms for range image registration, surface integration, and texture mapping are presented. Our method generates 3D models quickly and preserves the texture of objects. A new prototype vision system named Stonybrook VIsion System 2 (SVIS-2) has been built and used in the experiments, with four viewing directions at 90-degree intervals. SVIS-2 can acquire the 3D model of objects within a 250 mm x 250 mm x 250 mm cubic workspace placed about 750 mm from the camera. Both the computational algorithms and experimental results on several objects are presented.
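Once the rotation axis is calibrated, each partial shape can be brought into a common frame by rotating it about that axis by the known stage angle. A minimal sketch using Rodrigues' rotation formula (the axis representation as a point plus direction, and the function name, are assumptions):

```python
import numpy as np

def rotate_about_axis(points, axis_point, axis_dir, angle_deg):
    """Rotate N x 3 points about a turntable axis given by a point on the
    axis and a direction vector, using Rodrigues' rotation formula."""
    k = np.asarray(axis_dir, float)
    k = k / np.linalg.norm(k)
    th = np.radians(angle_deg)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # cross-product matrix of k
    R = np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)
    p = np.asarray(points, float) - axis_point
    return p @ R.T + axis_point
```

With four views at 90-degree intervals, applying this transform with angles 0, 90, 180, and 270 degrees registers all partial shapes approximately, before the refinement performed during surface integration.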
A stereo vision system, which receives two images equivalent to those seen by the left and right human eyes, can provide a 3-D effect. However, the stereo disparity caused by the different parallax of the two images fatigues the observer and reduces the 3-D effect. This paper therefore presents a new approach that keeps the stereo disparity at zero via JTC-based adaptive tracking of a moving object. In this method, the optical JTC (joint transform correlator) system tracks the relative location of a moving object by measuring the correlation peaks of the two images. Optical experiments show that the proposed stereo vision system is insensitive to background noise and can control the convergence angle in real time.
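The core measurement, locating the relative shift between the two images from a correlation peak, can be illustrated with a digital FFT-based analogue of the optical correlation plane. This sketch is not the optical JTC itself, only the underlying peak-finding idea (function name and sign convention are assumptions):

```python
import numpy as np

def correlation_peak_shift(img_a, img_b):
    """Estimate the shift (dy, dx) such that img_b ~ np.roll(img_a, (dy, dx)),
    from the peak of the FFT-based cross-correlation -- a digital analogue of
    locating the correlation peak in the JTC's output plane."""
    a = img_a - img_a.mean()  # remove DC so the peak is well defined
    b = img_b - img_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap circular indices to signed shifts, then flip the sign so the
    # result maps img_a onto img_b.
    return tuple(-(p if p <= n // 2 else p - n)
                 for p, n in zip(peak, corr.shape))
```

The horizontal component of this shift is the quantity the convergence control drives to zero.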
The geometric relationship between the horizontal and vertical shifts of the lens relative to the CCD plane is introduced for automatic vergence control of a parallel stereo camera. Under the condition that the disparity of the stereo images remains constant, the horizontal shift of the camera lens, which controls stereo disparity, and the vertical shift, which controls focus, exhibit a linear relationship. Using this relationship, a simple auto-focusing algorithm applied to the stereo camera lens achieves vergence control of the parallel stereo camera.
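The linearity can be illustrated with a simple thin-lens model. This is an assumed derivation, not the paper's: under thin-lens focusing the vertical extension is e = f²/(Z − f), and a zero-disparity horizontal shift of h = (B/2)·v/Z with v = fZ/(Z − f) gives a ratio h/e = B/(2f) that is independent of the object depth Z (all symbols and numbers here are hypothetical).

```python
def lens_extension(f, Z):
    """Thin-lens focusing: image distance v = f*Z/(Z - f), so the vertical
    lens extension from the infinity position is e = v - f = f**2 / (Z - f)."""
    return f * f / (Z - f)

def zero_disparity_shift(f, Z, baseline):
    """Horizontal lens shift that keeps an object at depth Z at zero
    disparity: h = (B / 2) * v / Z, with v = f*Z/(Z - f)."""
    v = f * Z / (Z - f)
    return (baseline / 2.0) * v / Z

# The ratio h / e = B / (2 f) does not depend on Z: the horizontal
# (vergence) and vertical (focus) shifts are linearly related, which is
# what lets an auto-focusing signal drive the vergence control.
```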
In this paper, we describe a robot end-effector tracking system that uses sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. A neural network approach is employed to recognize the robot end-effector under three types of motion: translation, scaling, and rotation. Features for detecting the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match them against unknown input features, recognizing the position of the robot end-effector. Since a minimal number of samples are used for the different directions of the robot end-effector, a network with generalization capability can handle unknown input features. A feedforward neural network trained with back-propagation learning is used to detect the position of the robot end-effector, and another feedforward network module is used to estimate the motion from a sequence of images and to control movements of the end-effector. Combining the two neural networks for recognizing the end-effector and estimating its motion with the preprocessing stage, the whole system tracks the robot end-effector effectively.
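A feedforward network trained with back-propagation, as used for the position-detection module, can be sketched in a few lines. This is a generic one-hidden-layer illustration with hypothetical sizes and learning rate, not the paper's network or features:

```python
import numpy as np

def train_mlp(X, Y, hidden=8, lr=1.0, epochs=5000, seed=0):
    """One-hidden-layer feedforward network with sigmoid units, trained by
    plain back-propagation on a squared-error loss. Returns a predictor."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, Y.shape[1])); b2 = np.zeros(Y.shape[1])
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)              # forward pass, hidden layer
        O = sig(H @ W2 + b2)              # forward pass, output layer
        dO = (O - Y) * O * (1.0 - O)      # output-layer delta
        dH = (dO @ W2.T) * H * (1.0 - H)  # hidden-layer delta (backprop)
        W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(axis=0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)
    return lambda Xn: sig(sig(Xn @ W1 + b1) @ W2 + b2)
```

In the described system, one such network would map extracted image features to the end-effector position, and a second would map a sequence of positions to a motion estimate; the generalization capability comes from the hidden layer interpolating between the few trained directions.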