The paper presents a new high-performance camera calibration algorithm for 3-D computer vision systems, named DLTEA-II. In this algorithm, the lens distortion model is modified to account for the difference in pixel size in digital images, making it better suited to correcting lens distortion in the digital image case. The camera model and the revised lens distortion parameters are solved by the DLT (Direct Linear Transformation) method. A third-order polynomial fit to the image residual errors compensates for systematic errors, which further improves calibration accuracy. To reduce the effect of image location errors on the calibration, LoG zero-crossing detection and subpixel location algorithms are used to detect the control points in the digital image. Experiments show that a stereo vision system with off-the-shelf CCD cameras (resolution 512 × 512) and TV lenses (f = 16 mm or 25 mm) calibrated by DLTEA-II achieves an average relative accuracy of 1/10000 over the field of view (300 mm × 300 mm) and over a depth of 0.8 m to 0.9 m. The image residual error is less than 0.1 pixel. Experimental results with lenses of different focal lengths are also reported; in most cases the average accuracy is about 1/8000 to 1/10000.
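The linear core of a DLT calibration can be sketched as follows (a minimal version without the distortion and residual-polynomial steps described in the abstract; the function names and synthetic camera are ours, not the paper's):

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P from >= 6 point
    correspondences using the Direct Linear Transformation."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows)
    # P (up to scale) is the right singular vector of the smallest
    # singular value, i.e. the null-space direction of A.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

def project(P, world_pts):
    """Apply P to homogeneous world points and dehomogenize."""
    Xh = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]

# Synthetic check: a known camera and a non-coplanar target cloud.
rng = np.random.default_rng(0)
P_true = np.array([[800.0, 0, 256, 100],
                   [0, 800.0, 256, 50],
                   [0, 0, 1, 1000.0]])
world = rng.uniform(-100, 100, size=(20, 3))
image = project(P_true, world)
P_est = dlt_calibrate(world, image)
residual = np.abs(project(P_est, world) - image).max()
```

The 11 independent parameters of P can then be decomposed into intrinsic and extrinsic parameters, and distortion terms estimated against the remaining reprojection residuals.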
In this paper, we present a calibration technique for cameras with several types of lens distortion. The algorithm decomposes the calibration parameters into nondistortion parameters and distortion parameters, and the estimation is performed using the weighted least-squares method. The validity and performance of the calibration algorithm are tested on two stereo cameras mounted in a stereoscopic bank. Experimental results confirm the feasibility and performance of the algorithm.
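Weighted least-squares estimation of the kind mentioned here can be sketched on a toy problem (a simple line fit, not the paper's decomposition into distortion and nondistortion parameters; the noise levels are invented for illustration):

```python
import numpy as np

def weighted_lstsq(A, b, w):
    """Weighted least squares: minimize sum_i w_i * (A_i x - b_i)^2,
    solved by scaling rows with sqrt(w) and using ordinary lstsq."""
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return x

# Line fit where half the observations are known to be noisier;
# weights are the usual inverse variances.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
sigma = np.where(t < 0.5, 0.01, 0.5)
b = 2.0 * t + 1.0 + rng.normal(0, sigma)
A = np.column_stack([t, np.ones_like(t)])
slope, intercept = weighted_lstsq(A, b, 1.0 / sigma**2)
```

Down-weighting the noisy observations keeps the estimate close to the precise ones, which is the point of weighting in a calibration adjustment.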
This paper presents a calibration procedure adapted to a range camera intended for space applications. The range camera, which is based upon an auto-synchronized triangulation scheme, can measure objects from about 0.5 m to 100 m. The field of view is 30° × 30°. Objects situated at distances beyond 10 m can be measured with the help of cooperative targets. Such a large volume of measurement presents significant challenges to a precise calibration. A two-step methodology is proposed. In the first step, the close-range volume (from 0.5 m to 1.5 m) is calibrated using an array of targets positioned at known locations in the field of view of the range camera. A large number of targets are evenly spaced in that field of view because this is the region of highest precision. In the second step, several targets are positioned at distances greater than 1.5 m with the help of an accurate theodolite and electronic distance measuring device. This second step will not be discussed fully here. The internal and external parameters of a model of the range camera are extracted with an iterative nonlinear simultaneous least-squares adjustment method. Experimental results obtained for a close-range calibration suggest a precision along the x, y and z axes of 200 µm, 200 µm, and 250 µm, respectively, and a bias of less than 100 µm in all directions.
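An iterative nonlinear least-squares adjustment of this general kind can be sketched as a plain Gauss–Newton loop (shown here on a toy exponential model, not the paper's range-camera model; all names are ours):

```python
import numpy as np

def gauss_newton(residual_fn, jac_fn, x0, iters=20):
    """Generic Gauss-Newton least-squares adjustment: linearize the
    residuals around the current estimate and solve for an update."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual_fn(x)
        J = jac_fn(x)
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

# Toy adjustment: recover (a, b) of y = a*exp(b*t) from noisy data.
t = np.linspace(0, 1, 50)
rng = np.random.default_rng(1)
y = 2.0 * np.exp(-1.3 * t) + rng.normal(0, 1e-4, t.size)

res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
a_est, b_est = gauss_newton(res, jac, [1.0, 0.0])
```

In a real camera adjustment the parameter vector holds the internal and external parameters and the residuals are observation-minus-model terms, but the linearize-and-solve loop is the same.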
The calibration steps of a 3-D data acquisition system based on the ratio of two intensity images are described. This is a triangulation-based structured light technique in which multiple projected light planes are identified by the ratio of two measured intensities. The calibration scheme can be divided into two main steps, which are described in detail: camera calibration and light plane calibration. Camera calibration determines the geometric camera parameters needed to establish the line of sight of each pixel. Light plane calibration determines the correspondence between the measured ratios and the positions of the light planes. Some preliminary operations that ease the calculation of depth are also described. System accuracy is evaluated and results on test scenes are presented.
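Once both calibrations are available, depth follows from intersecting each pixel's line of sight with the identified light plane. A minimal sketch (the intrinsic matrix and light plane below are hypothetical, chosen only for illustration):

```python
import numpy as np

def pixel_ray(K_inv, pixel):
    """Line of sight of a pixel through the camera centre (origin)."""
    d = K_inv @ np.array([pixel[0], pixel[1], 1.0])
    return d / np.linalg.norm(d)

def intersect_ray_plane(ray_dir, plane_n, plane_d):
    """3-D point where the ray hits the plane n.x + d = 0."""
    t = -plane_d / (plane_n @ ray_dir)
    return t * ray_dir

# Hypothetical intrinsics and a calibrated light plane x = 0.2,
# i.e. n = (1, 0, 0), d = -0.2 (units of metres, say).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
K_inv = np.linalg.inv(K)
ray = pixel_ray(K_inv, (420, 240))
point = intersect_ray_plane(ray, np.array([1.0, 0, 0]), -0.2)
```

In the system described here, the measured intensity ratio selects which calibrated plane (n, d) to intersect with.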
Evaluation of Vision Systems for Precision Measurement I
Video imagers operating in the thermal infrared part of the electromagnetic spectrum have characteristics quite different from those operating in the visible part of the spectrum. In particular, the highest-performance thermal imagers incorporate a scanning mechanism which gives rise to a unique geometry. This paper outlines the main characteristics of thermal video frame scanners and gives an account of the methods devised and implemented by the authors to carry out the geometric calibration of these scanners. The results obtained from the calibration of seven frame scanners from different manufacturers are given. Analysis of these results shows that specific polynomial transformations can be derived and used successfully to remove the effects of the geometric distortions present in the video images obtained from thermal frame scanners.
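A polynomial correction of this general kind can be sketched as a least-squares fit from distorted to reference coordinates (second-order bivariate terms here; the distortion model is synthetic, not taken from any of the scanners in the paper):

```python
import numpy as np

def poly_terms(x, y):
    """Bivariate polynomial basis up to second order."""
    return np.column_stack([np.ones_like(x), x, y, x*y, x**2, y**2])

def fit_poly_transform(distorted, reference):
    """Least-squares polynomial mapping distorted -> reference."""
    A = poly_terms(distorted[:, 0], distorted[:, 1])
    cx, *_ = np.linalg.lstsq(A, reference[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, reference[:, 1], rcond=None)
    return cx, cy

def apply_poly_transform(coeffs, pts):
    A = poly_terms(pts[:, 0], pts[:, 1])
    return np.column_stack([A @ coeffs[0], A @ coeffs[1]])

# Synthetic calibration grid with a mild quadratic distortion.
gx, gy = np.meshgrid(np.linspace(-1, 1, 7), np.linspace(-1, 1, 7))
ref = np.column_stack([gx.ravel(), gy.ravel()])
dist = ref + 0.05 * ref**2          # hypothetical scanner distortion
coeffs = fit_poly_transform(dist, ref)
max_err = np.abs(apply_poly_transform(coeffs, dist) - ref).max()
```

In practice the polynomial order and terms are chosen per scanner from the calibration residuals, as the paper's analysis of seven scanners suggests.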
This paper presents the linking of a two-dimensional triangulation-based scanning system with a very high frame rate (250 full frames per second) to a high-speed image processing system, together with special-purpose software designed for the interactive experimental evaluation of suitable image processing algorithms.
The Pulnix TM6CN CCD camera appears to be a suitable choice for many close range photogrammetric applications where the cost of the final system is a factor. The reasons for this are: its small size, low power consumption, pixel clock output, variable electronic shutter, and relatively high resolution. However, to have any confidence in such a camera a thorough examination is required to assess its characteristics. In this paper an investigation of three of these cameras is described, and their suitability for close range photogrammetry evaluated. The main factors assessed are system component influences, warm-up effects, line jitter, principal point location and lens calibration. The influence of the frame-store on the use of the camera is also estimated and where possible excluded. Results of using these cameras for close range measurement are given and analyzed. While many users will have or prefer to buy other cameras, the evaluation of this particular camera should give an understanding of the important features of such image sensors, their use in photogrammetric measuring systems and the processes of evaluating their physical properties.
Evaluation of Vision Systems for Precision Measurement II
A clear advantage of digital photogrammetric measurement over other, more conventional techniques is the fast sample rate of the data acquisition. CCD cameras and video systems can be used very effectively to analyze dynamic objects or cases of rapid deformation. However, long sequences of images can introduce the penalty of large volumes of digital data, which may not be feasible or appropriate to process in real time. The images are then typically stored in analog form, using media such as video tape or video disk, for off-line processing subsequent to image capture. This paper investigates the degradation in accuracy and repeatability caused by the analog recording. A number of experiments using a Hitachi medium-resolution CCD camera, a three-dimensional test range and a self-calibrating bundle adjustment are described. For cases of near-real-time monitoring, the ability of frame averaging to reduce the degradation caused by the analog recording is also investigated. The results of the experiments are summarized to provide guidelines as to the degree of degradation that can be expected under similar circumstances.
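The frame-averaging idea rests on a simple fact: averaging n independently noisy frames of a static scene reduces the noise RMS by roughly sqrt(n). A synthetic sketch (the scene and noise level are invented, not measured values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
truth = rng.uniform(0, 255, size=(64, 64))   # hypothetical static scene

def noisy_frame(sigma=5.0):
    """One frame as played back from tape: truth plus analog noise."""
    return truth + rng.normal(0, sigma, truth.shape)

single = noisy_frame()
averaged = np.mean([noisy_frame() for _ in range(16)], axis=0)

rms_single = np.sqrt(np.mean((single - truth) ** 2))
rms_avg = np.sqrt(np.mean((averaged - truth) ** 2))
# With 16 frames the noise RMS should drop by about a factor of 4.
```

Averaging only helps with the random component; systematic recording errors (e.g. line jitter correlated across frames) are not reduced this way.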
Frame grabbers perform an essential role in image acquisition from CCD cameras and other imaging devices or intermediate storage systems such as VCRs. They directly affect the radiometric and geometric quality of the digital imagery. This paper gives an overview of the most important functional elements of frame grabbers. Their influence on image quality, and ways to assess it, are discussed. Results from two frame grabbers are used to explain the methodology and to give indications of typical performance measures. Errors found in the investigations include faulty DC restoration, low-pass-filter effects, and synchronization errors.
Camcorders offer several advantages over standard CCD cameras. They are easy to use, transportable, provide a large integrated storage capacity and are inexpensive. This makes them attractive image acquisition devices for many tasks in digital photogrammetry. However, they were not designed for image metrology, and their geometric accuracy is potentially degraded by a number of factors. This paper gives a qualitative analysis of the degrading effects of the camera (color sensor and electronics) and of video tape recording. Furthermore, playback on another VCR, copying, and multiple playback are addressed. The investigation of their geometric and radiometric characteristics showed large response non-uniformities and degraded linearity. Geometric deformations were found to depend on the recording/playback configuration, and shear components exceeding 1 pixel were detected. The warm-up time was similar to that of standard CCD cameras. The geometric accuracy was evaluated with a three-dimensional testfield. The color sensor and electronics reduced the accuracy by a factor of 2.8, and intermediate storage on video tape introduced an additional degradation by a factor of 1.4. Accuracies with direct digitization and with intermediate storage were 1/25th and 1/20th of the pixel spacing, respectively. The corresponding relative accuracies in object space were 1/17,000 and 1/20,000.
This work describes an experimental setup for characterizing the calibration of a vision-based dimensional measurement system made up of off-the-shelf components, i.e. a commercial TV camera with a standard lens, a frame grabber and a personal computer. Short- and long-term stability of the measurements with respect to time and to variations in environmental conditions is considered, as well as the repeatability of the same measurements after switching the equipment off and on. The reported results should be useful in evaluating the reliability of vision-based measurements.
Recent work on the task of automating sensor placement for inspection has focused primarily on deriving a single view of a set of object features that will satisfy a set of basic inspection requirements. From a single view, however, accurate dimensional inspection cannot be carried out. This goal can only be achieved through the use of multi-station sensor configurations in combination with the method of optical triangulation. In this paper, fundamental issues regarding the design of multi-station configurations are addressed. Firstly, the bundle method is introduced as a powerful and flexible mathematical tool for the estimation of object feature coordinates and for the analysis of the quality (precision and reliability) of these estimates. Secondly, basic considerations leading to the design of configurations which satisfy accuracy specifications are reviewed. The impact of these considerations on the placement of individual sensor stations is examined. Finally, the development of an expert system for automating the design of multi-station configurations, built on the expertise of photogrammetrists and on design-consideration knowledge, is outlined.
Two computation algorithms are described for the estimation of position and orientation of objects with the help of camera observations. The algorithms are based on point-to-point correspondence between observed and predicted locations of target points on the object. The target points are natural visible points, e.g. screw holes of a car body on an assembly line. The locations of these points are known in the object's own coordinate frame. The sensitivity of the uncertainty of the locating result to random errors from different sources is also studied.
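When the correspondences yield 3-D point observations, position and orientation can be estimated in closed form by a least-squares rigid fit. The sketch below uses the Kabsch/Procrustes solution as a stand-in; the abstract does not specify the paper's two algorithms, so this is only an illustration of the problem setting:

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Least-squares rigid transform (R, t) mapping target points
    given in the object's own frame onto their observed locations
    (Kabsch algorithm)."""
    mc = model_pts.mean(axis=0)
    oc = observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, oc - R @ mc

# Synthetic check: rotate/translate a set of "target points"
# (e.g. screw-hole locations known in the body frame).
rng = np.random.default_rng(3)
model = rng.uniform(-1, 1, size=(8, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 1.0])
observed = model @ R_true.T + t_true
R_est, t_est = estimate_pose(model, observed)
```

Propagating the observation noise through such an estimator is what the abstract's sensitivity study addresses.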
A close-range application for medical purposes uses video CCD cameras and a control field of small balls for camera calibration and orientation. The spherical shape enables robust detection, as the image of a ball must always be very similar to a circle. Partly occluded or shadowed control points can be detected and rejected. On the other hand, this shape is sensitive to illumination effects, which influence the point location procedures. Erroneously located points might not be detected during orientation, thus causing distortions of the restituted 3D object. This article investigates the effects that can occur and checks the robustness of various point location methods using simulated examples. Edge extraction and feature matching can deliver good results. Finally, a method based on the Förstner operator is presented.
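One common point-location method for such targets is the intensity-weighted centroid, and uneven illumination biases exactly this kind of estimator. A sketch on a synthetic target (the Gaussian spot model and its position are ours, for illustration only):

```python
import numpy as np

def subpixel_centroid(patch):
    """Intensity-weighted centroid of a target image patch,
    returning a subpixel (row, col) location."""
    rows, cols = np.indices(patch.shape)
    w = patch.sum()
    return (rows * patch).sum() / w, (cols * patch).sum() / w

# A synthetic circular target rendered at a known subpixel position.
yy, xx = np.indices((21, 21))
cy, cx = 10.3, 9.7
blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))
est = subpixel_centroid(blob)
```

Under symmetric illumination the centroid recovers the true centre to a small fraction of a pixel; a one-sided shading gradient shifts it, which is the error mode the simulations in the article probe.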
The steps to be developed for a machine vision system for a flexible manufacturing system (FMS) are: image processing techniques for the extraction of features on the industrial components, and hence the 3-dimensional measurement of these components; representation of the 3-dimensional objects in an efficient and convenient data structure; and decision procedures for recognizing the objects from those stored in the design database of the flexible manufacturing system. Possible approaches to each of the above procedures are reviewed. The initial task in this research has been to investigate methods of detecting the edges of manufactured components.
Tools for constructing geometric model data from noisy 3-D sensor measurements of physical parts are presented. Optical non-contact sensors provide only partial data from one viewpoint because the signal does not reach all the surface points at once. In order to obtain a complete 3-D data set, several data sets from different vantage points have to be merged. An approach for solving the relative transformation between the viewpoints is presented. The method is based on the iterative closest point algorithm, which does not require feature-to-feature correspondences. Additional constraints on visibility and surface-normal consistency are imposed in order to avoid incorrect matches. The mean square sum of the distances between observations and planar primitives is minimized. Surface triangulations and NURBS approximation are employed in model construction experiments. Registration and model construction examples are given using simulated and real range data.
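The core of the iterative closest point algorithm can be sketched as alternating closest-point matching with a closed-form rigid fit (a basic point-to-point variant without the paper's visibility and surface-normal constraints; the test data are synthetic):

```python
import numpy as np

def best_rigid(A, B):
    """Closed-form least-squares rotation/translation taking A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp(source, target, iters=10):
    """Point-to-point ICP: no feature correspondences are needed."""
    src = source.copy()
    for _ in range(iters):
        # Brute-force closest-point correspondences.
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
        matched = target[d2.argmin(axis=1)]
        R, t = best_rigid(src, matched)
        src = src @ R.T + t
    return src

# A 4x4x4 grid of points, displaced by a small rigid motion.
g = np.linspace(-0.75, 0.75, 4)
target = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)
theta = 0.05
R0 = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
source = target @ R0.T + np.array([0.03, -0.02, 0.01])
aligned = icp(source, target)
err = np.abs(aligned - target).max()
```

ICP only converges to the correct alignment from a reasonable initial guess; the visibility and normal-consistency constraints mentioned in the abstract serve to reject the incorrect matches this plain version would accept.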
This paper presents a new method capable of segmenting range images into a set of Bezier surface patches directly compatible with most CAD systems. The algorithm is divided into four parts. First, an initial partition of the data set into regions, following a third-order Bezier model, is performed using a robust fitting algorithm constrained by the position of depth and orientation discontinuities. Second, an optimal region growing based on a new Bayesian decision criterion is computed. Third, generalization to a higher-order surface model is performed based on a statistical decision method. Fourth, at the final resolution, an approximation of the surface boundary is computed using a two-dimensional B-spline. The algorithm is fully automatic and does not require ad hoc parameter adjustment. Experimental results are presented.
The computer integration of the processes involved in the design and manufacture of a product is not attainable without the existence of a CAD model for the part. The CAD model is the source for many manufacturing automation processes such as automated process planning, fixture selection, etc. The process of CAD model generation for existing products is the focus of this paper. Two main problems are considered: (1) combining contact and non-contact scanning systems to achieve accuracy and automation of part surface digitization, and (2) defining the exact geometry of the object surface to create an engineering drawing for part production.
The ability to describe body surfaces mathematically could improve the diagnosis and objective evaluation of deformities and the follow-up of progressive diseases, and could be a useful tool for other medical sectors such as prosthetic and plastic surgery, as well as for industrial applications where a real shape needs to be digitized and analyzed or modified mathematically. The approach presented here is based on the acquisition of a surface scanned by a laser beam. The 3D coordinates of the spot generated on the surface by the beam are obtained by an automatic image analyzer (ELITE system), originally developed for human motion analysis. The 3D coordinates are computed by stereo-photogrammetry from at least two different views of the subject. A software package for graphic representation of the obtained surfaces has been developed, and preliminary results for several body shapes are presented.
One of the difficult problems encountered with range image data is eliminating noise without removing structure. Generalized Cross Validation (GCV) is one method for determining the filter size that achieves this compromise. GCV has been employed with the Gaussian filter to choose among the infinite number of filter sizes available for smoothing range data. The Gaussian filter is a desirable filter because of its scale-space properties. In this study, the noise in the range data was estimated and GCV was used to determine the Gaussian filter size. GCV provides an effective way of solving range image problems where noise level information is not available.
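For a linear smoother with hat matrix S, the GCV score is n·RSS/(n − tr S)², and for a circular Gaussian convolution tr S is n times the kernel's centre weight. A 1-D sketch on a synthetic noisy profile (the signal, noise level, and sigma grid are invented for illustration):

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(4 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gcv_score(y, sigma):
    """GCV score for circular (wrap-around) Gaussian smoothing."""
    n = y.size
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    yp = np.concatenate([y[-pad:], y, y[:pad]])
    smooth = np.convolve(yp, k, mode="valid")
    trace = n * k[pad]          # trace of the convolution hat matrix
    return n * np.sum((y - smooth) ** 2) / (n - trace) ** 2

# Noisy synthetic "range profile": pick the sigma minimizing GCV.
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 400)
y = np.sin(2 * np.pi * 3 * t) + rng.normal(0, 0.1, t.size)
sigmas = np.arange(0.5, 8.0, 0.5)
best = sigmas[np.argmin([gcv_score(y, s) for s in sigmas])]
```

Nothing here uses the true noise level, which is exactly the appeal of GCV noted in the abstract.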
This paper summarizes the results we have obtained in searching for an efficient, robust, and accurate method for estimating the camera (or spacecraft) position from image measurements. Given a sequence of images acquired with a moving camera, the task is to estimate the camera position (extrinsic calibration) from corresponding points (landmarks) in the various frames. Using the noisy estimate of the camera parameters, the 3D scene points can then be reconstructed. In the paper we first describe a new calibration method and show its improvements in accuracy compared to known methods. The method is then studied in an application to motion estimation for a spacecraft during orbit and landing maneuvers.
The goal of the Active Stereo Probe (ASP) project is the on-line recovery of 3D surfaces from stereo images captured using a dynamic binocular robot vision system. In this paper, we present results of 3D surface recovery using scale-space automatic stereo-matching. These results have been considerably enhanced by bathing the scene in textured light provided by the ASP active illumination source. We also describe a two-stage approach that incorporates photogrammetric techniques into the ASP system to maintain calibration during dynamic system operation. Direct Linear Transform based calibration provides an initial static calibration. Thereafter, dynamic calibration is maintained by exploiting high-resolution encoders to track the system's external orientation parameters and thereby constrain the search space of subsequent bundle adjustment. We expect this strategy to achieve the speed and accuracy required to satisfy many on-line 3D surface recovery applications.
An efficient method for tracking the attitude and translation of an object from a sequence of dense range images is demonstrated. The range data is sine-encoded and then Fourier transformed. By this process, planar surfaces in the image produce distinctive peaks in the Fourier transform plane; the position of a peak provides a direct measure of the normal of the plane. By tracking the positions of these peaks, the orientation of a known object can be tracked directly. The Fourier transform can also be used for segmentation. After computing and undoing the rotation, the translation is obtained by measuring the centroids of the segmented planar surfaces. The object must present at least three planar surfaces for absolute orientation and translation to be determined. All computations are closed form. The integrative nature of the Fourier transform is very effective in attenuating the effect of noise and outliers. Results of tracking an object in a simulated range image sequence show high accuracy potential (0.2 deg in orientation and 0.5% rms error in translation with respect to the dimension of the object). The method promises real-time (video rate) performance with the addition of accelerator hardware for computing the Fourier transform. The approach is well suited to dynamic robotic vision applications. Furthermore, the translation invariance of the Fourier transform essentially decouples the rotational and translational components of the motion, which contributes to the tracking performance.
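The peak-to-normal relationship can be sketched as follows: sine-encoding a planar range image z = ax + by + c yields a complex exponential whose 2-D FT is a single peak, and the peak's bin position recovers the gradient (a, b), hence the plane normal ∝ (−a, −b, 1). The slopes and encoding wavelength below are our choices, picked so the peak lands on exact FFT bins:

```python
import numpy as np

# A planar patch z = a*x + b*y + c in a synthetic range image.
n = 128
a_true, b_true = 0.125, -0.0625
y, x = np.indices((n, n))
z = a_true * x + b_true * y + 3.0

# Sine-encode the range and Fourier transform: the plane becomes a
# single complex exponential, i.e. one peak in the FT plane.
k = 2 * np.pi / 8.0     # hypothetical encoding wavelength: 8 range units
F = np.fft.fft2(np.exp(1j * k * z))
py, px = np.unravel_index(np.argmax(np.abs(F)), F.shape)

# Peak position -> plane gradient (fftfreq handles the wrap-around).
freqs = np.fft.fftfreq(n)
a_est = 2 * np.pi * freqs[px] / k
b_est = 2 * np.pi * freqs[py] / k
```

With slopes off the exact bin grid the peak spreads over neighbouring bins, and a subpixel peak interpolation would be needed to reach accuracies like those quoted above.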