This paper addresses multisensor self-calibration in vision metrology systems employing still-video imagery. This approach affords greater fidelity in the recovery of sensor interior orientation and distortion parameters. Moreover, a combination of wide- and narrow-angle imaging sensors affords better insight into the problems of in-plane image distortion and out-of-plane sensor surface deformation, which are often overlooked sources of error in metric applications of CCD cameras. This on-site calibration approach is demonstrated in the measurement, to better than 1:100,000 accuracy, of a 5 m x 2.5 m bond tool used in aircraft manufacturing.
This report summarizes an investigation of zoom lens calibration, with emphasis on the effects of lens-image-plane misalignment. Measurements have been made of the photogrammetric principal point and of the radial (symmetrical) and decentering (asymmetrical) distortion components as a function of the principal distance (zoom setting) of several zoom lenses. Data were also taken with the axis of symmetry (optical axis) of a zoom lens aligned and misaligned to the same solid-state video camera. An explanation is offered, based on these measurements, for the variation of the principal point as a function of zoom setting. In addition, the relationship of the decentering distortion to radial distortion, principal distance, and lens-image-plane misalignment angle is discussed. A technique for determining the proper point of symmetry to be used for distortion computations (as opposed to the principal point) is also suggested. A simple technique for measuring the misalignment angle of zoom lenses when attached to video cameras is presented, along with measurements for seven solid-state cameras. A method to reduce the additional error introduced by zoom lens misalignment is presented. The implications of this study are that special measures to properly align a zoom lens to the sensor image plane are probably not necessary, but that as the accuracy obtainable in digital photogrammetry approaches the 0.01-pixel level or better, additional calibration, including the point of symmetry for distortion computation, should be considered.
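The radial and decentering components measured above are conventionally expressed with a polynomial (Brown-Conrady style) model. As a minimal sketch only: the two-term truncation, the coefficient names `k1, k2, p1, p2`, and the function signature below are illustrative assumptions, not taken from the report.

```python
def distort(x, y, k=(0.0, 0.0), p=(0.0, 0.0), xp=0.0, yp=0.0):
    """Apply radial + decentering distortion about the point of symmetry.

    (x, y)   : ideal image coordinates
    (xp, yp) : point of symmetry (often, but not always, the principal point)
    k        : radial coefficients k1, k2 (illustrative two-term series)
    p        : decentering coefficients p1, p2
    Returns the displaced image coordinates.
    """
    xb, yb = x - xp, y - yp
    r2 = xb * xb + yb * yb
    dr = k[0] * r2 + k[1] * r2 * r2                      # radial (symmetric) term
    dx = xb * dr + p[0] * (r2 + 2 * xb * xb) + 2 * p[1] * xb * yb
    dy = yb * dr + p[1] * (r2 + 2 * yb * yb) + 2 * p[0] * xb * yb
    return x + dx, y + dy
```

Shifting `(xp, yp)` in such a model is one way to see why the choice of point of symmetry, rather than the principal point, matters for the distortion computation.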
Image point displacements due to systematic errors in the image formation process are typically modeled in analytical photogrammetry with polynomial expressions. An alternative to this approach is the concept that the displacement of an image point is equivalent to a proportional change in the camera focal length at that particular location. The finite element method (FEM) of self-calibration, as developed by R.A.H. Munjy, can be used to model focal length changes due to inherent systematic errors. This paper presents the results of an investigation into the use of the FEM for charge-coupled device (CCD) camera calibration. Two CCD cameras were calibrated using both the polynomial approach and the FEM in order to determine the adequacy of this alternative model.
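The finite-element idea, a focal length that varies over the image plane, can be sketched as bilinear interpolation of nodal focal-length values. This is only an illustration of the concept; the grid layout and names are assumptions, not Munjy's actual formulation.

```python
import numpy as np

def focal_at(x, y, nodes, xs, ys):
    """Bilinearly interpolate a location-dependent focal length (FEM sketch).

    nodes[i, j] is the nodal focal length at grid point (xs[j], ys[i]).
    All names here are illustrative; a real FEM self-calibration would
    estimate the nodal values in a bundle adjustment.
    """
    j = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
    i = np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2)
    tx = (x - xs[j]) / (xs[j + 1] - xs[j])
    ty = (y - ys[i]) / (ys[i + 1] - ys[i])
    return ((1 - tx) * (1 - ty) * nodes[i, j] + tx * (1 - ty) * nodes[i, j + 1]
            + (1 - tx) * ty * nodes[i + 1, j] + tx * ty * nodes[i + 1, j + 1])
```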
Spatial photoresponse nonuniformity of the detectors in a staring IR-CCD camera must be corrected to preserve radiometrically correct performance and achieve state-of-the-art sensitivity. A real-time implementation of the two-point nonuniformity correction algorithm is described, and some representative experimental results are illustrated. The prototype is realized in a computer system for the multiwavelength imaging pyrometer of the New Jersey Institute of Technology.
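The two-point scheme itself is compact: per-pixel gains and offsets are derived from the responses to two uniform reference sources so that every detector maps onto a common line. The sketch below illustrates the general scheme under that assumption; it is not the NJIT real-time implementation.

```python
import numpy as np

def two_point_nuc(frame, low, high, t_low=0.0, t_high=1.0):
    """Two-point nonuniformity correction (illustrative sketch).

    low, high : per-pixel mean responses to two uniform reference sources
    t_low/t_high : target output levels assigned to those sources
    Each pixel gets its own gain and offset so all detectors agree on the
    two calibration points.
    """
    gain = (t_high - t_low) / (high - low)   # per-pixel gain
    offset = t_low - gain * low              # per-pixel offset
    return gain * frame + offset
```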
The digital high-resolution still-video camera Kodak DCS200 has reached a high degree of popularity among photogrammetrists within a very short time. Consisting of a mirror reflex camera, a high-resolution CCD sensor, A/D conversion, power supply, and data storage capacity for 50 images, it can basically be considered a convenient, autonomous device for digital image data acquisition, especially for industrial applications and for architectural photogrammetry. First tests of the camera showed a high precision potential: 1/20-1/30 pixel in image space could be achieved in several applications, and with large self-calibrating networks relative precisions of 1:100,000 and better have been reported. To be able to make more detailed statements on the accuracy potential of the camera, a thorough accuracy test was performed at ETH Zurich by taking 150 images of a 186-target 3D testfield. Although the precision estimates of this large block were exceptionally good, strong systematic object deformations were found in comparison with theodolite-measured reference coordinates of the testfield points. The reasons for these deformations are most probably temporal instabilities of some camera parameters, which could make the use of this camera very problematic for high-accuracy applications. It is argued that these instabilities are caused by the weak fixture of the CCD chip to the camera body. In this context it is often overlooked that this camera was developed not for precise measurement applications but rather for professional photographers.
Digital camera backs that attach to analogue photographic cameras have been developed for professional in-studio and in-field photography. The acquired high-resolution digital images are also suitable for photogrammetric purposes. Technical specifications of these imaging systems are listed in this paper, and the metric quality of selected camera backs is determined.
Close range photogrammetry and vision metrology often use signalized points in the form of active or passive targets. Many theoretical and some practical tests of different target image centering algorithms have been carried out. This paper will describe the empirical testing of several such algorithms using real data acquired for industrial measurement projects and camera calibrations. The precision and accuracy of the centering algorithms will be characterized by analysis of self calibrating network solutions using multiple camera stations and a target array. Particular emphasis will be placed on the comparison between centroiding and ellipse fitting to locate target image center.
The simulation of the performance of subpixel algorithms under known conditions is a valuable tool. For instance, the optimum size for a target can be determined, as can the influence of varying the threshold level or target size. Previous work looked at the effects of quantization and additive noise on the location of target images. More recently, work by Shortis et al. (1994) has tested more algorithms and looked at other effects such as those caused by saturation and DC offset. In this paper the electronic noise present in imagery from typical CCD cameras is both measured and used within a simulation of the subpixel location performance of the centroid and squared centroid methods. The effect of uneven background characteristics on target location is also analyzed. To achieve this, the physical characteristics are modeled by taking the Fourier transforms of both background and target models and combining them in the frequency domain. The target image resulting from the inverse Fourier transform is then used to locate the target. The simulation methodology is explained and tests performed so that a better understanding of the factors that contribute to subpixel errors can be gained.
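The two estimators whose performance is simulated here can be stated compactly. As a sketch only (the array conventions and the `power` switch are illustrative choices, not the authors' code), the intensity-weighted centroid and its squared variant are:

```python
import numpy as np

def centroid(img, power=1):
    """Intensity-weighted centroid of a target image window.

    power=1 gives the plain centroid; power=2 the squared centroid,
    which down-weights dim background pixels relative to the bright target.
    Returns (x, y) in pixel coordinates of the window.
    """
    w = img.astype(float) ** power
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    s = w.sum()
    return (xs * w).sum() / s, (ys * w).sum() / s
```

Running both variants on the same synthesized target (clean versus noise- and background-corrupted) is essentially the comparison the simulation performs.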
The optimally accurate focus measure for a noisy camera in passive-search-based autofocusing and depth-from-focus applications depends not only on the camera characteristics but also on the image of the object being focused or ranged. In this paper a new metric named the autofocusing uncertainty measure (AUM) is defined, which is useful in selecting the most accurate focus measure from a given set of focus measures. AUM is a metric for comparing the noise sensitivity of different focus measures. It is similar to the traditional root-mean-square (RMS) error, but, while RMS error cannot be computed in practical applications, AUM can be computed easily. AUM is based on a theoretical noise sensitivity analysis of focus measures. In comparison, all known work on comparing the noise sensitivity of focus measures has been a combination of subjective judgement and experimental observations. For a given camera, the optimally accurate focus measure may change from one object to another depending on their focused images. Therefore selecting the optimal focus measure from a given set involves computing all the focus measures in the set. However, if computation needs to be minimized, then it is argued that the energy of the Laplacian of the image is a good focus measure, and it is recommended for use in practical applications.
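The recommended measure, the energy of the image Laplacian, has a simple discrete form; the 4-neighbour stencil below is one common discretization (an assumption here, the paper does not prescribe a specific stencil). Larger values indicate sharper focus.

```python
import numpy as np

def laplacian_energy(img):
    """Sum of squared responses of a discrete 4-neighbour Laplacian.

    A flat (defocused) image yields 0; strong edges (sharp focus) yield
    large values, so a focus search maximizes this quantity.
    """
    img = img.astype(float)
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]     # vertical neighbours
           + img[1:-1, :-2] + img[1:-1, 2:])    # horizontal neighbours
    return (lap ** 2).sum()
```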
In the compilation of archival records for archeological artifacts, true orthographic drawings of the artifacts have to be produced by the archeologists themselves or by part-time staff, expending a great deal of time, labor, and skill. This paper describes a real-time orthographic drawing system using a CCD camera. Finally, it demonstrates real-time orthographic drawing results for Jomon pottery obtained by using this system in place of the manual method, which requires 3-4 hours.
The purpose of this paper is to design an automatic system for transforming 2D orthographic views into 3D solid objects. The input drawing contains the geometric information of lines and circles. The reconstructed objects may be boxes, cylinders, and their composites. The system uses AutoCAD as a drawing tool: an input 2D orthographic view is created with the package's drawing editor, and through the drawing interchange file (DXF) capability the application programs can access the AutoCAD database. The script facility is used to execute the set of drawing commands which creates a continuously running display for output. The system is implemented in seven steps. First, the 2D drawing is created and saved in ASCII code. Then the DXF file is created and extracted into drawing commands. A translational sweep operation is used to reconstruct subparts. The relationships between subparts are utilized to compose the final part. Finally, the 3D solid object is displayed.
This paper presents a new 3D scene analysis system that automatically reconstructs the 3D geometric model of real-world scenes from multiple range images acquired by a laser range finder on board a mobile robot. The reconstruction is achieved through an integrated procedure including range data acquisition, geometrical feature extraction, registration, and integration of multiple views. Different descriptions of the final 3D scene model are obtained: a polygonal triangular mesh, a surface description in terms of planar and biquadratic surfaces, and a 3D boundary representation. Relevant experimental results from the complete 3D scene modeling are presented. Direct applications of this technique include 3D reconstruction and/or update of architectural or industrial plans into a CAD model, design verification of buildings, navigation of autonomous robots, and input to virtual reality systems.
This contribution describes a computer-based structured light imaging system applied to the automated recovery of quantitative 3D information on sculptured surfaces, in order to handle (industrial) inspection/3D reconstruction tasks. Recovery is based on the evaluation of images of the light pattern induced by projecting a specifically devised parallel grid into the scene. The system has been designed for direct use in industrial environments, e.g. for integration into on-line quality control systems. Consequently, particular emphasis has been put on fulfilling the requirements usually implied by this type of application, such as simplicity of set-up, real-time operation, high accuracy, and low cost. This paper gives a description of the realized system, including the algorithms specifically designed and implemented for calibration, nonambiguous labeling of the imaged fringes, and subpixel evaluation of their locations. The integration of the system into an on-line inspection system for 100% control of manufactured parts illustrates its application. Inspection is based on comparison with features extracted from a CAD model of the part, including tolerance information. Currently, a measurement accuracy of the order of 25 micrometers can be routinely achieved.
Pictometry is a proprietary digital imaging process which computationally maps each pixel of a digital land image to actual geographic coordinates, so that features in a mosaic of land images may be located and/or measured.
An important step in many photogrammetry problems is to determine the intrinsic and extrinsic parameters associated with image formation, a process we shall term image resituation. An ideal photogrammetric system would be able to compute these parameters for an arbitrary number of images using only the automatically determined coordinates of homologous points. While such a system has yet to be developed, this paper formulates a relatively general image resituation problem and presents some new solutions in the two-image case. The approach recasts the stereo coplanarity equation, including unknown intrinsic parameters, into a quadratic form defining a general coplanarity matrix. This leads to a system from which up to seven free imaging parameters can be determined. We consider the situation in which two images are captured using adaptive cameras, where the focal length and principal point may differ in each image but are always in a known relationship to each other. We show that the two unknown focal lengths and five relative orientation parameters can be derived in closed form from the general coplanarity matrix. A numerical example is provided to illustrate the approach. These results contribute to the long-term goal of developing a photogrammetric system able to operate in the absence of object-space control information. A companion paper describes more detailed algorithms and experiments.
Two-image resituation refers to the recovery of the geometric configuration of two stereo images. This involves determining three intrinsic parameters for each image and five relative orientation parameters. We show here that this can be achieved using only the image coordinates of homologous points, with no other control information from object space. The approach is based on a thorough analysis of epipolar constraints. The explicit coplanarity equation defined by the intrinsic and relative orientation parameters is recast into a quadratic form whose parameters define a general coplanarity matrix. This matrix in turn can be written as the product of three matrices, two of which are defined by the intrinsic parameters, while the third, called the special coplanarity matrix, is a function of the five relative orientation parameters. This paper presents a practical procedure for computing all these parameters from image measurements alone. The basic strategy is first to find approximate values via closed-form solutions, and then to iteratively fine-tune them to precise values. The key steps are: 1) solving for the general coplanarity matrix via a nonlinear least-squares optimization; 2) solving for the two focal lengths from the general coplanarity matrix via a closed-form algebraic solution; 3) determining the special coplanarity matrix from the general coplanarity matrix and the focal lengths; 4) determining the relative orientation parameters, including three baseline components and three rotation angles, via closed-form solutions; 5) fine-tuning all the explicit parameters via an iterative linearized least-squares solution. Original or improved solutions are developed for most stages of this procedure. Finally, the computational theory is tested numerically.
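For step 1 the paper uses a nonlinear least-squares optimization; a common linear (eight-point-style) way to obtain a starting estimate of the general coplanarity matrix from homologous points is the homogeneous least-squares solution via SVD. The sketch below is that standard substitute, not the paper's own algorithm.

```python
import numpy as np

def coplanarity_matrix(pts1, pts2):
    """Linear LS estimate of a 3x3 matrix C with x2^T C x1 ~ 0 for all
    homologous point pairs (eight-point-style sketch).

    pts1, pts2 : (n, 2) arrays of corresponding image coordinates, n >= 8.
    Returns C normalized to unit Frobenius norm (C is only defined up to scale).
    """
    def h(p):  # append homogeneous coordinate
        return np.hstack([np.asarray(p, float), np.ones((len(p), 1))])
    x1, x2 = h(pts1), h(pts2)
    # each correspondence gives one linear equation in the 9 entries of C
    A = np.einsum('ni,nj->nij', x2, x1).reshape(len(x1), 9)
    _, _, vt = np.linalg.svd(A)
    C = vt[-1].reshape(3, 3)   # right singular vector of smallest singular value
    return C / np.linalg.norm(C)
```

With noisy measurements this linear estimate would then be refined, matching the approximate-then-fine-tune strategy described above.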
Noncontact range sensors based on the light-stripe triangulation method are now widely accepted in production and quality control due to their speed and accuracy. However, some problems show up when extracting contour data due to the speckle induced by coherent lighting. Analyses have shown that the nature of this error is Gaussian, despite the nearly multiplicative nature of the speckle noise in the image data itself. Proceeding on this assumption, several approaches to eliminating this error and increasing the subjective image quality are shown in this paper. Progressing from a simple FIR filter with a rectangular pulse response, over a better-adapted Gaussian pulse response FIR, to finally a logarithmic-Gaussian FIR that takes the multiplicative nature of the noise into account, visual results and error reduction are discussed. The additional memory and processing power requirements are minimal since the pulse response of the filters is quite short and the logarithmic/exponential calculations can be performed on a look-up table basis.
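The filter family described can be sketched in a few lines. The kernel width and truncation below are illustrative assumptions; the point is that the log-domain variant applies the same Gaussian FIR to log-intensities, matching the multiplicative character of speckle.

```python
import numpy as np

def gaussian_fir(signal, sigma=1.5, log_domain=False):
    """Short Gaussian FIR smoothing of a 1D contour/intensity profile.

    log_domain=False : plain Gaussian FIR (additive-noise assumption)
    log_domain=True  : logarithmic-Gaussian FIR; filter log(signal) and
                       exponentiate, suiting multiplicative speckle noise.
    """
    n = int(3 * sigma)                       # truncate kernel at ~3 sigma
    t = np.arange(-n, n + 1)
    h = np.exp(-t * t / (2 * sigma * sigma))
    h /= h.sum()                             # unit-gain kernel
    x = np.log(signal) if log_domain else np.asarray(signal, float)
    y = np.convolve(x, h, mode='same')
    return np.exp(y) if log_domain else y
```

In a real implementation the log/exp steps would come from a look-up table, as the abstract notes, keeping the added cost negligible.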
Quantitative analysis of the powder blending process is important in many industries, e.g. pharmaceutical, glass, food products. Inefficient blending can lead to inhomogeneous powder mixtures and unacceptable product variability. A new method has been devised by F.J. Muzzio and his students to characterize the uniformity of powder mixtures by solidifying samples of the mixtures without disturbing their structure, and subjecting them to machine vision analysis. The key components of the mixture are colored and, with appropriate illumination, the mixture percentage is directly related to video signal intensity. This paper reviews the machine vision algorithms required to perform the analysis, focussing in particular on the real-time hardware configurations that enable significant amounts of data to be collected for use in evaluation of the integrity of the blending process.
This paper deals with the problem of tracking object points through a sequence of image frames for the purpose of navigation, e.g. guiding a missile onto a target from an on-board camera. Image points may be tracked over a sequence of frames by a conventional correlation algorithm. By tracking several points, the motion parameters of the camera may be estimated. However, the presence of noise and the magnification of image features as the camera approaches the target may cause a tracked point to drift. This paper introduces an improved technique that integrates a multiple point correlation (MPC) tracker with image segmentation information to track image points with greater accuracy. Segmentation is the process of partitioning image pixels into regions, for example of homogeneous grey values. Based on the resulting region characteristics, our method refines the point positions of the MPC tracker. At each frame the MPC is applied to the original image data. The location of each point given by the MPC tracker is used to identify the region in the segmented image that occupies that position. The tracked point is then refined based on measurements made of the region. This paper details two region-based refinement techniques used to improve tracking: one uses the centroid and the other uses corner points detected on a region's boundary. Experimental results based on real and synthetic images in both the infrared and visible spectra show the potential that this type of integration has for enhancing tracker performance.
Photogrammetry affords the only noncontact means of providing unambiguous six-degree-of-freedom estimates for rigid body motion analysis. Video technology enables convenient off-the-shelf capability for obtaining and storing image data at frame (30 Hz) or field (60 Hz) rates. Videometry combines these technologies with frame capture capability accessible to PCs to allow otherwise unavailable measurements critical to the study of rigid body dynamics. To effectively utilize this capability, however, some means of editing, post-processing, and sorting substantial amounts of time-coded video data is required. This paper discusses a prototype motion analysis system built around PC and video disk technology, which is proving useful in exploring applications of these concepts to rigid body tracking and deformation analysis. Calibration issues and user-interactive software development associated with this project will be discussed, as will examples of measurement projects and data reduction.
Study of strata movement and geological discontinuities of rock surfaces in and around mining excavations is of prime importance to optimizing safety, production, and productivity. The dynamic nature of the heterogeneous rock mass creates dangerous conditions, so a fast remote measuring system capable of providing dense information may be suitable for such measurements. The real-time measuring potential of CCD-based digital photogrammetry is one of the best options for such a situation, but the large depth of field in the object space and the lack of sufficient features on the surface of sedimentary rocks create serious problems for conventional stereomatchers during automatic measurement. Diode-laser-based active triangulation is used to solve the correspondence problem, but a number of practical problems arose during the design of a portable digital photogrammetric system for mining measurement. This paper addresses these problems and their possible solutions, along with the initial results of the proposed measuring system. Subpixel target/feature location is a prerequisite for precise photogrammetric measurements from CCD images. A template matching technique is well suited for subpixel centroid location of measuring points in a textured image, but it failed to provide the required level of accuracy for such images of physical models due to the changing textures of the different off-the-shelf measuring points available.
A laser-rangefinder-based optical coordinate measurement system used for monitoring refractory lining wear in steel mills has been equipped with a vision system to improve its operating and performance characteristics. The 3D shape of the refractory lining is measured after renewal at the beginning of a campaign and these data are stored as a reference. During the campaign the lining is measured and the results are compared against the reference data in order to minimize risks and optimize lining life. To make results measured at different times comparable, they must be accurately and reliably transformed to the same coordinate system. This makes the coordinate system setup phase critical to the success of the lining wear monitoring. Other important aspects are the amount of expensive process time taken up by the measurements and work safety. The experimental vision system has been tested for automating the coordinate system setup phase, and improved repeatability and faster operation compared with manual setup were achieved. Tentative tests at a steel mill proved promising, and further development of the vision system is ongoing.
This paper deals with 3D modeling from the images of a moving camera. The solution is based on the principle of least-squares estimation that the effect of noise on the estimate decreases as the number of observations increases. The idea of the algorithm is to gather observations of linear features from multiple time-varying video frames and perform simultaneous intersection and resection, i.e. triangulation, of 3D features. The observations are extracted from the images by applying the Hough transformation to edges detected by a typical edge detector. All remaining pixels are used as observations for estimating the feature parameters and intersection points of features, as well as the camera pose and orientation in 3D space. The algorithm presented here is an off-line process where observations are gathered as a background process. To combine observations from multiple frames, feature matching has to be performed. To improve the robustness of matching, the operator can add constraints to the matching process.
Due to the nature of many applications, it is difficult with present technology to use a single type of sensor to automatically, accurately, reliably, and completely measure or map objects, sites, or scenes in 3D. Therefore, a combination of various sensor technologies is usually the obvious solution. There are several 3D technologies, two of which, digital photogrammetry and triangulation laser scanning, are dealt with in this paper. The final accuracy achieved by the combination of various sensors is a function of many sensor parameters and of the quality of the image coordinate measurements of each sensor. The effect of those parameters must be determined to maximize the overall quality within the constraints imposed by the requirements of the application. The parameters affecting the accuracy of measurement, the test laboratory, and test results using intensity and range data are presented. The configuration design of intensity and range sensors is discussed based on the results presented here and in two previous papers.
New technologies and techniques for data acquisition and processing allow the determination of 3D positions of environmental objects that meet the demands for highest accuracy in geodesy and surveying as well as the omnipresent call for low-cost systems. This paper presents a new approach to automatic digital information acquisition using a kinematic surveying system for real-time data capture of GPS, IMU, and CCD-camera output, with post-mission data processing. The absolute position of the moving vehicle within the global coordinate system (WGS84) is obtained by combining data from a GPS receiver and an inertial measuring unit (strapdown IMU). To compensate for the known errors of both sensors, additional sensing devices such as an odometer and a barometer are introduced. Further improvement in position estimation is achieved by stereo photogrammetric measurements of known environmental objects, the so-called landmarks. A digital stereo vision system creates successive series of high-resolution grayscale images, while an S-VHS video system records a continuous image sequence of the traveled road and the nearby surroundings. As a step toward fully automatic object positioning, a semi-automatic procedure for image processing has been chosen: the human operator takes care of the image interpretation and the search for the wanted objects, while the feature extraction process is completely controlled by software. Depending on the demands of the application, all visible objects can be extracted, classified, and photogrammetrically positioned. The parameters and the images of the accumulated objects are indexed to the 3D trajectory and saved in a GIS database.
The development and testing of a new targeting system for combining videometric images with electronic total station measurements is described. The system uses specially designed interchangeable targets to integrate the measurements. This allows the higher global accuracy and range of the total station to provide linked control points for the videometric images of critical areas. The videometry provides faster and more convenient data acquisition, and a combination of the two systems offers advantages over either method alone. Lab tests and calibration of the two systems and targets have demonstrated relative accuracies of 1/12,000 to 1/35,000 using a KODAK DCS-420C digital still camera and a Leica TC2002 electronic total station. The combined system was applied to an industrial survey of a fertilizer plant near Red Deer, Alberta, with a global precision of 0.1 mm in a plane parallel to the image plane and 0.4 mm out of the plane. Due to the improved target system, the survey was accomplished using only three control points and three images.
In this paper the ESPRIT-III GLORE project is described. One aim of the project is to create a turnkey system that takes as input hundreds or thousands of aerial video images and outputs a digital ortho-mosaic and a digital elevation model of the covered area. This '3D-image-mosaic' is to be used by the forest industry in land-use planning and forest management. The mosaic is to serve as a basis for the evaluation of forest resources. It is very difficult to use the separate video images, so the mosaic is essential. The digital elevation model is important in the planning of roads and in visualization. The '3D-image-mosaic' is to be made automatically using the methods of global object matching or global object reconstruction; in this case an object-based approach is used in the matching. Such methods are mostly very computation intensive, so a transputer-based parallel computer is used. First, the ideas behind the system are described, including the use of GPS navigation data with the on-line digitization of the images on the aircraft. Then, some examples of the method using aerial photographs are shown.
A new automated inspection algorithm for industrial parts using a 3D laser range sensor is described. The input to the program is a tessellated representation of the part at a desired resolution, saved in the neutral STL format, and an unordered series of measurements produced by a 3D optical sensor. The output is a colored version of the model indicating the level of discrepancy between the measured points and the model. Using this coloring scheme, an operator or a robotic system can rapidly identify defective parts or monitor process drift on a production line. At the base of the method is a new robust correspondence algorithm which can find the rigid transformation between the tessellated model of the part and the measured points. This method is based on a least-median-of-squares norm and can tolerate up to 50% outliers. The robustness of the method is essential since one cannot guarantee that, in practice, all the points in the measured set belong to the model. Such algorithms are usually quite costly in computational complexity, but we show that they can be sped up by using the well-known iterative closest point algorithm and a multiresolution scheme based on voxels.
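The least-median-of-squares principle behind the correspondence algorithm can be sketched with a standard SVD (Procrustes) rigid fit scored by the median of squared residuals. This is a generic illustration of why the median-based norm tolerates up to half the points being off-model, not the paper's full search strategy.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ P @ R.T + t,
    via the SVD/Procrustes (Kabsch) method."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def lmeds_score(P, Q, R, t):
    """Median of squared residuals: unaffected by up to ~50% gross outliers,
    unlike the mean used in an ordinary least-squares score."""
    r = Q - (P @ R.T + t)
    return np.median((r * r).sum(1))
```

A least-median-of-squares search would generate many candidate transforms (e.g. from point subsets), keep the one with the smallest `lmeds_score`, and, as the abstract notes, accelerate the process with ICP and a voxel-based multiresolution scheme.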
Reverse engineering is the process of creating a CAD model and a manufacturing database for an existing part or a prototype. This process is necessary for the redesign of existing parts and for automated inspection. In this paper a unique approach to reverse engineering is proposed. Here the part to be modeled is viewed from two orthogonal viewpoints: one camera captures the top view of the object while the other camera captures the four side views by rotating the object in steps of 90 degrees. The images are processed and the orthographic views are created. A 3D line drawing of the object is then recreated by matching points in the orthographic views. This paper describes the developed feature extraction procedure, the camera calibration method, and the matching technique. Sample results are also given.
A significant problem in 3D reconstruction of biological tissue from histological material is alignment of the individual sections. We are developing a method to determine the surface of the tissue prior to cryosectioning and then utilize that information to guide registration. Toward that end, we have developed a structured light technique for imaging frozen rat brains. The imaging approach relies on a novel coding scheme for the projected light which is based on 2D perfect submaps. Perfect submaps are r by v c-ary arrays in which every n by m c-ary submatrix is unique. This coding scheme offers two major advantages over previous structured light patterns critical in the present application: it permits rapid image capture and, because each subwindow is unique, it is robust in the presence of partial occlusion. To examine the accuracy of this technique, we compare the points mapped using it to the surface produced by block-face imaging. In the latter approach, the tissue block is imaged prior to collecting each of the tissue sections. Since the block can be accurately repositioned after each cutting stroke, reconstruction of the surface from the block-face images is straightforward.
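The defining property of a perfect submap, that every n by m subwindow occurs at most once, is easy to verify directly. The brute-force hashing sketch below is an illustrative check, not the construction method used to generate the patterns.

```python
import numpy as np

def is_perfect_submap(A, n, m):
    """Return True iff every n-by-m subwindow of the c-ary array A is unique.

    Uniqueness of subwindows is what lets a decoder identify its position
    in the projected pattern from any single visible window, even under
    partial occlusion.
    """
    seen = set()
    rows, cols = A.shape
    for i in range(rows - n + 1):
        for j in range(cols - m + 1):
            key = A[i:i + n, j:j + m].tobytes()  # hashable window signature
            if key in seen:
                return False
            seen.add(key)
    return True
```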
This paper describes one of the industrial applications of our digital photogrammetric system VirtuoZo, namely a prototype system to collect 3D data from stereo-video pair sequences along a railroad track for clearance measurements. With the rapid development of digital media such as charge-coupled-device (CCD) and digital video cameras, stereo image pairs can be captured much more easily and quickly than with traditional means. Digital photogrammetry can thus now be used in many new applications. However, since the geometry of CCD (or digital video) cameras differs from that of the classic analogue metric camera, new relative orientation and epipolar image resampling algorithms have to be developed for these nonmetric cameras. An example of such a new application is given in this paper: a series of sequential stereo image pairs was captured by two digital cameras along a railway track from a moving rail platform, and relative orientation was then performed fully automatically by matching registration points in the two stereo scenes using a hierarchical relaxation image matching algorithm. Epipolar images are then resampled from the original images by means of a relative linear transform, and finally a 3D data collection algorithm provides a user-friendly interface to the human operator for data capture on an SGI workstation under StereoView.
A realized concept for a geometry measurement system designed for use close to the production process is presented. The measuring object is imaged by a robot-guided camera from various positions. By applying photogrammetric adjustment methods to the evaluation of the image measurements, a measurement uncertainty is achievable that is independent of the accuracy of the camera positioning. Moreover, an image evaluation based on a CAD model is presented, which is especially designed for use with a moving camera.
This paper presents a successful implementation of a real-time inspection system for plastic bottle closures. The closures are inspected at a rate of 20 per second, or one every 50 ms. The available time to inspect each closure forces the algorithms used to be relatively simple. Even though the algorithms are simple, they have to be robust to ensure good results. Two boundary tracking algorithms were designed and implemented, one based on edge strength information and one based on threshold information; both use prior knowledge about the closure. The results are sufficiently accurate to replace a human operator with the machine inspection system. A description of the system, including the timing and hardware used, is given, and the results achieved with each of the algorithms are presented. This is followed by a brief discussion of the problems associated with achieving highly accurate results using simple algorithms.
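The paper's two tracking algorithms are not reproduced in the abstract, so the following is only an illustrative sketch of a generic threshold-based boundary tracker of the kind described: Moore-neighbour tracing around the first above-threshold region, with a simple return-to-start stopping rule.

```python
import numpy as np

def trace_boundary(img, thresh):
    """Trace the outer boundary of the first above-threshold region,
    scanning row-major for the start pixel (Moore-neighbour tracing)."""
    mask = img >= thresh
    rows, cols = mask.shape
    # 8-neighbour offsets in clockwise order, starting from "west".
    nbrs = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
            (0, 1), (1, 1), (1, 0), (1, -1)]
    start = next(((i, j) for i in range(rows) for j in range(cols)
                  if mask[i, j]), None)
    if start is None:
        return []
    boundary = [start]
    cur, search = start, 0  # direction index at which to resume the scan
    for _ in range(4 * rows * cols):  # safety cap for this sketch
        for k in range(8):
            d = (search + k) % 8
            ni, nj = cur[0] + nbrs[d][0], cur[1] + nbrs[d][1]
            if 0 <= ni < rows and 0 <= nj < cols and mask[ni, nj]:
                search = (d + 5) % 8  # resume just past the way we came
                cur = (ni, nj)
                break
        else:
            return boundary  # isolated pixel, no neighbours
        if cur == start:
            return boundary  # stop on first return to the start pixel
        boundary.append(cur)
    return boundary
```

A production tracker would use a more robust stopping criterion (e.g. Jacob's criterion) and would exploit the prior knowledge of the closure shape that the paper mentions; stopping on the first revisit of the start pixel keeps the sketch short.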
3D reconstruction of highly textured surfaces, such as roads and unvegetated (rock-like) terrain, is of major interest for applications like autonomous navigation and the 3D modeling of terrain for mapping purposes. We describe a system for automatic modeling of such scenes. It is based on two frame CCD cameras, which are rigidly attached to each other to ensure constant relative orientation. One camera is used for the acquisition of photogrammetrically measured reference points; the other records the surface images. The system is moved from one position to the next by an operator carrying it. Automatic calibration using the images acquired by the calibration camera permits the computation of the exterior orientation parameters of the surface camera. A fast matching method providing dense disparities, together with a robust reconstruction algorithm, renders an accurate grid of 3D points. We also describe procedures to merge stereo reconstruction results from all images taken, and report on accuracy, computational complexity, and practical experience in a road engineering application.
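The final step, turning a dense disparity map into a grid of 3D points, can be sketched under a simplified rectified-stereo assumption (the paper's cameras have a fixed but general relative orientation, so its actual geometry is more involved): depth follows Z = fB/d, with focal length f in pixels, baseline B, and disparity d.

```python
import numpy as np

def disparities_to_points(disp, f, B, cx, cy):
    """Convert a dense disparity map (pixels) to an H x W x 3 grid of 3D
    points in the left-camera frame; invalid (d <= 0) pixels become NaN."""
    h, w = disp.shape
    xs, ys = np.meshgrid(np.arange(w) - cx, np.arange(h) - cy)
    with np.errstate(divide="ignore", invalid="ignore"):
        Z = np.where(disp > 0, f * B / disp, np.nan)
    # Back-project each pixel through the pinhole model.
    return np.dstack((xs * Z / f, ys * Z / f, Z))
```

Keeping the result as an H x W x 3 grid preserves the image-lattice neighbourhood structure, which is what makes the subsequent merging of per-stereo-pair reconstructions into one surface model straightforward.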
On-line measurement of objects is of great significance. Usually, contact measurement is used: 3D coordinate measuring machines and laser measurement systems are accurate, but they measure only discrete points, many parts are shadowed by other parts or cannot be reached by a probe and are therefore difficult to measure, and on-line measurement is not possible. Digital image processing is a fast-developing field, and from digital images the coordinates of objects can be computed photogrammetrically. We developed a system for this purpose and used it to obtain measurements of turbine blades. In this paper, the results from our system are compared with those from a 3D coordinate measuring machine, and the accuracy of the two is analyzed.
A stereo vision system has been designed to locate and track a dynamic object in real time. The system consists of two CCD cameras, a frame grabber, and digital image processing software. The algorithm is based on the principle of constructing a mathematical stereo model from two overlapping images while a dynamic object is passing through the scene. The stereo models are constructed at finite time intervals to provide a sequence of locations for the dynamic object. The size, shape, and behavior of the object in the scene, the precise positions of the CCD cameras, and the performance of the system are fundamental parameters that should be carefully considered to achieve appropriate precision and reliability. The system has two advantages. Firstly, it is able to quickly recognize, detect, and track an object. Secondly, because fundamental photogrammetric equations (the collinearity equations) are used in this system, the points are located precisely. This paper explains the methodology of the system, discusses the problems and issues involved, and outlines the results of experimentation with the system.
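The collinearity equations referred to above are the standard photogrammetric relation between an object point $(X, Y, Z)$ and its image coordinates $(x, y)$, given the principal point $(x_0, y_0)$, principal distance $f$, projection center $(X_s, Y_s, Z_s)$, and rotation matrix $R = (r_{ij})$:

$$
x - x_0 = -f\,\frac{r_{11}(X - X_s) + r_{12}(Y - Y_s) + r_{13}(Z - Z_s)}
               {r_{31}(X - X_s) + r_{32}(Y - Y_s) + r_{33}(Z - Z_s)}, \qquad
y - y_0 = -f\,\frac{r_{21}(X - X_s) + r_{22}(Y - Y_s) + r_{23}(Z - Z_s)}
               {r_{31}(X - X_s) + r_{32}(Y - Y_s) + r_{33}(Z - Z_s)}
$$

Each camera contributes one such pair of equations per tracked point, so the two cameras together give four equations in the three unknowns $(X, Y, Z)$, which are solved by least-squares space intersection at each time step.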
This paper discusses the acquisition of digital images with a CCD camera and develops an image enhancement process matched to the characteristics of the CCD. We also developed a method for converting BIN image files to BMP files for use in Windows programs. After enhancement of the BMP image (histogram analysis, equalization, brightness and contrast adjustment, noise elimination, and sharpening), digital images suitable for coordinate measurement were obtained. Pixel coordinates of the target points on the object were then acquired and transformed to image coordinates. Next, we selected an automobile part, executed bundle adjustment both with the method of this study and with existing photogrammetric surveying, and examined the 3D accuracy and efficiency. In addition, we suggest a 3D measurement method applicable across the whole range of industry, realized through a fast and efficient modeling technique based on digital photogrammetry.
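One enhancement step named above, global histogram equalization of an 8-bit image, can be sketched in a few lines of NumPy; this is a generic textbook version, not the paper's own processing chain or parameters.

```python
import numpy as np

def equalize(img):
    """Histogram-equalise an 8-bit greyscale image via the CDF remapping."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]  # CDF value of the darkest occupied bin
    # Map each grey level so the occupied range stretches to [0, 255].
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[img]
```

Stretching the cumulative histogram in this way raises local contrast around the grey levels where most pixels lie, which is what makes circular targets easier to segment before pixel-coordinate measurement.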