This paper reviews the basic concepts behind laser cameras developed at the NRC. It emphasizes the critical elements of optimal design and the limitations related to the use of coherent light. It is shown, for example, that speckle noise sets a fundamental limit on sensing the position of a laser spot centroid, which has an impact on the choice of position sensor geometry. Design guidelines are presented.
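The speckle-imposed centroid limit can be illustrated numerically. The sketch below is a toy model, not the NRC analysis: it perturbs an ideal Gaussian spot with multiplicative exponential noise, a common model for fully developed speckle, and measures the resulting centroid jitter.

```python
import numpy as np

def spot_centroid(intensity, coords):
    """Intensity-weighted centroid of a 1-D spot profile."""
    return np.sum(coords * intensity) / np.sum(intensity)

rng = np.random.default_rng(0)
x = np.arange(64, dtype=float)
clean = np.exp(-0.5 * ((x - 31.7) / 4.0) ** 2)   # ideal Gaussian spot

# Assumption: fully developed speckle modelled as multiplicative
# exponential intensity noise, independent per pixel.
centroids = [spot_centroid(clean * rng.exponential(1.0, x.size), x)
             for _ in range(500)]
jitter = np.std(centroids)   # residual centroid noise, in pixels
```

Even with an ideal detector, the jitter stays a sizeable fraction of a pixel, which is why averaging over more speckles (a larger spot or coarser sensor geometry) is the main lever on centroid accuracy.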
Cost and time reduction in control operations, large-scale three-dimensional measurement, high accuracy in hazardous environments, immediate results: these are today's industrial requirements. Within this context, the development of measurement systems based on optical methods is gaining momentum. In addition to being efficient and competitive, these systems are progressively and effectively adapting to the new industrial requirements. Thanks to their adaptability and their potential for further development, they are now a major asset for industrial sectors anxious to guarantee the quality of their products. The existing techniques are numerous, varied, and often complementary: CAT, interferometry, photogrammetry, videogrammetry, holography, etc. In this paper, we discuss one of these techniques: videogrammetry, also called digital photogrammetry. After some general remarks about this fast-growing technique, we examine the methodology involved and illustrate its industrial use with practical applications and examples of results.
The application of video imagery to photogrammetric tasks may be divided into on-line and off-line use. In Finland, the first successful on-line systems were installed for position measurement of car bodies in spring 1992. On-line applications have attracted growing interest in industrial production because they are operational and their payback period can therefore be calculated. For off-line use the situation is different, and efficiency in 3D data acquisition seems to play the decisive role. This paper presents the different characteristics of on-line and off-line videogrammetry. The 3D measurements are exemplified with applications, covering both existing references in industrial `on-line' applications and ongoing development of `off-line' 3D object digitizing procedures.
This paper describes a 3D surface profile and displacement measurement system capable of micron-level accuracy using moderately priced off-the-shelf equipment. A calibration system based on non-linear optimization is presented; it determines the position and operating characteristics of the cameras and corrects for lens distortion. Also presented is a surface profile and displacement measurement method based on projections into space of subsets of the recorded images. This method provides information about both the location and the orientation in space of each subset. The accuracy of the system is established through a series of experiments. The calibration is assessed and the results are expressed using several different error measurements, including a new error measurement proposed by the authors. The baseline accuracy of the measurement system was determined through a series of profile and translation tests. The system is capable of measurements to an accuracy of 0.003 mm over a 14 mm X 18 mm field from a distance of 416 mm using a 512 X 480 CCD camera and a magnification factor of 27 pixels/mm. The system was also used to measure the bending of a circular plate under pressure loading. The experimental results are analyzed and compared with theoretical predictions.
An integrated model-based approach for extracting blood vessels in MR images is presented. A Generalized Stochastic Tube model is used to capture both the shape of local tube-like object segments and the shape dynamics of global object trajectories. The blood flow within cross sections is explicitly modeled using a bivariate Gaussian density function that predicts the expected sensor measurement configuration. Experimental results on both synthetic data with different degrees of Gaussian noise and real MRA data demonstrate that integrating both shape and blood flow models yields accurate and robust performance even under noisy conditions.
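As a concrete illustration of the cross-section model, the sketch below samples a bivariate Gaussian density on a pixel grid; the grid size, mean, and covariance are arbitrary stand-ins, not values from the paper.

```python
import numpy as np

def bivariate_gaussian(shape, mu, cov):
    """Predicted intensity over a vessel cross section (sketch):
    a bivariate Gaussian density sampled on a pixel grid."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(float) - mu
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    z = norm * np.exp(-0.5 * np.einsum('ni,ij,nj->n', pts, inv, pts))
    return z.reshape(shape)

# Hypothetical cross section: mean flow at pixel (7, 7), anisotropic
# spread described by the covariance matrix.
pred = bivariate_gaussian((15, 15), mu=np.array([7.0, 7.0]),
                          cov=np.array([[4.0, 1.0], [1.0, 3.0]]))
```

In the paper's setting such a predicted configuration would be compared against the actual sensor measurements in each cross section.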
We have developed and clinically tested a computer vision system capable of real-time monitoring of the position of an oncology (cancer) patient undergoing radiation therapy. The system is able to report variations in patient setup from day to day, as well as patient motion during an individual treatment. The system consists of two CCD cameras mounted in the treatment room and focused on the treatment unit isocenter. The cameras are interfaced to a PC via a two-channel video board. Special targets placed on the patient surface are automatically recognized and extracted by our 3D vision software. The three coordinates of each target are determined using a triangulation algorithm. System accuracy, stability, and reproducibility were tested in the laboratory as well as in the radiation therapy room. Besides accuracy, the system must ensure the highest reliability and safety in the actual application environment. In this paper we also report on the results of clinical testing performed on a total of 23 patients having various treatment sites and techniques. The system in its present configuration is capable of measuring multiple targets placed on the patient surface during radiation therapy. In the clinical environment the system has an accuracy and repeatability of better than 0.5 mm in Cartesian space over extended periods (> 1 month). The system can measure and report patient position in less than 5 seconds. Clinically we have found that the system can easily and accurately detect patient motion during treatment as well as variations in patient setup from day to day. A brief description of the system and a detailed analysis of its performance in the laboratory and in the clinic are presented.
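The triangulation step can be sketched with standard linear (DLT) two-view triangulation; the projection matrices below are synthetic, and the algorithm is a textbook method rather than necessarily the authors' exact implementation.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one target from two calibrated
    cameras; P1, P2 are 3x4 projection matrices, uv* pixel coords."""
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # null vector = homogeneous 3-D point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check: two cameras with a 1-unit baseline observing a point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 5.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free correspondences the reconstruction is exact; in practice the target extraction accuracy drives the 0.5 mm figure quoted above.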
For the examination of mixing processes in turbulent flows, a system based on a high-speed solid-state camera has been implemented that allows for the quasi-simultaneous acquisition of sequences of flow tomography voxel data. In these data, velocity fields are determined by 3D least squares matching. The first part of the paper shows a hardware configuration based on a high-speed solid-state camera with a maximum frame rate of 500 images per second and a scanning laser light sheet, which allows for the acquisition of flow tomography data sequences with a typical size of 256 X 256 X 50 voxels per volume dataset at a rate of 10 datasets per second. The quality of the data and some special problems of the high-speed camera are discussed. In the second part of the paper, the 3D implementation of least squares matching with a 12-parameter 3D affine transformation between voxel patches of consecutive datasets is described. In order to strengthen the matching in regions with insufficient local contrast, the algorithm is combined with several geometric and radiometric constraints.
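The 12-parameter 3D affine transformation at the core of the matching can be sketched as follows. The closed-form least-squares fit shown here is only the linear core of a full least squares matching iteration (which also models radiometry and interpolates grey values), and the data are synthetic.

```python
import numpy as np

def affine3d(params, pts):
    """Apply a 12-parameter 3-D affine transform (3x3 matrix A plus
    translation t) to an (N, 3) array of voxel coordinates."""
    A = params[:9].reshape(3, 3)
    t = params[9:]
    return pts @ A.T + t

def estimate_affine3d(src, dst):
    """Linear least-squares estimate of the 12 parameters from
    matched coordinate sets."""
    N = src.shape[0]
    M = np.hstack([src, np.ones((N, 1))])          # (N, 4) design matrix
    sol, *_ = np.linalg.lstsq(M, dst, rcond=None)  # (4, 3) solution
    A = sol[:3].T
    t = sol[3]
    return np.concatenate([A.ravel(), t])

# Synthetic voxel-patch coordinates deformed by a known affine transform.
rng = np.random.default_rng(1)
src = rng.uniform(0, 32, size=(50, 3))
true = np.concatenate([(np.eye(3) + 0.05 * rng.standard_normal((3, 3))).ravel(),
                       rng.uniform(-2, 2, 3)])
dst = affine3d(true, src)
est = estimate_affine3d(src, dst)
```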
Non-contact dynamic deformation monitoring (e.g., with a laser scanning system) is very useful for monitoring changes in alignment and in the size and shape of coupled operating machines. If relative movements between coupled operating machines are large, excessive wear in the machines or unplanned shutdowns due to machinery failure will occur. The purpose of non-contact dynamic deformation monitoring is to identify the causes of large movements and point to remedial action that can be taken to prevent them. The laser scanning system is a laser-based 3D vision system whose technique is based on an auto-synchronized triangulation scanning scheme. The system provides accurate, fast, and reliable 3D measurements and can measure objects from 0.5 m to 100 m away with a field of view of 40 degrees X 50 degrees. The system is flexible in terms of providing control over the scanned area and depth, and it provides the user with an intensity image in addition to the depth-coded image. This paper reports on preliminary testing of this system for monitoring surface movements and target (point) movements. The monitoring resolution achieved for an operating motorized alignment test rig in the lab was 1 mm for surface movements and 0.50 m for target movements. Raw data manipulation, local calibration, and the method of relating measurements to control points are discussed. Possibilities for improving the resolution and recommendations for future development are also presented.
An experimental measurement system for dimensional quality control is described in which laser-radar-based 3D coordinate measurements are guided to given points using information from video imagery. The system is controlled by a measurement model file that contains the measurement program and the nominal data of the features of interest in the target object. Because real-world objects differ from the nominal model due to, e.g., changes in their shape, dimensions, and position, the measurement system must be adaptive. This adaptivity is provided by using video imagery to guide the 3D coordinate measurements to the position indicated in the measurement program. At the moment the system is capable of finding points that are marked with circular tags and guiding the laser range finder beam onto these points. The linearity and repeatability of the image-analysis-based feature finding system are better than 0.5 mm at 11 m. Tag finding in a 512 by 574 image takes about 2.5 seconds using a low-cost frame grabber and a PC.
Automatic or semi-automatic systems for digital close-range photogrammetry are a very efficient and accurate tool for a large number of measuring tasks in industrial production processes. This presentation shows experiences and results from pilot studies on the applicability of digital photogrammetric techniques in production and quality control, conducted at a North American shipyard. The main task was the dimensional check of sections of a ship's hull, manufactured and equipped in a hall and to be fitted into their locations in the complete hull under construction, in order to avoid expensive refitting work during final assembly of the hull. An off-the-shelf high-resolution still-video camera, the Kodak DCS200, was found to be very useful for data acquisition; it proved to be an autonomous, flexible digital image acquisition system with a high accuracy potential. The items to be measured were discrete points targeted with retroreflective markers. Due to the relatively small number of targets to be measured and the high complexity of the scenes, semi-automatic data processing was chosen. The results of the study were quite satisfactory: it could be shown that a system largely based on standardized hardware components is well suited to the tasks, and a relative accuracy of up to 1:75,000, which can be considered a good value under factory floor conditions, could be achieved.
This paper presents the design and performance of an optical measurement system (OMS) which is used to measure the position of a magnetically suspended element as part of the feedback control system for the NASA Langley Large Gap Magnetic Suspension System (LGMSS). A new processing architecture, to be implemented in the OMS to increase the rate at which position information is made available to the LGMSS controller, is also discussed. The OMS consists of multiple linear charge-coupled device cameras which detect small infrared light emitting diode targets embedded in the surface of a magnetically suspended cylinder. The OMS estimates the position and attitude of the cylinder in six degrees of freedom and supplies this information to the LGMSS control computer at a rate of 40 samples per second. Experiments have been run to evaluate the performance of the OMS. The accuracy of the OMS was evaluated using a static test model of the cylinder. Test results show that the one-sigma (one standard deviation) errors in the OMS estimates of cylinder position and attitude are approximately +/- 0.001 inch in x and y, +/- 0.0005 inch in z, +/- 0.005 degree in pitch and yaw, and +/- 0.01 degree in roll.
Time exposure photography, sometimes coupled with strobe illumination, is an accepted method for motion analysis that bypasses frame-by-frame analysis and resynthesis of data. Garden-variety video cameras can now exploit this technique using a unique frame buffer: a non-integrating memory that compares incoming data with that already stored. The device continuously outputs an analog video signal of the stored contents, which can then be redigitized and analyzed using conventional equipment. Historically, photographic time exposures have been used to record the displacement envelope of harmonically oscillating structures to show mode shape. Mode shape analysis is crucial, for example, in aeroelastic testing of wind tunnel models: aerodynamic, inertial, and elastic forces can couple together, leading to catastrophic failure of a poorly designed aircraft. This paper explores the usefulness of the peak store device as a videometric tool and in particular discusses methods for analyzing a targeted vibrating plate using the `peak store' in conjunction with calibration methods familiar to the close-range videometry community. Results for the first three normal modes are presented.
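The peak store's compare-and-keep behavior is easy to model in software as an elementwise running maximum over frames; the oscillating-spot data below are synthetic.

```python
import numpy as np

def peak_store(frames):
    """Software model of a 'peak store' frame buffer: each memory cell
    keeps the brightest value ever seen at that pixel."""
    store = np.zeros_like(frames[0])
    for f in frames:
        store = np.maximum(store, f)   # non-integrating compare-and-keep
    return store

# A bright spot oscillating horizontally leaves its displacement
# envelope in the stored image, as in a photographic time exposure.
frames = []
for k in range(32):
    img = np.zeros((8, 16))
    img[4, 8 + int(round(5 * np.sin(2 * np.pi * k / 32)))] = 1.0
    frames.append(img)
env = peak_store(frames)   # row 4 now spans the full +/-5 pixel excursion
```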
The remote and automatic inspection of the inside of pipes and tunnels is an important industrial application area. The main characteristics of the environment found in commonly used pipes such as sewers are: limitations on the camera spatial position; a large variety of surface features; a wide range of surface reflectivity due to the orientation of parts of the pipe, e.g. the joints; and many disturbances to the environment due, for example, to mist, water spray, or hanging debris. The objective of this research is defect detection and classification; however, a first stage is the construction of a model of the pipe structure by pipe joint tracking. This paper describes work to exploit the knowledge of the environment to: build a model of the defects, reflectivity characteristics and pipe characteristics; develop appropriate methods for grouping the pipe joint features within each image from edge information; fit a pipe joint model (a circle, or connected arcs) to the grouped features; and track these features in sequential images. Each stage in these processes has been analyzed to optimize the performance in terms of reliability and speed of operation. The methods that have been developed are described and results of robust pipe joint tracking over a large sequence of images are presented. The paper also presents results of experiments applying several common edge detectors to images that have been corrupted by JPEG encoding and spatial sub-sampling. The subsequent robustness of a Hough-based method for the detection of circular image features is also reported.
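The Hough-based circle detection stage can be sketched as follows, here for a single known radius and synthetic edge points; a production detector would also search over radius and could use gradient direction to limit the voting.

```python
import numpy as np

def hough_circle(edge_pts, radius, shape):
    """Hough accumulator over circle centres for a known radius:
    each edge point votes for all centres at that distance."""
    acc = np.zeros(shape)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for (y, x) in edge_pts:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic pipe-joint edge: a circle of radius 10 centred at (20, 25).
ang = np.linspace(0, 2 * np.pi, 120, endpoint=False)
pts = [(20 + 10 * np.sin(a), 25 + 10 * np.cos(a)) for a in ang]
acc = hough_circle(pts, 10.0, (40, 50))
centre = np.unravel_index(acc.argmax(), acc.shape)
```

The accumulator peak survives a substantial fraction of missing or spurious edge points, which is the property exploited when the input images are degraded by JPEG encoding and sub-sampling.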
This paper presents a vision system for classifying empty PET bottles inside the box. With this classification it becomes possible to convey to production only those boxes that contain a large proportion of usable bottles, so the subsequent linear sorting becomes more efficient. The camera system in the developed control unit takes images of the boxes from above, and a special flashlight illuminates the passing boxes. The system computer is a PC 486/33 with a frame grabber for image acquisition. A light barrier synchronizes the image-processing hardware, the camera system, and the flashlight. The processing software extracts features of the different bottle shapes to classify the bottles. The complete image processing is performed by the system computer at a rate of around 3600 boxes/h. The main problem of feature extraction inside the box is detecting the very small differences between the bottles. In addition, leftovers may be present in any bottle; even these bottles are classified correctly. The integrated SPC (stored program control) drives the box selector so that only boxes containing a large proportion of usable bottles are conveyed to the refilling process. A prototype of the control unit has been in operation since February 1994.
Traditional surveying techniques and the use of mechanical structures mounted on rolling stock are the current methods for measuring clearance around Queensland railway lines. A new method, described in this paper, is being developed for Queensland Rail by a consortium of three Brisbane companies. The project involves the merging of two technologies, both of which are themselves evolving rapidly. The first of these is Digital Photogrammetry which provides 3D information through the processing of stereo images. The second is the capture of digital images and the pre-processing and transmission of large quantities of video data in an industrial environment. The result is a Computerized Structure Clearance Measurement System which allows operators to make accurate measurements with reference to a clearance gauge profile.
Camera systems with automated zoom lenses are inherently more useful than those with fixed-parameter lenses. Variable-parameter lenses enable us to produce better images by matching the camera's sensing characteristics to the conditions in a scene. They also allow us to make measurements by noting how the scene's image changes as the lens settings are varied. The reason variable-parameter lenses are not more commonly used in machine vision is that they are difficult to model over a continuous range of lens settings. In this paper we present a methodology for producing accurate camera models for systems with automated, variable-parameter lenses. To demonstrate the methodology's effectiveness we applied it to produce an `adjustable,' perspective-projection camera model based on Tsai's fixed camera model. Our model was calibrated and tested on an automated zoom lens, where it operated across continuous ranges of focus and zoom with an average error of less than 0.11 pixels between the predicted and measured positions of features in the image plane.
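One simple way to obtain an adjustable model, sketched below under the assumption that calibration is repeated at a few motor settings and interpolated in between, is to make the pinhole focal length a function of the zoom setting. The motor settings and focal lengths here are invented, and a full treatment would interpolate all model parameters, including distortion.

```python
import numpy as np

def project(f, cx, cy, Xc):
    """Perspective projection of a camera-frame point with focal
    length f (pixels) and principal point (cx, cy)."""
    return np.array([cx + f * Xc[0] / Xc[2], cy + f * Xc[1] / Xc[2]])

# Hypothetical calibration results: focal length measured at a few
# zoom motor settings, then interpolated for intermediate positions.
zoom_settings = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])
focal_px = np.array([800.0, 1100.0, 1500.0, 2100.0, 3000.0])

def focal_at(z):
    return np.interp(z, zoom_settings, focal_px)

uv = project(focal_at(375.0), 320.0, 240.0, np.array([0.1, -0.05, 2.0]))
```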
This paper deals with the problem of camera calibration based on 3D feature measurements. It occurs in industrial 3D measurement systems, as well as in autonomous navigation systems, where the estimation of motion parameters is required. We have selected the problem of extrinsic calibration (exterior orientation) of a camera that is looking at flat or almost flat surfaces (or terrain). This situation causes numerical and stability problems for many of the known calibration methods. To study the impact of flatness of the reference surface (or calibration target) on the calibration errors, we have done a comparative study using sixteen available calibration procedures. The major emphasis was on robustness with respect to 3D measurement errors and sensitivity to flatness. A new calibration method is also investigated, which can be used independently of whether the calibration reference surface is flat, almost flat, or rugged.
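The flatness degeneracy can be made concrete with the classical DLT estimate of the projection matrix, which requires non-coplanar reference points; when all reference points lie in a plane, the linear system below becomes rank-deficient. The sketch uses synthetic data and is not one of the sixteen procedures compared in the paper.

```python
import numpy as np

def dlt_projection(X, uv):
    """Linear (DLT) estimate of the 3x4 projection matrix from 3-D
    points X (N, 3) and image points uv (N, 2); requires N >= 6
    non-coplanar points, which is exactly the condition a flat
    calibration surface violates."""
    rows = []
    for (Xi, ui) in zip(X, uv):
        Xh = np.append(Xi, 1.0)
        rows.append(np.concatenate([Xh, np.zeros(4), -ui[0] * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -ui[1] * Xh]))
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)        # null vector, up to scale

# Synthetic camera and non-coplanar reference points.
rng = np.random.default_rng(2)
P_true = np.hstack([np.eye(3), np.array([[0.3], [-0.2], [4.0]])])
X = rng.uniform(-1, 1, size=(10, 3))
uvh = (P_true @ np.hstack([X, np.ones((10, 1))]).T).T
uv = uvh[:, :2] / uvh[:, 2:]
P_est = dlt_projection(X, uv)
P_est /= P_est[2, 3]                   # remove the scale ambiguity
P_true_n = P_true / P_true[2, 3]
```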
This paper analyzes the accuracy of rigid body pose estimation (measurement of attitude angles and position, i.e., six degrees of freedom, or six DOF) using imaging sensors. Different approaches are evaluated using an analytic model and actual measurements made on video images of an airplane. The performance of the single-camera technique is compared with stereo methods. Single-camera pose estimation is preferred in a number of measurement and control applications because, in addition to requiring less hardware and processing resources, it simplifies the system setup and operation. Except for the camera-to-object distance, the single-camera accuracy is shown to be comparable with that of stereo techniques and viable for a number of applications. The analytic results are validated using measurements made on the airplane video sequence using software specially developed for image-based six-DOF estimation.
The aim of this paper is to show how image points can be extracted accurately. We restrict our search to specific points identified by corners, which are stable across a sequence. Our approach makes use of a model-based corner detector: it matches a part of the image containing a corner against a predefined corner model. Once the fitting is accomplished, the position of the corner in the image can be deduced from the known position of the corner in the model. The validity of our approach has been proven with four independent tests. It is shown that an accuracy of 1/10th of a pixel can be achieved.
This paper provides a review of a number of subpixel estimators classified as moment based, local modelling and reconstruction. Three algorithms are described in detail, one from each class. In the first, the basic centroid method is generalized so that it is applicable to a wider class of problems and the general formulation is applied to develop a subpixel ridge estimator. The second algorithm is a restricted polynomial model and is developed based on the assumption that an edge profile remains invariant in a local neighborhood. The third algorithm uses Gaussian interpolation to perform local image reconstruction. Simulations are performed to measure the performance of these three algorithms under ideal and noisy conditions.
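The reconstruction class can be illustrated with the classical three-point Gaussian interpolation of a peak: for a noise-free Gaussian profile the log-intensities are exactly quadratic, so the subpixel offset is recovered exactly. This is a generic textbook estimator, not necessarily the paper's exact formulation.

```python
import numpy as np

def gaussian_subpixel(I_left, I_peak, I_right):
    """Three-point Gaussian interpolation: subpixel offset of a peak
    from the brightest pixel and its two neighbours (log-parabola
    vertex)."""
    l, c, r = np.log(I_left), np.log(I_peak), np.log(I_right)
    return 0.5 * (l - r) / (l - 2 * c + r)

# Sample an ideal Gaussian line profile whose true peak is at 10.3.
x = np.arange(21, dtype=float)
I = np.exp(-0.5 * ((x - 10.3) / 1.5) ** 2)
k = int(I.argmax())
peak = k + gaussian_subpixel(I[k - 1], I[k], I[k + 1])
```

Under noise the estimator degrades gracefully, which is the behavior the simulations in the paper quantify.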
Signalizing points of interest on the object to be measured is a reliable and common method of achieving optimum target location accuracy for many high-precision measurement tasks. In photogrammetric metrology, images of the targets originate from photographs and CCD cameras. Regardless of whether the photographs are scanned or the digital images are captured directly, the overall accuracy of the technique is partly dependent on the precise and accurate location of the target images. However, it is often not clear which technique to choose for a particular task, or what the significant sources of error are. This paper describes aspects of target recognition, thresholding, and location. The results of a series of simulation experiments are used to analyze the performance of subpixel target location techniques such as centroiding, Gaussian shape fitting, and ellipse fitting under varying conditions.
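The centroiding technique from the comparison can be sketched as a grey-value-weighted mean over thresholded pixels; the synthetic target and threshold value below are arbitrary choices for illustration.

```python
import numpy as np

def target_centroid(img, threshold):
    """Grey-value-weighted centroid over pixels above a threshold:
    the basic centroiding estimator for a bright circular target."""
    ys, xs = np.nonzero(img > threshold)
    w = img[ys, xs]
    return np.sum(xs * w) / w.sum(), np.sum(ys * w) / w.sum()

# Synthetic circular target: a 2-D Gaussian blob centred at (12.4, 9.7).
ys, xs = np.mgrid[0:24, 0:24].astype(float)
img = np.exp(-((xs - 12.4) ** 2 + (ys - 9.7) ** 2) / (2 * 2.0 ** 2))
cx, cy = target_centroid(img, 0.05)
```

The threshold choice is one of the error sources the simulations examine: too high a threshold discards signal, too low a threshold admits background noise into the weighted sum.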
It is common to use some form of targeting in close range photogrammetry, as there are seldom enough points on the surface of an object with sufficient contrast. Targets which have been used include: light emitting diodes; black circles on a white background; retro-reflective film; projected laser beams; projected `white light' slides; feature encoded targets; and color targets. This paper discusses the characteristics of targets. In particular, the established retro-reflective target and the promising projected laser target are considered, as they both offer high signal-to-noise ratios together with optimum target sizes. The performance of the targets is analyzed by means of laboratory tests, for example: (1) a retro-reflective target was placed on a rotating mount with the center of the target located on the axis of rotation and the target monitored by a CCD camera under varying conditions; and (2) a laser target was analyzed by experiments designed to indicate the effect of speckle by moving a flat object in a direction perpendicular to the laser beam.
This paper presents a new multiscale edge detection algorithm. The algorithm is based on a new nonlinear filter, which produces scale-space filtering analogous to Gaussian filtering but has several interesting properties, such as viewpoint invariance and automatic edge preservation. From this multiscale representation, the algorithm uses a multidimensional morphological operator to compute the position of edges. A mathematical analysis of the algorithm and its efficient software implementation are discussed. Experimental results illustrating the use of the filter to detect multiscale depth and orientation discontinuities in range images and significant edges in intensity images are also presented.
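The filter itself is the paper's contribution; as a stand-in, the 1D sketch below uses a median-filter scale space (a common nonlinear, edge-preserving smoother) followed by a morphological gradient (dilation minus erosion), keeping only edges that persist at every scale. All names and parameters are illustrative assumptions, not the authors' algorithm:

```python
def median_filter(signal, radius):
    """Nonlinear smoothing: each sample replaced by the median of its
    (2*radius+1)-sample neighbourhood (edges handled by clamping)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sorted(signal[lo:hi])[(hi - lo) // 2])
    return out

def morphological_gradient(signal, radius=1):
    """Dilation minus erosion: large where the signal jumps."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = signal[lo:hi]
        out.append(max(window) - min(window))
    return out

def multiscale_edges(signal, scales=(1, 2, 4), threshold=0.5):
    """Indices whose morphological gradient exceeds the threshold
    at every smoothing scale."""
    edge_sets = []
    for radius in scales:
        smoothed = median_filter(signal, radius)
        grad = morphological_gradient(smoothed)
        edge_sets.append({i for i, g in enumerate(grad) if g > threshold})
    result = edge_sets[0]
    for s in edge_sets[1:]:
        result &= s
    return sorted(result)
```

The median filter plays the role of the paper's edge-preserving scale space: unlike Gaussian smoothing, it does not displace or blur a step, so the edge survives at coarse scales.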
The determination of relative pose between two range images, also called registration, is a ubiquitous problem in computer vision, for geometric model building as well as dimensional inspection. The method presented in this paper takes advantage of the ability of many active optical range sensors to record intensity or even color in addition to the range information. This information is used to improve the registration procedure by constraining potential matches between pairs of points, based on a similarity measure derived from the intensity information. One difficulty in using the intensity information is its dependence on measuring conditions such as distance and orientation; the intensity or color information must therefore first be converted into a viewpoint-independent feature. This can be achieved by inverting an illumination model, by differential feature measurements, or by simple clustering. A robust iterative closest point method is then used to perform the pose determination. Using the intensity can help to speed up convergence or, in cases of remaining degrees of freedom (e.g. on images of a sphere), to additionally constrain the match. The paper describes the algorithmic framework and provides examples using range-and-color images.
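The intensity-gating idea can be reduced to a 2D sketch with a closed-form rigid transform. This is an illustrative simplification under assumed names and tolerances, not the paper's robust 3D algorithm: candidate nearest-neighbour matches are first filtered by intensity similarity, then one ICP step estimates rotation and translation:

```python
import math

def best_match(p, p_int, model, model_int, max_di=0.2):
    """Closest model point whose intensity differs by at most max_di."""
    best, best_d = None, float("inf")
    for q, qi in zip(model, model_int):
        if abs(qi - p_int) > max_di:
            continue  # intensity gate rejects this candidate
        d = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        if d < best_d:
            best, best_d = q, d
    return best

def icp_step(data, data_int, model, model_int):
    """One ICP iteration in 2D: gated matching, then the closed-form
    rigid transform (rotation angle + translation) aligning the pairs."""
    pairs = []
    for p, pi in zip(data, data_int):
        q = best_match(p, pi, model, model_int)
        if q is not None:
            pairs.append((p, q))
    if not pairs:
        raise ValueError("no intensity-compatible matches")
    m = float(len(pairs))
    cx = sum(p[0] for p, _ in pairs) / m
    cy = sum(p[1] for p, _ in pairs) / m
    dx = sum(q[0] for _, q in pairs) / m
    dy = sum(q[1] for _, q in pairs) / m
    num = den = 0.0
    for (px, py), (qx, qy) in pairs:
        ax, ay, bx, by = px - cx, py - cy, qx - dx, qy - dy
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = math.atan2(num, den)
    tx = dx - (cx * math.cos(theta) - cy * math.sin(theta))
    ty = dy - (cx * math.sin(theta) + cy * math.cos(theta))
    return theta, tx, ty
```

On a rotationally symmetric point set, pure geometric nearest neighbours are ambiguous; the intensity gate resolves the ambiguity, which mirrors the paper's point about remaining degrees of freedom on a sphere.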
Regularization theory, first developed to solve edge detection problems in computer vision, has been studied in this research in an attempt to obtain an optimal scale for the Gaussian filter used to smooth head range data. In regularization theory, both the accuracy and the smoothness of the resultant data are considered. Based on regularization theory, Generalized Cross Validation is derived for 2D head range data smoothing. Preliminary results have shown it to be an efficient way to obtain an optimal Gaussian filter scale for the specific head range data.
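For intuition, Generalized Cross Validation for a Gaussian smoother can be sketched in 1D with circular convolution, where the influence matrix is circulant and its trace is simply n times the kernel's centre weight. This is a simplified illustration under assumed names, not the paper's 2D formulation:

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Normalized discrete Gaussian weights over [-radius, radius]."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    w = [math.exp(-0.5 * (k / sigma) ** 2) for k in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w], radius

def smooth_circular(y, sigma):
    """Circular Gaussian convolution (the smoothing matrix A applied to y)."""
    w, r = gaussian_kernel(sigma)
    n = len(y)
    return [sum(w[k + r] * y[(i + k) % n] for k in range(-r, r + 1))
            for i in range(n)]

def gcv_score(y, sigma):
    """GCV(sigma) = n * RSS / (n - trace(A))^2; for the circulant A,
    trace(A) = n * (kernel centre weight)."""
    n = len(y)
    w, r = gaussian_kernel(sigma)
    fit = smooth_circular(y, sigma)
    rss = sum((a - b) ** 2 for a, b in zip(y, fit))
    trace = n * w[r]
    return n * rss / (n - trace) ** 2

def best_sigma(y, candidates=(0.5, 1.0, 2.0, 4.0, 8.0)):
    """The candidate scale minimizing the GCV score."""
    return min(candidates, key=lambda s: gcv_score(y, s))
```

The numerator rewards fidelity to the data, the denominator penalizes over-smoothing through the effective degrees of freedom trace(A), which is the accuracy-versus-smoothness trade-off the abstract describes.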
Conventional vision techniques based on intensity data, such as the data produced by CCD cameras, cannot produce complete 3D measurements for object surfaces. Range sensors, such as laser scanners, do provide complete range data for visible surfaces; however, they may produce erroneous results on surface discontinuities such as edges. In most applications, measurements on all surfaces and edges are required to completely describe the geometric properties of the object, which means that intensity data alone or range data alone will not provide sufficiently complete or accurate information. The technique described in this paper uses a range sensor that simultaneously acquires perfectly registered range and intensity images. It can also integrate the range data with intensity data produced by a separate sensor. The range image is used to determine the shape of the object (surfaces), while the intensity image is used to extract edges and surface features such as targets. The two types of data are then integrated to utilize the best characteristics of each. Specifically, the objective of the integration is to provide highly accurate dimensional measurements on the edges and features. The sensor, its geometric model, the calibration procedure, the combined data approach, and some results of measurements on straight and circular edges (holes) are presented in the paper.
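The combined-data idea can be illustrated on a 1D profile: intensity gives a subpixel edge position, while the range data on either side is fitted with a guard band so that the unreliable range samples at the discontinuity are ignored. This is a hypothetical sketch, not the sensor model or algorithm from the paper; all names and the `guard` parameter are assumptions:

```python
def locate_edge_subpixel(intensity):
    """Subpixel edge position: index of the largest centred-difference
    gradient, refined by parabolic interpolation."""
    grad = [abs(intensity[i + 1] - intensity[i - 1]) / 2.0
            for i in range(1, len(intensity) - 1)]
    k = max(range(len(grad)), key=grad.__getitem__) + 1
    gm1, g0, gp1 = grad[k - 2], grad[k - 1], grad[k]
    denom = gm1 - 2 * g0 + gp1
    offset = 0.5 * (gm1 - gp1) / denom if denom else 0.0
    return k + offset

def fit_line(xs, ys):
    """Least-squares line fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def edge_height_step(range_profile, intensity, guard=3):
    """Height step at the edge: fit lines to the range data on each side,
    skipping `guard` samples around the edge where range is unreliable,
    and evaluate both fits at the subpixel edge position."""
    e = locate_edge_subpixel(intensity)
    k = int(round(e))
    left = list(range(0, k - guard))
    right = list(range(k + guard + 1, len(range_profile)))
    a1, b1 = fit_line(left, [range_profile[i] for i in left])
    a2, b2 = fit_line(right, [range_profile[i] for i in right])
    return e, (a2 + b2 * e) - (a1 + b1 * e)
```

The guard band is the key point: a spurious "mixed pixel" in the range data at the discontinuity never enters either fit, so the edge measurement keeps the intensity image's localization and the range image's metric accuracy.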
In this paper the problem of finding correspondences between target images in multiple views is studied. A 3D target matching algorithm is developed which can be used as a direct replacement for the epipolar constrained matching method without requiring precise camera parameters. Approximate camera parameters are iteratively refined by combining the matching procedure with the bundle adjustment method. Several techniques are discussed to improve the reliability and efficiency of the method, using a 3D space constrained search for the matching of target images across multiple viewpoints. A globally consistent constrained search is developed in which pseudo target images are defined to overcome the problem of occluded targets. Hypothesis testing and heuristic methods are also used to improve the efficiency and robustness of the matching process. An analysis of the methods used is given and a general algorithm is designed. The resulting algorithm is shown to successfully find the correspondences between targets across many viewpoints. Simulation trials and practical tests are performed to verify the reliability and efficiency of the algorithm.
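A minimal version of a 3D-space constrained search can be sketched with two pinhole cameras: two target images are paired when their back-projected rays nearly intersect, i.e. the smallest ray-to-ray gap in 3D is below a tolerance. The camera representation (centre plus orthonormal axes and focal length) and the tolerance are illustrative assumptions, not the paper's parameterization:

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def project(cam, X):
    """Pinhole projection; cam = (centre, right, up, forward, focal)."""
    c, r, u, f, focal = cam
    v = sub(X, c)
    return (focal * dot(v, r) / dot(v, f), focal * dot(v, u) / dot(v, f))

def ray(cam, uv):
    """Back-project an image point to a world ray (origin, direction)."""
    c, r, u, f, focal = cam
    d = tuple(uv[0] / focal * r[i] + uv[1] / focal * u[i] + f[i]
              for i in range(3))
    return c, d

def ray_gap(r1, r2):
    """Smallest distance between two rays (the 3D matching residual)."""
    p1, d1 = r1
    p2, d2 = r2
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = sub(p1, p2)
    d_, e_ = dot(d1, w), dot(d2, w)
    den = a * c - b * b
    if abs(den) < 1e-12:          # parallel rays
        t1, t2 = 0.0, e_ / c
    else:
        t1 = (b * e_ - c * d_) / den
        t2 = (a * e_ - b * d_) / den
    q1 = tuple(p1[i] + t1 * d1[i] for i in range(3))
    q2 = tuple(p2[i] + t2 * d2[i] for i in range(3))
    return dot(sub(q1, q2), sub(q1, q2)) ** 0.5

def match_targets(cam1, pts1, cam2, pts2, tol=0.05):
    """Pair target images whose back-projected rays nearly intersect."""
    matches = []
    for i, uv1 in enumerate(pts1):
        best_j, best_g = None, tol
        for j, uv2 in enumerate(pts2):
            g = ray_gap(ray(cam1, uv1), ray(cam2, uv2))
            if g < best_g:
                best_j, best_g = j, g
        if best_j is not None:
            matches.append((i, best_j))
    return matches
```

Because the residual lives in 3D rather than on an epipolar line, moderately wrong camera parameters only inflate the gaps; the bundle adjustment loop the paper describes would then shrink them iteratively.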
A nonlinear filtering technique for the preprocessing of very low contrast images has been applied to optical profilometry, in an attempt to improve the accuracy of the measurement of objects in harsh conditions. The technique is based on a nonlinear architecture composed of linear Laplacian filters followed by quadratic filters which detect correlated elements. This sequence of operators results in efficient highpass filtering while keeping the signal-to-noise ratio within acceptable limits. When applied to highly transparent or weakly diffusive surfaces, the preelaboration technique has largely improved the accuracy of the profilometer. In this paper the preelaboration technique is presented; in particular, the influence of the nonlinear image elaboration on the overall system performance is discussed.
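The authors' quadratic filters are their own design; as a stand-in with the same Laplacian-then-quadratic structure, the 1D sketch below chains a discrete Laplacian with the Teager-Kaiser quadratic operator, a well-known quadratic filter that responds to spatially correlated oscillations (such as fringes) while suppressing flat background:

```python
def laplacian(signal):
    """Discrete Laplacian [1, -2, 1]; zero-padded at the ends."""
    return [0.0] + [signal[i - 1] - 2 * signal[i] + signal[i + 1]
                    for i in range(1, len(signal) - 1)] + [0.0]

def teager_energy(signal):
    """Teager-Kaiser quadratic operator: x[i]^2 - x[i-1]*x[i+1].
    Constant and linear trends give zero; oscillations give a
    positive, nearly constant response."""
    return [0.0] + [signal[i] ** 2 - signal[i - 1] * signal[i + 1]
                    for i in range(1, len(signal) - 1)] + [0.0]

def nonlinear_highpass(signal):
    """Laplacian followed by the quadratic stage: the Laplacian removes
    the low-frequency background, the quadratic stage keeps only the
    spatially correlated part of what remains."""
    return teager_energy(laplacian(signal))
```

On a low-contrast fringe riding on a large DC level, the Laplacian removes the DC term and the quadratic stage converts the residual oscillation into an essentially constant positive energy, exactly the kind of highpass-with-controlled-SNR behaviour the abstract describes.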
The paper presents an investigation into 3D structure and motion estimation from image sequences. The concept of a variable-dimension 3D Kalman filter is outlined, in which the structure and motion parameters of two or more images are reconstructed at each time instant. The procedure is aimed at applications in visual navigation. For a motion unit of two images, the length of the state vector is restricted to N × 3 coordinates of N tracked natural landmarks plus 12 motion parameters. Even though new points appear in the sequence with each newly processed image, a similar number of points leave the field of view. The length of the state vector is therefore approximately constant (it varies only by a small number of points from image to image) and does not depend on the number of images in the sequence. From a navigational point of view, this feature of the proposed procedure is most important. A quality check is reported comparing the structure and motion parameters of the presented procedure with the results of a simultaneous bundle adjustment. The results refer to an experiment in which an observer moves through a stationary but unknown environment.
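The variable-dimension bookkeeping can be sketched as two state-management operations. The 12 motion parameters and 3 coordinates per landmark follow the abstract; the list-based representation and function names are illustrative assumptions:

```python
MOTION_DIM = 12  # the first 12 state entries hold the motion parameters

def add_landmark(x, P, xyz, var):
    """Append a new landmark's 3 coordinates to the state vector and
    grow the covariance with an uncorrelated diagonal block."""
    n = len(x)
    x = x + list(xyz)
    P = [row + [0.0] * 3 for row in P]          # new zero columns
    for k in range(3):                          # new rows: zeros + diag var
        P.append([0.0] * n + [var if j == k else 0.0 for j in range(3)])
    return x, P

def remove_landmark(x, P, idx):
    """Delete landmark `idx` (its 3 state entries and the matching
    covariance rows/columns) when it leaves the field of view."""
    s = MOTION_DIM + 3 * idx
    keep = [i for i in range(len(x)) if not s <= i < s + 3]
    x = [x[i] for i in keep]
    P = [[P[i][j] for j in keep] for i in keep]
    return x, P
```

Because departures roughly balance arrivals, repeated add/remove calls keep the state dimension near 12 + 3N, which is what makes the filter's cost independent of sequence length.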
An algorithm for automated resolution estimation of on-board imaging video systems (TV, thermal scanner, etc.) is described. The main benefits of the proposed method, compared with the traditional visual method of resolution estimation, are that the results do not depend on the experience of the observer and that the processing and analysis of the results are less time-consuming. An example of a practical realization of the algorithm for flight experiment data is given.
Correlation techniques are widely used to match corresponding areas in stereo image pairs. They provide the pixel correspondence that is required for the generation of 3D data. However, correlation-based approaches can produce false results and inaccurate matching due to noise and to geometric and radiometric distortions in the stereo images. This paper is devoted to the Pytiyev morphological approach, which is not well known outside Russia. Its main idea is based on set-theoretic topology: an image is projected onto the subspace generated by its admissible transformations, including radiometric distortions. The shape of an image thereby acquires a quantitative mathematical description, and a specific correlation measure is constructed. The method can be used effectively in image matching and comparison tasks.
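The projection idea behind the Pytiyev morphological correlation can be sketched for discrete images: the "shape" of a template is the subspace of images constant on its level sets, projection replaces the test image by its mean over each level set, and the correlation is the norm ratio of projection to original. This is a simplified illustration with assumed names, not the paper's implementation:

```python
def shape_projection(test, template):
    """Project `test` onto the shape of `template`: within each level set
    (region of constant value) of the template, replace the test values
    by their mean. Images are flattened single-channel lists of equal
    length."""
    regions = {}
    for i, t in enumerate(template):
        regions.setdefault(t, []).append(i)
    out = [0.0] * len(test)
    for idx in regions.values():
        m = sum(test[i] for i in idx) / len(idx)
        for i in idx:
            out[i] = m
    return out

def morphological_correlation(test, template):
    """Pytiyev morphological correlation ||P f|| / ||f|| in [0, 1]:
    equal to 1 exactly when the test image is constant on every
    template region, i.e. has the same shape."""
    proj = shape_projection(test, template)
    num = sum(v * v for v in proj) ** 0.5
    den = sum(v * v for v in test) ** 0.5
    return num / den if den else 0.0
```

Because any monotone brightness change of the template is still constant on its level sets, the measure is invariant to radiometric distortion, which is precisely why it suits stereo matching under differing illumination.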
Developers of videometric systems must attend to the problems of image storage, retrieval and, for multi-station triangulation, the unambiguous correlation of images with the appropriate epochs. For dynamic testing with multiple cameras, this problem is acute. An `off-the-shelf' component two-camera system was recently developed for measuring the six degree-of-freedom time histories of a free-flight wind tunnel model. Vertical interval time codes (VITC) were used to correlate fields from each camera station that had been stored on video cassette recorders (VCR). Subsequent use and development have confirmed the practicality of this approach. This paper discusses the image management technique used, along with some details of the particular wind tunnel application. The utility of post-test processing of long sequences of VITC-encoded imagery stored to VCR is established.
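The correlation step can be sketched minimally: parse each field's time code into an absolute frame count and pair fields from the two tape streams that carry the same epoch. The frame rate and function names are assumptions for illustration; drop-frame handling is omitted:

```python
def parse_vitc(tc, fps=30):
    """'HH:MM:SS:FF' time code -> absolute frame count."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def correlate_epochs(codes_a, codes_b, fps=30):
    """Pair fields from two VCR streams that carry the same time code.
    Each argument lists the VITC string for successive tape frames;
    returns (index_a, index_b) pairs for every common epoch, in order."""
    by_code = {parse_vitc(tc, fps): i for i, tc in enumerate(codes_b)}
    pairs = []
    for i, tc in enumerate(codes_a):
        t = parse_vitc(tc, fps)
        if t in by_code:
            pairs.append((i, by_code[t]))
    return sorted(pairs)
```

Matching on the embedded code rather than on tape position is what makes the correlation unambiguous even when one recorder starts late or drops fields.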
To meet the practical requirements of a body-in-white visual inspection system, and based on the features of the two kinds of structured-light sensors used in the system (single-stripe and multi-stripe), this paper proposes a global calibration method using the direct linear transformation. The mathematical models of the two kinds of sensors are described briefly, and the calibration of the multi-sensor inspection system in detail. A corresponding calibration device was designed, and the sighting problem between the target and the structured light was solved. A series of experiments demonstrates that the global calibration method is feasible and practical.
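The direct linear transformation itself is standard: fixing the projection matrix element p34 to 1, each known 3D control point and its image measurement yield two linear equations in the remaining 11 parameters, solved by least squares. The sketch below is a pure-Python illustration of that standard formulation, not the authors' implementation:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def dlt_calibrate(world, image):
    """Direct linear transformation: recover the 11 DLT parameters
    (projection matrix with p34 = 1) from >= 6 known 3D control points
    and their image measurements, via linear least squares."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rhs.append(u)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.append(v)
    n = 11  # normal equations A^T A L = A^T b
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    Atb = [sum(r[i] * y for r, y in zip(rows, rhs)) for i in range(n)]
    L = solve(AtA, Atb)
    return [L[0:4], L[4:8], L[8:11] + [1.0]]

def dlt_project(P, X):
    """Project a 3D point with a 3x4 matrix P (homogeneous division)."""
    h = [sum(P[r][c] * (X + (1.0,))[c] for c in range(4)) for r in range(3)]
    return h[0] / h[2], h[1] / h[2]
```

Calibrating every sensor's camera against one common set of control points is what makes the calibration "global": all recovered projection matrices are expressed in the same world frame.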
A digital stereophotogrammetric system is being developed to perform photogrammetric tasks with a minimum of cost and hardware operating complexity. A standard IBM PC-AT/486 is used as the system processing unit. Two stereo viewing modes are available: anaglyph and mirror. With aerial and space survey photographs of different projections, the problems of interior, relative and absolute orientation are solved in a rigorous and efficient way. In a future stage of system development, digital data from satellites will be processed. Stereo measurement operations can be executed both manually and automatically. The system enables processing of digital images larger than available memory, so images with a pixel size of 5-10 µm can be processed; the image areas under operation are stored on disk beforehand. System software enables fast and easy access to any area of any image stored on the disk.
In many instances a new product design starts with a physical prototype, and the CAD model is then extracted from the physical model. Moreover, many products do not have an associated CAD model; to redesign or modify such a product, a CAD model must be available. The creation of a CAD model and the extraction of manufacturing information from a prototype or product is called reverse engineering. In general, reverse engineering is accomplished in three stages: part digitization, data segmentation, and surface modeling. Techniques for part digitizing are well established and commercial systems are available. The less developed area is modeling the part from the cloud of points created by the digitizing systems. In this paper the emphasis is on data processing and CAD modeling for reverse engineering, and a new method for developing a CAD model of an existing part is discussed. In this approach, the cloud of points acquired from the part surface is segmented such that each segment represents a set of coordinate data belonging to a surface segment. The data is segmented using parameters derived from differential geometry. Accurate segmentation of the surfaces is achieved using a neural network system that takes the values of the defined surface parameters as input and identifies the surface segments.
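"Parameters derived from differential geometry" typically means mean (H) and Gaussian (K) curvature, whose sign pattern gives the classic HK surface-type labels. The sketch below computes them on a height-field grid by finite differences; it illustrates only the curvature features that could feed the paper's neural network stage, which is omitted here, and all names are illustrative:

```python
def hk_classify(z, eps=1e-4):
    """Label each interior grid point of a height field z[y][x] by the
    signs of its mean (H) and Gaussian (K) curvature: the classic HK
    surface-type segmentation (flat / ridge / valley / peak / pit /
    saddle). `eps` is the zero-curvature tolerance."""
    rows, cols = len(z), len(z[0])
    labels = [["border"] * cols for _ in range(rows)]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            # first and second finite differences (unit grid spacing)
            zx = (z[y][x + 1] - z[y][x - 1]) / 2.0
            zy = (z[y + 1][x] - z[y - 1][x]) / 2.0
            zxx = z[y][x + 1] - 2 * z[y][x] + z[y][x - 1]
            zyy = z[y + 1][x] - 2 * z[y][x] + z[y - 1][x]
            zxy = (z[y + 1][x + 1] - z[y + 1][x - 1]
                   - z[y - 1][x + 1] + z[y - 1][x - 1]) / 4.0
            g = 1 + zx * zx + zy * zy
            K = (zxx * zyy - zxy * zxy) / (g * g)
            H = ((1 + zy * zy) * zxx - 2 * zx * zy * zxy
                 + (1 + zx * zx) * zyy) / (2 * g ** 1.5)
            if abs(H) < eps and abs(K) < eps:
                lab = "flat"
            elif abs(K) < eps:
                lab = "ridge" if H < 0 else "valley"
            elif K > 0:
                lab = "peak" if H < 0 else "pit"
            else:
                lab = "saddle"
            labels[y][x] = lab
    return labels
```

Because H and K are intrinsic to the surface, the resulting labels are independent of how the part was oriented during digitizing, which is what makes them usable segmentation features.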