This paper addresses the major issues in applying machine-vision technology as an in-process, noncontact gauging tool on the factory floor. These include the following: (1) How can machine vision be used for an on-line gauging application? (2) What is meant by the terms resolution, accuracy, repeatability, and tolerance, and how do they relate to each other? (3) What are the imaging concerns: part-edge quality, back lighting versus front lighting, TV camera usage, optical considerations, and telecentric (constant-magnification) lenses? (4) What is meant by subpixel resolution and accuracy, and how can they be achieved? (5) What are the practical limits for on-line industrial machine-vision gauging applications? These questions are put into perspective by examining existing applications of on-line machine-vision gauging systems.
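One common way to obtain subpixel edge location, of the kind the questions above refer to, is to fit a parabola through the gradient samples around the strongest intensity transition and take the vertex as the edge position. The sketch below is a hypothetical illustration of that idea, not the method of any particular paper.

```python
# Hypothetical subpixel edge locator: fit a parabola through the three
# gradient samples around the strongest gradient and take its vertex as
# the edge position (a fraction of a pixel between integer samples).

def subpixel_edge(profile):
    """Locate an edge in a 1-D intensity profile to subpixel precision."""
    grad = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]
    k = max(range(1, len(grad) - 1), key=lambda i: abs(grad[i]))
    gm, g0, gp = abs(grad[k - 1]), abs(grad[k]), abs(grad[k + 1])
    denom = gm - 2 * g0 + gp
    offset = 0.0 if denom == 0 else 0.5 * (gm - gp) / denom
    return k + 0.5 + offset  # edge lies between samples k and k+1
```

On a symmetric blurred step such as `[0, 0, 0, 2, 8, 10, 10, 10]` the estimate falls exactly between the two central samples.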
Three-dimensional gaging involves the measurement and mapping of 3-D surfaces. Gaging accuracy depends on measurement accuracy in the images, image scale, and stereo geometry. Multiple cameras are often needed to provide adequate stereoscopic coverage of the object. This paper reports on an automatic 3-D gaging system that is being developed at the University of Illinois at Urbana-Champaign. A portable 3-D target field, consisting of 198 targets each identified with a bar code, is used to determine the interior and exterior orientation parameters of each camera. Image-processing algorithms have been developed to identify conjugate image points in stereo pairs of images, and the object-space coordinates of these points are computed by stereo intersection. Software has also been developed for analysis and data editing.
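Stereo intersection of the kind described can be sketched in a few lines: given two camera centers and the ray directions toward a conjugate image point, the object point can be taken as the midpoint of the shortest segment joining the two rays. This is a minimal hypothetical illustration, not the authors' implementation.

```python
# Minimal stereo intersection sketch: intersect rays c1 + t*d1 and
# c2 + s*d2 by finding their points of closest approach and returning
# the midpoint of the joining segment.

def intersect_rays(c1, d1, c2, d2):
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, t): return [t * x for x in a]
    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    den = a * c - b * b                  # zero only for parallel rays
    t = (b * e - c * d) / den
    s = (a * e - b * d) / den
    p1 = add(c1, scale(d1, t))
    p2 = add(c2, scale(d2, s))
    return [0.5 * (x + y) for x, y in zip(p1, p2)]
```

For exactly intersecting rays the two closest points coincide; with real image measurements they do not, and the residual gap is a useful quality indicator for the conjugate-point identification.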
Industrial measurement applications include many tasks which are time critical. While faster measurement techniques are always desirable, there are also certain levels of accuracy and repeatability which must be maintained. Video-imaging technology has progressed to a point where acceptable accuracy levels for many applications should be attainable. This paper discusses experiences with measurement of a carefully calibrated targeted surface. Repeatability and accuracy are examined for a VIDEK MEGAPLUS camera using Imagraph image-processing hardware in a PC-BUS system. The target field is calibrated to nearly 1 part in 300,000 using the Geodetic Services, Inc. (GSI) STARS photogrammetric system. Computer simulations suggested potential accuracies of 1 part in 30,000 (with 1/20 pixel resolution) to 1 part in 80,000 (with 1/50 pixel resolution). Actual results on the order of 1 part in 60,000 were obtained. While speed is not central to the tests conducted, some expectations for speed are discussed. Measurements presented were completed in less than 1 hour per measurement cycle. Limitations and advantages of the tested hardware are noted with some thoughts on future implications.
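The quoted figures can be checked with a back-of-envelope relation between subpixel measurement resolution and full-format relative accuracy. This is an illustrative calculation only: the roughly 1320-pixel width assumed for the MEGAPLUS sensor is an assumption here, and achievable accuracy also depends on network geometry and redundancy.

```python
# Rough rule of thumb: relative accuracy over the image format is about
# (pixels across the format) * (subpixel denominator). Illustrative only;
# the 1320-pixel format width is an assumed value.

def relative_accuracy(pixels_across, subpixel_denominator):
    """Approximate 1:N relative accuracy across the full image format."""
    return pixels_across * subpixel_denominator

# An assumed ~1320-pixel format at 1/20-pixel resolution gives roughly
# 1 part in 26,000 -- the same order as the simulated 1 part in 30,000.
```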
In industrial dimensional inspection and quality control there is an increasing need for fast and automatic high-accuracy measurement systems. For vision systems to meet these requirements, all system components have to be tuned carefully. A key role in such a system is played by the measurement algorithm. This paper demonstrates how the area-based multiphoto geometrically constrained (MPGC) matching algorithm can be modified for the highly accurate measurement of object edges. The algorithm is expected to allow the measurement of nontargeted but well-defined object features with a relative accuracy of 1:25,000.
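Area-based matching in its simplest form is a template search minimizing the sum of squared differences (SSD); the MPGC algorithm of the paper adds geometric constraints across multiple photos and least-squares refinement, which this hypothetical sketch omits.

```python
# Simplest area-based matching: slide a template along a 1-D signal and
# return the offset minimizing the sum of squared differences (SSD).
# MPGC matching refines such a candidate under multi-photo geometry.

def best_match(template, signal):
    """Return the offset in `signal` minimizing SSD against `template`."""
    n = len(template)
    def ssd(off):
        return sum((signal[off + i] - template[i]) ** 2 for i in range(n))
    return min(range(len(signal) - n + 1), key=ssd)
```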
Pixel density limits the resolvable detail and edge-location precision of any camera. Newer cameras address this problem by increasing the number of pixels or, in the case of the Kontron ProgRes 3000 invented by Reimer Lenz, by moving small pixels behind the lens to 'fill in' the image area. This paper discusses an approach similar to Lenz's, except that the entire camera is moved. An image is taken, then the camera is moved to a new location, from a fraction of a pixel to a full pixel away from its first location, and a second image is taken. This is repeated until sufficient data, or repetition of data, is obtained. The task then becomes to reassemble the images in the correct locations to obtain an image of enhanced precision. Moving the camera gives rise to a number of problems which must then be dealt with.
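The reassembly task can be pictured with an idealized one-dimensional case. Assuming K exposures displaced by exactly 1/K pixel (a simplification; real camera motion is neither exact nor noise-free), sample i of exposure k was taken at position i + k/K, so interleaving the exposures yields a K-times denser sampling grid.

```python
# Idealized microscan reassembly: K exposures shifted by 1/K pixel are
# interleaved into one finer grid. Real systems must also estimate the
# actual shifts, which this sketch assumes are exact.

def interleave(exposures):
    """exposures[k][i] was sampled at position i + k/len(exposures)."""
    fine = []
    for i in range(len(exposures[0])):
        for k in range(len(exposures)):
            fine.append(exposures[k][i])
    return fine
```

With two half-pixel-shifted exposures, `interleave` doubles the sampling density along the shift axis.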
Structured light is a flexible method which is often used for the measurement of surfaces without natural texture. A basic difficulty is the solution of the correspondence problem, which often leads to ambiguities due to high spatial frequencies of the projected patterns or discontinuities in the object. The method presented here is based on the projection of a very dense dot pattern and on three or more images of the object rather than a stereo pair, and it offers reliable establishment of correspondences without requiring any approximate values or an initial match established by a human operator, as some other systems do. This paper gives an overview of the hardware setup and the processing chain; results are shown from deformation measurements of a carbon wing panel under load and from the determination of the surface of a model car.
Recent results obtained from a stereoscopic-vision system incorporating line-scan sensors are described. The research forms a part of the continuing program of work in both human- and machine-vision systems carried out at Nottingham Polytechnic. Line-scan sensors have been used extensively in shop-floor environments, for applications ranging from pattern matching in the textile industry to quality inspection of printed circuit boards. However, all of this work has involved the use of a single line-scan device in a two-dimensional role. It is the author's intention to show that a logical progression of this previous research is to construct a stereoscopic sensor from line-scan devices, enabling three-dimensional coordinate information to be obtained from a moving object volume.
A significant proportion of the disabled population has seating requirements that simply are not addressed adequately by the available range of standard commercial wheelchair products. Attempts at addressing the seating needs of this group typically have been carried out in major healthcare facilities. The usual process has been to make major modifications and/or additions to standard wheelchair componentry. Generally, each client would require a unique set of modifications to the wheelchair, in essence resulting in a seat system custom-made for the individual. Such modification procedures are expensive and time consuming and, depending on complexity, can typically result in waiting periods of up to one year or more before delivery of the finished seating product. In an attempt to reduce cost and delivery time of such seating products, about six years ago Otto Bock Orthopedic Industries of Canada Ltd. introduced the Modular Orthotic Seating System (MOSS). The system consists of extruded aluminum seat and back frames designed to accept a suite of specifically contoured, molded polyurethane seat and back cushions, and a range of accessories and supportive devices. The frames are subsequently mounted to a wheelbase. The frame system comes in a variety of widths and features variable seat depth, back height, and seat-to-back angle. With these stock components the MOSS system provides a cost- and time-effective way of delivering a high degree of customized fit for the moderately to severely disabled client.
A methodology for the calibration of a low-cost CCD camera mounted on a coordinate measuring machine (CMM) is described. Several kinds of geometric distortion are taken into account in the camera model. Five distortion parameters are introduced, modeling both radial and non-radial effects. A two-phase methodology is developed for the parameter estimation. In the first phase, a closed-form solution is reached for the intrinsic and extrinsic parameters, neglecting distortion effects. In the second phase, an iterative procedure estimates the distortion and non-distortion parameters separately, to avoid harmful interactions. The main novelty of the solution is the use of only linear systems, avoiding any non-linear optimization process. The low-level image processing applied to the calibration images is described. Experimental results and comparisons with more simplified camera models and estimation strategies are presented.
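A distortion model in the spirit described can be illustrated with the widely used Brown-Conrady form: radial terms plus decentering terms applied to ideal image coordinates. The four coefficients below are illustrative; the exact five parameters of the paper's model may differ.

```python
# Illustrative lens distortion model (Brown-Conrady style): two radial
# coefficients (k1, k2) and two decentering coefficients (p1, p2).
# The paper's own five-parameter model may differ in detail.

def distort(x, y, k1, k2, p1, p2):
    """Map ideal image coordinates (x, y) to distorted coordinates."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x * radial + dx, y * radial + dy
```

The principal point is undistorted (r = 0), and purely radial distortion displaces points along the radius, which is the behavior the two estimation phases must separate from the pinhole parameters.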
In 1986, a space-qualified version of the real-time photogrammetry system invented by Pinkney and Perratt in 1978 was developed under contract to the Canadian Astronaut Program by Spar Aerospace and Leigh Instruments Ltd. as a space-flight experiment called the Space Vision System (SVS). Originally scheduled to fly in March of 1987, the SVS is now slated to fly on the shuttle in September of 1992 as part of a series of experiments called Canex-2. Over a period of three days the functionality of the SVS will be verified through a series of proximity operations with a test satellite called the Canadian Target Assembly (CTA). This hardware and the flight experiment are briefly described in a previous paper by Pinkney et al. One aspect of flight preparation that is crucial to the success of the experiment is the calibration procedure utilized by the SVS. On-orbit conditions present many difficulties that are not typical of the laboratory. Extreme temperatures cause the shape of the cargo bay, which is the reference coordinate system for the photogrammetry platform, to thermally deform every 45 minutes. The pan/tilt mechanism for the current shuttle closed-circuit television (CCTV) cameras was never intended to be used for photogrammetry. Experience gained in 1984 on the Canex-1 mission showed that the pan/tilt mechanisms could be stalled by the mechanical stiffness of their own power wires, and, because their angles are known only from command encoding, the pan/tilt information available to the operator in the aft flight deck was generally suspect. This paper deals with the SVS calibration techniques and the procedures associated with the calibration of the current shuttle cameras and the photogrammetry platform, both in preparation for flight and on orbit.
Recent simulations have shown that this self-consistent approach yields position and orientation accuracies that would allow an operator using SVS displays to control the shuttle's remote manipulator system (RMS), or 'Canadarm', with substantially more precision than is available at present.
This article describes methods to quantify the signal transfer characteristics of CCD cameras. The Fresnel zone plate is a nearly ideal tool for the direct determination of the modulation transfer function and of chromatic aberrations of the optical system. It is used to measure the properties of the newly developed CCD camera ProgRes 3000, which uses piezo-controlled microscanning to digitize images with a variable resolution from 500 X 580 up to 3000 X 2320 picture elements for each of the three color channels red, green, and blue. The geometrical accuracy of the piezo positioners is measured with correlation-like algorithms. Using special test patterns, a measurement accuracy of better than 1/1000 of the sensor element pitch is reached, whereas the positioning accuracy is about 1/100 of a sensor element. In order to fully exploit the radiometric dynamic range of the sensor used within the ProgRes 3000, experiments have been performed with Peltier cooling and pixel-synchronous 12-bit analog-to-digital conversion. A dynamic range of 2500:1 is reached, sufficient for working in low-light-level environments or for digitizing high-contrast color slides or x-ray images.
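What makes the Fresnel zone plate attractive as a test target is that its intensity follows cos(k*r^2), so the local spatial frequency grows linearly with radius and a single image sweeps all frequencies up to the Nyquist limit. A synthetic pattern of this kind (with arbitrary illustrative parameters) can be generated as follows:

```python
import math

# Synthetic Fresnel zone-plate test pattern: intensity 0.5*(1 + cos(k*r^2))
# about the image center. The local spatial frequency increases linearly
# with radius, which is why one image probes the full MTF.

def zone_plate(size, k):
    cx = cy = (size - 1) / 2.0
    return [[0.5 * (1 + math.cos(k * ((x - cx) ** 2 + (y - cy) ** 2)))
             for x in range(size)] for y in range(size)]
```

Imaging such a pattern and measuring the contrast of the rings as a function of radius gives the modulation transfer function directly; doing so per color channel exposes chromatic aberrations.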
Changes in the shape of cars due to impact in crash tests are determined from the deformation vectors of points located in specific positions on the car. The coordinates of these points are measured before as well as after the test. Costs and measurement time can be significantly reduced by automated dimensional inspection with digital photogrammetry. This paper reports on a pilot test in which the measurement of a car prepared for a crash test was performed under practical conditions. It was shown that an accuracy of 1 mm in each coordinate axis within a measurement volume of 5 X 2 X 2 m³ can be achieved under factory-floor conditions with low-cost CCD cameras. A high level of automation and robustness was demonstrated. The measurements were performed in a very short time with model-driven techniques.
An approach to the analysis of structural deformation behavior using the method of multispectral analysis has been developed by the authors. The approach is suitable for concrete dams where data are acquired through close-range photogrammetric means. The analysis is based on the simultaneous adjustment of data obtained from different spectral ranges. The approach consists of three basic processes: preliminary identification of deformation models, estimation of the deformation and displacement parameters, and multispectral analysis of the deformation models. A numerical example is given using data obtained from a WILD P32 phototheodolite over two epochs of observation in the Klang Gates concrete dam deformation behavior study.
A method is described for generating an image from measurements of weak electromagnetic signals in a region close to their source. The computer-assisted contactless test system consists of an array of sensors exposed to the reactive electric and magnetic fields surrounding a printed circuit-board assembly. A probe antenna scans the fields in a reference plane located at a distance from the board that is small in comparison to the wavelength; the signal from the sensor is acquired in the frequency domain by a spectrum analyzer and then transferred to a controlling computer. Measurements are made on a regularly spaced grid in a plane parallel to the board. A noncoherent estimate of pixel value is defined and computed for each node of the grid. A practical implementation of the method and a simple fault-detection technique are presented.
The increased use of digital-imaging techniques in medical diagnosis has generated voluminous amounts of data. Data volume and its management are of particular importance when dealing with three-dimensional image derivatives. Subsequent use of the three-dimensional image data necessitates a compact and versatile data structure which permits some geometric operations and queries to be accomplished efficiently. This paper outlines such a compressed and flexible data structure for use with three-dimensional image data sets. Some examples from stereotactic neurosurgical applications are given to demonstrate the appropriateness of the proposed data structure.
A performance evaluation approach for a vision metrology system has been developed for the application of automated dimensional inspection. The approach is designed so that the parameters and conditions under which the system functions as planned, or fails, are clear and well understood. The statistical stability of the measured object properties as well as the overall system error are analyzed.
For many high-level vision tasks, such as scene interpretation or object recognition, it is advantageous to know the visible surface of the object space. An approach based on scale-space techniques and matching in object space is described. At every discrete step in the scale space, images warped to the surface obtained in the previous step are matched. This corresponds conceptually to matching in object space, and it helps reduce the foreshortening problems that are associated with any matching method. The 3-D positions of the matched points form a sparse set that must be densified to obtain the surface. At every level the matching results are analyzed and a hypothesis about breaklines and occlusions is formulated. The surface is found by interpolating the 3-D points found by matching; the interpolation stops at the boundaries of suspected breaklines and occlusions. An independent analysis of the surface normals may confirm or reject the hypothesis, and yet another independent clue about breaklines or occlusions stems from edges. At the final step the warped images become orthophotos and the matching vector vanishes for all points. The theoretical description of the model is followed by experimental results.
This paper proposes a new approach to the well-known matching, or correspondence, problem. The approach integrates image-space-based matching techniques with general knowledge of the scenes. Low-level processing (edge detection, segmentation) and candidate matching are performed in image space, while the final matching is determined in object space by a relaxation procedure which combines the results of candidate matching, general geometric constraints of scenes (GGCS), and other constraints of image matching. The innovative features of the approach lie in back-projecting (back-tracing) the line pairs from candidate matching into the object (scene) space and in combining all the constraints in a unified process. The concept of "figure continuity" in the image space is replaced by a new concept of "general geometric constraints of scenes" (GGCS), which is object-space based and has much broader scope than figure continuity. Under the new scheme, the final matching becomes a consistent labeling problem. An experiment is used to illustrate the approach.
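Consistent labeling by relaxation, as used for the final matching step, can be sketched generically: each feature carries scores over its candidate matches, the scores are iteratively reinforced by the support of compatible neighboring matches and renormalized, and the highest-scoring candidate survives. This is a generic discrete-relaxation sketch, not the authors' exact procedure or compatibility functions.

```python
# Generic relaxation-labeling sketch: scores[i][j] is the score of
# candidate j for feature i; support(i, j, scores) returns the support
# contributed by compatible matches of the other features.

def relax(scores, support, iterations=10):
    """Iteratively reinforce and renormalize candidate scores,
    then return the index of the winning candidate per feature."""
    for _ in range(iterations):
        new = []
        for i, row in enumerate(scores):
            updated = [s * (1 + support(i, j, scores))
                       for j, s in enumerate(row)]
            total = sum(updated) or 1.0
            new.append([u / total for u in updated])
        scores = new
    return [max(range(len(row)), key=row.__getitem__) for row in scores]
```

In the paper's setting the support function would encode the GGCS and the other image-matching constraints; here it is left abstract.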