Three-dimensional surface point data is often useful for robot control and inspection tasks. The design of sensors for collecting this data involves many choices, with selections made on the basis of data rate, accuracy, field of view, safety, size, object properties, and the need for registered range and brightness data. A sensor has been designed with data rates of 30 kHz for brightness, 2 kHz for range in the scanning mode, and 250 Hz for range in the random access mode. Range data accuracy is about 0.1% of the field of view. The main components of the sensor are a low-power laser (1 mW), a 1024-element CCD linear array camera, a galvanometer scanner, and a special interface to a minicomputer. Alternative designs include stereo without using projected energy, white light projection, ultrasound, time-of-flight measurements, multiple detectors, projecting planes versus beams, two-dimensional array cameras and "position" sensors, and a variety of scanning mechanisms.
Three major approaches to pattern recognition, (1) template matching, (2) the decision-theoretic approach, and (3) the structural and syntactic approach, are briefly introduced. The application of these approaches to automatic visual inspection of manufactured products is then reviewed. A more general method for automatic visual inspection of IC chips is then proposed. Several practical examples are included for illustration.
We propose a representation of two-dimensional shape and contour that has local support. A contour and its region are segmented into subparts with associated partly specified descriptions that are iteratively refined by the descriptions of adjacent subparts in a process of local frame propagation. Curved shapes are described in terms of smoothed local symmetries that combine features of generalized cylinders and the Symmetric Axis Transform. Shape descriptions, together with a local potential function, are used to determine grasp points for a parallel jaw gripper.
A three dimensional representation of a part is reconstructed from multiple camera views. Measurements are then collected from this three dimensional data and can be used to detect faults in the manufacturing process. The manufacturing faults are detected as visual abnormalities in the final parts. These abnormalities correspond to error conditions in earlier phases of manufacturing and could represent equipment failure, equipment wear or the use of a faulty control algorithm. A gage station which collects visual information is discussed. The algorithm which converts the visual information into a three dimensional representation of the part is presented and compared to other similar reconstruction strategies. Once the data have been collected and reconstructed, measurements are taken and correlated with possible error conditions. New correlations between the part measurements and manufacturing errors can be added to the control system as problems occur. For example, hammer wear in an open-die forge can be discovered by measuring the length of a work piece after it has been struck. Along with each causal relationship there is a suggested course of action which is intended to be an immediate remedy for the error condition. In the forge example, a simple corrective action would be to move the hammers closer together to account for their wear. This makes it possible for the overall system to approach immunity to catastrophic errors while minimizing the number of defective parts.
This paper describes a new technique for modeling 3D objects that is applicable to recognition tasks in advanced automation. Objects are represented in terms of canonic 2D models which can be used to determine the identity, location and orientation of an unknown object. The reduction in dimensionality is achieved by factoring the space of all possible perspective projections of an object into a set of characteristic views, where each such view defines a characteristic-view domain within which all projections are topologically identical and related by a linear transformation. The characteristic views of an object can then be hierarchically structured for efficient classification. The line-junction labelling constraints are used to match a characteristic view to a given unknown-object projection, and determination of the unknown-object projection-to-characteristic view transformation then provides information about the identity as well as the location and orientation of the object.
A method for finding planar faces in sets of range data images is described. First, the object points are extracted from the range data image (or images). These are then organized in a k-d tree using the x, y and z values as keys. From the k-d tree a spatial proximity graph is efficiently constructed. Finally, a set of seed points for a possible face is chosen, and the spatial proximity graph is used to guide the search for neighboring points lying on this face.
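As a rough illustration of the pipeline above (proximity graph, seed selection, graph-guided search), here is a minimal sketch in Python. It substitutes brute-force neighbor queries for the paper's k-d tree, and the function name, tolerances, and the SVD plane fit are assumptions, not the authors' implementation:

```python
import numpy as np

def find_planar_face(points, seed_idx, radius=0.3, dist_tol=0.02):
    """Grow a planar face from a seed point via a proximity graph.

    The paper builds the proximity graph efficiently from a k-d tree;
    brute-force neighbor queries are used here for clarity. Tolerances
    are illustrative.
    """
    n = len(points)
    # Proximity graph: points within `radius` of each other are neighbors.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbors = [np.nonzero((d[i] < radius) & (d[i] > 0))[0] for i in range(n)]

    # Fit a plane to a set of points by least squares (SVD).
    def fit_plane(idx):
        pts = points[idx]
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)
        return centroid, vt[-1]          # vt[-1]: direction of least variance

    face = set([seed_idx]) | set(neighbors[seed_idx])
    centroid, normal = fit_plane(list(face))

    # Breadth-first growth guided by the proximity graph: a neighbor joins
    # the face if it lies close enough to the fitted plane.
    frontier = list(face)
    while frontier:
        i = frontier.pop()
        for j in neighbors[i]:
            if j in face:
                continue
            if abs(np.dot(points[j] - centroid, normal)) < dist_tol:
                face.add(j)
                frontier.append(j)
    return face
```

A fuller version would refit the plane as the face grows and try several seed sets, as the abstract's "possible face" wording suggests.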
By viewing samples of good and bad parts (gray-scale picture data), the computer vision system forms a model that allows it to distinguish between good and bad parts. The model contains a set of feature points, used for determining part location, and a set of inspection tests, which apply to pertinent regions of the part. The location of parts relative to the camera can vary, since the system brings all subsequent images into registration with the first image. Using the registered picture data, the system determines regions that differ significantly in intensity or edge distribution between good and bad parts. For each of these regions, the system then determines test parameters which allow it to distinguish between good parts and bad parts with an arbitrary number of defects.
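The region-learning step described above might be sketched as follows, assuming registration has already been done. The z-score test, thresholds, and function name are illustrative assumptions, and the paper's edge-distribution tests are omitted:

```python
import numpy as np

def learn_inspection_regions(good, bad, z_thresh=4.0):
    """Find pixels whose intensity differs significantly between
    registered samples of good and bad parts (intensity test only;
    the edge-distribution test from the paper is omitted)."""
    good = np.asarray(good, dtype=float)   # shape (n_good, H, W)
    bad = np.asarray(bad, dtype=float)     # shape (n_bad, H, W)
    mu = good.mean(axis=0)
    sigma = good.std(axis=0) + 1e-6        # avoid division by zero
    # z-score of the mean bad image against the good-part statistics
    z = np.abs(bad.mean(axis=0) - mu) / sigma
    return z > z_thresh                    # boolean inspection mask
```

The returned mask marks candidate inspection regions; per-region test parameters would then be tuned on further samples.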
Development of an experimental robotic cell using machine vision sensory-feedback is reported. The cell contains a computer-controlled electric robot, a computer-controlled vision system, and a computer-controlled fabrication device. All of these components are under over-all control of a cell supervisory computer. Parts are recognized and their location determined to an accuracy of 5 mm for robot acquisition. Point-of-work vision to an accuracy of 0.2 mm is used for workpiece positioning and in-process inspection. Each of the individual cell components is controlled by dedicated software. Application programs which run in the cell supervisory computer exercise complete control over each of the individual cell components. Part-specific configuration data for vision and manipulation are accessed from a disc file or higher-level computer by the cell computer. The cell is being applied experimentally to drilling and riveting of aircraft parts.
This paper describes a noncontact visual profile sensing method and an implementation for the inspection of metal turbine parts for visual surface defects. The inspection system consists of two sensors, each with its associated hardware preprocessor, and a manipulation system. Four general purpose computers control the entire system and perform high level part quality decisions. One cpu serves as the system executive, two serve as high-level image processors in concert with the two sensors and preprocessors, and one cpu serves as a controller for the fifteen servo axes which comprise the manipulator system. This paper will describe the sensor and the preprocessor hardware. It provides additional information to a previous paper describing only the profile sensing scheme. For ease of reading, some previous material is repeated from that reference [1].
Features which are easily extracted from an image are often at too low a level to be unambiguously matched to features of a model. However, if an elementary feature e_i has some structure, only a limited number of transformations T_ij can match it to similar model features m_j. By extracting a set of features e_i, i = 1, ..., n, the transformation parameter space can be populated with a number of potential transformations T_ij, i = 1, ..., n; j = 1, ..., k. Clustering in this parameter space derives a transformation T that is supported by a large amount of local matching evidence. Simple clustering techniques are described for handling combined rotation and translation. Results are reported using the clustering technique with edge features and circular neighborhood features to acquire 2D objects.
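A minimal sketch of the clustering idea for combined rotation and translation, using coarse binning of the (theta, tx, ty) parameter space. Features are simplified here to oriented points, and the binning scheme, bin sizes, and names are assumptions rather than the paper's exact method:

```python
import math
from collections import Counter

def cluster_transforms(model_feats, image_feats,
                       angle_bin=math.radians(5.0), trans_bin=2.0):
    """Populate the rotation/translation parameter space with transform
    hypotheses and return the most heavily supported one.

    Each feature is (x, y, orientation). Every model/image pairing
    hypothesizes one transform T = (theta, tx, ty); clustering is done
    by coarse binning, and the fullest bin wins.
    """
    votes = Counter()
    for mx, my, ma in model_feats:
        for ix, iy, ia in image_feats:
            theta = (ia - ma) % (2 * math.pi)
            c, s = math.cos(theta), math.sin(theta)
            # translation mapping the rotated model point onto the image point
            tx = ix - (c * mx - s * my)
            ty = iy - (s * mx + c * my)
            key = (round(theta / angle_bin),
                   round(tx / trans_bin), round(ty / trans_bin))
            votes[key] += 1
    (kt, kx, ky), _ = votes.most_common(1)[0]
    return kt * angle_bin, kx * trans_bin, ky * trans_bin
```

Correct pairings all vote for the same bin while spurious pairings scatter, so the peak bin recovers the object pose even with clutter features present.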
An installation of an automatic vision inspection system to inspect sheet metal parts illustrates the mundane engineering required for successful application of this advanced technology in a factory setting.
We use the facet model to accomplish step edge detection. The essence of the facet model is that any analysis made on the basis of the pixel values in some neighborhood has its final authoritative interpretation relative to the underlying grey tone intensity surface of which the neighborhood pixel values are observed noisy samples. Pixels which are part of regions have simple grey tone intensity surfaces over their areas. Pixels which have an edge in them have complex grey tone intensity surfaces over their areas. Specifically, an edge moves through a pixel if and only if there is some point in the pixel's area having a zero crossing of the second directional derivative taken in the direction of a non-zero gradient at the pixel's center. To determine whether or not a pixel should be marked as a step edge pixel, its underlying grey tone intensity surface must be estimated on the basis of the pixels in its neighborhood. For this, we use a functional form consisting of a linear combination of the tensor products of discrete orthogonal polynomials of up to degree three. The appropriate directional derivatives are easily computed from this kind of function. Upon comparing the performance of this zero crossing of second directional derivative operator with the Prewitt gradient operator and the Marr-Hildreth zero crossing of Laplacian operator, we find that it is the best performer, followed by the Prewitt gradient operator. The Marr-Hildreth zero crossing of Laplacian operator performs the worst.
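The edge test can be illustrated in one dimension, assuming a cubic least-squares fit per neighborhood. The paper fits bicubic surfaces via discrete orthogonal polynomials and takes the second derivative along the gradient direction; this 1D sketch and its thresholds are simplifications:

```python
import numpy as np

def step_edge_pixels_1d(signal, half=2, grad_thresh=0.5):
    """1D illustration of the facet-model step edge test.

    A cubic is least-squares fitted to each pixel's neighborhood; the
    pixel is marked as a step edge if the second derivative of the fit
    has a zero crossing inside the pixel where the first derivative
    (the "gradient") is non-zero. Thresholds are illustrative.
    """
    x = np.arange(-half, half + 1, dtype=float)
    edges = []
    for i in range(half, len(signal) - half):
        window = signal[i - half:i + half + 1]
        # cubic fit f(x) = c3 x^3 + c2 x^2 + c1 x + c0, centered at pixel i
        c3, c2, c1, c0 = np.polyfit(x, window, 3)
        if abs(c1) < grad_thresh:      # gradient too weak at pixel center
            continue
        if abs(c3) < 1e-12:            # f'' has no zero crossing
            continue
        x0 = -c2 / (3.0 * c3)          # zero of f''(x) = 6 c3 x + 2 c2
        if abs(x0) <= 0.5:             # zero crossing falls inside the pixel
            edges.append(i)
    return edges
```

On an ideal step the zero crossing lands inside the pixels adjacent to the transition, so both are marked; the 2D operator additionally requires the derivative direction to follow the local gradient.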
The article describes the facilities and work of the UWIST machine-vision laboratory. Among the topics discussed are: (a) general items (the room, shrouding, object manipulation, lighting and optics) (b) image sensors (television, flying spot scanners and solid-state devices) (c) image processing (a large versatile system and a smaller portable system) (d) applications studies to illustrate the kind of work currently being undertaken in the laboratory.
A position recognition system composed of a television camera, a special purpose real-time image processor, and a general purpose microcomputer is developed. This system realizes a local pattern matching technique utilizing several local portions of an image as standard patterns. New resampling hardware enables the standard patterns to be matched against local patterns with a basic 8x8 or 12x12 window size and its multiples. Also, a high reliability recognition scheme with a redundant matching sequence is programmed in the microcomputer. These features provide a cost effective device with wide application possibilities. This technique is being successfully applied to automatic assembly systems for almost all types of semiconductor products.
An automatic visual inspection technique for solder joints on printed circuit boards has been developed. To detect the solder joint shape, structured light is used as the optical system, and simple waveform processing is applied to the extracted shape for judging the shape of a solder joint.
A new technique for inspecting extremely high density CCD wafers (292 x 492 picture elements) for surface defects is proposed. A differential operator and thresholding are employed to extract pattern boundaries with a high S/N ratio, after which the set of pixels in a pattern area is compared with the corresponding set of pixels in the adjacent repeating pattern. A defect is identified from the number of unmatched pixels produced by this processing, with an additional technique to reduce noise associated with the boundaries. A practical system including hardware logic for these processing steps makes it possible to inspect a 3-inch CCD wafer in 80 minutes.
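The pattern-comparison step might look like the following sketch, which compares each repeating cell against its neighbor and counts mismatching pixels. The boundary extraction and noise-reduction stages are omitted, and the pitch, thresholds, and names are assumptions:

```python
import numpy as np

def defect_pixels(image, pitch, diff_thresh=30, min_count=3):
    """Flag pattern cells whose pixels disagree with the adjacent
    repeating cell (pattern-comparison step only; boundary extraction
    and noise reduction from the paper are omitted)."""
    h, w = image.shape
    n_cells = w // pitch
    defects = []
    for k in range(n_cells - 1):
        cell = image[:, k * pitch:(k + 1) * pitch].astype(int)
        ref = image[:, (k + 1) * pitch:(k + 2) * pitch].astype(int)
        mismatch = np.abs(cell - ref) > diff_thresh
        # a comparison is flagged if enough pixels disagree
        if mismatch.sum() >= min_count:
            defects.append(k)
    return defects
```

Because each cell is compared with its neighbor, a defect in one cell flags the comparisons on both of its sides, which localizes it without needing a golden reference image.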
The next generation of industrial vision systems will require orders of magnitude more computation. We discuss the kinds of processing required and the architectures available. We argue that architectures based upon the ICL Distributed Array Processor are particularly well suited to this task and superior to other possible architectures for various reasons. We discuss the algorithms involved in a second generation system, and give an example in DAP Fortran of a simple first generation system.
There are six degrees of freedom that define the position and orientation of any object relative to a robot gripper. All six need to be determined for the robot to grasp the object in a uniquely specified manner. A robot vision system under development at the National Bureau of Standards is designed to measure all six of these degrees of freedom using two frames of video data taken sequentially from the same camera position. The system employs structured light techniques; in the first frame, the scene is illuminated by two parallel planes of light, and in the second frame by a point source of light.
Unidirectional oblique illumination is studied in this paper as a useful illumination source for machine vision modules. For many practical instances, it allows the vision module to compose a binary picture which conveys useful information about the object in the scene that is otherwise unattainable from simple thresholding of front-lighted pictures.
It has been estimated that processor speeds on the order of 1 to 100 billion operations per second will be required to solve some of the current problems in computer vision. This paper overviews the use of parallel processing techniques for various vision tasks using a parallel processing computer architecture known as PASM (partitionable SIMD/MIMD machine). PASM is a large-scale multimicroprocessor system being designed for image processing and pattern recognition. It can be dynamically reconfigured to operate as one or more independent SIMD (single instruction stream-multiple data stream) and/or MIMD (multiple instruction stream-multiple data stream) machines. This paper begins with a discussion of the computational capabilities required for computer vision. It is then shown how parallel processing, and in particular PASM, can be used to meet these needs.
Due to the many advancements in microprocessor technology, the application of real-time, on-line pattern recognition systems is now cost effective and practical. One major application of such a system is the on-line quality inspection of labels in a manufacturing environment after they have been applied to a product container. The system, as described in this paper, performs a sophisticated inspection that had previously been done by human inspectors with a very low degree of accuracy. Utilizing state-of-the-art electronics, this visual inspection system can process two containers per second with an accuracy rate of 96.2% and a false reject rate of 0.2%. Such a system is also adaptable to robot vision for inspection and quality control evaluations.
A system that hopes to accomplish robust automatic Image Understanding (IU) clearly needs techniques which can reason about what it is seeing and what it is trying to see. One need for reasoning is found in the ubiquitous operator parameter problem: the problem of setting parameters for low-level image computations. This paper explores this problem in the context of a complete IU system and presents an approach by which parameters are tuned with respect to high level features. Adaptive Operators accomplish this tuning by using (1) evaluations derived from specific object level features that rate the "goodness" of a segmented region, and (2) search methods which adaptively search for segmentation parameters producing the best region according to the evaluation. Adaptive Operators represent reasoning which ties together both the high-level, generally a priori knowledge of an IU system and the lowest, purely computational level. Examples of Adaptive Operators and their performance are given in the context of a system to recognize buildings in aerial photographs.