An approach to correlating images with reference image templates is discussed. The approach is based on image false contours generated via digital quantization, coupled with special-purpose digital processing. It is shown that false contours within the image, together with appropriate image transformation and reference-template preprocessing, can significantly speed up the digital correlation process. The image transformation, discussed from the viewpoint of Green's theorem, shows that area correlation calculations can be performed using line integration in the transformed image. A timing analysis of the processing approach is presented for a general-purpose 32-bit microprocessor common to computer workstations. Implications for the rapid location and classification of image objects are discussed.
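The area-to-line-integral reduction named in the abstract can be illustrated with a small sketch (illustrative code and names, not the authors' implementation): by Green's theorem an area integral reduces to a line integral around the region's boundary, so in the discrete case an area follows from a sum over boundary vertices alone.

```python
# Illustration of the transformation idea: by Green's theorem an area
# integral over a region can be evaluated as a line integral around its
# boundary. Discretely, the region's area reduces to the "shoelace" sum
# over boundary vertices -- far fewer terms than visiting every interior
# pixel, which is the source of the speedup the abstract describes.

def polygon_area(vertices):
    """Signed area of a closed polygon via the boundary (shoelace) sum."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        area += x0 * y1 - x1 * y0
    return area / 2.0

# A 10 x 10 axis-aligned square: 4 boundary terms instead of 100 pixels.
print(polygon_area([(0, 0), (10, 0), (10, 10), (0, 10)]))  # 100.0
```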
Knowledge of surface roughness profiles or roughness parameters is necessary for a great variety of industrial production processes. In most cases, in-process measurement is currently not possible because of the boundary conditions imposed by these processes; the classic mechanical method clearly does not allow in-line measurement. Optical methods have therefore been investigated by a large number of R&D groups. The state of the art is that some optical roughness measurement systems are commercially available, but these systems share a general disadvantage: they do not cover all possible applications as well as necessary. This paper first gives a short overview of optical roughness measurement methods. Second, it introduces a specific application, the in-process measurement of cold-rolled steel. Third, it explains the basic principles and limitations of our new method and presents the results of our first industrial field tests.
An image processing system that incorporates some retinal properties was investigated for the processing of two-dimensional images. The system was required to carry out basic image processing tasks such as edge detection. A new filtering technique was deduced from physiological findings on the distribution of the receptive fields of the retinal ganglion cells. This filtering technique was then incorporated in the design of an image processing system in which the spatial resolution increases linearly towards the centre of the image. The design was based on a discrete polar distribution of processing areas on an inhomogeneous triangular sampling grid. This resulted in a highly localized processing system which simplified the development of higher image processing tasks such as boundary following. The retinal image processing system was simulated on a VAX 11/750. The computational cost of conducting operations such as edge detection, boundary detection and boundary following using the designed system was evaluated and compared with that of a conventional image processing system.
The median filter is one widely-used member of the class of ranked-intensity filters. Such filters are useful in signal processing, and particularly in machine vision, because they effectively remove an important and difficult-to-deal-with kind of noise, called variously "spike" noise or "salt-and-pepper" noise. By their nature, these filters are nonlinear. Filters involving linear operations, such as convolution - a weighted summation of the pixels in a neighborhood - can be done incrementally, and hence lend themselves to implementation on fast "pipelined" architectures. Two special cases of ranked-intensity filters, the "maximum" and "minimum" filters, can also be implemented incrementally. In contrast, the general ranked-intensity filter requires a sorting process over all pixels in the neighborhood. The usual "bubble sort" algorithm requires n passes and n*(n-1)/2 separate comparisons to fully determine the rank of all pixels in an n-pixel neighborhood. More efficient algorithms are known, but these also require multiple passes. As a result, software implementations of ranked-intensity filters are slow, and the hardware implementations now becoming available require special boards or chips devoted to that function alone. This paper presents improved sorting algorithms for ranked-intensity filters, and shows various practical techniques for implementation of these algorithms on existing fast multi-purpose image processing hardware. While not as fast as a dedicated hardware implementation, the resulting system is considerably faster than a software implementation, and yet it retains the general-purpose character of software systems. This makes it useful for laboratory algorithm development systems and for near-real-time applications. The same algorithms could also be used to design more efficient custom hardware or provide faster software implementations.
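For reference, a naive 3x3 median filter (an illustrative sketch, not the paper's improved algorithm) shows both why ranked-intensity filters remove spike noise and where the per-neighborhood sorting cost arises:

```python
# A minimal, non-incremental 3x3 median filter. An isolated spike can
# never be the median of its 9-pixel neighborhood, which is why these
# filters remove salt-and-pepper noise. The per-neighborhood sort below
# is exactly the cost the paper's improved algorithms attack.

def median_filter_3x3(img):
    """img: list of lists of pixel values; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            neigh.sort()
            out[y][x] = neigh[4]  # rank 5 of 9 = the median
    return out

# A flat image with one 255 "spike" in the middle: the spike vanishes.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 255
print(median_filter_3x3(img)[2][2])  # 10
```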
An image tracking system using log-polar mapping is investigated using a priori knowledge of the target geometry. The shape of the mapped target's leading edge contains information regarding the orientation and location of the target. The projection of the target image for various view angles introduces distortions in the mapped image. These distortions provide the information necessary for adjusting the camera pointing vector such that a perpendicular view angle is maintained. Once a perpendicular view angle is obtained, centering on the target may be accomplished by observing the leading edge of the mapped image. Distortions observed in the leading edge, introduced by offset pointing errors, are used to generate an error signal for camera translation. A leading edge free of distortion indicates that centering has been achieved. Simulation results are presented for various target geometries.
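The log-polar mapping underlying the tracker can be sketched as follows (illustrative code, not the authors' system): each image point, taken relative to the fixation point, maps to (log-radius, angle), so scaling and rotation of the target about the center become simple shifts in the mapped image.

```python
import math

# Sketch of a log-polar mapping: a pixel (x, y) relative to the center
# (cx, cy) maps to (log r, theta). Under this mapping, scaling the
# target shifts the log-radius axis and rotating it shifts the angle
# axis, which is what makes distortions of the mapped leading edge
# interpretable as pointing errors.

def log_polar(x, y, cx, cy):
    """Map a pixel to (log-radius, angle) about the center (cx, cy)."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    return math.log(r), theta  # undefined at the exact center (r = 0)

# Scaling the target by s only shifts the log-radius axis by log(s):
u1, t1 = log_polar(10.0, 0.0, 0.0, 0.0)
u2, t2 = log_polar(20.0, 0.0, 0.0, 0.0)
print(round(u2 - u1, 6), t1 == t2)  # log(2) shift, same angle
```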
Phase-locked detection is a very useful instrumentation technique. It can be used whenever the desired signal is phase-coherent with another reference signal. Very frequently, the reference signal is, or can be derived from, the periodic external stimulus which is responsible for the signal in the first place. Typically the use of a lock-in amplifier can improve the signal-to-noise ratio by several orders of magnitude. We describe a successful implementation of an infrared imaging system in which the images are phase-locked with the periodic thermal radiation used as the source of illumination. We also report the application of this phase-locked infrared imaging technique to the detection of microcracks in Cu foils deposited on polyimide substrates.
A fully integrated surface inspection system which uses a computer-generated solid model of a part is described. Inspection of the physical part is performed by an active stereo imaging system, one of the most practical vision methods for making measurements on a smooth manufactured part. A mathematical treatment of the triangulation process using pinhole camera models is developed and yields an estimate of the measurement accuracy available from the system. The pan angle distance between camera and projector, as well as the location and slope of the surface to be inspected, determines the minimum width and minimum depth of a detectable flaw. This system provides a user-friendly surface inspection method for computer-modeled manufactured parts.
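The core triangulation relation for a pinhole-camera pair can be sketched as follows (an assumed simplified rectified geometry, not the paper's full treatment; all names are illustrative):

```python
# Minimal pinhole triangulation sketch: two cameras with parallel optical
# axes, baseline b and focal length f, observe the same surface point at
# image x-coordinates xl and xr. Depth follows from similar triangles;
# how fast depth changes with disparity is what ties the camera/projector
# separation to the minimum detectable flaw depth.

def triangulate_depth(xl, xr, f, b):
    """Depth Z = f * b / disparity for a rectified stereo pair."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return f * b / disparity

# f = 1000 px, baseline = 0.1 m, disparity = 20 px -> Z = 5 m.
print(triangulate_depth(110.0, 90.0, 1000.0, 0.1))  # 5.0
```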
This paper reviews high-precision object location and concentricity measurement using standard TV cameras and a combination of resampling and least-squares model fitting. The computationally intensive resampling calculations were implemented via real-time hardware, using a separable 8 x 8 point approximation to a sinc(x) function. A software approach was used for the least-squares estimation. Particular emphasis is placed on the effects of lighting, part appearance, and environmental factors which ultimately determine system performance.
A new field of applications for image processing systems is emerging that makes use of three-dimensional scene analysis. Different methods have been devised, active and passive ones /6/, and for automated visual inspection in more general terms stereoscopic imaging seems appropriate. A new system for flexible inspection tasks is described that makes use of this three-dimensional imaging method.
This paper analyzes the problem of unrolling rotary objects and proposes a visual system for the inspection of bearing rollers. In section two, the author first reviews several generally used techniques for unrolling balls, cylinders and other rotary objects by single-point scanning, and then discusses the use of CCD or SSPDA arrays aiming to improve efficiency. In section three, a visual inspection system for surface defects of bearing rollers is presented. The test system consists of a photodiode array, a roller-unrolling mechanism, a single-board computer and a VAX-11. The first two parts are controlled by the single-board computer to accomplish the surface-unrolling procedure; the surface information is then transmitted to the VAX-11 (or an IBM-PC/AT) for further processing and classification. The results demonstrate that the system can effectively identify surface defects of the bearing roller such as cracks, nicks, scratches and rust, with a minimum detectable crack width of about 20-30 μm.
Laser scanning systems are well established in the world of fast industrial in-process quality inspection systems. The materials inspected by laser scanning systems include "endless" sheets of steel, paper, textile, film or foils. The web width varies from 50 mm up to 5000 mm or more. The web speed depends strongly on the production process and can reach several hundred meters per minute. The continuous data flow in each of the different channels of the optical receiving system exceeds ten megapixels/sec. It is therefore clear that the electronic evaluation system has to process these data streams in real time, and no image storage is possible. Sometimes, however (e.g. at first installation of the system, or when changing the defect classification), it would be very helpful to be able to take a visual look at the original, i.e. unprocessed, sensor data. We first show the basic setup of a standard laser scanning system. We then introduce a large image memory especially designed for the needs of high-speed inspection sensors. This image memory co-operates with the standard on-line evaluation electronics and thus allows an easy comparison between processed and unprocessed data. We discuss the basic system structure and show the first industrial results.
This paper presents a new approach to image feature vector classification based on the Cerebellar Model Arithmetic Computer (CMAC) neural network proposed by Albus. This approach promises advantages both over traditional methods for feature vector classification and over other neural network based classifiers. One advantage is that the generalization properties inherent in the network allow the formation of highly nonlinear decision boundaries, and allow multiple disjoint regions of feature space to be defined in the same class. A second advantage is that the computation time required for network training and for vector classification is greatly reduced relative to other nonlinear classification techniques. Results from several classification experiments are presented, including the investigation of the effects of noise on classifier performance, and the learning of rotational classification invariance using feature vectors deliberately chosen to be highly sensitive to object rotation. Capabilities and limitations of this method of feature vector classification are discussed.
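A minimal one-dimensional tile-coding sketch in the spirit of Albus's CMAC can illustrate the generalization and fast-training properties the abstract claims (illustrative only; the paper's classifier operates on image feature vectors, and all names below are assumptions):

```python
# Toy 1-D CMAC sketch: the input activates one tile in each of several
# offset tilings; the output is the sum of the active tiles' weights,
# and training adjusts only those weights. Nearby inputs share tiles,
# giving the local generalization the paper exploits, and each training
# step touches only n_tilings weights, giving the fast training.

class CMAC:
    def __init__(self, n_tilings=8, tile_width=1.0):
        self.n = n_tilings
        self.w = tile_width
        self.weights = {}  # sparse weight table, keyed by (tiling, tile)

    def _tiles(self, x):
        # each tiling is offset by a fraction of the tile width
        return [(t, int((x + t * self.w / self.n) // self.w))
                for t in range(self.n)]

    def predict(self, x):
        return sum(self.weights.get(t, 0.0) for t in self._tiles(x))

    def train(self, x, target, lr=1.0):
        err = target - self.predict(x)
        for t in self._tiles(x):
            self.weights[t] = self.weights.get(t, 0.0) + lr * err / self.n

net = CMAC()
net.train(2.5, 1.0)                  # one-shot learning at x = 2.5
print(round(net.predict(2.5), 6))    # 1.0
print(net.predict(2.6) > 0.5)        # a nearby input generalizes
```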
This paper describes a hybrid architecture called Kiwivision II which is currently under development at DSIR in New Zealand. Kiwivision I is a modular pipelined architecture, developed by one of the authors (CCB), which is primarily suitable for low-level image processing operations. Kiwivision II is an enhanced architecture which will combine a commercially available pipelined system and a reconfigurable multitransputer network into a hybrid architecture suitable for a broad range of vision applications.
In the manufacturing industry, visual inspection plays a vital role in quality control. Traditionally it is performed manually, which leads to human error and is time consuming. There is a major need for a reliable, low-cost system which can keep up with production speed and enhance quality assurance. This project involves the development of a low-cost automatic visual inspection system for the manufacturing industry. The steps in the provision of the system are discussed. A program evaluating the performance of the developed system is presented in detail. This work has found that the system is capable of performing many 2D visual inspection tasks in the manufacturing industry, at speeds of up to 15 samples per second and with a repeatability of 0.01 mm. The system described has now reached the stage of development that allows implementation in the pharmaceutical industry, the printing business, cable manufacture and sheet metal shops.
Neural network models have many potential applications in computer vision owing to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memories, multilayer back-propagation perceptrons, self-stabilized adaptive resonance networks, the hierarchically structured neocognitron, high-order correlators, networks with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers; some have been implemented with special hardware. Some systems use image features, such as edges and profiles, as the input data form; other systems feed raw data to the networks. We present some novel ideas contained in these approaches and provide a comparison of the methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of some human vision models and neural network models are analyzed.
Automated visual inspection promises to play an important role in the factory of the future. A prototype automatic computer vision inspection system was developed for the Quick Turnaround Cell (QTC) at Purdue University. The objective of the QTC project is to integrate design, process planning, cell control, and inspection functions into a manufacturing system that quickly produces parts using little operator knowledge and intervention. This paper focuses on the vision inspection module of the QTC project. To achieve a truly flexible automated visual inspection system, an interface between computer vision processes and CAD databases is an essential step. A design-feature based representation which includes dimensions and tolerances of the part is introduced as the part specification. A 3-D boundary representation CAD model is generated from the high level description. An interface system that understands the geometric shape of the part based on the CAD model generates a vision data base which serves as a front end to the inspection planning system. This planning system automatically generates inspection and recognition procedures from the design data. The recognition planning subsystem uses rules to select the important vision features from the given CAD data base, generates a list of simultaneously visible features, and suggests appropriate matching constraints. The inspection planning subsystem interprets each engineering specification of the part and provides proper inspection procedures. The on-line inspection subsystem executes programs based on the planning results and returns information about the part based on all the dimensions which are measured to subpixel accuracy. Thus, after the design cycle, parts can be thoroughly inspected with no technical decisions or programming required. Finally, results of experiments produced by the current implementation of the system are illustrated.
An automatic printed circuit board (PCB) inspection system called PI/1 is described in this paper. The system can inspect PCB artwork and inner layers to find trace faults such as open circuits, short circuits, pin holes, fine lines and narrow line spaces. PI/1 uses a line scan camera with 1024-pixel resolution to take images. The whole image is obtained with the help of a scanning mechanism in the X and Y directions. Continuous pictures can be taken without blurring, and the resolution is between 1/2 and 2 mils. The inspection method used in PI/1 is a type of design rule checking. It does not require a reference image and can tolerate changes in the inspection environment due to illumination and PCB positioning. The inspection algorithm has been implemented in special hardware, and the inspection speed is about 4 x 10^6 pixels/sec.
A new technique for automated optical inspection of printed wiring boards (PWBs) has been developed. It uses black line sensing and a radial matching algorithm. The sensing method uses black line illumination and a CCD line sensor. The PWB is brightly illuminated except for the area under observation. Because the PWB substrate is translucent, copper patterns are sensed as shadows and their color and roughness do not affect the sensing. The inspection algorithm describes the copper pattern with binary codes. The pattern width is measured radially in four directions from the pattern centers. A combination of length and direction is encoded in a 12-bit binary code. Codes are assembled and make up a dictionary. The method can be used to inspect many kinds of patterns by changing the contents of the dictionary. An inspection system using this technique has a 5-μm resolution and can inspect a 490 x 540 mm area in 5 minutes.
A near-real-time scanning laser microscope has been developed to assist in the characterization of crystal wafers used to produce integrated optical devices. This instrument is employed in an experimental program to correlate crystal anomalies with integrated device performance. A combination of polarization and masking techniques is used to generate images of higher contrast ratio and larger useful field of view than obtainable with conventional microscopy techniques. The raster-scanning beam design allows rapid image formation, which speeds the inspection operation. The addition of confocal optics increases the flexibility of the instrument and expands the useful applications to include biological and integrated structure characterization problems. Automation techniques are used to minimize the time required to inspect a given sample. A computer system prepares summarizing data displays that contain the key parameters describing the sample.
Conventional fabric inspection systems can detect defects of plain cloth, but not defects of some patterned cloth. Also their maintenance is rather difficult, since a laser scanning technique is commonly employed. Some conventional systems use an electronic scanning technique with line sensors (one-dimensional sensors). But because they use only a binary image processing technique, adapting themselves to changes in cloth brightness is difficult. In this paper, we describe the structure of an automated fabric inspection system and image processing algorithms that solve the above problems, and show some examples of defect detection.
The system features are as follows:
(1) Two-dimensional patterns are processed; thus not only soiling of plain cloth but also dyeing defects of cloth with polka-dot, striped and checked patterns can be detected.
(2) Even if the cloth brightness changes, dirt on the cloth can be detected correctly, because the gray level image processing technique is employed.
(3) The detection algorithms are based on calculating an average or a standard deviation of the features such as brightness, shapes or sizes of the inspected objects.
(4) Since the image processors save the defect image data in their image memories, the system can obtain the necessary data, such as the shapes and sizes of the defects.
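The statistical test described in feature (3) can be sketched as follows (illustrative code with assumed names; the system's actual algorithms are not given in this summary):

```python
import math

# Sketch of a mean/standard-deviation defect test: compute the mean and
# standard deviation of a feature (e.g. patch brightness) over the
# cloth, then flag patches deviating from the mean by more than k
# standard deviations. Because the threshold adapts to the measured
# mean, overall changes in cloth brightness do not cause false alarms.

def flag_defects(features, k=3.0):
    n = len(features)
    mean = sum(features) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in features) / n)
    return [i for i, v in enumerate(features)
            if std > 0 and abs(v - mean) > k * std]

patches = [100.0] * 50 + [180.0]  # one abnormally bright (dirty) patch
print(flag_defects(patches))  # [50]
```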
Micropropagation is a technique used in horticulture for generating a monoclonal colony of plants. A tiny plantlet is cut into several parts, each of which is then replanted. At the moment, the cutting is performed manually. Automating this task would have significant economic benefits. A robot designed to dissect plants would need to be equipped with intelligent visual sensing. This article is concerned with the image acquisition and processing techniques which such a machine might use. A program, which can calculate where to cut a plant with an "open" structure, is presented. This is expressed in the ProVision language, which is described in another article presented at this conference. (Article 1002-65)
This paper illustrates an industrial application of vision processing in which potatoes are sorted according to their size and shape at speeds of up to 40 objects per second. The result is a multi-processing approach built around the VME bus. A hardware unit has been designed and constructed to encode the boundary of the potatoes, reducing the amount of data to be processed. A master 68000 processor is used to control this unit and to handle data transfers along the bus. Boundary data is passed to one of three 68010 slave processors, each responsible for a line of potatoes across a conveyor belt. The slave processors calculate attributes such as the shape, size and estimated weight of each potato, and the master processor uses this data to operate the sorting mechanism. The system has been interfaced with a commercial grading machine, and performance trials are now in progress.
Software-based general-purpose development languages such as SUSIE, AUTOVIEW, VCS, and QC VISION have proven to be very valuable in the interactive development of image processing and feature extraction algorithms for industrial inspection. These systems are extremely versatile, but since they are typically implemented using a single microprocessor, they are very slow on some often-used operations. When such a system is expanded to include recursion and high-level symbolic extraction and manipulation functions, as required by the ProVision development environment, the slow operation becomes intolerable. This paper describes a vision system which uses only commercially-available real-time image processing boards and which executes most of the AUTOVIEW operations in one frame time and most of the remaining operations in two or three frame times. Also included are other important operations which are often available only on special-purpose systems. Examples: connected component analysis, convolution with large arbitrarily-specified kernels, and binary morphology with large arbitrarily-specified structuring elements. These more complex operations typically execute in about 1/3 to 1/2 second. The speedups provided by this vision system make possible a very fast implementation of the ProVision environment, allowing the interactive development of complex high-level algorithms with the convenience formerly available only on development systems for low-level applications.
A programmable command language is described which supports the development of real-time signal and image processing applications. Initially, this language is being targeted for the Datacube Max-Video modular hardware. Command languages are extremely effective for simulating signal and image processing applications, as they reduce the amount of detail the designer has to cope with, and still allow for the representation of arbitrarily complex designs. However, they have found limited applicability in real-time system development because of their inefficient execution and inability to control real-time hardware. This shortcoming typically necessitates a costly manual translation to faster languages for optimum performance (e.g., C). A command language that supports real-time implementation eliminates the need for this translation. This paper describes the features of such a language. The language includes commands for accessing the entire functionality of each specific hardware module, and commands to manipulate data structures and implement program control constructs. The commands can be executed in either an interactive single-step mode, used for off-line debugging, experimentation, and simulation, or grouped together to define a new command which can then be compiled for real-time execution. By means of representative examples, the utility of this language for programming real-time applications will be discussed and contrasted with other available methods.
A pipeline based connected components labeling architecture is described; the algorithm (an extension of Rosenfeld et al. (1966)) and architecture were verified by software simulation. The transitive closure label equivalence process is performed by a content addressable memory. The scheme takes full advantage of the concurrent memory operations provided by content addressable memory, and performs the connected components labeling in only two pipeline frames, independent of the complexity of component shapes in the input image. The connected components labeling module can work in conjunction with the existing feature and moment extraction hardware. The pipeline based architecture allows other image processing operations to be performed in the same pipeline preceding the connected components labeling module. Thus, the connected components operations effectively take no additional operation time. This simple architecture should be low cost and easy to implement in hardware.
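The two-pass labeling scheme that the architecture implements in hardware can be sketched in software (illustrative code; the union-find structure below plays the role the content-addressable memory plays in the pipeline):

```python
# Two-pass connected-components labeling: pass 1 assigns provisional
# labels and records label equivalences; pass 2 resolves each label to
# its equivalence-class representative (the transitive closure step the
# content-addressable memory performs in the hardware).

def label_components(img):
    """4-connected labeling of a binary image (list of 0/1 rows)."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}  # union-find over provisional labels

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                labels[y][x] = find(up)
                parent[find(left)] = find(up)  # record equivalence
            elif up or left:
                labels[y][x] = up or left
            else:
                parent[next_label] = next_label  # new provisional label
                labels[y][x] = next_label
                next_label += 1
    for y in range(h):  # second pass: resolve equivalences
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

# A U-shape whose arms only join at the bottom gets a single label.
img = [[1, 0, 1],
       [1, 0, 1],
       [1, 1, 1]]
out = label_components(img)
print(len({v for row in out for v in row if v}))  # 1
```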
The recent availability of user-configurable families of board-level processing modules has led to an increase in the use of pipelined architectures for vision applications. While some board families contain a wide range of special-purpose modules, it is not always cost effective to dedicate individual modules to specific parts of a vision algorithm. Rather, it is sometimes expedient to make the most of a smaller number of more general-purpose modules, even if this is at the expense of processing speed. This paper describes the implementation of a range of common and useful vision algorithms on a general-purpose pipelined architecture called Kiwivision. With little modification, the same algorithms could be transferred to other pipelined systems.
The Inmos transputer family is specifically designed for multi-processor systems and thus forms an attractive basis for image processing. This paper discusses the implementation of existing algorithms for convolution and distance transformation, and introduces a new parallel algorithm for the formation of the convex hull. The mapping of these algorithms onto transputer arrays is discussed.
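As a reference point for the parallel formulation, a standard sequential convex-hull routine can be sketched (Andrew's monotone chain; illustrative only, not the paper's parallel transputer algorithm):

```python
# Sequential convex hull via Andrew's monotone chain. A parallel
# formulation such as the paper's must reproduce this result; here the
# sketch only fixes the reference computation.

def cross(o, a, b):
    """Z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, reversed(pts))):
        for p in seq:
            # drop points that would make a clockwise (non-left) turn
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]  # counter-clockwise hull

pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]  # (1, 1) is interior
print(convex_hull(pts))  # the square, without the interior point
```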
The regularity and local neighbourhood interdependence of picture data and the repetitive nature of many feature extraction algorithms may be usefully exploited in the design of specialised computer architectures for image processing at the pixel level. However, the features detected in the image will vary in type, number, position and size. The irregularity of this feature data prevents it from being easily partitioned. Also, at subsequent "intermediate" processing levels, various feature extraction, grouping and measurement algorithms will be employed. These are often more complex than low-level operations, and may be broken down into concurrently operating sub-processes. A more flexible multiprocessor architecture is therefore required, on to which a variety of algorithms can be mapped. This paper describes an augmented tree-structured MIMD processor network for intermediate level image processing. The Inmos Transputer has been chosen as the basic architectural building block. The programming and operation of the proposed architecture is illustrated using a Hough transform algorithm and a connected region finding routine.
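The Hough transform used to illustrate the proposed architecture can itself be sketched sequentially (illustrative code; on the tree-structured network the votes would be partitioned across processors and the accumulators merged):

```python
import math

# Single-processor Hough transform sketch: each feature (edge) pixel
# votes for every (rho, theta) line passing through it; peaks in the
# vote accumulator correspond to detected lines.

def hough_accumulate(points, n_theta=180):
    """Accumulate (rho, theta-index) votes for a set of edge points."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc

# Ten collinear points on the vertical line x = 5 -> a 10-vote peak at rho = 5.
pts = [(5, y) for y in range(10)]
acc = hough_accumulate(pts)
best = max(acc, key=acc.get)
print(best[0], acc[best])  # rho = 5, 10 votes
```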
In image processing applications, general programmability and execution speed have traditionally been contradictory: the vast amounts of data to be processed require either massive parallelism or special hardware for efficient execution. Neither of these structures is user-friendly or generally programmable with reasonable programming effort and/or at a reasonable price.
A new versatile programmable parallel VLSI architecture, called Instruction Systolic Arrays (ISA), is presented. The ISA retains all the advantages of conventional systolic arrays (such as regularity, modularity and parallelism at a fine level of granularity), but is also able to execute many different types of algorithms. The versatility of the ISA means that it can handle real-time variations in algorithmic calls and makes it suitable for use in adaptable real-time systems.