The VSH (video-scan head) scan engines have been designed for real-time point-scanned image capture systems. The contrast produced by a point-scanned system can exceed that produced by CCD systems for objects such as scratches or low-profile edges. Intended applications range from semiconductor inspection to fluorescence microscopy to biological diagnostics. The VSH-4 (4000 lines/second) and VSH-8 (8000 lines/second) engines support frames up to 1024 by 1024 pixels and frame rates up to 100 frames per second. Frame sizing (zooms to 1/10th full frame), user-defined lines per frame, frame rate, and pan moves in the frame-scan direction are supported. To assess the performance of these devices, an imaging system was built and the outputs, i.e., detector or bit-mapped images, were analyzed and compared against specific criteria. This imaging system is described, and measurements including image linearity, pixel stability, and system MTF are presented. Discussions of how scanning systems affect overall imaging performance, along with some practical lessons learned from interfacing the VSH to optics and a commercial frame grabber, are included.
Automated machine vision systems are now widely used for industrial inspection tasks in which video-stream data is taken in by the camera and sent on to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data-stream bandwidth reduction algorithms; the output of the camera contains only information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.
A reclaimer is used to dig raw material from a pile and transfer it to the blast furnaces in a steelmaking company. We propose a range-finding vision system consisting of global and local range finders to fully automate the reclaimer. A global range sensor attached to the top of the reclaimer enables scanning over more than 270 degrees and detection of a three-dimensional profile of a pile. The sensor uses a ladar containing a range finder and a one-axis scan mirror; we added a motor to rotate the ladar for scanning about a second axis. A height map is obtained from the acquired range data by geometric transformation. Because the initial height map is represented as a group of sparsely spaced points, linear interpolation is applied between neighboring range-data pixels. By thresholding and edge following, we can calculate the optimal job path, which avoids overload and maximizes digging efficiency. A local range finder attached at the end of the boom detects range data between the pile and the bucket. World coordinates are computed by three-dimensional translation and rotation of the local range data. The local range data is used to renew the height map after picking up part of the pile. In this way we achieve both pile management and reclaimer automation.
An architecture for surface analysis of continuous cast aluminum strip is described. The data volume to be processed has forced the development of a highly parallel architecture for high-speed image processing. An especially suitable lighting system has been developed for defect enhancement on metallic surfaces. A special effort has been put into the design of the defect detection algorithm to reach two main objectives: robustness and low processing time. These goals have been achieved by combining local analysis with data interpretation based on syntactical analysis, which has allowed us to avoid morphological analysis. Defect classification is accomplished by means of rule-based systems along with data-based classifiers. The use of clustering techniques to perform partitions in R^n by self-organizing maps (SOM), and of divergence methods to reduce the feature vector applied to the data-based classifiers, is discussed. The combination of these techniques inside a hybrid system leads to near 100% classification success.
Work has been reported using lasers to cut deformable materials. Although the use of a laser reduces material deformation, distortion due to mechanical feed misalignment persists. Changes in the lace pattern are also caused by the release of tension in the lace structure as it is cut. To tackle the problem of distortion due to material flexibility, the 2V Method, together with the Piecewise Error Compensation Algorithm incorporating inexact techniques, i.e., fuzzy logic, neural networks, and neuro-fuzzy methods, is developed. A spring-mounted pen is used to emulate the distortion of the lace pattern caused by tactile cutting and feed misalignment. Using pre- and post-processing vision systems, it is possible to monitor the scalloping process and generate on-line information for the artificial intelligence engines. This overcomes the problems of lace distortion due to the trimming process. Applying the algorithms developed, the system can produce excellent results, much better than those of a human operator.
Barcode systems are used to mark commodities, articles, and products with price and article numbers. The advantage of barcode systems is the safe and rapid availability of information about the product. The size of the barcode depends on the barcode system used and the resolution of the barcode scanner. Nevertheless, there is a strong correlation between the information content and the length of the barcode. To increase the information content, new 2D barcode systems like CodaBlock or PDF-417 have been introduced. In this paper we present a different way to increase the information content of a barcode: the color-coded barcode. The new color-coded barcode is created by offset printing of three colored barcodes, each carrying different information. Three times more information can therefore be accommodated in the area of a black printed barcode. This kind of color coding is usable with the standard 1D and 2D barcodes. We developed two reading devices for the color-coded barcodes. First, there is a vision-based system consisting of a standard color camera and a PC-based color frame grabber; omnidirectional barcode decoding is possible with this reading device. Second, a bi-directional handscanner was developed. Both systems use a color separation process to separate the color image of the barcodes into three independent grayscale images. In the case of the handscanner, the image consists of one line only. After the color separation, the three grayscale barcodes can be decoded with standard image processing methods. In principle, the color-coded barcode can be used anywhere in place of the standard barcode. Typical applications are found in medical technology, stock management, and identification of electronic modules.
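The color-separation and decoding pipeline described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: it assumes ideal printing so that each of the R, G, B channels of a scan line carries exactly one of the three barcodes (a real system must also correct for ink cross-talk), and the function names and the binarization threshold are illustrative.

```python
# Hypothetical sketch: split an RGB scan line into three grayscale lines
# (one per printed barcode) and binarize each into bar/space symbols.

def separate_channels(scanline):
    """Split an RGB scan line into three grayscale lines, one per channel."""
    return tuple([px[c] for px in scanline] for c in range(3))

def to_bars(gray, threshold=128):
    """Binarize one grayscale line: 1 = bar (dark ink), 0 = space (light)."""
    return [1 if v < threshold else 0 for v in gray]

# Example: two pixels whose red channel is dark (a bar in the first barcode)
# while the green and blue channels stay light (spaces in the other two).
line = [(10, 200, 220), (15, 210, 230)]
r, g, b = separate_channels(line)
print(to_bars(r))  # [1, 1]
print(to_bars(g))  # [0, 0]
```

Each binarized line can then be handed to a standard 1D barcode decoder, as the abstract describes.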
In this paper we present a complete 3-D object recognition system. The task of the system is the separation of different plastic waste for the recycling of goods packaging. In Germany this separation is done by hand; hand separation is expensive and yields poor results because of the working conditions. The system has the following characteristics: (1) Viewer-centered object representation -- the extended surface normal image (ESNI) model used is a new and promising representative of this approach. (2) Photometric Stereo Method (PSM) -- we have integrated a specular highlight elimination and developed a method that combines dense gradient information with a line drawing of the object. (3) An adaptive hierarchical indexing of geometric features is used as a first matching step, with an online adaptation that integrates the sensor accuracy. (4) A main feature of the system is the 3-D rotation invariance of the unique base-face orientation achieved by PSM. A color indexing and an intelligent template matching method are introduced for recognition fine-tuning. Both methods are powerful but assume a fixed 3-D orientation of the objects; because of the achieved (well-defined) base-face orientation, this restriction is overcome. We show results of experiments with real, deformed, and dirty goods packaging.
A modified edge-based segmentation algorithm specially designed for mechanical parts is proposed. The technique is based on the three-view concept of engineering drawings and is a partially parallel algorithm. The mechanical parts considered here are composed of planes and cylindrical and spherical surfaces. First, a set of critical points is extracted from each row and column of the range image by a one-dimensional curve segmentation technique. An edge-linking process is performed on the map of critical points by morphological dilation, thinning, and edge tracking. After that, a connected-component labeling procedure is carried out, and all pixels belonging to the same 4-connected region are assigned a unique label. An efficient run-length implementation of the local table method is used for the connected-component analysis. Finally, a robust least-squares surface fitting is employed for each label to accommodate the error of the previous steps, and outliers are discarded according to their errors. Experiments are presented for numerous scenes of both real and synthetic range images of mechanical parts, including concave and convex surfaces, noiseless and noisy. The results show that the proposed one-dimensional critical-point locating method segments range images of mechanical parts quickly and accurately.
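The run-length connected-component labeling step can be sketched as follows. This is a minimal illustration of the general technique (run-length scanning with an equivalence table resolved by union-find), under the 4-connectivity stated in the abstract; it is not the authors' implementation, and all names are illustrative.

```python
# Sketch: 4-connected component labeling on a binary image (list of rows),
# processing each row as runs and merging labels through an equivalence table.

def label_components(img):
    parent = {}                     # union-find equivalence table

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 1
    prev_runs = []                  # (start, end, label) runs of previous row
    for r in range(rows):
        runs, c = [], 0
        while c < cols:
            if img[r][c]:
                start = c
                while c < cols and img[r][c]:
                    c += 1
                lbl = 0
                # merge with 4-connected (column-overlapping) runs above
                for (s, e, pl) in prev_runs:
                    if s < c and e > start:
                        if lbl == 0:
                            lbl = pl
                        else:
                            union(pl, lbl)
                if lbl == 0:        # no neighbor above: new provisional label
                    lbl = next_label
                    parent[lbl] = lbl
                    next_label += 1
                runs.append((start, c, lbl))
            else:
                c += 1
        prev_runs = runs
        for (s, e, lbl) in runs:
            for cc in range(s, e):
                labels[r][cc] = lbl
    # second pass: resolve equivalences to final labels
    for r in range(rows):
        for c in range(cols):
            if labels[r][c]:
                labels[r][c] = find(labels[r][c])
    return labels
```

For a U-shaped region the two arms receive different provisional labels that the equivalence table later merges into one component.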
In 2D intensity images, problems such as loss of depth information, shadows, and overlapping objects often cause incorrect object recognition. The main advantage of range images is the availability of depth information, which makes separation easy; however, they have blind areas, and boundaries with certain orientations cannot be found. Using both intensity and range images for 3D object recognition makes full use of the advantages of each kind of image while avoiding its shortcomings. Edge detection, grouping and linking, image segmentation, and outline determination are performed in both images. If the match between the two results is good, either one may be accepted; but if the match error is large, we need to determine which one should be accepted or rejected. After examining each step, criteria are used to form the outline of the object; then the type, size, position, and orientation of each object can be calculated and determined.
This paper presents a general method for achieving an automatic vision system for the quantification of visual aspects in the textile field. The process begins with the decomposition of the image into a structure image and a texture image. This operation is achieved by filtering in the Fourier domain, following an additive model of decomposition with disconnected masks. New quantifiers are then computed for the texture images, and a segmentation is performed only if necessary. A new method is also introduced using localized pyramids, called local pyramids, centered on each of the relevant parts; it is no longer problematic for particularly elongated objects. The results are efficient on more than 200 images, where the automatic grading is in accordance with the expert evaluations.
Constant and consistent quality levels in the manufacturing industry increasingly require automatic inspection. This paper describes a vision system for leather inspection based upon visual textural properties of the material surface. As the visual appearances of both leather and defects exhibit a wide range of variations due to original skin characteristics, curing processes, and defect causes, location and classification of defective areas become hard tasks. This paper describes a method for separating the oriented structures of defects from normal leather, a background not homogeneous in color, thickness, brightness, or wrinkling. The first step requires the evaluation of the orientation field from the image of the leather. Such a field associates with each point of the image a 2D vector whose direction is the dominant local orientation of gradient vectors and whose length is proportional to their coherence, evaluated in a neighborhood of fixed size. The second step analyzes this vector flow field by projecting it on a set of basis vectors (elementary texture vectors) spanning the vector space in which the vector fields associated with the defects can be defined. The coefficients of these projections are the parameters by means of which both detection and classification can be performed. Since the set of basis vectors is neither orthogonal nor complete, the projection requires the definition of a global optimization criterion, chosen to be the minimum difference between the original flow field and the vector field obtained as a linear combination of the basis vectors using the estimated coefficients. This optimization step is performed through a neural network initialized to recognize a limited number of patterns (corresponding to the basis vectors). This second step estimates the parameter vector at each point of the original image.
Both leather without defects and the defects themselves can be characterized in terms of coefficient vectors, making it possible to devise a filtering process that detects any abnormal part of the leather. The resulting system does not depend on the kind, dimension, or color of defects and uses only local information (it can therefore be implemented in a parallel way for dealing with large pieces of leather). Finally, it also works on different materials: it has been used successfully on wood and ferromagnetic materials.
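The orientation-field computation in the first step above can be sketched with the structure tensor, a standard way to obtain the dominant local gradient orientation and a coherence value in a fixed-size neighborhood. This is an illustrative sketch of the general technique, not the authors' exact formulation; the window size and function names are assumptions.

```python
# Sketch: per-pixel dominant orientation and coherence from the structure
# tensor of central-difference gradients over a (2*half+1)^2 neighborhood.
import math

def orientation_field(img, half=1):
    rows, cols = len(img), len(img[0])
    gx = [[(img[r][min(c + 1, cols - 1)] - img[r][max(c - 1, 0)]) / 2.0
           for c in range(cols)] for r in range(rows)]
    gy = [[(img[min(r + 1, rows - 1)][c] - img[max(r - 1, 0)][c]) / 2.0
           for c in range(cols)] for r in range(rows)]
    field = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            jxx = jyy = jxy = 0.0
            for rr in range(max(0, r - half), min(rows, r + half + 1)):
                for cc in range(max(0, c - half), min(cols, c + half + 1)):
                    jxx += gx[rr][cc] ** 2
                    jyy += gy[rr][cc] ** 2
                    jxy += gx[rr][cc] * gy[rr][cc]
            # dominant orientation of the gradient distribution
            theta = 0.5 * math.atan2(2 * jxy, jxx - jyy)
            trace = jxx + jyy
            # coherence in [0, 1]: 1 = perfectly oriented, 0 = isotropic
            coherence = math.hypot(jxx - jyy, 2 * jxy) / trace if trace else 0.0
            field[r][c] = (theta, coherence)
    return field
```

On an image whose intensity ramps along one axis, the recovered orientation is constant and the coherence is 1, matching the intuition of a strongly oriented texture.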
Visual inspection is a non-destructive control technique that analyzes, from pixel images, the conformity of a product that may present manufacturing defects. The diversity of products to inspect implies specific methods and algorithms, which leads to a dedicated design of the inspection system. Our research addresses the definition of a methodological framework for the design of inspection systems based on artificial vision. We also address aided design for determining the sequences of image processing that allow defect detection. We propose a design framework based on the different phases leading to the conceptual model of the IAV system (inspection based on artificial vision). The aided design is envisaged under two aspects. The first concerns products with finite dimensions and a determined form, for which an image-processing sequence planner can bring solutions. The second concerns products with a characteristic texture, for which a defect detection method must be defined that is applicable in many cases. Thus, for the inspection of flat textured products, we propose a method based on spectral analysis. It uses the fact that defects produce significant modifications of the energy spectrum and is based on the construction of an optimal spatial filter that improves defect contrast. This filter can be built automatically and/or interactively by a human operator, and several defects can be detected simultaneously by a combination of filters. To determine the global image attributes that are most discriminant on the filtered image, principal component analysis can be used.
In order to segment images involving different micro/macro textures, such as Brodatz textures or textile surfaces, we propose to use co-occurrence features in conjunction with a split-and-merge process. Our process can be viewed as a two-stage algorithm. In the first stage, a 'pixel-based' learning procedure characterizes all the studied textures. In the second stage, a 'region-based' labeling procedure assigns pixels to the different texture classes. To each texture corresponds a co-occurrence matrix, which can be described by shape parameters or by statistical parameters. We have used two measures from the co-occurrence matrix to discriminate each class of equivalent textures. The first is a statistical measure that evaluates the local contrast. The second is based on the convex hull of the co-occurrence matrix and describes its shape. The perimeter measure and the contrast measure have been selected not only because they are relevant texture descriptors but also because they are the most robust to window-size and scale changes. From these features we have defined decision rules to assign each texture under study to its nearest class. In particular, we have used the Hausdorff metric to discriminate each texture from the others according to its shape.
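The co-occurrence matrix and its contrast measure can be sketched as follows. This is a minimal illustration of the standard gray-level co-occurrence construction for one displacement vector, with the usual contrast definition; the quantization to a small number of gray levels and the function names are illustrative.

```python
# Sketch: normalized co-occurrence matrix for displacement (dx, dy) over a
# quantized image, and the classic contrast measure sum((i-j)^2 * p[i][j]).

def cooccurrence(img, dx=1, dy=0, levels=4):
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r][c]][img[r2][c2]] += 1
    total = sum(sum(row) for row in m)
    return [[v / total for v in row] for row in m]

def contrast(m):
    """Local contrast: large when co-occurring gray levels differ strongly."""
    n = len(m)
    return sum((i - j) ** 2 * m[i][j] for i in range(n) for j in range(n))
```

A uniform patch yields contrast 0, while a fine checkerboard (adjacent pixels always differing by one level) yields contrast 1, which is the kind of discrimination the first measure above relies on.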
A prototype digital image profilometer has been constructed to measure washboarding of corrugated cardboard. The profilometer consists of a diode laser, collimating optics, a grating, and a CCD camera. The laser beam is projected through the grating to produce straight bars that illuminate the sample at 90 degrees to the undulations. The deformation of the bars projected onto the sample, tilted at 75 degrees to the camera, is analyzed using Fourier analysis to produce a surface profile of the sample. A series of 1-D Fourier transforms is calculated from the intensity profiles of successive scan lines at right angles to the bars projected onto the surface. The average depth profile for each scan line is then derived, after phase unwrapping, from the phase of the dominant frequency of the spatial frequency spectrum. The profilometer can reliably measure washboarding to a depth resolution of less than 10 microns over an area of 20 cm by 20 cm in less than 4 seconds on a 486DX2/66 computer.
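The per-scan-line step — taking a 1-D Fourier transform of the fringe intensity and reading off the phase of the dominant spatial frequency — can be sketched with a direct DFT. This is an illustrative sketch of that single step only (phase unwrapping and the conversion of phase to depth are omitted), and the function name is an assumption.

```python
# Sketch: find the dominant nonzero spatial frequency of a fringe scan line
# and return its index and phase via a direct DFT (fine for short lines).
import cmath
import math

def dominant_phase(scanline):
    n = len(scanline)
    mean = sum(scanline) / n                      # remove the DC component
    best_k, best_mag, best_phase = 0, 0.0, 0.0
    for k in range(1, n // 2):
        coeff = sum((scanline[i] - mean) * cmath.exp(-2j * math.pi * k * i / n)
                    for i in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag, best_phase = k, abs(coeff), cmath.phase(coeff)
    return best_k, best_phase
```

For a synthetic fringe cos(2*pi*3*i/64 + 0.5), the function recovers frequency index 3 and phase 0.5; the depth profile in the system above comes from how this phase shifts between the reference and deformed fringes.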
Recognition of three-dimensional (3-D) elastic volumes is a computationally expensive process, since it involves not only the entire object and model surfaces but also their deformed versions. This paper describes a computationally inexpensive matching method for elastic volumes using only their cross-sections. In the proposed algorithm, it is assumed that object and model surfaces of unit volume are standing on the x-y base plane and scanned along the z-axis parallel to the x-y base plane. The resultant slices are represented by a stack of pictures with the values 1 and 0 corresponding to object and background areas. The object regions in these images are translated to the origin of the x-y plane, and their major and minor axes are aligned with the x and y coordinate axes. The object region boundaries are detected by a set of simple 0-to-1 and 1-to-0 transition templates. The gradient vector is computed along the boundary points, and matching is performed in the direction of the gradient vectors by computing the Euclidean distance between the object boundary points and the corresponding points on the model cross-section boundary. The distances measured in the various slices are averaged, and the minimum average distance is used to identify the model best matching the object being processed. The method is tested using computer-generated surfaces of undeformed shapes, with successful matching results.
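The boundary-detection step on a binary slice can be sketched as follows. This is an illustrative reading of the "0-to-1 and 1-to-0 transition" idea — a pixel is a boundary point if a transition occurs at it along its row or column — not the authors' exact templates, and the function name is an assumption.

```python
# Sketch: boundary pixels of a binary slice via 0<->1 transitions along
# rows and columns (pixels outside the image are treated as background).

def boundary_points(slice_img):
    rows, cols = len(slice_img), len(slice_img[0])
    pts = set()
    for r in range(rows):
        for c in range(cols):
            if slice_img[r][c] != 1:
                continue
            left  = slice_img[r][c - 1] if c > 0 else 0
            right = slice_img[r][c + 1] if c < cols - 1 else 0
            up    = slice_img[r - 1][c] if r > 0 else 0
            down  = slice_img[r + 1][c] if r < rows - 1 else 0
            if 0 in (left, right, up, down):     # a transition touches here
                pts.add((r, c))
    return pts
```

For a 3-by-3 square of object pixels inside a 5-by-5 slice, the eight perimeter pixels are reported and the interior pixel is not; gradient vectors can then be evaluated along exactly these points.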
This paper describes the current technical standard and our newly invented system. We give an overview of the possibilities for inspection and measurement inside hollows and drillings. Many systems exist, with various pros and cons; we discuss the different parameters and designs. A special application of our system is the inspection of internal screw threads. Most of the existing measurement systems estimate the distance between their sensor heads and the reflecting surface from the intensity of the reflection, but this signal depends on the reflectance of the surface, which is normally not homogeneous. Our new system measures the distance between the sensor head and the internal surface directly. We prefer a triangulation technique with a folded beam path. All electrical and optical components are outside the hollow; only two static plane mirrors mounted on the same support are inside the sensor head.
For dimensional inspection, the information on what to inspect is usually first obtained from the model. This includes geometric dimensions and tolerances (GD&Ts) as well as topological information about the model. Once items to be inspected are identified, the next step is inspection planning. Generally, there are several inspection features in one object, and with a vision system we can see several features in one image. However, the features are not always inspectable because of the verification resolution (in other words, tolerance) constraint. To completely inspect the desired items (called 'inspection features'), one needs an algorithm for finding the optimum view sequence. In this paper, we discuss how to find the best view and the optimum view sequence for inspection, taking the desired verification resolution into consideration. To find the best view of an object, we use the number of inspectable features and the verification resolution as parameters. We define the admissibility as the average difference, over all inspectable features, between the desired resolution and the calculated verification resolution in the selected image. Generally speaking, if a view has higher admissibility, the inspectable features in that view have a better verification resolution; conversely, if a view has more inspectable features, its admissibility tends to be smaller. A feature-visibility checking algorithm for view planning in 3-D optical image sensing is developed using a Z-buffer algorithm. Since the models used in this paper are approximated by triangular surfaces to obtain geometric and topological information from the CAD drawing, inputs for the algorithms are triangular surfaces. The output is information about the number of visible surfaces and the visible surface area with respect to a defined camera viewpoint. Both the theoretical development and practical application results are presented in this paper.
Wire-bonding in the IC assembly process involves making a physical connection between the IC 'die' and the 'lead' by bonding wires between the two. Inspection of wire-bond quality is a highly labor-intensive process, and efforts are currently being made to automate it. This paper presents the results of research conducted into developing a comprehensive automated wire-bond visual inspection system that is capable of performing final accept/reject inspection, providing on-line process feedback, and assisting in process validation. The proposed inspection system covers the inspection of the bond on a bond pad, the bond on a lead, and the interconnecting wire between a bond pad and its corresponding lead. The algorithms are based on simple and easily extractable features that ensure the desired accuracy and speed. A novel but simple illumination system is proposed to obtain the images of the interconnecting wires. The proposed system is validated using several state-of-the-art IC samples. This work is sponsored by the Ministry of Science, Technology and Environment, Malaysia and Intel Technology Pvt. Ltd., Malaysia.
Many vision problems are solved using knowledge-based approaches. The conventional knowledge-based systems use domain experts to generate the initial rules and their membership functions, and then by trial and error refine the rules and membership functions to optimize the final system's performance. However, it would be difficult for human experts to examine all the input-output data in complex vision applications to find and tune the rules and functions within the system. Printed circuit board inspection is one such complex vision application. Our research introduces the application of fuzzy logic in printed circuit board inspection. The system presented here is highly modular and can handle most of the defects simultaneously with the same approach and is significantly faster compared to the existing approaches. This paper addresses three of the major components of the system: the first phase is the segmentation of the printed circuit board images into basic sub-patterns, the second is the learning phase, and finally the third component is the verification/inspection phase. The paper finally concludes with the experimental results.
The method presented in this paper aims at finding objects by segmentation of a multi-spectral (color) image in cases of incomplete knowledge of the spectral features of the object types. The method does not require the probability densities of all classes to be known, yet it incorporates the information that is available. The segmentation is performed in two stages. First, a non-parametric extended kNN classification algorithm is applied. It provides estimates of the a posteriori probabilities of every class, including the unknown one (which actually consists of all classes unknown beforehand), for every pixel of the image. The second stage results in a segmented image containing both objects with spectral characteristics known in advance and objects that composed the unknown class at the first stage. It is obtained by an extended region-merging algorithm, in which the merging criterion combines the a posteriori probability estimates from the first stage with the similarity/homogeneity of the spectral feature vectors. The method is especially useful when large and largely 'unknown' streams of objects must be inspected. The example considered in the paper concerns segmentation of real images of printed circuit boards in order to find different electronic components.
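The first-stage idea — estimating per-pixel a posteriori class probabilities from the k nearest training samples — can be sketched as follows. This is a plain kNN posterior estimate only; the paper's extension that also yields a probability for the unknown class is omitted, and all names are illustrative.

```python
# Sketch: a posteriori probability of each class at a feature vector x,
# estimated as the fraction of its k nearest training samples in that class.

def knn_posteriors(x, train, k=3):
    """train is a list of (feature_vector, class_label) pairs."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(train, key=lambda s: dist2(x, s[0]))[:k]
    post = {}
    for _, label in nearest:
        post[label] = post.get(label, 0) + 1 / k
    return post
```

Applied per pixel with spectral feature vectors, these estimates are exactly the quantities the region-merging criterion of the second stage combines with spectral similarity.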
The approach presented in this work combines the high-speed nature of pixel-based processing with a standard feature-based classifier to obtain a fast, robust identification algorithm for artillery ammunition. The algorithm uses the Sobel kernel to estimate the vertical intensity gradient of an electronic image of a projectile's circumference. This operation is followed by a directed Hough transform at a theta of 0 degrees, resulting in a one-dimensional vector representing the magnitude and location of horizontal attributes. This sequence of operations generates a compact description of the attributes of interest that can be computed at high speed, has no threshold-based parameters, and is robust to degraded images. In the classification stage, a fixed-length feature vector is generated by sampling the Hough vector at the spatial locations included in the union of attribute locations from each possible projectile type. The advantages of generating a feature set in this manner are that no high-level algorithms are necessary to detect the spatial location of attributes and that the feature vector is compact. Features generated using this method have been used with a Mahalanobis-distance, nearest-mean classifier for the successful demonstration of a proof-of-concept system that automatically identifies 155 mm projectiles.
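The gradient-plus-projection sequence can be sketched as follows. This is an illustrative reading of the pipeline: the vertical Sobel kernel estimates the vertical gradient, and a directed Hough transform at theta = 0 degrees reduces to accumulating the gradient magnitude along each row, yielding a 1-D profile whose peaks mark horizontal attributes. It is a sketch of the technique, not the authors' code, and names are illustrative.

```python
# Sketch: vertical Sobel gradient, then row-wise accumulation (a directed
# Hough transform at theta = 0), producing a 1-D horizontal-attribute profile.

def vertical_gradient_profile(img):
    ky = [[-1, -2, -1],
          [ 0,  0,  0],
          [ 1,  2,  1]]                      # vertical Sobel kernel
    rows, cols = len(img), len(img[0])
    profile = [0] * rows
    for r in range(1, rows - 1):
        acc = 0
        for c in range(1, cols - 1):
            g = sum(ky[i][j] * img[r - 1 + i][c - 1 + j]
                    for i in range(3) for j in range(3))
            acc += abs(g)                    # accumulate along the row
        profile[r] = acc
    return profile
```

On an image with a single horizontal intensity step, the profile is zero except at the rows straddling the step, so attribute locations fall out directly as peak positions with no thresholding.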
A hybrid image processing system has been developed that automatically separates lean tissue from the beef-cut surface image and generates the lean-tissue contour. Because of the inhomogeneous distribution and fuzzy pattern of fat and lean tissue on the beef cut, conventional image segmentation and contour generation algorithms suffer from heavy computation, algorithmic complexity, and even poor robustness. The proposed system utilizes an artificial neural network to enhance the robustness of processing. The system is composed of three procedures: a pre-network procedure, network-based lean-tissue segmentation, and a post-network procedure. At the pre-network stage, gray-level images of beef cuts were segmented and resized appropriately for the network inputs. Features such as fat and bone were enhanced, and the enhanced input image was converted to a grid-pattern image with each grid cell 4 by 4 pixels. At the network stage, the normalized gray value of each grid cell was taken as the network input, and the pre-trained network generated the grid-image output of the isolated lean tissue. A sequence of post-network processing followed to obtain the detailed contour of the lean tissue. The training scheme of the network and the separating performance are presented and analyzed. The developed hybrid system shows the feasibility of human-like robust object segmentation and contour generation for complex, fuzzy, and irregular images.
This paper describes a novel automated inspection process for tempered safety glass. The system is geared toward the European Community (EC) import regulations, which are based on fragment count and dimensions in a fractured glass sample. The automation of this test presents two key challenges: image acquisition and robust particle segmentation. The image acquisition must perform well for both clear and opaque glass. Opaque regions of glass are common in the American auto industry due to painted styling or adhesives (e.g., defroster cables). The system presented uses a multiple-light-source, reflected-light imaging technique rather than the transmitted-light imaging often used in manual versions of this inspection test. Segmentation of the glass fragments in the resulting images must produce clean and completely connected crack lines in order to compute the correct particle count. Processing must therefore be robust with respect to noise in the imaging process, such as dust and glint on the glass. The system presented takes advantage of mathematical morphology algorithms, in particular the watershed algorithm, to perform robust preprocessing and segmentation. Example images and segmentation results are shown for tempered safety glass that has been painted on the outside edges for styling purposes.
Coors Ceramics Company produces flat rectangular ceramic substrates for technical applications. Presently, finished substrates are inspected by human inspectors for dimensional tolerance and for the absence of a variety of possible surface defects. In a two-phase effort, we developed a system that could measure part dimensional parameters and inspect for surface defects. Dimensional parameters include part width, length, edge straightness, and corner perpendicularity. Surface defects include surface contamination, blemishes, open cracks, edge chips, burrs, pits, dents, ridges, blisters, and hairline cracks. We employed highly parallel pipeline image processing hardware to achieve a throughput rate of 1 part every 2 seconds.
This paper is concerned with the development of an automated and efficient system for quality control of coal. This is achieved by distinguishing between the different major maceral groups present in polished coal blocks when viewed under a microscope. Coal utilization processes can be significantly affected by the distribution of macerals in the feed coal. Manual petrographic analysis of coal requires a highly skilled operator, and the results obtained can have a high degree of subjectivity. One way of overcoming these problems is to employ automated image analysis. The system described here consists of two stages: segmentation and interpretation. In the segmentation stage, the aim is to partition the images into different types of macerals. We have implemented a multi-scale segmentation technique in which the result at a given resolution is used to adjust the process at the next resolution. This approach combines a suitable statistical model for the distribution of pixel values within each maceral group with a coarse-to-fine transition distribution based on a child-parent relationship defined between nodes in adjacent levels. At each level, segmentation is performed by maximizing the a posteriori probability (MAP), achieved by a relaxation algorithm similar to Besag's work. There are two major reasons for carrying out the segmentation estimation over a hierarchy of resolutions: to speed up the estimation process, and to incorporate large-scale characteristics of each pixel. The speed can be further improved by restricting the operation to the pixels introduced as mixed at each resolution, by which the number of pixels to be considered is significantly reduced. In the interpretation stage, the coal macerals are identified according to measurement information on the segmented regions and domain knowledge. The paper describes the knowledge base used in this application in some detail.
The system has been particularly successful in correctly classifying difficult cases, such as liptinite, vitrinite, semi-fusinite and pyrite.
UNION MINIERE (UM) is an international firm that has developed a special process, called preweathering, which gives rolled zinc sheets a natural patina look or a slate gray color. The strip of preweathered zinc can be affected by surface defects caused by the roll mill or by the preweathering process. We have equipped a production line of preweathered zinc in the plant with a computer-vision-based system in order to automatically inspect one side of the strip. The system is composed of a personal computer, a light source, a line-scan camera, and an acquisition board. The basic purpose of this system is to provide effective on-line inspection. The main problems to be solved are: (1) the imperfections show considerable variation in length, from a few centimeters for local defects to several decimeters for periodically spaced defects due to rollers; (2) the contrast between a local defect and the strip is poor; (3) the grayness varies across the width of the strip; and (4) the grayness varies along the length of the roll. The real-time inspection system has now been implemented and is currently undergoing evaluation in the plant.