An image registration approach for the inspection of 2D electronic circuit patterns is described. The approach, which consists of an offline procedure and a runtime procedure, has been validated on a prototype inspection system. The offline procedure selects and sorts registration features from CAD-generated reference data according to a set of prespecified priority selection rules. Preference is given to features expected away from the center of the image, since they represent potential distortions better. The size of the windows searched during runtime to detect features is obtained from the maximum expected system errors and the tolerances in part manufacture. To prevent spurious detection during runtime, the offline procedure selects a feature only if no other CAD feature intersects its window. The runtime procedure detects edges and measures their image location to subpixel accuracy within their respective search windows. Edges are detected by authenticating zero-crossings of a second-order differential operator applied to the profile of each search window. Registration is conducted on points composed by averaging the measured locations of opposite-polarity edges of the same object type and size. This reduces any bias introduced by the edge measurement technique and prevents offsets that would otherwise be introduced by variations in circuit pattern dimensions. To minimize the likelihood of spurious edge detection (e.g., an edge detected on a defect), the dimension demarcated by the opposite-polarity edge pair is monitored. After a significant number of registration features have been detected, the runtime procedure finds the parameters that transform pixels into CAD reference data coordinates and vice versa. Good results have been obtained in a prototype inspection system.
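The subpixel edge measurement described above can be sketched as follows: zero-crossings of a discrete second derivative are located by linear interpolation, and opposite-polarity edge pairs are averaged into a registration point. This is a minimal illustration under those assumptions, not code from the paper.

```python
import numpy as np

def subpixel_zero_crossings(profile):
    """Locate edges in a 1D intensity profile to subpixel accuracy by
    finding zero crossings of a discrete second derivative."""
    p = np.asarray(profile, dtype=float)
    d2 = np.convolve(p, [1.0, -2.0, 1.0], mode="valid")  # second difference
    edges = []
    for i in range(len(d2) - 1):
        a, b = d2[i], d2[i + 1]
        if a * b >= 0.0:
            continue                     # no sign change -> no zero crossing
        frac = a / (a - b)               # linear interpolation between samples
        x = (i + 1) + frac               # +1 offset from the 'valid' convolution
        polarity = 1 if p[min(i + 3, len(p) - 1)] > p[i] else -1  # rising/falling
        edges.append((x, polarity))
    return edges

def registration_point(edges):
    """Average the locations of an opposite-polarity edge pair to cancel
    measurement bias, as the abstract describes."""
    rising = [x for x, pol in edges if pol > 0]
    falling = [x for x, pol in edges if pol < 0]
    return (rising[0] + falling[0]) / 2.0 if rising and falling else None
```

Averaging the rising and falling edge of the same bar makes the registration point insensitive to uniform line-width variation, which is the bias-cancellation argument the abstract makes.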
This paper discusses an automated visual inspection system for IC bonding wires that uses high-contrast image capture and an accurate bonding-ball measurement algorithm. On IC assembly lines, visual inspection is vital to maintaining IC reliability. Wire bonding requires the automated evaluation of bonding quality to maintain productivity. Both bonding balls and wires must be inspected to evaluate bonding quality. We developed a bonding-ball measurement algorithm based on subpixel and morphological techniques, and a wire inspection algorithm based on border following. The automated inspection system measures ball diameters to an accuracy corresponding to one-half pixel, taking 0.2 seconds to inspect a wire and ball. Paired with a wire bonder, the inspection system forms a fully automatic bonding system.
We have developed a full-color inspection system that detects defects such as pinholes and stains on the surface of prepaid cards. The system consists of a card conveyor unit with an automatic exchange mechanism and a custom-designed image processing unit with a three-stage pipeline structure. Inspection performance of 0.7 seconds/card, with a defect detection resolution of 0.15 mm diameter, is achieved with a new algorithm that reflects human visual criteria.
In this paper, the design of a prototype system for real-time classification of wooden profiled boards is described. The presentation gives an overview of the algorithms and hardware developed to achieve classification in real time at a data rate of 4 Mpixel/sec. The system achieves its performance by a hierarchical processing strategy in which the intensity information in the digital image is transformed into a symbolic description of small texture elements. Based on this symbolic representation, a syntactic segmentation scheme is applied which produces a list of the objects present on the board surface. The objects are described by feature vectors containing numeric, structural, texture-, and shape-related properties. A graph-like decision network is then used to recognize the various defects. The classification procedures were extensively tested for spruce boards on a large data set containing 500 boards taken from the production line at random. The overall rate of correct classification was 95% on this data set. The structure of these algorithms is reflected in the hardware design: we use a multiprocessor system in which each processor is specialized to solve a specific task in the recognition hierarchy.
3-D vision can be employed in the inspection of geometric properties, i.e., the shape and dimensions of industrial objects. A method for the inspection of the 3-D shape of a class of industrial objects is presented in this paper. The method compares the CAD model of the object with information processed from a dense range map. Tentative test results are shown and their implications discussed. We suggest that preplanning the measurement and analysis stages, and using a programmable 3-D sensor instead of a dumb camera-type sensor, would give better performance.
This paper introduces self-adaptive and training concepts to improve the speed of a slow visual inspection system by nearly one order of magnitude. In this scheme, the system is designed to work in three modes: Mode I (fast inspection), Mode II (slow inspection), and Mode III. The first two modes automatically shift to each other depending on the condition of the roller surface. Mode III performs automatic optimal binary threshold selection for the defect image, in which a bifurcated searching algorithm is employed. Finally, an algorithm for dot-group pattern recognition of roller defects is discussed.
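The bifurcated (binary) threshold search of Mode III can be sketched as follows. The optimality criterion used here — driving the fraction of pixels classified as defect toward a target value, which is monotone in the threshold — is an illustrative assumption, since the abstract does not state the criterion.

```python
import numpy as np

def bisect_threshold(image, target_fraction, lo=0, hi=255, tol=1):
    """Bisection ("bifurcated") search for a binary threshold.

    Hypothetical criterion: find the threshold at which the fraction of
    pixels classified as defect (below threshold) reaches target_fraction.
    That fraction grows monotonically with the threshold, so binary
    search over [lo, hi] converges in O(log(hi - lo)) steps.
    """
    img = np.asarray(image)
    while hi - lo > tol:
        mid = (lo + hi) // 2
        frac = np.mean(img < mid)        # fraction flagged at this threshold
        if frac < target_fraction:
            lo = mid                     # too few defect pixels: raise threshold
        else:
            hi = mid
    return hi
```

The logarithmic number of evaluations, versus one per gray level for an exhaustive scan, is the kind of speedup a bifurcated search buys.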
The concept of a "universal array grammar" for off-line line drawing patterns is proposed, and an algorithm for transforming two-dimensional line drawing patterns into parsing sequences based on the "universal array grammar" is constructed.
The optical reading of labels encoding data in the form of a one-dimensional bar code is a well-developed technology. This paper presents a novel approach to the optical retrieval of a high volume of data contained in a new type of label. The labelling method uses the techniques of computer-generated holography to encode the required two-dimensional label data in the form of a digitally synthesised wavefront. This wavefront is optimally encoded using models based on optical holography, and the calculated structure is mechanically plotted into a new polymer-based reflective substrate to form the label. The label is laser illuminated, and the reflected wavefront is optically reconstructed and decoded to recover the desired information.
In this paper, a new procedure is presented which extracts two-dimensional "time" series data containing maximum information about a closed boundary. The "time" series data are used for estimation of autoregressive model parameters. The extracted data cause the autoregressive parameters to lie in closer space partitions. The use of two-dimensional data overcomes the loss of phase information faced in one-dimensional autoregressive models. A bivariate circular autoregressive model is used to represent the closed-boundary data. The parameter extraction of the model is carried out by the residual method, which produces a stationary estimation. The model parameters are invariant to rotation, translation, scaling, and the choice of starting point on the boundary. The maximum information about the closed boundary, together with model parameters invariant to the said transformations, makes the procedure effective for the inspection of planar objects.
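A bivariate circular autoregressive fit of the kind the abstract describes can be sketched as a least-squares problem, with circular indexing supplying the wraparound of the closed boundary. This generic least-squares sketch stands in for the paper's residual method, which is not specified in the abstract.

```python
import numpy as np

def fit_circular_ar(boundary, order=2):
    """Least-squares fit of a bivariate circular AR model to closed-boundary
    points. boundary: (N, 2) array of (x, y) samples; indices wrap
    circularly, so the starting point is immaterial. Returns the
    (order, 2, 2) array of AR coefficient matrices."""
    z = np.asarray(boundary, dtype=float)
    z = z - z.mean(axis=0)          # remove centroid for translation invariance
    # Design matrix: the past 'order' samples (circularly indexed) predict z[t].
    X = np.hstack([np.roll(z, k, axis=0) for k in range(1, order + 1)])
    A, *_ = np.linalg.lstsq(X, z, rcond=None)   # (2*order, 2) coefficients
    return A.reshape(order, 2, 2)
```

For a circular boundary the samples are sinusoids, which an AR(2) model reproduces exactly, so the fit residual vanishes there.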
This paper presents a new model-based method for estimating object location. The estimation of the object location is formulated as a weighted least-squared-error problem, and its optimal solution is found using quaternions. The method can automatically de-emphasize the uncertain components of the vectors that represent the positions of features when estimating the object location. This approach can locate objects more accurately than traditional approaches when feature positions have various inaccurate components. The experiments show that the estimate of the object location degrades more slowly using the new method than using the traditional least-squared-error approach.
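A quaternion solution of the weighted least-squared-error location problem can be sketched with Horn's closed-form method: the optimal rotation is the top eigenvector of a symmetric 4x4 matrix built from the weighted cross-covariance of the feature positions. Per-point scalar weights are used here as a stand-in for the paper's per-component de-emphasis, which the abstract does not detail.

```python
import numpy as np

def estimate_pose(model_pts, scene_pts, weights=None):
    """Weighted least-squares rigid pose via the quaternion (Horn) method.
    Returns (R, t) such that scene ≈ R @ model + t."""
    P = np.asarray(model_pts, float)
    Q = np.asarray(scene_pts, float)
    w = np.ones(len(P)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    mp = (w[:, None] * P).sum(axis=0)            # weighted centroids
    mq = (w[:, None] * Q).sum(axis=0)
    S = (w[:, None] * (P - mp)).T @ (Q - mq)     # 3x3 cross-covariance
    # Horn's symmetric 4x4 matrix; its top eigenvector is the quaternion.
    tr = np.trace(S)
    D = S - S.T
    delta = np.array([D[1, 2], D[2, 0], D[0, 1]])
    N = np.empty((4, 4))
    N[0, 0] = tr
    N[0, 1:] = N[1:, 0] = delta
    N[1:, 1:] = S + S.T - tr * np.eye(3)
    vals, vecs = np.linalg.eigh(N)
    q = vecs[:, np.argmax(vals)]                 # unit quaternion (w, x, y, z)
    w0, x, y, z = q
    R = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w0), 2 * (x * z + y * w0)],
        [2 * (x * y + z * w0), 1 - 2 * (x * x + z * z), 2 * (y * z - x * w0)],
        [2 * (x * z - y * w0), 2 * (y * z + x * w0), 1 - 2 * (x * x + y * y)],
    ])
    t = mq - R @ mp
    return R, t
```

Down-weighting a feature shrinks its contribution to the cross-covariance, which is the mechanism by which uncertain measurements are de-emphasized.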
The concepts of an image processing system characterized by a set of algorithmically dedicated functional units coupled by high-bandwidth image buses are presented below. A chip set to perform the 3x3 convolution operation is being designed as a multistage pipeline and will be part of a functional unit for edge detection.
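A software reference for the 3x3 operation such a chip set pipelines might look as follows (a sketch, with zero padding at the borders and the correlation convention, i.e., the kernel is not flipped):

```python
import numpy as np

def conv3x3(image, kernel):
    """3x3 neighborhood operation as used by edge-detection units.
    Borders are zero-padded; correlation convention (kernel not flipped)."""
    img = np.asarray(image, float)
    k = np.asarray(kernel, float)
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    # The nine multiply-accumulate stages a hardware pipeline would unroll.
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out
```

The nine shifted multiply-accumulates map directly onto a multistage pipeline: one stage per kernel tap, with a running partial sum flowing between stages.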
This article puts forward a new method of dynamic search based on a hash table and a heuristic search method. The method can improve the speed of the search operation when full control knowledge about the solution space of the objects is known. An example of using the search method to decode Huffman codes is discussed in detail.
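A hash-table-driven Huffman decoder in the spirit of the article's example can be sketched as follows (a minimal illustration; the article's actual data structures are not given in the abstract):

```python
def build_table(codes):
    """codes: symbol -> bit string (prefix-free). The hash table keyed by
    code words is the 'full control knowledge' the dynamic search exploits."""
    return {bits: sym for sym, bits in codes.items()}

def decode(bitstream, table):
    """Decode a Huffman bit stream with one hash lookup per grown prefix,
    instead of walking a code tree node by node."""
    out, prefix = [], ""
    for bit in bitstream:
        prefix += bit
        if prefix in table:          # hash-table hit ends this search step
            out.append(table[prefix])
            prefix = ""
    assert prefix == "", "trailing bits do not form a complete code word"
    return out
```

Because the code is prefix-free, the first table hit on a growing prefix is always the correct code word, so no backtracking is needed.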
Not all bar code symbols are alike. This paper will discuss the characteristics of bar code symbols which can make or break a bar code laser scanner's performance in a specific application. These characteristics can be broken down into three categories: substrates, "inks" (inks, toners, dyes, etc.), and the light source used to read the symbol. The characteristics of the substrate can be further separated into three groups: the medium used, the scattering properties of the medium, and overlaminates. Intrinsic properties of the medium can include "paper noise" resulting from the grain of paper, metal grain, or a retro-reflective background. Scattering characteristics cover the angular distribution of the scattered light, absolute scatter levels from the substrate, internal scatter, and specularly reflected light. Overlaminates contribute their own assets and liabilities in successfully choosing a scanner that will perform for all your needs. The inks used and the light source utilized work in conjunction with each other in determining the performance of a laser scanner. The spectral characteristics and composition of the ink determine which light source the scanner must employ to successfully interpret the symbol. The three common light sources available in laser bar code scanners are helium-neon lasers, visible laser diodes, and infrared laser diodes. Experimental data will be presented illustrating the optical properties discussed above.
The information content and error tolerance analysis for several one-dimensional bar codes are presented. A method which applies coding theory to design a bar code under a given error tolerance is also presented.
This paper describes a new space-efficient family of thin bar code symbologies which are appropriate for representing small amounts of information. The proposed structure is 30 to 50 percent more compact than the narrowest existing bar code when 12 or fewer bits of information are to be encoded in each symbol. Potential applications for these symbologies include menus, catalogs, automated test and survey scoring, and biological research such as the tracking of honey bees.
We describe a method for encoding information on stacked rows of bar codes. Each row uses one of three sets of codewords. When a single scan crosses data rows the difference in codewords can be used to organize the collected data correctly. The code provides for error detection and error correction both at the level of the codewords and at the whole message level.
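The row-identification mechanism described above can be sketched as follows. The sketch assumes a PDF417-style design in which the three codeword sets ("clusters") are assigned to rows cyclically, so a change of cluster within one scan reveals a crossing into an adjacent row; the abstract does not specify these details, so treat them as assumptions.

```python
def cluster_of(row_index):
    """Each row uses one of three code-word sets, assigned cyclically.
    The set ('cluster') is recoverable from a decoded code word alone."""
    return row_index % 3

def assign_rows(scanned, start_row):
    """Reassemble code words from a single scan that drifted across rows.
    'scanned' is a list of (cluster, value) pairs in scan order; a change
    in cluster signals that the scan crossed into an adjacent row."""
    rows = {}
    row = start_row
    prev = cluster_of(start_row)
    for cluster, value in scanned:
        if cluster != prev:
            # Moving down one row advances the cluster by 1 (mod 3);
            # moving back up retreats it by 1.
            row += 1 if cluster == (prev + 1) % 3 else -1
            prev = cluster
        rows.setdefault(row, []).append(value)
    return rows
```

Because the three sets repeat only every third row, a one-row drift in either direction is always distinguishable from staying put.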
A comparison of requirements and designs for bar code and non-impact printer scanners reveals similarities and differences that may be useful in leading to new solutions for bar code scanner problems. The non-impact printer scanner has been in volume production for over 10 years, successfully achieving low-cost, high-performance, and high-quality targets. Where requirements are found to overlap, solutions already implemented and proven for printer applications may find further application in bar code scanners. Typical technologies used for printing include flying-spot scanners, liquid crystal shutters, scophony scanners, and LED arrays. Of primary concern in measuring figure of merit are such critical parameters as cost, lifetime, reliability, conformance to regulatory standards, environmental ruggedness, power consumption, compactness, insensitivity to orientation, acoustic noise produced, modularity, spot size, depth of field, exposure level and uniformity, data rate, scan length and uniformity, and many more. A comparison of printing technologies, their capabilities, and their limitations with those used in bar code scanners may reveal common problems where we can take advantage of work already completed in similar applications.
Laser-based bar code scanners utilize large-f/# beams to attain a large depth of focus. The intensity cross-section of the laser beam is generally not uniform but is frequently approximated by a Gaussian intensity profile. In the case of laser diodes, the beam cross-section is a two-dimensional distribution. It is well known that the focusing properties of large-f/# Gaussian beams differ from the predictions of ray-tracing techniques. Consequently, analytic modeling of laser-based bar code scanning systems requires techniques based on diffraction rather than on ray tracing in order to obtain agreement between theory and practice. The line spread function of the focused laser beam is generally the parameter of interest, due to the one-dimensional nature of the bar code symbol. Some bar code scanners utilize an anamorphic optical system to produce a beam that maintains an elliptical cross-section over an extended depth of focus. This elliptical beam shape is used to average over voids and other printing defects that occur in real-world symbols. Since the scanner must operate over the maximum possible depth of field, the beam emergent from the scanner must be analyzed in both its near-field and far-field regions in order to properly model the performance of the scanner.
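The diffraction behavior that ray tracing misses is captured by the standard Gaussian-beam propagation law, sketched below. The waist and wavelength values are illustrative, not taken from the paper.

```python
import math

def gaussian_spot_radius(z, w0, wavelength):
    """1/e^2 beam radius a distance z from the waist. This diffraction
    formula, not ray tracing, governs the usable depth of field:
    w(z) = w0 * sqrt(1 + (z / z_R)^2), with Rayleigh range
    z_R = pi * w0^2 / wavelength."""
    z_r = math.pi * w0 ** 2 / wavelength
    return w0 * math.sqrt(1 + (z / z_r) ** 2)

# Illustrative numbers: a 670 nm visible laser diode focused to a
# 0.1 mm waist radius.
w0 = 0.1e-3
lam = 670e-9
z_r = math.pi * w0 ** 2 / lam    # ≈ 47 mm of Rayleigh range
```

Within one Rayleigh range of the waist the spot grows only by a factor of sqrt(2), which quantifies the depth-of-focus trade against spot size that the scanner designer must make.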
Laser bar code scanners produce analog time varying signals which must be converted to logic level signals that can be decoded by a microprocessor. Proper decoding of the symbol is highly dependent on the circuit which converts the analog signal to a logic signal. The performance of a conversion circuit is partially determined by features of the detector signal due to the laser source the bar code symbol and ambient light. Characteristics which influence the selection of a digitizing circuit are presented.
Single Instruction, Multiple Data stream (SIMD) parallel computers are employed in a number of application fields, such as computer vision, that require large computing power and exhibit massive parallelism. When the data dependencies among the processes into which an algorithm has been split are known a priori, ad hoc interconnection topologies give good performance. On the contrary, when the algorithm graph is not known a priori, low communication overhead can be achieved by packet-switching communication. This paper describes how packet switching can be done in SIMD parallel computers.
A multiprocessor architecture is developed for image-to-image algorithms which balances processing power and efficient data distribution. The architecture is based on the MAXbus video data distribution standard and on processing elements containing four NEC Image Pipelined Processors each. A token-ring architecture, a message-passing ring, and a common-bus architecture are analyzed in terms of efficiency and utilization, of which the last appears to be most adequate. Special attention is paid to the stale-data problem which occurs in iterative neighborhood operations.
Syntax analysis is the primary operation of a syntactic pattern recognition (SPR) system. A real-time SPR system would require efficient architectural support for syntax analysis. The process of syntax analysis and the execution of a logic program are closely related. In this paper, we propose a data-driven parallel architecture for syntax analysis based on the principle of parallel execution of logic programs. The proposed architecture is hybrid in the sense that its functional units, unlike those in the traditional fine-grain dataflow model, are coarse-grain macro operators capable of performing unification operations. The scheme for compiling the dataflow graphs eliminates the necessity of any operand-matching unit in the data-driven architecture. All memory requests are tagged with a register identification (similar to the IBM 360/91) to provide efficient hardware support for context switching. The experimental results indicate that the proposed architecture is promising.
An artificial neural network (ANN) fed with optically generated features is applied to IC inspection. The data used are characters with defects in them that model those expected in IC patterns. The ANN is used in training to select the best features; this yields the number of neurons required during defect testing. Simulation results are provided for four types of defects using optical Fourier, wedge-ring (WR) sampled Fourier, and Hough feature spaces.
The use of contextual information has been shown to improve the accuracy of text recognition. Methods for describing the graphical and textual contexts inherent in forms are presented. Graphical context derived from an empty form is used for improved segmentation. Since fields on forms often have a very limited number of acceptable responses (e.g., sex, marital status, etc.), context is also used to limit the scope of the classifier based on field type, location, or content. Fields on forms also exhibit known spatial interrelationships as well as contextual interdependencies, which provide a natural set of mutual constraints for field-specific classifiers.
A new algorithm for handwritten numerals recognition is presented in this paper. It uses geometrical and topological features coded by a syntactic approach and a decision tree classifier to recognize unknown samples.
Most documents include various layout objects, such as headlines, text lines, charts, and tables. In particular, tables are powerful tools that allow large quantities of data to be easily understood. An automated document entry system is needed that can recognize the document layout objects and extract the information from tables. In this paper, an effective table recognition method is described. The proposed method is composed of three steps: (1) document layout structure recognition, (2) table layout structure recognition, and (3) table content recognition. To develop the table layout structure recognition step, we first examined the layout structure of tables in existing documents and classified several common structures. As a result of the examination, we created ten rules and designed a ruled-line and box extraction algorithm based on these rules. The effectiveness of the proposed method has been confirmed in experiments. Accordingly, the proposed method will greatly contribute to the creation of an automated document entry system, allowing faster document recognition and permitting the data in tables to be extracted.
A new approach to the problem of handwritten signature verification is presented. This method exploits the regularity of length and curvature of a signature. Overall signature content at various angles is evaluated to form a slope histogram. Histograms are then passed to a classifier constructed from 10 valid signatures. The performance of the classifier on a data pool of 1000 valid and casually forged signatures is evaluated. In particular, the equal error rate of this approach is shown to average 7% across 9 different subjects. Increases in the classifier error rates are noted when the forger is allowed some a priori knowledge of the target signature.
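A slope histogram of the kind described above can be sketched as follows (a minimal illustration; the bin count and the folding of opposite directions are assumptions, not details from the paper):

```python
import numpy as np

def slope_histogram(points, bins=18):
    """Histogram of stroke direction along a digitized signature contour.
    points: (N, 2) array of successive boundary coordinates. Angles are
    folded into [0, 180) degrees so a stroke and its reverse coincide."""
    pts = np.asarray(points, float)
    d = np.diff(pts, axis=0)                       # successive segment vectors
    ang = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 180.0
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0))
    return hist / hist.sum()                       # normalize for length invariance
```

Normalizing by the total segment count makes the feature insensitive to overall signature length, leaving the angular regularity the classifier relies on.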
In this paper, we focus on some features frequently employed in optical character recognition (OCR), especially in algorithms based on contour analysis. These features are of a topological and/or geometric kind, describing the characters in some way. The results of an experiment to examine the identification and separation power of some of these features are summarized.
The need for making information within paper documents available to computers increases steadily. In this paper, we present a system which is capable of reading and, in a simple way, understanding address blocks of business letters. It is based on optical word recognition (OWR) techniques, uses feature recognition methods based on word shapes, and is largely independent of different fonts and sizes. Even uncertainly recognized words can be identified using a dictionary and a specific verification algorithm. Additionally, recognition accuracy is improved by considering different knowledge layers, such as address syntax and logical dictionaries. Keywords: text recognition, document layout classification, text analysis, pattern recognition, intelligent interfaces
This paper describes a new, efficient information-filing system for a large number of documents. The system is designed to recognize Japanese characters and make full-text searches across a document database. Key components of the system are a small, fully programmable parallel processor for both recognition and retrieval, an image scanner for document input, and a personal computer as the operator console. The processor is built on a bit-serial single instruction, multiple data stream (SIMD) architecture, and all components, including the 256 processor elements and 11 MB of RAM, are integrated on one board. The recognition process divides a document into text lines, isolates each character, extracts character pattern features, and then identifies character categories. The entire process is performed by a single microprogram package down-loaded from the console. The recognition accuracy is more than 99.0% for printed Japanese characters, at a performance speed of more than 14 characters per second. The processor can also be made available for high-speed information retrieval by changing the down-loaded microprogram package. The retrieval process can obtain sentences that include the same information as an inquiry text from the database previously created through character recognition. Retrieval performance is very fast, with 20 million individual Japanese characters being examined each second when the database is stored in the processor's IC memory. It was confirmed that a high-performance yet flexible and cost-effective document-information-processing system can be realized.
Initial simulated optical correlation filter results to locate key words in destination address blocks on machine-printed United States Postal Service (USPS) envelope mail are presented. The filters used are SDF, MACE, G-MACE, and MINACE. These filters provide sharp correlation peaks, allow controlled tolerance of intra-class (pitch, font) variations, and improve false-class rejection (using symbolic filters).
A wide variety of optical distortion-invariant correlation filters for possible use in United States Postal Service (USPS) destination address block location and reading are summarized. These filters allow the entire envelope to be processed in parallel and are selected in a hierarchy, using symbolic encoding, to locate (and read) key words on an envelope. With contextual data, they can locate the address block, the lines and key words in each, and read these words at high rates with excellent P (percentage correct recognition) and Pe (percentage error).
The quality of a manufactured part may depend upon either the chemical integrity or the relative quantity of some characteristic substance, either as part of the final product or as a chemical used somewhere in the process. Examples are traces of oxidation on a machined metal surface, cutting fluid used in machining, water content in a food product, or pigment differences on paperboard. Often the effects are not noticed in process until it is too late, or contaminants in a product are not reliably detected by off-line human visual inspection. Problems of this type are candidates for on-line spectral analysis. In this paper, spectral signature analysis is shown to be a viable tool for many industrial problems. We present several real-world industrial spectral inspection examples using a laboratory prototype spectrometer/statistical signature classifier.
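A minimal statistical signature classifier of the kind paired with such a spectrometer can be sketched as nearest-centroid matching on intensity-normalized spectra. This is a generic stand-in: the abstract does not specify the classifier, and the class names below are illustrative.

```python
import numpy as np

class SpectralSignatureClassifier:
    """Each class is summarized by the mean of its normalized training
    spectra; a sample is assigned to the nearest class mean."""

    def fit(self, spectra_by_class):
        self.labels = list(spectra_by_class)
        self.means = np.stack([
            self._norm(np.asarray(s, float)).mean(axis=0)
            for s in spectra_by_class.values()
        ])
        return self

    @staticmethod
    def _norm(s):
        s = np.atleast_2d(s)
        # Unit-normalize each spectrum so absolute illumination level drops out.
        return s / np.linalg.norm(s, axis=1, keepdims=True)

    def predict(self, spectrum):
        d = np.linalg.norm(self.means - self._norm(spectrum), axis=1)
        return self.labels[int(np.argmin(d))]
```

Normalizing out overall intensity is what lets the classifier respond to spectral shape (oxidation, pigment, moisture) rather than to lighting variation on the line.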
Serial implementations of conventional object recognition techniques such as Hough clustering and interpretation-tree search perform poorly when faced with the combinatorial explosion of the search space, especially in multiple-object scenes with partial occlusion. Parallelization of object recognition techniques is therefore an attractive proposition. The two issues that concern data parallelism are the choice of multiprocessing granularity and the choice of multiprocessing control. This paper shows a direct correspondence between the choice of multiprocessing granularity and the granularity of representation of the image features and the object model features, and also a direct correspondence between the choice of multiprocessing control and the choice of constraint propagation technique. This paper cites two examples of parallelization of object recognition techniques: one based on Hough clustering and the other on interpretation-tree search. Both examples are examined in the light of the two issues that pertain to data parallelism.
This paper reports a method to recognize partially occluded objects using the B-spline representation of the boundary. Curve segments are represented using B-splines, which are piecewise polynomial curves guided by a sequence of control points. The B-spline control points found from the boundary points are then used to extract local features of the curve. A Hough-transform-like method is applied to normalize the two curve boundaries using the extracted local features. The merit of a match is evaluated using the normalized B-spline control points. The ability of the technique to handle partial boundary information is also demonstrated.
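The piecewise polynomial curve "guided by a sequence of control points" can be made concrete with the uniform cubic B-spline evaluation below (a textbook sketch, not the paper's fitting procedure):

```python
import numpy as np

def cubic_bspline_point(ctrl, t):
    """Point on a uniform cubic B-spline at parameter t in [0, n-3],
    where ctrl is an (n, 2) array of control points. Uses the standard
    basis-matrix form of the piecewise cubic."""
    ctrl = np.asarray(ctrl, float)
    i = min(int(t), len(ctrl) - 4)        # index of the active 4-point span
    u = t - i                             # local parameter within the span
    B = np.array([                        # uniform cubic B-spline basis matrix
        [-1, 3, -3, 1],
        [3, -6, 3, 0],
        [-3, 0, 3, 0],
        [1, 4, 1, 0],
    ]) / 6.0
    U = np.array([u ** 3, u ** 2, u, 1.0])
    return U @ B @ ctrl[i:i + 4]
```

Because each point depends on only four control points, an occluded stretch of boundary corrupts only the nearby control points, which is precisely why local B-spline features survive partial occlusion.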
Edge enhancement is typically the first processing step in computer vision applications. Edge enhancement can also be applied to bring out the detail in a scene for human viewing. The Intensity-Dependent Spatial Summation (IDS) vision model is a spatially variant algorithm which provides an increased level of edge enhancement. The IDS model is implemented as a large-window convolution whose coefficients are a function of the input pixel's intensity. IDS produces an output which requires only a zero-crossing detector to find the edges. Because IDS is spatially variant, it has the ability to enhance edges in the dark areas of a scene as well as the bright areas, which in turn produces a visually enhanced output image. Odetics has developed a very high speed, nonlinear, large-area convolver system, the Adaptive Imager (AIS). The Adaptive Imager is able to produce a 16-bit IDS output image at RS-170 video rate (30 frames per second).
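A toy spatially variant summation in the spirit of the IDS model can be sketched as follows. The specific radius law (wide support for dark pixels, narrow support for bright ones) is an illustrative assumption, not the published IDS coefficient function.

```python
import numpy as np

def ids_filter(image, max_radius=3):
    """Toy spatially variant summation: each output pixel is the mean of a
    square neighborhood whose size is a function of the input pixel's
    intensity, so dark regions are summed over a wider support."""
    img = np.asarray(image, float)
    lo, hi = img.min(), img.max()
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            # Bright input -> small support; dark input -> wide support.
            frac = (img[y, x] - lo) / (hi - lo) if hi > lo else 1.0
            r = int(round(max_radius * (1.0 - frac)))
            ys = slice(max(0, y - r), y + r + 1)
            xs = slice(max(0, x - r), x + r + 1)
            out[y, x] = img[ys, xs].mean()
    return out
```

Because the window, and hence the coefficient set, changes per pixel, the operation cannot be a single fixed convolution — which is why a dedicated adaptive convolver like the AIS is needed to reach video rate.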