Machine Vision and the field of Artificial Intelligence are both new technologies which have evolved mainly within the past decade with the growth of computers and microchips. Although research continues, both have emerged from the experimental state to industrial reality. Today's machine vision systems are solving thousands of manufacturing problems in various industries, and the impact of Artificial Intelligence, and more specifically the use of "Expert Systems" in industry, is also being realized. This paper will examine how the two technologies can cross paths and how an Expert System can become an important part of an overall machine vision solution. An actual example of the development of an Expert System that helps solve machine vision lighting and optics problems will be discussed. The lighting and optics Expert System was developed to assist the end user in configuring the "front end" of a vision system and thereby solve the overall machine vision problem more effectively, since lack of attention to lighting and optics has caused many failures of this technology. Other areas of machine vision technology where Expert Systems could apply will also be discussed.
While moire interferometry has been recognized as a promising technique for 3-D automated inspection, it has not been widely used because designing a moire interferometer for a given inspection task can be difficult. Here, mathematical models of the projection moire process are developed which permit the design of inspection systems. The resulting equations have been combined with numerical optimization techniques to yield software which optimizes a system for a given task. An application of the technique with experimental verification is presented.
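The kind of design relation such models capture can be illustrated with the standard projection moire contour interval, dz = p / (tan a + tan b), for grating pitch p and illumination/viewing angles a and b. The sketch below assumes this textbook relation rather than the authors' specific equations, and pairs it with a brute-force search of the sort a numerical optimizer would refine:

```python
import math

def contour_interval(pitch_mm, illum_deg, view_deg):
    """Depth change per moire fringe for a projection moire setup
    (standard textbook relation, not necessarily the authors' model):
    dz = p / (tan(alpha) + tan(beta))."""
    return pitch_mm / (math.tan(math.radians(illum_deg)) +
                       math.tan(math.radians(view_deg)))

def best_design(target_dz_mm, pitches, angles):
    """Brute-force search (view angle fixed at 0 for simplicity) for the
    (pitch, illumination angle) pair whose contour interval comes closest
    to the target depth sensitivity."""
    return min(((p, a) for p in pitches for a in angles),
               key=lambda pa: abs(contour_interval(pa[0], pa[1], 0.0)
                                  - target_dz_mm))
```

A real design code would replace the grid search with a constrained optimizer, but the objective has the same shape.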
We have developed a visual inspection machine for solder joint defects of SMDs (Surface Mount Devices) mounted on PCBs (Printed Circuit Boards). The change in the intensity of the reflected light obtained by illuminating the soldered surface from different incident angles depends on the gradient of the soldered surface. We generate an image that represents this change of intensity and analyze it to inspect for solder joint defects. In addition, we report on a program that automatically generates the NC data for inspection.
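The change-of-intensity image described above can be sketched as follows; taking the pixelwise range across frames is an assumed simplification for illustration, not the authors' exact algorithm:

```python
def intensity_change_image(frames):
    """Given several grayscale frames of the same solder joint, each lit
    from a different incident angle, build one image whose pixels record
    how much the reflected intensity changed across angles.  Flat pad
    regions change little; sloped solder fillets change a lot."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[max(f[r][c] for f in frames) - min(f[r][c] for f in frames)
             for c in range(cols)]
            for r in range(rows)]
```

Defect analysis would then run on this composite image rather than on any single-angle frame.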
There is a need in the aircraft component industry for rigorous testing of elastomeric seals for aircraft applications. Some of the seals are situated inside the jacks, where it is difficult to test them in their actual operating environment. This paper describes some aspects of a fibre-optics and CCD-camera test method which allows seals to be visually monitored in the jacks while testing of other seal parameters is in progress. The experimental results show the deformed images, which can be digitized to provide a controlled and automatic test procedure.
We analyze the characteristics of a synthetic sensor comparable, with respect to field width and resolution, to the primate visual system. We estimate that 150 pixels are sufficient using a logarithmic sensor geometry and demonstrate that this calculation is consistent with known characteristics of biological vision, e.g., the number of fibers in the optic nerve. To obtain the field width and resolution of the primate eye with a uniform sensor requires several orders of magnitude more pixels than the number estimated for the comparable log sensor. Another interesting observation is that the field width and resolution of a conventional 512x512 sensor can be obtained with around 5000 pixels using the log geometry. We conclude with consideration of the prospects for achieving human-like performance with contemporary VLSI technology and briefly discuss progress on space-variant VLSI sensor design.
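A back-of-envelope version of the pixel-count comparison can be sketched as follows, assuming a log-polar layout in which rings grow geometrically so that pixels stay roughly square; the constants are illustrative, not the paper's:

```python
import math

def log_polar_pixels(field_radius, finest_res, pixels_per_ring=64):
    """Approximate pixel count of a log-polar sensor covering a field of
    the given radius at the given finest (foveal) resolution.  Ring radii
    grow by a constant factor so each cell stays roughly square."""
    growth = 1.0 + 2.0 * math.pi / pixels_per_ring  # radial step per ring
    rings = math.ceil(math.log(field_radius / finest_res) / math.log(growth))
    return rings * pixels_per_ring

def uniform_pixels(field_radius, finest_res):
    """Pixel count of a uniform sensor with the same field and finest
    resolution everywhere (circular field assumed)."""
    return math.ceil(math.pi * (field_radius / finest_res) ** 2)
```

With a 100:1 ratio of field radius to finest resolution, the log sensor needs a few thousand pixels where the uniform sensor needs tens of thousands, which is the flavor of comparison the abstract makes.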
Neural networks are applied to the computer vision recognition problem using a hierarchical approach. The object is categorized and then decomposed using a pyramid structure that proceeds from the category to the component level in successive steps. The method improves the recognition procedure by utilizing the global recognition capabilities of neural networks on separate visual segments. The pyramid structure creates a top-down approach for the recognition process.
Concurrent processing is one approach to high-speed image analysis, and transputer systems are flexible tools for parallel processing. Based on examples of the Fast Hartley Transform, convolution, and contour tracking, the suitability of commercial transputer systems for industrial image processing applications is demonstrated. The main emphasis is put on the interfacing to the frame grabber and on different potential hardware and software configurations. Advantages, restrictions, and useful applications of current transputer systems for image analysis are discussed at both the hardware and software level. Experiences and realized speedup factors will be presented. THE TRANSPUTER A typical member of the transputer product family is a single chip containing processor, memory, and communication links which provide point-to-point connections between transputers. A transputer can be used in a single-processor system or in networks to build high-performance concurrent systems. A network of transputers is easily constructed using point-to-point communication, which has many advantages over multiprocessor buses. Transputers can be programmed in high-level languages, but to gain most benefit from the transputer architecture the whole system should be programmed in OCCAM, a high-level language which supports parallel processing. The features of the IMS T800 transputer (see figure 1) are, in detail: 32-bit architecture; 33 ns internal cycle time; 30 MIPS (peak) instruction rate; 4 Mflops (peak) floating-point rate; 64-bit on-chip floating point unit; 4 Kbytes on-chip static RAM; 120 Mbytes/sec sustained data rate to internal memory; 4 Gbytes directly addressable external memory; 40 Mbytes/sec sustained data rate to external memory; four serial links at 5/10/20 Mbits/sec; bidirectional data rate of 2.4 Mbytes/sec per link. 76 / SPIE Vol. 1386 Machine Vision Systems Integration in Industry (1990)
The two most common approaches to image processing today are software-based systems, which are flexible but very slow unless run on very expensive computer hardware, and systems using special-purpose hardware, which can be very fast but are typically very inflexible and fairly expensive. This paper presents an intermediate approach: the use of inexpensive electronically-programmable logic devices (EPLDs) in appropriate architectures to do a wide range of image processing operations, thereby providing both the speed of hardware-based systems and most of the flexibility of software-based systems. Since EPLDs can be reprogrammed quickly (from associated ROMs), a sequence of operations can be performed in near-real-time by loading a sequence of EPLD configuration files one after another. This paper illustrates these ideas by showing the internal programming needed for many common image processing operations as well as appropriate system architectures.
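The reprogramming idea can be modeled in software as swapping lookup tables, one "configuration" per operation; the sketch below is an illustrative software model of the concept, not an actual EPLD design:

```python
def make_lut(op):
    """Build a 256-entry lookup table, analogous to loading a
    reprogrammable logic device from ROM with one configuration per
    operation (names and operations here are illustrative)."""
    if op == "invert":
        return [255 - v for v in range(256)]
    if op == "threshold":
        return [255 if v >= 128 else 0 for v in range(256)]
    raise ValueError("unknown operation: %s" % op)

def apply_lut(pixels, lut):
    """One pass over the pixel stream -- the 'hardware' just indexes
    the currently loaded table for each pixel."""
    return [lut[v] for v in pixels]
```

Chaining operations then amounts to loading one table after another, which is the near-real-time sequencing the abstract describes.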
Traditional Cartesian coordinates are intrinsic to most CCD imagers, video displays, and image processing equipment. While Cartesian coordinates are efficient for image translation, they are poorly suited for the rotation, zoom, and perspective image transformations which regularly occur in robot vision. This paper describes a hardware system which remaps incoming video to user-selectable coordinates at video frame rates. The system reroutes the incoming (x, y) data, accumulating pixel contents into the new locations. Imagery is thereby transformed into coordinates which can significantly simplify subsequent geometric image transformations. The system is fabricated on a board which plugs into an IBM PC/AT backplane for immediate application.
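The per-pixel rerouting the board performs can be sketched in software; the rotation case below uses nearest-cell accumulation, which is an assumption for illustration rather than the board's documented behavior:

```python
import math

def remap_rotate(image, angle_deg):
    """Forward-remap an image into rotated coordinates, accumulating
    each incoming pixel into its new cell -- a software sketch of the
    kind of per-pixel rerouting done in hardware at frame rate."""
    rows, cols = len(image), len(image[0])
    a = math.radians(angle_deg)
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    out = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # rotate (x, y) about the image center
            u = cx + (x - cx) * math.cos(a) - (y - cy) * math.sin(a)
            v = cy + (x - cx) * math.sin(a) + (y - cy) * math.cos(a)
            ui, vi = round(u), round(v)
            if 0 <= vi < rows and 0 <= ui < cols:
                out[vi][ui] += image[y][x]  # accumulate into new location
    return out
```

The same loop structure serves for zoom or log-polar targets by substituting a different (u, v) mapping.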
An efficient computation of 3D workspaces for redundant manipulators is based on a "hybrid" algorithm between direct kinematics and screw theory. Direct kinematics enjoys low computational cost but needs edge detection algorithms when workspace boundaries are needed. Screw theory has exponential computational cost per workspace point but does not need edge detection. Screw theory allows computing workspace points in prespecified directions, while direct kinematics does not. Applications of the algorithm are discussed.
An algorithm for finding characters in complicated document images is presented. We assume that all characters appear in strings which contain at least two characters. Characters overlapped by illustrations, characters touched by illustrations or other characters, and characters divided into fragments can be found. In the first step, initial character probabilities are attached to pixels. In the second step, they are updated from neighboring character probabilities by a relaxation method. This algorithm is shown to be faster than a simple template matching algorithm for typical documents. An application to a system that automatically generates the data for an electronic catalogue of automotive parts from images is introduced.
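One relaxation pass of the kind described in the second step might be sketched as follows, using a simplified neighbour-averaging update assumed for illustration (the paper's exact rule may differ):

```python
def relax(prob, weight=0.5):
    """One relaxation pass: each pixel's character probability is pulled
    toward the mean probability of its 4-neighbours, so isolated strong
    responses fade while coherent character regions reinforce each other."""
    rows, cols = len(prob), len(prob[0])
    out = [row[:] for row in prob]
    for r in range(rows):
        for c in range(cols):
            nbrs = [prob[r + dr][c + dc]
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            out[r][c] = ((1 - weight) * prob[r][c]
                         + weight * sum(nbrs) / len(nbrs))
    return out
```

Iterating this update until the probabilities stabilize, then thresholding, yields candidate character pixels.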
In order to construct a machine-vision system which is robust in the face of variations in image lighting, arrangements of objects, viewing parameters, etc., it is helpful to model the vision problem as a state-space search problem. The state-space search procedure dynamically determines an optimal sequence of image-processing operators to classify an image or to put its parts into correspondence with a model or set of models. The optimal goal state is the one with the least information distortion. The critical problem in this approach is how to compute information distortion. Details of the design of cost functions in terms of information distortion are described. A vision system, VISTAS, has been constructed under the state-space search model, and the principles used in constructing the system are presented.
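The search framework can be sketched as a uniform-cost search over operator sequences; the states, operators, and distortion costs below are placeholders illustrating the mechanism, not the VISTAS cost functions:

```python
import heapq

def best_operator_sequence(start, operators, max_len=4):
    """Uniform-cost search for the operator sequence with the least total
    information distortion.  Each operator maps a state to a pair
    (new_state, distortion); states must be hashable and comparable."""
    frontier = [(0.0, start, [])]
    best = (float("inf"), [])
    seen = {}
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if len(path) >= max_len:
            continue
        for name, op in operators.items():
            new_state, distortion = op(state)
            new_cost = cost + distortion
            if seen.get(new_state, float("inf")) <= new_cost:
                continue  # already reached this state more cheaply
            seen[new_state] = new_cost
            if new_cost < best[0]:
                best = (new_cost, path + [name])
            heapq.heappush(frontier, (new_cost, new_state, path + [name]))
    return best
```

In a real system the state would carry the processed image and model correspondences, and the distortion would come from the cost functions the paper designs.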
A digital image processing inspection system is under development at Oak Ridge National Laboratory that will locate image features on printed material and measure distances between them to accuracies of 0.001 in. An algorithm has been developed for this system that can locate unique image features to subpixel accuracy. It is based on a least-squares fit of a paraboloid function to the surface generated by correlating a reference image feature against a test image search area. Normalizing the correlation surface makes the algorithm robust in the presence of illumination variations and local flaws. Subpixel accuracies of better than 1/16 of a pixel have been achieved using a variety of different reference image features.
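A separable simplification of the paraboloid fit, using a three-point parabola along each axis of the correlation surface, can be sketched as follows (the full least-squares paraboloid fit uses the whole peak neighborhood, but the idea is the same):

```python
def subpixel_peak(corr, r, c):
    """Refine an integer correlation peak at (r, c) to subpixel accuracy
    by fitting a parabola through the peak and its two neighbours along
    each axis.  For samples m1, p0, p1 at offsets -1, 0, +1 the vertex
    lies at 0.5 * (m1 - p1) / (m1 - 2*p0 + p1)."""
    def offset(m1, p0, p1):
        denom = m1 - 2.0 * p0 + p1
        return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom
    dr = offset(corr[r - 1][c], corr[r][c], corr[r + 1][c])
    dc = offset(corr[r][c - 1], corr[r][c], corr[r][c + 1])
    return r + dr, c + dc
```

Normalized correlation values feed straight into this refinement, which is what makes the stated 1/16-pixel accuracies plausible.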
The article describes an extension to the AI language Prolog and its implementation on an image processing workstation. The resulting system allows the integration of AI programming, image processing, expert systems and robotic/device control in a user friendly environment. The language itself is capable of driving a range of image processing devices concurrently. The system architecture also permits processing of information from a number of sensing devices, all of which are controlled via Prolog in the host computer.
Although Automated Visual Inspection has been applied in many areas, the food industry has so far been unable to take full advantage of this technology due to the non-deterministic nature of product specifications. This paper discusses some of the problems faced by food industries including foreign body detection, shape and decoration analysis. Possible solutions using different visual sensing methods such as UV illumination, X-ray, Gamma ray and infra-red are assessed. The paper also discusses the application of an intelligent vision system, based on Prolog, for the inspection of complex products such as foodstuff and plants.
The paper will provide an overview of the challenges facing a user of automated visual imaging ("AVI") machines and the philosophies that should be employed in designing them. As manufacturing tools and equipment become more sophisticated, it is increasingly difficult to maintain an efficient interaction between the operator and the machine. The typical user of an AVI machine in a production environment is technically unsophisticated, and operator and machine ergonomics are often a neglected or poorly addressed part of an efficient manufacturing process. This paper presents a number of man-machine interface design techniques and philosophies that effectively solve these problems.
In a flexible assembly cell, a vision system for part identification is required to be both fast and robust. The first requirement is met by using a verification system that generates hypotheses based on outside information and common object positions, and accepts them if sufficient support can be found. However, this system fails for all exceptional cases, which require a more general system. For this purpose a recognition system is used which tries to find the best matching model from the set of all possible models. The appropriate switching from verification to recognition is done by a control procedure which evaluates the following criteria. First, the error rate of the verification system is minimised: cases rejected by the verification system lead to an activation of the recognition system, and the error rate of recognition is related to the performance of the rest of the robot cell. Second, the execution time is optimised. This means that if the expected execution time of verification will exceed that of recognition, recognition should be activated. This decision can be made both in advance and during verification; in general, however, verification should be faster. Verification of hypotheses is done in coarse-to-fine processing. The coarse step is done on information from an overlooking camera and uses simple binary images for speed purposes. The resulting hypothesis consists of an identity and 3D position and orientation information. Further
The flat plate project is a pilot study for the creation of intelligent robotic systems. In these systems vision robotics and artificial intelligence aspects have to be combined. The final goal is to give a robot the capacity to learn to solve the problem represented by a toy for a two year old child. This toy is called the ''Holle Bol'' in Dutch. It consists of a plastic ball with differently shaped holes in it and a number of small blocks that must be put into the corresponding holes. To investigate the problems associated with this project first a simplification of the problem has been studied. In this case a flat plate with differently shaped holes is used. The paper describes the results of the flat plate pilot project.
An approach to syntactic recognition (SR) using sampled boundary distances (SBD) is studied. The SBD is an ordered collection of samples of the distance from a major axis to points located on the boundary of the object image. With the SRSBD approach, an object that undergoes many affine and non-affine transformations can be recognized. The affine transformations include translation, rotation, scaling, and stretching (along and/or perpendicular to the major axis). The non-affine transformations include (i) an additive transformation applied to the distance of all the boundary points (perpendicularly) from the major axis, (ii) an additive transformation applied to the SBD only, and (iii) a random transformation of all the boundary points except the points used to measure the sampled boundary distances, provided that the major axis is unchanged. Therefore, the SRSBD can be used to recognize an object at various locations, orientations, and distances from the camera, and various objects of the same family. The conversion of the SBD into an invariant string representation is developed. The use of Earley's parsing algorithm for recognition of the string representation of the SBD is presented. The use of the SRSBD to recognize a partially obscured object or object family and to detect the circularity of a partially obscured circle is presented. The experimental results are presented. With the SRSBD the following problems can be avoided: the primitive selection problem, the starting point selection problem, and the problem of the noise-sensitive
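A simplified reading of the SBD definition can be sketched as follows, assuming the object has already been rotated so its major axis lies along the x-axis (the paper's exact sampling scheme may differ):

```python
def sampled_boundary_distances(boundary, n_samples=8):
    """Ordered sampled boundary distances: at n_samples evenly spaced
    stations along the major axis (here the x-axis), record the largest
    perpendicular distance from the axis to the boundary.  boundary is a
    list of (x, y) points."""
    xs = [p[0] for p in boundary]
    x0, x1 = min(xs), max(xs)
    step = (x1 - x0) / n_samples
    samples = [0.0] * n_samples
    for x, y in boundary:
        i = min(int((x - x0) / step), n_samples - 1) if step else 0
        samples[i] = max(samples[i], abs(y))
    return samples
```

Quantizing each sample into a symbol alphabet then yields the invariant string representation that the Earley parser consumes.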
Industrial applications of vision systems for inspection very often require high-speed performance. Using an application accelerator in combination with a PC-based vision system, a speedup factor of about 10-15 can be realized. Algorithms using the instruction set of the array processor lead to high inspection rates and reliability; contour filtering and gradient processing are thereby performed in a short time. We will discuss the given inspection task, the algorithms used, and the integration of the application accelerator.
In today's semiconductor manufacturing industry it is becoming more difficult to utilize the human operator as the visual process control vehicle so as to cost-effectively achieve statistical process defect goals. Machine vision continues to make inroads, cost-effectively improving manufacturing visual inspection and driving and sustaining desired process control capabilities. This paper will attempt to survey the attributes of automatic visual process monitoring within the Motorola Semiconductor Assembly Manufacturing environment, covering such factors as defect control objectives, inspection requirements, vision engine platform, application engineering, and long-term application maintenance, all of which contribute to the cost-effectiveness of the automatic visual inspection task.
This paper describes a fluorescent lamp inspection system designed for a major North American manufacturer. Operating 24 hours a day it inspects up to 16 lamp bases per hour for nine common manufacturing defects using parallel processing hardware and morphology algorithms. Hardware and software techniques used to build a robust and reliable system relatively insensitive to variations in lighting conditions and lamp appearance are described.
This paper addresses the problems of automatically inspecting gross features of machined parts using three-dimensional depth data provided from a stereo vision system. The inspection strategies described are mainly concerned with verifying geometric tolerances to typical engineering requirements. That is to say: verifying the presence and the dimensions of features and measuring feature relationships. This paper discusses how depth data may be processed to produce relevant features and also how geometric models of the parts may store tolerance information and be interrogated to perform inspection automatically with the described vision system. Results are presented using real data provided by the vision system.
The handwritten signature is the most widely employed source of secure identification in the United States, especially for cashing checks and verifying credit card transactions. Currently, all signature verification is based on visual inspection by a teller or a store clerk. Previous successful techniques for forgery detection have primarily been on-line techniques. This paper describes an off-line technique that can spot forgeries with an accuracy of 94.44% while accepting 92.5% of genuine signatures.
A highly reliable personal verification system using fingerprint images has been developed. Various image enhancement techniques including a directional spatial filter and local thresholding for each point are applied to improve the quality of noisy and unstable images and to extract the feature data as accurately as possible. This process is performed by our special hardware for rapid image processing operations. In the matching process we propose a new algorithm combining the coarse matching of ridge direction data with the precise matching of minutia data to achieve registration efficiently. This system provides high reliability and a real-time response at reasonable cost.
This article describes the application of a CAD/CAM system. The system automatically forms a corresponding last style that users like, by interactive graphic, fuzzy, and knowledge-base methods. Finally, the article introduces this technology in detail.
Cost-effective development of machine vision applications requires the availability of application development tools which provide a consistent interface for the application engineer. To be effective, the tools must be consistent across multiple platforms, where the platforms can vary with respect to the image processing hardware, the host computer, and the operating system. This paper describes a programming system which addresses these requirements. The independence from specific image processing platforms is achieved with application development tools which utilize an interface through which all references made to the system are symbolic. By its nature, the symbolic referencing mechanism prohibits the use of any hardware-specific references, thereby maintaining hardware independence.
The Canadian contribution to the International Space Station Freedom is the Mobile Servicing System (MSS), which will consist of numerous robotic elements that will support the assembly, maintenance, and servicing of the Space Station. An important function of the MSS's vision system is object identification. Some preliminary research has been carried out to identify objects using 2D and 3D shape features, and an alternative to these complex approaches is suggested by utilizing Optical Character Recognition (OCR) and/or Bar Code Technology (BCT) in labelled object recognition. This object identification scheme will assist the MSS in performing a wide variety of tasks such as automatic payload capturing, berthing, world map construction, and collision avoidance. The essential need for fiducial marking of various parts of the MSS as well as payloads is established for the purpose of reliable object identification and verification. The approach is considered feasible because the Space Station environment will be highly structured, with suitable markings (such as bar codes) to facilitate the recognition of various parts and end effector tools. The identification and verification of payloads, however, will come prior to the execution of automated operational scenarios. A system based on the recognition of a symbology label consisting of OCR, BCT, and four circular targets was recommended, and the design of the symbology label is discussed.
Integrated machine vision software that combines the ease of use of a menu-driven interface with the power of a programming environment enables engineers, scientists, doctors, and other laboratory professionals to prototype and implement full-featured image processing and analysis systems with a minimum investment in time and money. A menu-driven interface permits rapid prototyping with a minimum investment in either machine vision or computer training; adding a programming environment provides the flexibility to adapt when necessary. In addition, such software runs on a standard hardware platform, provides tools for analyzing images both interactively and automatically, and offers data gathering and accessibility. New Approach to Machine Vision Application Development Application implementation of machine vision has finally come of age. No longer does each application require a highly trained vision expert to develop and install machine vision and imaging systems. Also gone are the months of software development for implementing even the simplest of applications. All this has been changed by the advent of standard computer machine vision platforms and easy-to-use yet fully functional integrated software tools which allow engineers, scientists, doctors, and other professionals to define, prototype, and implement complete applications. This development of easy-to-use software tools running on standard platform hardware benefits engineering, medical, and other scientific professionals in two ways. First, standard platform hardware provides them with a familiar environment and a multitude of available peripheral hardware to simplify