Quality control through real-time process improvement and on-line inspection has become a key element in establishing efficient production lines. Machine vision solutions to these quality control issues are becoming increasingly popular as machine vision systems become more cost effective. This paper discusses how a machine vision system is incorporated into a paper product production line to provide 100 percent inspection and real-time process control feedback. Seven dimensional measurements are performed on an 8.5 by 11 inch, 3-ring binder divider sheet, to an accuracy within 0.002 inches. The vision system hardware engine comprises a standard multimedia personal computer enhanced with two frame grabber boards and Sherlock software from Imaging Technology. Two progressive scan CCD cameras provide high-speed image acquisition and adequate image resolution. Using Microsoft Visual Basic, the operator is provided with a single graphical user interface with real-time graphing of the measurement data. Visual Basic runs in the foreground, controlling the user interface, while Sherlock operates in the background, performing all the machine vision tasks. By viewing the real-time graphs generated in Visual Basic, the operator is able to make process improvements, in real time, at line speeds in excess of 200 sheets per minute.
By utilizing images calculated on the fly as a filter, improvements in the real-time performance of object measurement and feature extraction can be achieved for automated aerial photograph analysis. The process requires the rapid calculation of images from an existing terrain database. The calculated images are then compared to incoming sensor data. The difference between the calculated and sensor images is then utilized as a parallel error signal for updating the state of knowledge of the objects and features measured. The advantage of this image feedback technique is that the calculation of sensor-realistic perspective views from parameterized object models is easier than the direct interpretation of complex images. The feedback technique effectively eliminates what is already known from the measurement signal and thereby reduces the amount of data which must be processed by pattern recognition techniques by orders of magnitude. The paper presents the mathematical description of the image feedback technique and estimates the update frame rates which can be expected for real-time applications. We then discuss the incremental software development approach and the system design we are using for implementing the technique. The state of the current system is presented along with a discussion of experiments and experiences gained in building large-scale, high-resolution terrain databases. The paper concludes by defining future research areas that need to be addressed for improving performance and accuracy.
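The core of the feedback loop described above can be sketched in a few lines: render a predicted image from the current object state, compare it with the sensor image, and use the residual to update the state. Everything in this sketch (the toy square renderer, the local state search, the image size) is a hypothetical stand-in for the paper's terrain-database renderer and update rule.

```python
import numpy as np

def render(pos, size=32):
    """Hypothetical renderer: a bright square at integer position `pos`
    on a dark background, standing in for a perspective view computed
    from a terrain/object database."""
    img = np.zeros((size, size))
    img[pos[0]:pos[0] + 4, pos[1]:pos[1] + 4] = 1.0
    return img

def feedback_update(pos, sensor, size=32):
    """One image-feedback iteration: compare the image predicted from the
    current state with the sensor image, and keep whichever small state
    perturbation best reduces the residual energy."""
    best, best_err = pos, np.sum((sensor - render(pos, size)) ** 2)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            cand = (pos[0] + dr, pos[1] + dc)
            err = np.sum((sensor - render(cand, size)) ** 2)
            if err < best_err:
                best, best_err = cand, err
    return best, best_err

sensor = render((10, 14))   # "incoming sensor data"
state = (8, 12)             # imperfect prior knowledge of the object state
for _ in range(6):          # iterate until the residual vanishes
    state, err = feedback_update(state, sensor)
print(state, err)           # converges to the true position (10, 14), err 0.0
```

Because the residual carries only what the model does not yet explain, the update touches a few parameters instead of the full image, which is the data-reduction argument made above.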
In this paper, we describe a human face recognition system based on an incoherent optical correlator. A liquid crystal display (LCD) panel is used as the real-time spatial light modulator. A set of eigenfaces, extracted from 200 training images, is used as image filters in the reference plane of the correlator. Since face images can be approximated by different linear combinations of relatively few eigenfaces corresponding to large eigenvalues, they can be efficiently distinguished from one another by a small set of weight coefficients, derived by projecting the input image onto every selected eigenface. Recognition can be performed by a simple minimum-distance decision rule. We use the optical correlator as the feature extractor and the optical correlation results between the input image and the eigenfaces as the features. By using the optical correlation operation instead of the projection operation, many more features can be extracted in parallel. By using the eigenfaces as image filters instead of the original images in the training set, the number of optical correlation operations can be greatly reduced compared to the original number of training images.
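The projection-plus-minimum-distance rule can be illustrated numerically. The eigenfaces, gallery coefficients, and image size below are invented stand-ins; the point is that each feature is an inner product (the correlation value at zero shift, which the correlator computes optically and in parallel), and classification reduces to a nearest-neighbour test on the coefficient vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 5 orthonormal "eigenfaces" (flattened 16x16)
# and 3 enrolled identities, each stored as its 5 projection coefficients.
basis, _ = np.linalg.qr(rng.standard_normal((256, 5)))
eigenfaces = basis.T.reshape(5, 16, 16)
gallery = rng.standard_normal((3, 5))

def features(image):
    """Project the input onto every eigenface; each inner product equals
    the optical correlation value at zero shift, so all features are
    available from the correlator in parallel."""
    return np.array([np.sum(image * ef) for ef in eigenfaces])

def recognize(image):
    """Minimum-distance decision rule over the weight coefficients."""
    dists = np.linalg.norm(gallery - features(image), axis=1)
    return int(np.argmin(dists))

# A probe synthesized from identity 1's coefficients is recognized as 1.
probe = np.tensordot(gallery[1], eigenfaces, axes=1)
print(recognize(probe))  # 1
```

Only 5 correlations are needed per input here, versus one per training image if the raw training set were used as filters, which is the reduction claimed above.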
Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high-performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The symptom characteristic of this problem is the failure of the system's performance to scale as more processors are added. The problem becomes exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
Mathematical morphology has proven to be useful for solving a variety of image processing problems and plays a key role in certain time-critical machine vision applications. The large computation requirement of morphology poses a challenge for microprocessors to support in real time, and hardwired solutions such as ASICs and EPLDs have often been necessary. This paper presents a method to implement binary and gray-scale morphology algorithms efficiently on programmable VLIW mediaprocessors. Efficiency is gained by (1) mapping the algorithms to the mediaprocessor's parallel processing units, (2) avoiding redundant computations by converting the structuring element into a unique lookup table, and (3) minimizing the I/O overhead by using an on-chip programmable DMA controller. Using our approach, a 'C' implementation of gray-scale dilation takes 7.0 ms and binary dilation takes 1.2 ms on a 200 MHz MAP1000 mediaprocessor, more than 35 times faster than that reported for general-purpose microprocessors. With performance comparable to ASIC implementations and the flexibility of a programmable processor, real-time image computing with mediaprocessors will become more widely used in machine vision and other imaging applications in the future.
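The lookup-table idea for binary morphology can be sketched as follows: every possible 3x3 binary neighbourhood packs into a 9-bit code, so a 512-entry table precomputed from the structuring element replaces the per-pixel loop over structuring-element taps. This plain-Python sketch ignores the VLIW mapping and DMA double-buffering that the paper's timings depend on.

```python
import numpy as np

def make_dilation_lut(se):
    """Precompute a 512-entry table: for every possible 3x3 binary
    neighbourhood (packed into 9 bits), store the dilation output.
    The one-off table removes the redundant per-pixel SE loop."""
    se_bits = np.flip(se).flatten()          # dilation reflects the SE
    lut = np.zeros(512, dtype=np.uint8)
    for code in range(512):
        nb = np.array([(code >> k) & 1 for k in range(9)])
        lut[code] = 1 if np.any(nb & se_bits) else 0
    return lut

def binary_dilate(img, lut):
    """Apply the LUT to every packed 3x3 neighbourhood of the image."""
    h, w = img.shape
    pad = np.pad(img, 1)
    out = np.zeros_like(img)
    offsets = [(i, j) for i in range(3) for j in range(3)]
    for r in range(h):
        for c in range(w):
            code = 0
            for k, (dr, dc) in enumerate(offsets):
                code |= int(pad[r + dr, c + dc]) << k
            out[r, c] = lut[code]
    return out

se = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=np.uint8)  # cross SE
img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 1
result = binary_dilate(img, make_dilation_lut(se))
print(result)  # the single centre pixel grows into a cross
```

On a mediaprocessor the packed codes for many pixels would be formed and looked up in parallel; the table itself is the same.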
The 'a trous' algorithm represents a discrete approach to the classical continuous wavelet transform. As in the fast wavelet transform, the input signal is analyzed using the coefficients of a properly chosen low-pass filter, but in contrast to the latter, no concluding decimation step follows. Examples of practical applications can be found in the field of cosmology, for studying the formation of large-scale structures of the Universe. In this paper we develop parallel algorithms on different MIMD architectures for the 2D 'a trous' decomposition. We implement the algorithm on several distributed-memory architectures using the PVM paradigm and on an SGI POWERChallenge using a parallel version of the C programming language. Finally, we investigate experimental results obtained on both of them.
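A minimal 1D version of the 'a trous' decomposition may clarify the scheme (the 2D case applies the same filtering along rows and then columns). The B3-spline kernel used here is a common choice in the astronomical literature, assumed for illustration rather than taken from the paper.

```python
import numpy as np

def a_trous(signal, levels, h=(1/16, 4/16, 6/16, 4/16, 1/16)):
    """Undecimated ('with holes') wavelet decomposition of a 1D signal.
    At scale j the low-pass kernel is applied with its taps spread
    2**j samples apart; since no decimation follows, every plane keeps
    the original length. Returns the wavelet (detail) planes plus the
    final smooth residual; their sum reconstructs the input exactly."""
    c = np.asarray(signal, dtype=float)
    planes = []
    for j in range(levels):
        step = 2 ** j
        smooth = np.zeros_like(c)
        for k, coeff in enumerate(h):
            shift = (k - len(h) // 2) * step
            smooth += coeff * np.roll(c, shift)   # periodic boundary
        planes.append(c - smooth)                 # detail at scale j
        c = smooth
    planes.append(c)
    return planes

x = np.sin(np.linspace(0, 4 * np.pi, 64)) \
    + np.random.default_rng(1).normal(0, 0.1, 64)
planes = a_trous(x, levels=3)
print(np.allclose(sum(planes), x))  # True: the planes sum back to the signal
```

The absence of decimation is what makes the transform easy to parallelize by data decomposition: every scale works on the full-length signal, so each processor can own a contiguous block plus a small halo.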
Cellular automata have long been known for their self-organizational properties. In contrast to physical systems, a cellular automata system can eventually reach an organized state even when starting from complete randomness. Recently, cellular automata have been applied in image analysis to identify damage in large structures. In this paper it is shown how cellular automata can be implemented in parallel and used to analyze images and identify flaws and defects.
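As an illustration of the self-organizing behaviour, the sketch below runs a simple majority-vote automaton over a noisy binary image: isolated speckle dies out while a coherent defect region persists. The rule is a generic example chosen for this sketch, not the specific automaton used for structural damage detection.

```python
import numpy as np

def majority_step(grid):
    """One synchronous CA update: each cell takes the majority state of
    its 3x3 neighbourhood (including itself). Isolated noise pixels die
    out while coherent flaw regions survive and consolidate; each cell
    depends only on its neighbours, so the update parallelizes trivially."""
    pad = np.pad(grid, 1)
    counts = sum(pad[r:r + grid.shape[0], c:c + grid.shape[1]]
                 for r in range(3) for c in range(3))
    return (counts >= 5).astype(np.uint8)

rng = np.random.default_rng(2)
img = (rng.random((32, 32)) < 0.05).astype(np.uint8)   # sparse speckle noise
img[10:16, 10:16] = 1                                  # a coherent "flaw"
for _ in range(3):
    img = majority_step(img)
print(int(img[12, 12]))  # 1: the flaw core survives the relaxation
```

After a few iterations the randomly scattered pixels have essentially vanished, leaving the flaw region as the organized state the automaton settles into.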
This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information on an image sensor, a method often referred to as Imaging Spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features of the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards, showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2000 lines/s. This opens up the possibility of maintaining high production speed while still measuring with good resolution.
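The per-pixel linear-model classification can be sketched as below. The three-band class spectra are invented placeholders; in the real system the distance evaluation happens in the analogue domain on the smart sensor, which is what makes line rates beyond 2000 lines/s feasible.

```python
import numpy as np

# Hypothetical class spectra: rows are the reference (sound) surface
# plus the four defect classes; columns are spectral bands.
classes = ["sound", "blue_stain", "white_sapwood", "yellow_decay", "red_decay"]
models = np.array([
    [0.80, 0.70, 0.60],   # sound wood (reference color)
    [0.30, 0.40, 0.70],   # blue stain
    [0.90, 0.90, 0.90],   # white sapwood
    [0.70, 0.70, 0.20],   # yellow decay
    [0.70, 0.30, 0.20],   # red decay
])

def classify_line(spectra):
    """Assign every pixel along the line to the nearest class spectrum
    (minimum-distance classification against the linear models)."""
    d = np.linalg.norm(spectra[:, None, :] - models[None, :, :], axis=2)
    return d.argmin(axis=1)

def defect_percentage(spectra):
    """Per-board statistic reported by the system: the share of pixels
    not classified as correct surface color."""
    labels = classify_line(spectra)
    return 100.0 * np.mean(labels != 0)

line = np.array([[0.80, 0.70, 0.60],   # sound
                 [0.31, 0.42, 0.68],   # near blue stain
                 [0.78, 0.72, 0.61],   # sound
                 [0.68, 0.31, 0.22]])  # near red decay
print(defect_percentage(line))  # 50.0: two of the four pixels are defective
```

Performing this reduction per pixel at the sensor means only class labels, not full spectra, need to leave the chip.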
Multi-scale, multi-resolution image decompositions are efficacious for real-time target tracking applications. In these real-time systems, objects are initially located using coarse descriptions of the original image. These coarse-scale results then guide and refine further inspection, with queries of higher-resolution image representations restricted to regions of potential object occurrence. The result is the classical coarse-to-fine search. In this paper, we describe a method for generating an adaptive template within the coarse-to-fine framework. Causality properties between image representations are directly exploited and lead to a template mechanism that is resilient to noise and occlusion. With minimal computational requirements, the method is well suited for real-time applications.
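The coarse-to-fine search itself (without the adaptive-template mechanism, which the abstract does not detail) can be sketched as: match everywhere at half resolution, then refine only in the neighbourhood the coarse result predicts.

```python
import numpy as np

def downsample(img):
    """2x2 block average: one level of a coarse image pyramid."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def best_match(img, tpl, rows, cols):
    """Exhaustive SSD match of `tpl` over the given candidate offsets."""
    best, best_err = None, np.inf
    th, tw = tpl.shape
    for r in rows:
        for c in cols:
            err = np.sum((img[r:r + th, c:c + tw] - tpl) ** 2)
            if err < best_err:
                best, best_err = (r, c), err
    return best

def coarse_to_fine(img, tpl):
    """Locate `tpl`: search everywhere at half resolution, then refine
    only in a small window around the coarse hit."""
    ci, ct = downsample(img), downsample(tpl)
    h, w = ci.shape[0] - ct.shape[0], ci.shape[1] - ct.shape[1]
    r0, c0 = best_match(ci, ct, range(h + 1), range(w + 1))
    rows = range(max(0, 2 * r0 - 1),
                 min(img.shape[0] - tpl.shape[0], 2 * r0 + 1) + 1)
    cols = range(max(0, 2 * c0 - 1),
                 min(img.shape[1] - tpl.shape[1], 2 * c0 + 1) + 1)
    return best_match(img, tpl, rows, cols)

rng = np.random.default_rng(3)
scene = rng.random((64, 64))
tpl = scene[20:28, 36:44].copy()      # plant the "target" template
print(coarse_to_fine(scene, tpl))     # (20, 36)
```

The coarse pass costs a quarter of the full-resolution search, and the fine pass inspects only a 3x3 window, which is the source of the real-time saving.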
Proc. SPIE 3645, Real-time motion detection for target acquisition "on the move" based on a nonlinear filter using short-time and medium-time image differences, 0000 (26 March 1999); https://doi.org/10.1117/12.343793
Motion is one of the most important cues for the acquisition of targets. Given the real-time requirement, there are two basic approaches for the detection of motion: (1) the difference between the actual image and an adaptive background image, and (2) the difference between the actual image and preceding images. The first approach provides a precise and robust segmentation of the moving targets from the background and works well with low target/background contrast and clutter. However, when the sight is moving, the background changes too quickly for a robust calculation of a valid background image. The second approach adapts well to the actual environment, but provides only an inaccurate indication of the moving edges of the targets and is quite sensitive to target/background contrast and clutter. This work presents a new approach which combines, in a nonlinear manner, the short-time and medium-time image differences between the actual image and preceding images. It provides a precise and robust segmentation of the moving targets from the background and a good adaptation to the actual environment. In addition, it works even better than the first approach for low target/background contrast and clutter. We present the details of this approach, including its real-time implementation and its role in our Automatic Target Detection and Tracking System.
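One way to sketch the combination of short-time and medium-time differences is a pointwise minimum, which keeps only pixels that changed on both time scales. The paper's actual nonlinear operator is not specified in the abstract, so the operator, lag, and threshold below are illustrative assumptions.

```python
import numpy as np

def motion_mask(frames, k=5, thresh=0.2):
    """Detect moving pixels in the latest frame by combining a short-time
    difference (against the previous frame) with a medium-time difference
    (against the frame k steps back). The pointwise minimum is one simple
    nonlinear combination: a pixel is flagged only if it changed on both
    time scales, suppressing single-frame noise and slow drift."""
    cur, prev, old = frames[-1], frames[-2], frames[-1 - k]
    short = np.abs(cur - prev)
    medium = np.abs(cur - old)
    return np.minimum(short, medium) > thresh

# Synthetic sequence: a bright 4x4 target moves one pixel right per frame.
frames = []
for t in range(8):
    f = np.zeros((16, 16))
    f[6:10, 2 + t:6 + t] = 1.0
    frames.append(f)

mask = motion_mask(frames)
print(int(mask[6, 12]))  # 1: the target's leading edge is flagged
```

The short-time term localizes the moving edge precisely while the medium-time term confirms sustained motion, which is why the combination segments better than either difference alone.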
Existing techniques have been limited in terms of throughput and classification capabilities. This paper presents a radically new approach that handles the fastest of steel mill line speeds, while surface imaging is done using standard video cameras and defect analysis is done entirely in software on mass-market PCs. This analysis builds on consumer electronics technology such as video cameras and multimedia PCs. Advances in multimedia PC technology, such as MMX, have made it possible to do all image processing in software rather than with dedicated hardware. Among the topics presented are the system architecture, made up of arrays of inexpensive cameras and the corresponding processing units. Innovative illumination concepts guarantee outstanding image quality at any line speed. Intelligent pre-processing has been introduced to compensate for various environmental effects. Sophisticated defect analysis provides the highest-quality data for the final classification step. As a result, steel surface inspection systems can be built today as software solutions rather than as a configuration of dedicated hardware.