Prototyping of front-end lighting speeds its optimization in machine vision system development. An illuminator for evaluating lighting is presented that permits quick and precise control of light source azimuth and elevation, and optionally of size and polarization. Object orientation is independently controllable. The illuminator permits interactive evaluation of lighting effects upon the image. Examples illustrate the illumination of three-dimensional objects and the effects of source size.
A new positioning method using an optical pick-up sensor is demonstrated for an automated subscriber wire distributing frame at the front end of a telephone switching station, called the Automated Main Distributing Frame. The wiring operation has been replaced by automated pin insertion into a matrix board recently developed from printed circuit board technology. This positioning method achieves two-dimensional positioning accuracy of 15 µm on dynamic strokes across an area of a few meters square. Optical disk pick-up and printed circuit board technologies are employed to achieve such accuracy over a wide stroke. A sensor employing two polarized lights and three position-sensitive devices can detect three-dimensional displacement and two-dimensional inclination from the target guide pattern on the matrix board. We applied this new positioning method to an automated main distributing frame prototype system. Positioning experiments performed using a cylindrical-coordinate motion mechanism with a stroke of 1.8 m have obtained two-dimensional positioning accuracy within 15 µm. The results confirm the feasibility of highly accurate positioning and insertion of a small pin into a closely spaced crosspoint hole in a matrix board. This will lead to precise and reliable positioning for an automated main distributing frame in an automated subscriber wiring operation in a telephone switching station.
There is a serious need for a practical system to evaluate the nighttime visibility of existing traffic signs and provide data for making decisions on sign replacement. A mobile system has been developed which can measure the average retroreflectance of sign legend and background from a moving vehicle during daylight hours. This system uses a video camera to acquire sign images, a xenon flash as a source of light, a personal computer to analyze the sign images, and a laser rangefinder to measure the distance to the sign.
LIDS (Laser Image Detection System) has been developed in New Zealand at the DSIR as a tool to investigate typical industry problems. Laser-based imaging has several demonstrated advantages over conventional camera systems; these include independence of ambient lighting conditions, translation, zoom and resolution control, direct measurement of range, and polarisation and colour selectivity. Limitations of laser imaging such as operating range, speed, and safety are reviewed and solutions outlined. The integration of laser imaging with machine vision can result in a reduction in the processing required for some image processing applications and allows the vision system more control and versatility over the input image data. SPIE Vol. 1385 Optics, Illumination, and Image Sensing for Machine Vision V (1990) / 27
Vision is one of the most powerful forms of non-contact sensory feedback for monitoring and control of manufacturing processes such as welding. Machine vision applications in welding have included the off-line determination of the locations of the workpieces to be welded (typically referred to as part-finding); the in-process correction of robot paths to compensate for fixturing inaccuracies, part tolerances, or weld distortions during welding (seam-tracking); the real-time sensing of weld joint and pool shape and geometry for welding process control; and the automated inspection of the weld joint and bead surface shape [1,2]. However, welding poses particularly challenging problems to conventional optical sensing techniques. One of the major problems is the presence of the welding arc, which is not limited to a single spectral region and thus cannot be easily filtered out optically. A novel vision sensing technique has been developed and is used to overcome the extreme variation in scene brightness created by the welding arc. The system incorporates intense pulsed laser illumination and synchronized shuttered image sensing to overpower the arc light and electronically produce a video image virtually free of arc glare. In this paper, we present an effort toward the development of integrated monitoring and analysis techniques which combine the above-mentioned laser video sensing techniques with extensive vision processing schemes and simultaneous monitoring and analysis of the arc signals and other process parameters. The comprehensive vision processing techniques are used for image enhancement, detection of important features, and calculation of relevant dimensional measurements. This information allows more effective monitoring by a human operator and better record keeping. It also provides reliable sensory feedback for real-time process control in robotic applications.
This research effort is mainly sponsored by a Department of Energy (DoE) Small Business Innovation Research program for the development of novel integrated vision monitoring and analysis systems to be used in the fabrication, maintenance, and repair of nuclear reactor components. The developed techniques are also applicable to other critical robotic welding applications in the defense, aerospace, and other industries. The vision sensing techniques discussed in this paper have also been used in other applications where high luminosity of a combustion flame, an explosive event, or some form of plasma is present.
This paper is concerned with camera calibration by observing four non-coplanar static points known in space. The solution exploits the principle of distance invariance to derive the equations needed to estimate the positions of these four points in the camera coordinate system. The required transformation between the base coordinate system and the camera coordinate system is then computed.
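The distance-invariance idea can be sketched numerically: each observed point lies at an unknown depth along a known viewing ray, and the known inter-point distances constrain those depths. The sketch below is our own illustrative formulation (rays, depths, and the gradient-descent solver are assumed for the demo), not the paper's algorithm.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# Unit viewing rays (from measured image coordinates, assumed known).
rays = [normalize(v) for v in [(0.0, 0.0, 1.0), (0.3, 0.0, 1.0),
                               (0.0, 0.3, 1.0), (0.2, 0.2, 1.0)]]

# Ground-truth depths, used only to synthesize the invariant distances.
true_s = [5.0, 5.5, 6.0, 5.2]

def point(s, u):
    return tuple(s * c for c in u)

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
d2 = {(i, j): dist2(point(true_s[i], rays[i]), point(true_s[j], rays[j]))
      for i, j in pairs}
# Cosine of the angle between rays i and j.
c = {(i, j): sum(a * b for a, b in zip(rays[i], rays[j])) for i, j in pairs}

# |s_i u_i - s_j u_j|^2 = s_i^2 + s_j^2 - 2 s_i s_j c_ij must equal d_ij^2.
# Minimize the squared constraint violations by plain gradient descent.
s = [v + 0.05 for v in true_s]          # coarse initial estimate
lr = 0.005
for _ in range(20000):
    grad = [0.0] * 4
    for (i, j) in pairs:
        e = s[i] ** 2 + s[j] ** 2 - 2 * s[i] * s[j] * c[(i, j)] - d2[(i, j)]
        grad[i] += 2 * e * (2 * s[i] - 2 * s[j] * c[(i, j)])
        grad[j] += 2 * e * (2 * s[j] - 2 * s[i] * c[(i, j)])
    s = [v - lr * g for v, g in zip(s, grad)]

print([round(v, 3) for v in s])         # recovered depths
```

With the depths in hand, the camera-frame point positions follow directly, and the base-to-camera transformation can then be computed by standard rigid registration.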
This paper presents a semi-automatic method to calibrate a three-camera stereo vision set-up accurately and simply. The algorithm handles both initial calibration and on-line self-calibration and assumes the availability of a reliable set of matched image points. No 3-D world coordinate control points are available to confirm the validity of the self-calibration. A new general algorithm is introduced based on the minimum distance measure, which provides the necessary criterion to make self-calibration possible. Convergence is obtained for small changes in camera parameters by optimization of the minimum distances over the 22-dimensional parameter space. The ill-conditioning problem is solved by a partitioning method. Comparisons with the SVD approach are made. Synthetic tests were performed and the proposed method gave satisfactory results.
A mathematical model for a typical CCD camera system used in machine vision applications is presented. This model is useful in research and development of machine vision systems and in the computer simulation of camera systems. The model has been developed with the intention of using it to investigate algorithms for recovering depth from image blur; however, the model is general and can be used to address other problems in machine vision. The model is based on a precise definition of input to the camera system. This definition decouples the photometric properties of a scene from the geometric properties of the scene in the input to the camera system. An ordered sequence of about 20 operations is defined which transforms the camera system's input to its output, i.e., digital image data. Each operation in the sequence usually defines the effect of one component of the camera system on the input. This model underscores the complexity of the actual imaging process, which is routinely underestimated and oversimplified in machine vision research.
The authors describe a correlation-based vision system called MBVS (Model-Based Vision System) using 3D CAD surface models generated with the help of software dedicated to image synthesis. The a priori knowledge defines a synthetic representation of a scene called the virtual world. Connected by an analog RS-170 link, the graphics workstation is treated as a camera by the vision system performing the correlation. MBVS shows that computer graphics and computer vision can greatly benefit from each other.
Object structure is one of the most important features for many imaging applications. In many applications in space, recording the spatial structure adequately is a challenge due to the wide range of illumination conditions encountered. Moreover, communication constraints often limit the amount of data that can be transmitted. Motivated by these concerns, we have developed a coding scheme which is robust to variations in illumination conditions, preserves high structural fidelity, and provides high compression ratios. The high correlation between the original and decoded images demonstrates the potential of this coding scheme for machine vision applications.
We present a general methodology for designing experiments to quantitatively characterize low-level computer vision algorithms. The methodology can be applied to any vision problem that can be posed as a detection task. It provides a convenient framework to measure the sensitivity of an algorithm to various factors that affect its performance. The methodology is illustrated by applying it to a line detection algorithm consisting of the second directional derivative edge detector followed by a Hough transform. In particular, we measure the selectivity of the algorithm in the presence of an interfering oriented grating and additive Gaussian noise. The final result is a measure of the detector's performance as a function of the orientation of the interfering grating.
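The detection task at the core of this setup can be sketched with a standard rho–theta Hough transform: edge points on an oriented line vote into a (theta, rho) accumulator, and the peak gives the detected orientation even with interfering clutter. This is a minimal illustration under our own assumed geometry and noise, not the paper's experimental protocol.

```python
import math
import random

random.seed(0)
W = 64                                 # half-width of the (assumed) image domain
true_theta_deg = 60.0                  # normal direction of the target line

# Edge points on a line rho = x*cos(theta) + y*sin(theta) with rho = 20.
th0 = math.radians(true_theta_deg)
rho0 = 20.0
pts = []
for t in range(-30, 31):               # parameterize along the line's tangent
    x = rho0 * math.cos(th0) - t * math.sin(th0)
    y = rho0 * math.sin(th0) + t * math.cos(th0)
    pts.append((x, y))

# Clutter points standing in for noise / an interfering texture.
pts += [(random.uniform(-W, W), random.uniform(-W, W)) for _ in range(100)]

# Hough accumulation over a discrete (theta, rho) grid.
n_theta, n_rho = 180, 2 * W
acc = [[0] * n_rho for _ in range(n_theta)]
for (x, y) in pts:
    for ti in range(n_theta):
        th = math.pi * ti / n_theta
        rho = x * math.cos(th) + y * math.sin(th)
        ri = int(round(rho)) + W       # shift rho into [0, 2W)
        if 0 <= ri < n_rho:
            acc[ti][ri] += 1

peak = max((acc[ti][ri], ti) for ti in range(n_theta) for ri in range(n_rho))
est_theta_deg = 180.0 * peak[1] / n_theta
print(est_theta_deg)
```

A sensitivity experiment in the spirit of the methodology would sweep the clutter density or grating orientation and record how often the peak still lands at the true orientation.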
In this paper, a formulation and graphical representation of the excess carrier distribution in a semiconductor panel due to a moving laser light source are studied as a function of position, the laser beamwidth, and the panel width. The semiconductor panel can be used as a laser and millimeter-wave sensor.
Signal processing systems used for color measurement, laser range finding, or background subtraction often compute the value of a parameter of interest by division of signals obtained from multiple sensor channels. Although the noise statistics of a single channel may often be accurately modeled as a linear transformation of a Gaussian random process, the computation of a ratio constitutes a nonlinear estimation problem which may be particularly difficult to analyze in closed form. This paper demonstrates the use of a computational model for estimating the output distribution and statistics of a ratiometric processing system. Examples show the error performance of the signal processor with decreasing input signal-to-noise ratio, the overall result being a bias in the estimate in addition to the expected increase in the sample variance. Extension of the analysis approach to other nonlinear systems is also discussed.
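The ratio bias described here is easy to reproduce with a short Monte Carlo experiment. The channel values, noise levels, and sample count below are illustrative assumptions, not the paper's data; the point is only that the sample mean of a ratio of noisy Gaussian channels drifts away from the true ratio as SNR drops, alongside the growing variance.

```python
import random
import statistics

random.seed(1)
a_true, b_true = 2.0, 4.0             # noiseless channel values; true ratio = 0.5
n = 200_000

def ratio_stats(sigma):
    """Monte Carlo mean/stdev of (a + na) / (b + nb), Gaussian na, nb."""
    samples = [(a_true + random.gauss(0.0, sigma)) /
               (b_true + random.gauss(0.0, sigma)) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

results = {}
for sigma in (0.1, 0.3, 0.6):
    m, s = ratio_stats(sigma)
    results[sigma] = (m, s)
    print(f"sigma={sigma:.1f}  mean={m:.4f}  bias={m - 0.5:+.4f}  stdev={s:.4f}")
```

The positive bias comes from the convexity of 1/x: the denominator noise inflates the expected reciprocal, so the mean ratio exceeds a_true/b_true even though each channel's noise is zero-mean.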
A divergent laser beam was used to record holograms of moving iron fibers with submicron diameter (0.5 µm) and high velocity (125 meters per second). An analysis based on geometric optics is used to derive the magnification factors for recording and reconstruction. Experimental results consisting of single-shot holograms of fast-moving fibers are presented. This technique can be used for dynamic measurement of particle size and fiber orientation distribution.
Both the reconstructed image bandwidth, which is limited by the finite size of the hologram, and the twin-image artifact caused by the lack of phase information restrict the achievable reconstruction resolution in in-line holography. In this paper a new iterative error-energy reduction algorithm is used to address these problems. This algorithm suggests the possibility of superresolution in in-line holography, in which the conventional diffraction limit may be approached or even exceeded. Superresolution has been demonstrated for several other types of coherent and incoherent imaging, but not yet for holography.
Optical morphological correlators are considered for shading and illumination problems that arise in robotics and product inspection and for contrast problems that arise in infrared (IR) imagery for automatic target recognition (ATR).
The applications and development of hybrid image processing have attracted significant attention in recent years. This paper describes a multifunction optoelectronic hybrid processor that can implement several operations. This new system is appropriate for real-time automatic pattern recognition, and an effective approach and architecture are provided for robotic vision. The system utilizes the joint transform technique: the resulting Fourier transform of the object image and reference image is detected by a CCD camera and then sent to a digital image preprocessor. At the same time, the Fourier spectra of the edge-enhanced images are obtained by passing them through a coherent optical image processing stage using a liquid crystal spatial light modulator as a real-time interface device (an incoherent-to-coherent image converter). Thus classification and correlation of the object pattern are carried out using both digital and analog image processing. Preliminary experimental results are given.
This paper presents a formulation of the reconstruction of a displacement field by carrier holography. We develop a method for determining the displacement field inside an object while it is loaded or heated, based on the principle of continuity and the theory of backprojection. In the case of non-penetrating material, the projection data of the derivatives of the displacement components are obtained from projections of the entire deformable body. The reconstruction algorithm adopts convolution and FFT techniques in the Radon inversion formula. A cylindrical specimen subjected to a concentrated compression on the top is discussed as the test object. The reconstruction result is compared with the analytical solution.
In this paper we present a triangulation-based range finding system, the cross-stripe structured light system (CSSLS), composed of two stripe projectors and a camera generating a cross stripe. We first show a method of reconstructing surfaces from the range data obtained by CSSLS. The technique utilizes the Coons patch formalism, which interpolates the surface solely from the boundary information of the patch. We also show how to extract homogeneous surfaces using CSSLS. Simulation and experimental results of our method are presented.
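The Coons patch interpolation the abstract relies on can be sketched for a scalar height field z(u, v): blend the bottom/top and left/right boundary curves and subtract the doubly counted bilinear corner term. The boundary curves below are arbitrary illustrative choices, not the paper's range data.

```python
import math

def coons(u, v, c0, c1, d0, d1):
    """Bilinearly blended Coons patch from four boundary curves.

    c0(u) = z(u, 0), c1(u) = z(u, 1), d0(v) = z(0, v), d1(v) = z(1, v);
    the curves must agree at the four corners.
    """
    ruled_v = (1 - v) * c0(u) + v * c1(u)          # blend bottom/top boundaries
    ruled_u = (1 - u) * d0(v) + u * d1(v)          # blend left/right boundaries
    corners = ((1 - u) * (1 - v) * c0(0) + u * (1 - v) * c0(1)
               + (1 - u) * v * c1(0) + u * v * c1(1))
    return ruled_v + ruled_u - corners             # subtract double-counted part

# Boundary height profiles that agree at the corners (illustrative only).
c0 = lambda u: math.sin(math.pi * u)          # z(u, 0)
c1 = lambda u: 1.0 + math.sin(math.pi * u)    # z(u, 1)
d0 = lambda v: v                              # z(0, v)
d1 = lambda v: v                              # z(1, v)

print(coons(0.5, 0.5, c0, c1, d0, d1))        # interior value from boundaries only
```

By construction the patch reproduces the four boundary curves exactly, which is why boundary information alone suffices to fill in the patch interior.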
A commonly used approach to extracting depth information is to measure the quality of focus in the image plane. When applying this approach to obtain a full-field depth map from one image, only a sparse set of data can be obtained. That is, depth information cannot be obtained using information from just a single pixel; it must be extracted from a group of pixels (e.g., using the sharpness or size of objects in the image). This sparse-data limitation can be eliminated by combining depth from focus with chromatic aberration. This concept, color-encoded depth, will be discussed with examples of various possible configurations. Color-encoded depth can be used for image enhancement for either human interpretation or computer processing. The strengths and limitations of the approach will be enumerated and potential applications presented.
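The "group of pixels" point can be made concrete with a focus measure such as local gradient energy: a single pixel value says nothing about blur, but the squared differences over a neighborhood drop sharply when an edge is defocused. The signal and blur kernel below are illustrative assumptions.

```python
def box_blur(signal, k):
    """Simple box blur of width 2k+1 (clamped at the borders)."""
    n = len(signal)
    out = []
    for i in range(n):
        window = signal[max(0, i - k):min(n, i + k + 1)]
        out.append(sum(window) / len(window))
    return out

def gradient_energy(signal):
    """Focus measure: sum of squared first differences over the window."""
    return sum((b - a) ** 2 for a, b in zip(signal, signal[1:]))

step = [0.0] * 16 + [1.0] * 16        # a sharp edge
blurred = box_blur(step, 3)           # the same edge, defocused

sharp_score = gradient_energy(step)
blur_score = gradient_energy(blurred)
print(sharp_score, blur_score)
```

Comparing such scores across differently focused (or, with chromatic aberration, differently colored) channels is what turns a relative blur measurement into a depth estimate.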
White-light speckles, observed when the object is coated with retroreflective paint and illuminated by an ordinary slide projector or other white light source, are used for object contouring using the defocus effect. In a defocused recording of the speckles on the object, the high-frequency components of the speckles are eliminated due to the enlarged speckle size. A suitable filter in the Fourier plane of a singly exposed speckle pattern of a curved object can delineate the regions above and below the cutoff frequency as dark and bright areas, respectively. A digital counterpart of this effect is also demonstrated.
Small-angle moire has been shown to be very useful in contouring objects with steep slopes and protrusions. We have considered a system which uses only one lens, serving as both the projection and viewing lens, which minimizes some of the problems associated with producing small-angle moire. The compactness of the single-lens design offers potentially higher stability, at the loss of standoff and field coverage relative to conventional designs. This paper explores the pros and cons of the single-lens moire contour method, and discusses new problems of alignment and noise unique to this approach. Finally, the performance results of a test unit will be presented.
In this paper we investigate the shape-from-shading problem when the image and reflectance function are both known to be circularly symmetric. It is shown that the only surfaces that satisfy the image irradiance equation under these conditions are themselves circularly symmetric provided that the surface shape is both continuous and finite.
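In the circularly symmetric case, the image irradiance equation reduces to an ordinary differential equation in the radius. The sketch below uses our own illustrative assumptions (Lambertian reflectance with an overhead source, so E(r) = 1/sqrt(1 + z'(r)^2), and a known apex height); the paper's setting is more general. Solving for the radial slope gives z'(r) = -sqrt(1/E(r)^2 - 1) for a surface sloping away from the viewer, and z(r) follows by radial integration.

```python
import math

def E(r):
    """Synthetic circularly symmetric image of a unit hemisphere z = sqrt(1 - r^2)."""
    return math.sqrt(1.0 - r * r)     # for the hemisphere, E(r) = sqrt(1 - r^2)

n = 2000
dr = 0.9 / n                          # stop short of the occluding boundary r = 1
z = 1.0                               # apex height z(0) = 1 (assumed known)
for i in range(1, n + 1):
    r = i * dr
    # Invert the irradiance equation for the radial slope, then integrate.
    z += -math.sqrt(max(0.0, 1.0 / E(r) ** 2 - 1.0)) * dr

print(z, math.sqrt(1.0 - 0.9 ** 2))   # recovered vs. true height at r = 0.9
```

The recovered profile is itself circularly symmetric, consistent with the paper's result that, under continuity and finiteness, no non-symmetric surface satisfies a circularly symmetric image irradiance equation.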
This paper presents an algorithm for visual surface reconstruction which makes use of a robust local reconstruction scheme to produce a dense disparity map from a multiresolution feature-based stereo matching algorithm. Robustness implies that the algorithm can reject large numbers of outliers in the disparities, which are caused by mismatches in the correspondence process, while simultaneously preserving discontinuities in depth. Our robust algorithm uses a standard multiresolution stereo algorithm in conjunction with a moving least median of squares (MLMS) algorithm to fit local planar patches to the disparity function at each level of the multiresolution pyramid. The MLMS algorithm finds the best fit by minimizing the median of the error between the fit and the data. By applying the MLMS algorithm at each stage of the pyramid we not only create a denser grid at each level but also "nip in the bud" any errors which occur at a coarse level before they are propagated to finer levels of the multiresolution process. Experimental results are presented on real and synthetic data.
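The least-median-of-squares idea at the core of the scheme can be illustrated in one dimension: fit a line to data with gross outliers by minimizing the median squared residual over randomly sampled two-point candidate fits. The paper fits planar disparity patches; the 1-D line, the data, and the 30% outlier rate below are assumptions chosen to keep the demo short.

```python
import random

random.seed(2)

# Inliers on y = 2x + 1 with small noise, plus 30% gross outliers
# (standing in for stereo mismatches).
data = [(x / 10.0, 2 * (x / 10.0) + 1 + random.gauss(0, 0.05)) for x in range(70)]
data += [(random.uniform(0, 7), random.uniform(-20, 20)) for _ in range(30)]

def median(vals):
    return sorted(vals)[len(vals) // 2]

best = None
for _ in range(500):                       # random two-point candidate fits
    (x1, y1), (x2, y2) = random.sample(data, 2)
    if abs(x2 - x1) < 1e-9:
        continue
    a = (y2 - y1) / (x2 - x1)              # candidate slope
    b = y1 - a * x1                        # candidate intercept
    med = median([(y - (a * x + b)) ** 2 for x, y in data])
    if best is None or med < best[0]:
        best = (med, a, b)

print(best[1], best[2])                    # slope and intercept, near 2 and 1
```

Because the median ignores up to half the residuals, a candidate fit anchored on inliers scores well even when 30% of the data are wildly wrong, which is exactly the property that lets MLMS discard mismatches while keeping depth discontinuities.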
In this paper we develop a coherent theory for stereo matching. Unlike previous theories of stereo vision which use either image brightness or features such as edges but not both we combine these primitives for stereo matching. We establish a coherent theory by defining an energy functional for the disparity field and matching elements. By minimizing the energy functional we are able to solve for the disparity field of the stereo pair.
We present a novel holographic system called ODIN to optically perform a fast global search of a fuzzy inference rule base. The system is based on a neural network model. The rule search is experimentally demonstrated using an array of computer-generated holograms. Two versions of the system are described: one hybrid opto-electronic, the other all-optical.
An optical correlator system is to be interfaced with an existing robot assembly cell to provide the necessary manufacturing machine vision. In operation, various parts are scanned by the vision system, which provides identification and location of each pre-selected part. This information is forwarded to the robot controller, which translates these data to perform mechanically articulated part retrieval, placement, and fastening to the assembly being manufactured. A tray of parts would be analyzed with individual parts located for use in an ongoing manufacturing process. As new parts arrive, optical memories are fabricated off line and subsequently conveyed to the vision system on demand. To enhance parts identification and location under variable ambient conditions, the video from the scanned scene is preprocessed to reduce the effects of lighting and contrast variations. Using laboratory and shop data, a variety of actual aircraft parts have been used to fabricate matched filters for use in the vision correlator. Tests conducted with the correlator show that individual parts can be identified, located, and differentiated from groups of parts with similar appearance. As a next step, the correlator as a vision system will be configured for interface with a representative robot cell for premanufacturing tests.
In a digital imaging device, the maximum resolution that can be achieved is limited by the Nyquist frequency of the sampling grid. We develop in this paper a method by which this constraint may be overcome by recovering aliased information which occurs due to undersampling, thereby restoring frequencies beyond the sampling passband. The method relies on acquiring several images of the same scene by varying the optical transfer function of the imaging system. We then solve a set of linear equations that incorporates the degradations due to blurring, aliasing, and noise of the imaging system. The effectiveness of the technique is demonstrated by presenting restorations of one-dimensional and two-dimensional degraded signals. We also discuss the usefulness of the technique for multiresolution coding.
Different schemes for spatial frequency filtering are described. The possibility is shown of visualizing phase-object images and of achieving an increase or inversion of contrast in amplitude-object images.
Defocused monocular views of three-dimensional objects can be used to provide useful range information. In the sensing method presented in this paper two monocular views one highly defocused and the other well-focused are recorded and processed to yield multi-resolution estimates of range information. This method exploits all available a priori information relating to the topological and photometric properties of the object. Results of simulations using synthetic opaque objects are presented.