3D laser scanners are often configured to acquire data over a wide field of view at constant scanning speeds. When the features to be analyzed are arranged in a dense, irregular pattern, such a scanning system is the most efficient choice, ideally with a scan width matched to the object size. However, a class of 3D imaging applications requires high-speed acquisition of information from localized regions at speeds which may exceed video frame rates. In this paper, scanning technologies matched to the latter requirements are surveyed. Fundamental limitations are identified, and performance parameters are analyzed, including rise time, number of spots, linearity, and maximum line rate, for each of the low-inertia, 'addressable' deflectors.
Laser scanning systems using diffractive scanning mechanisms must address the spectral purity of their laser sources. If the source is a laser diode, then certain spectral behavior characteristics must be modeled and accounted for as part of the design. The ability to directly modulate the laser diode drive current, while generally a hallmark of this type of device, can be a key contributor to focal spot quality degradation as the diffractive mechanism interacts with the laser's modal transients. In this paper, system image quality is discussed as a function of the transient and steady-state source spectral characteristics as well as the diffractive scanner type. Methods are described which can achromatize these scanning systems to varying degrees by filtering or compressing the diode's emission spectrum.
Three-dimensional imaging systems may be configured to provide a high figure of merit based upon speed, accuracy, and cost. Typically, the object is imaged with a line-scan arrangement which, depending upon the application, may suffer from a lack of symmetry and the associated constraints imposed by directional illumination and viewing. Furthermore, a compromise often exists between scan efficiency and accuracy, caused by unnecessary data acquisition in regions containing no information of interest. This paper illustrates a method for 3D scanning in a non-orthogonal system, particularly applicable to inspection of micro-electronic circuits and patterns, which mitigates the above-mentioned tradeoffs. The use of optimized scanning patterns with high-speed, addressable beam deflectors is discussed. Improvements associated with the imaging system are quantified with respect to standard line-scan configurations at similar pixel rates.
Among the basic design issues for laser-based triangulation systems are imaging geometry considerations which can seriously impact 3-D measurement accuracy. These include numerical aperture, sensor orientation, telecentricity, and triangulation angle. While system designers specifying these parameters generally account for first-order design objectives such as field-of-view and optoelectronic signal-to-noise requirements, they may overlook other imaging error sources, such as certain aberrations, defocus, multiple reflections, and measurement non-linearity, which are equally integral to achieving a high level of 3-D measurement accuracy. These errors are described here in relation to triangulation imaging geometry constraints, and corresponding design trade-offs to minimize the errors are discussed.
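The role of the triangulation angle among these geometry parameters can be illustrated with a minimal sketch, assuming the standard first-order relation in which an image-plane spot shift dp maps to an object height change dz = dp / (M sin θ); the function name and parameter values below are illustrative, not taken from the paper.

```python
import math

def height_sensitivity(pixel_shift_um, magnification, tri_angle_deg):
    """Object-space height change (um) corresponding to a given image-plane
    spot shift, using the first-order relation dz = dp / (M * sin(theta))."""
    theta = math.radians(tri_angle_deg)
    return pixel_shift_um / (magnification * math.sin(theta))

# A steeper triangulation angle makes the same sensor shift correspond to a
# smaller height change, i.e. better height resolution, at the cost of
# increased shadowing and occlusion.
shallow = height_sensitivity(1.0, 0.5, 15.0)   # gentle angle
steep = height_sensitivity(1.0, 0.5, 45.0)     # steep angle
```

This captures the basic trade-off: improving height resolution by steepening the angle tightens the occlusion and depth-of-field constraints discussed above.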
This paper describes a feasibility study undertaken to demonstrate 'ultra-high' resolution imaging with a triangulation-based 3D laser scanner. General requirements for micron-level 3D inspection applications are discussed, followed by a proposal for a laser-triangulation-based system to address these requirements. Some fundamental limits and trade-offs of 3D imaging are reviewed, and methods for overcoming the technical challenges are discussed. Finally, images from a prototype scanning system demonstrating sub-micron resolution at rates faster than 1 MHz are presented.
Tools are now available for measurement imaging system designers to model optical distortion thoroughly and reduce it through design choices. In modern automated systems, however, it is frequently more practical to calibrate out the non-linearities and perform a real-time error correction operation. This paper presents, for the case of triangulation line-scan or line-of-light imagers, some design trade-offs affecting distortion and general measurement linearity, and analyzes the form of post-image correction method required. The 2D, cubic nature of a general distortion characteristic might typically mandate a 2D correction value matrix, but in many cases one can exploit the inherent 1D nature of line-scan triangulation imaging to reduce the correction array to a 1D vector. Graphics of distortion simulations and empirical mappings are presented, and the applicability of the analysis to several triangulation implementations is discussed.
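The reduction from a 2D correction matrix to a 1D vector can be sketched as follows, under the simplifying assumption that the distortion depends only on position along the scan line; the cubic distortion term, target positions, and column count are invented for illustration.

```python
import numpy as np

# Hypothetical calibration: 'measured' holds where known targets actually
# landed along the scan line; the distortion is simulated here as cubic.
true_pos = np.linspace(-10.0, 10.0, 21)            # mm, known target positions
measured = true_pos + 0.002 * true_pos**3          # simulated cubic distortion

# One correction entry per sensor column: invert the monotone measured->true
# mapping by interpolation, giving a 1D lookup vector instead of a 2D matrix.
columns = np.linspace(measured.min(), measured.max(), 4096)
correction_vector = np.interp(columns, measured, true_pos)

def correct(x):
    """Linearize a measured coordinate using the calibration mapping."""
    return np.interp(x, measured, true_pos)
```

At run time the correction reduces to a single indexed lookup per sample, which is what makes real-time operation inexpensive.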
Coherent fiber optic image guides are often utilized in remote imaging systems when space is restricted, articulation is required, or operation in hazardous environments is needed. Conventionally, the object under test is illuminated with incoherent light, and the resulting image is transmitted to a linear or area sensor array where the image is scanned. A compact and rugged fiber-optic-coupled 'full field' laser scanning system could also provide benefits for certain demanding inspection tasks, particularly for acquisition of three-dimensional information or for meeting very wide dynamic range requirements. In such applications the object is scanned, and a point sensor or linear array is used for light gathering. In this paper, fundamental factors limiting image quality and resolution are identified, with empirical results used to establish a performance benchmark. The image sampling approach is critical for obtaining useful data in a high-speed system. A method for real-time image sampling is proposed which may also prove useful for multi-fiber beam delivery systems. The sampling method also provides several practical benefits required for high-speed operation in potentially rugged and harsh environments.
Keywords: laser scanning, remote sensing, 3D imaging, fiber optics.
Photometric accuracy is a measure of how well a detector signal voltage represents the surface reflectance at the corresponding object region for a specific illumination condition. A linear relationship should exist between the signal voltage and the product of the object reflectance and source irradiance at a point of interest. Furthermore, the measured voltage should represent only the point of interest, with negligible contribution from surrounding points. This paper describes the class of machine vision applications for which photometric accuracy is important, including color and height measurement. Recognizing that a monochrome or color CCD array is the usual choice of image sensor, error sources are reviewed. One error which has received little attention but which can severely degrade performance is CCD veiling glare. Experimental results are shown which quantify this error, and an empirical model is developed to represent the degradation of photometric measurement performance.
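The linearity requirement stated above, and the way veiling glare violates it, can be sketched with a toy model; the additive glare term proportional to the mean scene level is an assumed illustrative form, not the empirical model developed in the paper.

```python
# Hypothetical photometric model: the ideal sensor voltage is proportional to
# reflectance * irradiance at the point of interest, while veiling glare adds
# a spurious contribution proportional to the mean scene level.
def pixel_voltage(reflectance, irradiance, k=1.0, glare=0.0, scene_mean=0.0):
    """Signal voltage for one pixel under the simple linear-plus-glare model."""
    return k * reflectance * irradiance + glare * scene_mean

ideal = pixel_voltage(0.1, 100.0)                              # dark feature, no glare
glared = pixel_voltage(0.1, 100.0, glare=0.02, scene_mean=80.0)  # bright surround
```

The sketch shows why dark features in bright surroundings are the worst case: the glare term is fixed by the scene while the signal term scales with the (small) local reflectance.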
Progress in the development of 3D systems for inspection and measurement has resulted in new systems using several imaging techniques. Requirements for sub-pixel inspection accuracy are now common throughout the industry, mandating a thorough examination of sensor performance limits. The biggest challenge for any 3D system is accurate measurement of object location and height when the intrascene dynamic range is large. This paper examines several fundamental sources of error in 3D systems, particularly imaging errors found near object edges. The results are important for development of 3D metrology system specifications.
Signal processing systems used for color measurement, laser range finding, or background subtraction often compute the value of a parameter of interest by division of signals obtained from multiple sensor channels. Although the noise statistics of a single channel may often be accurately modeled as a linear transformation of a Gaussian random process, the computation of a ratio constitutes a nonlinear estimation problem which may be particularly difficult to analyze in closed form. This paper demonstrates the use of a computational model for estimating the output distribution and statistics of a ratiometric processing system. Examples show the error performance of the signal processor with decreasing input signal-to-noise ratio, the overall result being a bias in the estimate in addition to the expected increase in the sample variance. Extension of the analysis approach to other nonlinear systems is also discussed.
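A minimal Monte Carlo sketch of such a computational model is given below, assuming two independent Gaussian channels of equal noise level; the channel means, noise levels, and sample count are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio_stats(mu_num, mu_den, sigma, n=200_000):
    """Monte Carlo estimate of the mean and standard deviation of A/B,
    where A ~ N(mu_num, sigma^2) and B ~ N(mu_den, sigma^2)."""
    a = rng.normal(mu_num, sigma, n)
    b = rng.normal(mu_den, sigma, n)
    r = a / b
    return r.mean(), r.std()

# True ratio is 0.5. As the SNR drops, the sample mean of the ratio drifts
# above the true value (a bias, since E[A/B] != E[A]/E[B]) and the sample
# variance grows.
high_snr = ratio_stats(1.0, 2.0, 0.02)   # SNR ~ 100 in the denominator
low_snr = ratio_stats(1.0, 2.0, 0.2)     # SNR ~ 10 in the denominator
```

The simulation reproduces the qualitative behavior described above: the low-SNR estimate is both noisier and biased high relative to the true ratio, even though each input channel is unbiased.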