Heating and cooling air for an aircraft interior is transported through metal ducts ranging from a few centimeters to twelve centimeters in diameter. In the assembly of aircraft air ducts, couplings are swaged onto the ducts. To verify that the mechanical dies are operating properly, the die-parting welt mark is inspected. The current method of visual inspection and checking with calipers does not allow implementation of statistical process control methods, so a new measurement method is being developed to improve this process check. A feasibility study indicated that a structured-light laser system would be a good approach. The main requirements are that the system must be portable so it can be used at different locations within the fabrication area, fast, easy for the mechanic to use, accurate, non-destructive, and able to produce a hard-copy printout. Due to the mechanical configuration of the duct and coupling, a camera with magnification optics is used; the bump to be measured has a maximum height of 50.8 microns. The prototype system uses computer vision and custom software written in the C language. This paper discusses the different measurement methods tested and the benefits of each technology, justifies the development of a specialized system for production use, and describes the prototype system and its configuration for factory testing.
The ISICL sensor is a recently described measurement device for sensing and mapping the temporal and spatial distribution of isolated submicron particles in semiconductor processing plasma chambers, fluid tanks, and other inaccessible or hostile places. It requires no modifications to the chamber, and senses the volume directly over the wafer while the process is running. Its detection sensitivity is extremely high: even in a very bright plasma, it requires only 50 scattered photons to detect a particle at a false alarm rate of 10⁻⁵ Hz. Here we present theoretical and experimental results for the sensitivity and volumetric sampling rate of the sensor, as well as a method of using the measured pulse height histogram to obtain particle size information, and some practical tests of performance versus window quality and back wall material.
We will present a new optical bore gage design based on forming a ring of light on the inner surface of a bored hole and then superimposing arc sections of that ring onto one magnified video camera image, which is fed to a computer for analysis. This unique 3D machine vision system provides high-precision measurement of the diameter and other geometric properties of the bore. The presentation will outline the optics, describe the processing, and review calibration issues.
An industrial application of phase-shifting shadow moire interferometry for automatic 3D inspection of fine objects is presented. A line grating is used to generate shadow-type moire fringes whose relative phases are readily determined by implementing the principle of phase shifting, so that the surface height of the object can be measured. A special phase-measuring algorithm, named the A-bucket algorithm, is used, which precisely computes the relative phases even when there is a significant level of phase-shifting error due to miscalibration and external vibration. Finally, several experimental cases are discussed to demonstrate that a measuring accuracy on the order of 0.001 mm can practically be achieved.
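The abstract does not spell out the A-bucket algorithm itself; as a reference point, the standard four-bucket phase recovery that such algorithms generalize can be sketched as follows (the fringe model with its bias and modulation values is an illustrative assumption, not the paper's setup):

```python
import math

def fringe_intensity(phi, shift, bias=1.0, mod=0.5):
    """Ideal moire fringe intensity at a point with surface phase phi,
    observed under an imposed phase shift (radians)."""
    return bias + mod * math.cos(phi + shift)

def four_bucket_phase(i1, i2, i3, i4):
    """Wrapped phase from four samples shifted by 0, 90, 180, 270 deg:
    phi = atan2(i4 - i2, i1 - i3)."""
    return math.atan2(i4 - i2, i1 - i3)

# simulate one surface point whose fringe phase is 0.7 rad
phi_true = 0.7
samples = [fringe_intensity(phi_true, k * math.pi / 2) for k in range(4)]
phi_est = four_bucket_phase(*samples)
```

The four-bucket form assumes exact 90-degree steps; the A-bucket algorithm reported in the paper is designed precisely to tolerate errors in those imposed shifts.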
A new principle formula for optical triangulation displacement measurement has been derived using the theory of light scattering from an object surface. The new formula is compared with the existing formula based on geometric optics. The main factors affecting the precision of displacement measurement are analyzed with the new formula, preliminary experimental results are reported, and methods that can be used to reduce and correct the measurement error are proposed.
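For context, the conventional geometric-optics relation that the scattering-based formula is compared against can be sketched for the simplest right-angle layout (the layout and the baseline/focal-length values are illustrative assumptions):

```python
def displacement_from_spot_shift(dy_img, baseline, focal):
    """Geometric-optics triangulation with the camera axis perpendicular
    to the laser beam: a spot at height z images at y' = focal * z / baseline,
    so z = y' * baseline / focal, and displacements scale the same way."""
    return dy_img * baseline / focal

# a 0.05 mm image-spot shift with a 100 mm baseline and 50 mm focal length
dz = displacement_from_spot_shift(0.05, 100.0, 50.0)  # 0.1 mm
```

In oblique (non-right-angle) geometries the relation becomes nonlinear in the spot shift, which is one reason the scattering-based analysis matters for precision work.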
A variable-resolution video machine vision system has been built which generates twice the number of depth contours for a given grating spacing as a conventional moire system. This variable-resolution system uses a Mach-Zehnder interferometer to project interference fringes onto a reference surface and onto a target surface. Video images of the two structurally illuminated surfaces are mixed in a video mixer, the resulting output being moire contours at the intersections of the two surfaces. If the reference surface is a flat plate, we get equal-depth contours of the target surface; if the reference surface is a perfect target, we get error-map contours of the 3D shape differences between the two targets. Theoretical analysis has shown that if the reference surface upon which the gratings are projected is the inside of the actual surface to be inspected, then the moire depth contours are twice as dense as would be observed with a flat reference surface. This surprising result is experimentally demonstrated both for a perfect target and for a target with 3D shape errors. Real-time error maps of damaged targets made using this technique have many moire contours outside the area of interest, but this 'non-information' can be greatly reduced by video or computer subtraction of the perfect-target images.
3D surface geometry is a critical parameter in a part assembly process. The general technique of stereo imaging can provide non-contact dimensional measurement by triangulation to determine the coordinates of selected points on an object, from which dimensional information can be derived. A laboratory stereo imaging system employing two CCD cameras has been developed to achieve accurate, high-precision dimensional measurement. This system allows flexible camera placement and easy calibration. It resolves the stereo correlation problem by utilizing a laser scanner to mark measurement points with high-intensity, small-diameter light spots. This system can be implemented in-process for a variety of surface geometry inspections under rugged manufacturing conditions.
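The triangulation step of such a two-camera system can be sketched as a closest-point intersection of the two rays through the laser-marked spot (the camera positions and spot location below are illustrative assumptions):

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Closest-point triangulation of two camera rays with centres c1, c2
    and direction vectors d1, d2 (need not be unit length)."""
    c1, d1 = np.asarray(c1, float), np.asarray(d1, float)
    c2, d2 = np.asarray(c2, float), np.asarray(d2, float)
    # solve the normal equations for ray parameters t1, t2 that
    # minimise |(c1 + t1*d1) - (c2 + t2*d2)|
    a = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    # return the midpoint of the closest-approach segment
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# two cameras 2 units apart, both seeing a laser spot at (0, 0, 10)
p = triangulate([-1, 0, 0], [1, 0, 10], [1, 0, 0], [-1, 0, 10])
```

Marking the point with a bright laser spot sidesteps the usual correspondence search: both cameras simply locate the brightest blob, and the midpoint construction absorbs small calibration errors that keep the rays from intersecting exactly.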
The purpose of this paper is to develop a non-contact profile sensor system able to accurately measure 3D free-form machined metal surfaces. The proposed sensor system has many advantages compared with conventional measuring systems. First, a new detection system for optical ring images, based on a rotating image detector, is developed to measure 3D profiles over a long measuring range with high accuracy. Second, processing time can be kept within 0.5 second by using the proposed detection system. Third, speckle noise is eliminated effectively by the rotating mechanism. Finally, it is concluded that this sensor system makes it possible to measure profiles within an accuracy of +/- 50 micrometers over a measuring range of 150 mm. In this paper, the measurement principle of the proposed sensor system is analyzed and the performance of the system is experimentally measured and discussed, not only for diffuse and specular reflection surfaces but also for the reduction of laser speckle noise, which has a direct influence on the measurement accuracy.
This paper discusses a method for obtaining accurate 3D measurements using a temporally encoded structured light system. An objective of the work was to balance the accuracy of all components in the system. This was achieved by including lens distortion in the models for both the camera and the projector which comprise the structured light system. In addition, substripe estimation was used to estimate projector stripe values, as a complement to the subpixel estimators used for locating image features. Experimental evaluation shows that it is important to use substripe estimation and to incorporate lens distortion in the projector model.
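A common way to include lens distortion in both the camera and projector models is a polynomial radial term applied to normalized image coordinates. A minimal sketch (the two-coefficient model and the coefficient values are generic assumptions, not the paper's calibration):

```python
def distort(x, y, k1, k2):
    """Forward two-term radial distortion of normalised coordinates:
    each point is scaled by 1 + k1*r^2 + k2*r^4."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * s, y * s

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration (converges for
    the small distortions typical of machine-vision lenses)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        s = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / s, yd / s
    return x, y
```

For the projector the same model is applied in reverse: stripe coordinates are pre-distorted before triangulation, which is why omitting projector distortion biases the reconstructed surface.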
The use of 3D methods for applications such as feature location within a wide field of view, as in automated guided vehicles or large assembly work, offers some distinct challenges. Stereo viewing has often been the method of choice due to its wide area coverage and hardware simplicity. However, stereo-based methods suffer a loss of spatial position resolution for more distant objects as compared to close objects, due to the high demagnification needed to cover large fields of view. A long depth of field in such systems may also degrade the general ability to perform correlations due to poor focus. In addition, stereo loses distance resolution for features nearing the line joining the two cameras, typically requiring movement of the cameras. This paper presents a novel method of obtaining 3D scene information as seen from the center of a cylindrical field. The method uses a single camera whose view is rotated through 360 degrees by means of a continuously rotating mirror. The viewing system uses constant-field-of-view optics that provide a constant X-Y resolution of features in the scene over depths of several meters. Comparing successive images with the readout from an encoder on the rotating mirror yields the locations of all objects within a limited-height cylinder. This paper discusses the sources of error and typical capabilities of this approach in light of a real-time part-location tracking application useful in assembly systems.
A noncontact 3D imaging technique based on tunable lasers is investigated to assess its performance compared to commercially available methods. In this technique, an object is flood-illuminated by an external-cavity tunable diode laser. As the laser frequency is scanned, the time-varying speckle-intensity pattern provides information about the depth of the scattering object. The patterns are recorded with a CCD camera, and the object's height profile is then extracted from the 3D fast Fourier transform. This paper presents the first quantitative comparison of results from this technique with those from a well-known standard instrument, the coordinate measuring machine. The object used for the comparison is a pressed sheet-metal part with dimensions of approximately 1-- by 100 by 20 mm. We found the standard deviation σ of the difference between the two profiles to be less than 0.2 Δz, where Δz is the raw range resolution of the speckle-pattern-sampling technique.
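The depth extraction in such frequency-scanning techniques rests on each pixel's intensity oscillating, as the laser is tuned, at a rate proportional to the scatterer's depth. A one-pixel sketch (the scan parameters are illustrative, and real data would use the full 3D FFT over the image stack described in the abstract):

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def depth_from_scan(trace, bandwidth):
    """Depth of a single scatterer from one pixel's intensity trace
    recorded over a total optical-frequency scan `bandwidth` (Hz).
    Intensity varies as cos(2*pi*(2z/C)*nu), so the FFT peak at bin k
    corresponds to z = k * C / (2 * bandwidth)."""
    spectrum = np.abs(np.fft.rfft(trace - np.mean(trace)))
    k = int(np.argmax(spectrum))
    return k * C / (2.0 * bandwidth)

# simulate a scatterer 0.5 m deep, scanned over 3 GHz in 256 steps
bw, n, z_true = 3.0e9, 256, 0.5
nu = np.linspace(0.0, bw, n, endpoint=False)
trace = 1.0 + np.cos(2.0 * np.pi * (2.0 * z_true / C) * nu)
z_est = depth_from_scan(trace, bw)
```

The FFT bin spacing gives the raw range resolution Δz = C/(2B) referred to in the abstract; for the assumed 3 GHz scan that is 5 cm, and subbin interpolation is what pushes the practical accuracy below 0.2 Δz.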
3D laser scanners are often configured to acquire data over a wide field of view at constant scanning speeds. When the features to be analyzed are arranged in a dense, irregular pattern, the most efficient scanning method utilizes such a scanning system, ideally with the scan width matched to the object size. However, a class of 3D imaging applications requires high-speed acquisition of information from localized regions at speeds which may exceed video frame rates. In this paper, scanning technologies matched to the latter requirements are surveyed. Fundamental limitations are identified, and performance parameters are analyzed, including rise time, number of spots, linearity, and maximum line rate, for each of the low-inertia, 'addressable' deflectors.
This paper describes a method for determining the actual area of a region of a macroscopic surface. The method entails determining a statistical distribution of a surface's limiting inclinations and then computing from these data the incremental surface area that is due solely to its highest-frequency oscillations. This area is added to the incremental area due solely to the next group of lower-frequency oscillations, and the sum of these terms is equated to the actual relative area of the region. The procedure consists of determining the surface coordinates of a region of a surface at successively different distances between its coordinate points in the reference plane. The data are acquired by a mechano-optical system, developed in our laboratory, that is described elsewhere. Inclination-angle distributions are determined from these data by dividing the area into small triangular regions and then calculating the angles between the average planes of these triangular regions and the reference plane. These are found to vary as a function of the average areas of the regions, or of the distances between coordinates. However, when the fractions of each inclination angle are plotted against the distance between coordinates in the reference plane, two linear curves are obtained. The limiting distributions are determined from the intercept values obtained on extrapolating the curves for the more closely spaced data to 'zero' distance between points. The incremental areas due to the next group of lower-frequency oscillations are computed from the coordinates spaced at the closest interval of the second linear region of the fraction-versus-distance curves. In addition to the area determinations, we also determined the average ratios of actual to nominal profile lengths, RL, for orthogonal directions on the respective surface. The RL values were found to depend on the direction of the traces. For one of the directions, the RL values were found to correlate with the surface areas of the respective materials, determined at the same scale as that used to determine the RL values. These values differed, however, from the actual surface areas of the materials determined as described above.
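The triangular-region step of the procedure can be illustrated numerically: split a sampled height map into triangles per grid cell and compare the summed triangle area with the nominal (reference-plane) area. The grid size and spacing below are illustrative assumptions:

```python
import numpy as np

def relative_area(z, dx):
    """Ratio of actual to nominal surface area for a height map z
    sampled on a square grid of spacing dx, summing the areas of the
    two triangles into which each grid cell is divided."""
    actual = 0.0
    rows, cols = z.shape
    for i in range(rows - 1):
        for j in range(cols - 1):
            p00 = np.array([0.0, 0.0, z[i, j]])
            p10 = np.array([dx, 0.0, z[i, j + 1]])
            p01 = np.array([0.0, dx, z[i + 1, j]])
            p11 = np.array([dx, dx, z[i + 1, j + 1]])
            # triangle area = half the cross-product magnitude
            actual += 0.5 * np.linalg.norm(np.cross(p10 - p00, p01 - p00))
            actual += 0.5 * np.linalg.norm(np.cross(p10 - p11, p01 - p11))
    return actual / ((rows - 1) * (cols - 1) * dx * dx)

# sanity check: a plane tilted 45 degrees has sqrt(2) times its nominal area
tilt = np.array([[j * 0.1 for j in range(4)] for _ in range(4)])
ratio = relative_area(tilt, 0.1)
```

As the paper observes, this ratio depends on the sampling interval dx; the extrapolation to 'zero' spacing is what recovers the limiting inclination distribution.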
Whatever laser range-finding technique is used, the operating principle is based on optical information coding and transmission. Indeed, the optical head is the common element among all telemetric methods, so its precise characterization is of major interest. Emission is performed by a laser diode and reception by an avalanche photodiode coupled with filtering and RF amplification. The optical path is coaxial if an appropriate geometry is chosen. The whole system may be considered a quadripole, with the emitter as the input and the receiver as the output; the network analyzer is therefore particularly well adapted to a global study of such a quadripole. With a suitable measurement protocol, the performance of the optical head can then be evaluated in terms of its optical and electronic properties. Nevertheless, given the great variety of ranges to be measured, many range-finding methods can be used, such as the frequency-modulated continuous-wave (FMCW) radar ranging method and phase-shift measurement, either with a heterodyne method or with direct synchronous detection. Whereas two of these, FMCW and heterodyne phase detection, have already been developed in our laboratory, we introduce synchronous detection as a new method.
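The phase-shift methods mentioned all reduce to the same mapping from the measured phase lag of the modulated beam to range. A sketch, under an assumed 10 MHz modulation frequency:

```python
import math

C = 3.0e8  # speed of light, m/s

def range_from_phase(delta_phi, f_mod):
    """Phase-shift rangefinding: the modulated beam travels out and
    back, so R = C * delta_phi / (4 * pi * f_mod), unambiguous only
    within one half modulation wavelength, C / (2 * f_mod)."""
    return C * delta_phi / (4.0 * math.pi * f_mod)

# a quarter-cycle phase lag at 10 MHz modulation
r = range_from_phase(math.pi / 2, 10.0e6)  # 3.75 m
```

At 10 MHz the ambiguity interval is C/(2f) = 15 m; the choice among FMCW, heterodyne, and direct synchronous detection trades off how this ambiguity, the modulation bandwidth, and the receiver noise are handled.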
Quality control tools in the manufacturing industry would be significantly enhanced by the development of methods for the consistent and reliable classification of 3D-imaged machined surfaces. Such tools would boost the capability of manufacturers to carry out inter- and intra-surface differentiation during the manufacturing phase of component life-cycles. This paper presents an approach to such a classification based on artificial neural networks (ANN). ANN techniques are increasingly used to resolve demanding problems across the spectrum of engineering disciplines. They are particularly suited to classification problems, especially those dealing with noisy data and highly non-linear relationships. Furthermore, once trained, their operation gives them a distinct speed advantage over other technologies. In this paper, the authors use adaptive resonance theory and back-propagation neural networks to classify a number of machined surfaces, and compare the results with those obtained from conventional methods to determine the effectiveness of the proposed technique.
A common approach to structured-light illumination is to project a light-stripe pattern onto a surface and then analyze the lateral displacements of the reflected pattern to reconstruct the surface topology. A single spatial frequency of light-stripe pattern may be used to illuminate a relatively flat surface; for rough surfaces, the surface topology is encoded with a sequence of light-stripe patterns of successively higher spatial frequencies. In both approaches, the maximum resolution is limited by the maximum spatial frequency used. However, the tradeoff between SNR, blurring, and spatial frequency limits the final reconstruction accuracy: as spatial frequency increases, the projection system's blurring function causes the light stripes to become coupled, thereby decreasing the SNR of the reflected image. We present both mathematical and numerical models for this phenomenon which indicate that by laterally moving the light-stripe pattern across the surface and optimally thresholding the image, we can achieve measurement density and accuracy beyond that achieved by increasing the frequency of a stationary light-stripe pattern. The numerical model will be calibrated against experimental data, and theoretical and numerical results will be compared with experimental results.
Image focus analysis is an important technique for passive autofocusing and 3D shape measurement. Electronic noise in digital images introduces errors in this technique, so it is important to derive robust focus measures that minimize error. In our earlier research, we developed a method for noise-sensitivity analysis of focus measures. In this paper we derive explicit expressions for the root-mean-square (RMS) error in autofocusing based on image focus analysis. This is motivated by the autofocusing uncertainty measure (AUM) we defined earlier as a metric for comparing the noise sensitivity of different focus measures in autofocusing and 3D shape-from-focus. The RMS error we derive has the same advantage as AUM in that it can be computed in only one trial of autofocusing. We validate our theory on RMS error and AUM through experiments: the theoretically estimated and experimentally measured values of the standard deviation of a set of focus measures are shown to be in agreement. Our results are based on a theoretical noise-sensitivity analysis of focus measures, and they show that for a given camera the optimally accurate focus measure may change from one object to another, depending on their focused images.
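As context for the noise-sensitivity analysis, a focus measure is any sharpness statistic that peaks at best focus. A minimal gradient-energy sketch (this particular measure and the synthetic stack are illustrative assumptions, not the paper's set of focus measures):

```python
import numpy as np

def focus_measure(img):
    """Gradient-energy focus measure: larger for sharper images.
    Sensor noise biases this statistic, which is what an RMS
    autofocusing-error analysis quantifies."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return float((gx * gx).sum() + (gy * gy).sum())

def autofocus(stack):
    """Index of the sharpest frame in a focus stack."""
    return int(np.argmax([focus_measure(f) for f in stack]))

# a crisp checkerboard frame between two fully defocused (flat) frames
sharp = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
blurred = np.full((8, 8), 0.5)
best = autofocus([blurred, sharp, blurred])
```

With noise added to each frame, repeated trials of `autofocus` scatter around the true peak; the paper's contribution is predicting that scatter's standard deviation from a single trial.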
A new approach is proposed for highly accurate reconstruction of the 3D shape and focused image of an object from a sequence of noisy defocused images. This approach unifies two approaches, image focus analysis and image defocus analysis, which have so far been treated separately in the research literature. In the new unified approach, high accuracy is attained at the cost of increased data acquisition and computation. The approach is based on modeling the sensing of defocused images in a camera system. A number of images are acquired at different levels of defocus, and the resulting data are treated as a function sampled in a 3D space where x and y are the image spatial coordinates and d is a parameter representing the level of defocus. The concept of a '3D point spread function' in this space is introduced. The problem of 3D shape and focused-image reconstruction is formulated as an optimization problem in which the difference between the observed image data and the estimated image data is minimized. The estimated image data are obtained from the image sensing model and the current best-known solutions for the 3D shape and focused image. An initial estimate of the solution is obtained through traditional shape-from-focus methods and is then improved iteratively by a gradient-descent approach. This approach reduces the errors in shape and focused image introduced by the image-overlap problem and the non-smoothness of the object's 3D shape. Experimental results are presented to show that the new method yields improved accuracy.
An invariant related to Gaussian curvature at an object point is developed, based upon the covariance matrix of photometric values within a local neighborhood about the point. We employ three illumination conditions, two of which are completely unknown, and we never need to explicitly know the surface normal at a point. The determinant of the covariance matrix of the intensity three-tuples in the local neighborhood of an object point is shown to be invariant with respect to rotation and translation. A way of combining these determinants to form a signature distribution is formulated that is rotation-, translation-, and scale-invariant. This signature is shown to be invariant over large ranges of poses of the same object, while being significantly different between distinctly shaped objects. A new object recognition methodology is proposed that compiles signatures for only a few viewpoints of a given object.
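The core quantity is straightforward to compute: the determinant of the 3x3 covariance of per-pixel intensity triples, one intensity per illumination condition. A sketch with synthetic data (the random patch stands in for a real image neighborhood):

```python
import numpy as np

def cov_det(triples):
    """Determinant of the covariance matrix of intensity three-tuples.
    `triples` has shape (n_pixels, 3): one row per neighbourhood pixel,
    one column per illumination condition."""
    return float(np.linalg.det(np.cov(triples, rowvar=False)))

rng = np.random.default_rng(0)
patch = rng.random((25, 3))  # a 5x5 neighbourhood under 3 lights
d = cov_det(patch)
```

Because the covariance is a statistic of the neighborhood, the determinant does not depend on pixel ordering, which is the root of the rotation/translation invariance; uniformly scaling all intensities by s multiplies the determinant by s^6, which is the dependence the signature construction must normalize away for full scale invariance.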
Multiple reflections can significantly degrade depth data acquired by laser rangefinders. However, analysis of the polarization state of the reflected light can provide additional clues to distinguish the first reflection from subsequent ones. Such estimates depend critically on the ratio between absolute differences of intensity measured by a camera under different settings of some polarizing optical components. Errors introduced by weak signals, the nonlinear response of the sensors, and noise and quantization in the video channel can be very significant. When a laser is used as the illumination source, fluctuation of the source intensity can introduce additional errors if comparisons are made between images acquired at different times. We present a discussion of the type and effects of these errors, guidelines on system settings, and an upper-bound analysis of the error in the estimates of the orientation of the completely polarized component of light, a fundamental quantity used in several polarization vision systems for diverse applications.
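The orientation of the completely polarized component is typically estimated from a few polarizer settings via the transmitted-intensity model I(θ) = a + b·cos(2θ − 2φ). A sketch using measurements at 0, 45, and 90 degrees (this specific three-angle scheme is an illustrative assumption, not necessarily the authors' configuration):

```python
import math

def polarization_angle(i0, i45, i90):
    """Orientation phi of the polarized component from intensities
    through a linear polarizer at 0, 45, and 90 degrees, using
    I(theta) = a + b*cos(2*theta - 2*phi)."""
    a = 0.5 * (i0 + i90)  # unpolarized offset
    return 0.5 * math.atan2(i45 - a, 0.5 * (i0 - i90))

# synthesize noiseless measurements for a = 1.0, b = 0.5, phi = 0.4 rad
a, b, phi = 1.0, 0.5, 0.4
i0 = a + b * math.cos(2 * phi)
i45 = a + b * math.sin(2 * phi)
i90 = a - b * math.cos(2 * phi)
phi_est = polarization_angle(i0, i45, i90)
```

Note that φ depends only on differences of intensities divided by each other, which is exactly why weak signals, sensor nonlinearity, and quantization in those differences dominate the error budget the abstract analyzes.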
Laser scanning systems using diffractive scanning mechanisms must address the spectral purity of their laser sources. If the source is a laser diode, then certain spectral behavior characteristics must be modeled and accounted for as part of the design. The ability to directly modulate the laser diode drive current, while generally a hallmark of this type of device, can be a key contributor to focal-spot quality degradation, owing to the diffractive mechanism interacting with the laser's modal transients. In this paper, system image quality is discussed as a function of the transient and steady-state source spectral characteristics as well as the diffractive scanner type. Methods are described which can achromatize these scanning systems to varying degrees by performing filtering or compression on the diode's emission spectrum.
Currently only contact systems, in which a probe touches the part on its surfaces, have the resolution required for inspection in the manufacturing industry. This paper describes the development of a non-contact structured light machine vision (SLMV) inspection system, using structured light imaged by a machine vision camera, that gives fast inspection at high resolution. The key technology of this system is inspection-simulation software using a geometric model of the inspected part. The geometric model eliminates the spurious range data from multiple reflections that have plagued previous SLMV systems. In addition, the new system eliminates 99 percent of the data to be studied in detail, reducing both the pixels acquired and the pixels analyzed to just those which contribute to determining the part's dimensions. Resolution is improved by averaging many points over the part's surface. Experimentally, a single-axis machine measured several dimensions of a part in less than a second with micron resolution.
This paper deals with a fast rangefinder based on a laser-scanning space-encoding method. First, the principle of image encoding is introduced and the formulas for computing measurement values in a 3D rectangular coordinate system are given. Second, design rules are put forward from the standpoint of measuring volume, shadowing, and received light power. Finally, the measurement errors caused by quantization, as well as the resolutions and their errors, are discussed through error analysis, and design rules to reduce the effect of quantization errors are presented.
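Space-encoding rangefinders commonly label the scan directions with binary-reflected Gray codes, so that a misjudged stripe boundary corrupts the decoded index by at most one count, which directly bounds the quantization error discussed above. A sketch (Gray coding is a standard choice for space encoding, though the abstract does not name the specific code used):

```python
def to_gray(n):
    """Binary-reflected Gray code of stripe index n."""
    return n ^ (n >> 1)

def from_gray(g):
    """Decode a Gray code back to the stripe index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

With k projected patterns the scan direction is quantized into 2^k codes; adjacent codes differ in exactly one bit, which is the property that keeps boundary errors local.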
This paper describes a non-contact laser sensor for pipe inner-wall inspection using the circular optical cutting method. The sensor principally consists of a laser-diode light source, an optical ring-pattern generator, and a CCD camera. The ring-pattern generator forms the light from the laser diode into a circular pattern projected onto the pipe inner wall, and the light reflected or scattered by the inner circumferential surface is imaged on the CCD camera. Fundamental performance analysis and simulation according to the theory of light reflection and scattering show that it is possible to select the appropriate parameters. Using this sensor mounted on a trial automated moving system, pipe interior diameters between 80 mm and 160 mm can be inspected within an accuracy of 0.2 mm.