Phase shift measurement on a Moire image is a very effective approach for gathering full-field 3D data. The limitations of this approach include: (1) The data is gathered over multiple images as the Moire pattern is phase shifted (one of the gratings is shifted for each of the multiple images); data gathering can therefore be affected by problems such as motion blurring due to the requirement for multiple images. (2) The phase measurement has a two-pi ambiguity, which makes it difficult to analyze data with step discontinuities. To eliminate the need to take multiple images as the grating is shifted, we have developed a refractive element system that simultaneously produces multiple Moire images of the same scene. The system is adjusted so that each of the simultaneous images provides a different Moire phase. From these multiple simultaneous images, accurate subfringe information can be extracted using standard phase calculating techniques. An added advantage of this optical design is that the images have a stereo disparity which is a function of the distance from the lensing system. This stereo disparity can be used to eliminate the two-pi ambiguity problem that plagues other phase calculating techniques. This presentation reviews the optical arrangement that provides the multiple simultaneous Moire images and presents a mathematical description.
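The abstract does not specify which phase-calculating technique is used; a minimal sketch of the standard four-step (90-degree) algorithm, which works identically whether the four phase-shifted images are taken sequentially or simultaneously, might look like this:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Wrapped Moire phase from four images shifted by 90 degrees:
    I_k = A + B*cos(phi + k*pi/2), k = 0..3, evaluated per pixel."""
    return math.atan2(i4 - i2, i1 - i3)

# Synthetic pixel: background A, modulation B, true phase 1.0 rad
A, B, phi = 100.0, 40.0, 1.0
frames = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
```

The result is wrapped to (-pi, pi], which is exactly the two-pi ambiguity the stereo disparity is used to resolve.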
Image defocus analysis (IDA), image focus analysis (IFA), and stereo image analysis (SIA) are integrated for recovering the three-dimensional (3D) shape of objects. Integrating IDA, IFA, and SIA has important advantages because IDA and IFA, though less accurate than stereo, do not suffer from the correspondence problem associated with stereo. Therefore, a rough 3D shape is first obtained using IDA and IFA without encountering the correspondence problem. The amount of computation and the accuracy at this stage are optimized by using IDA first and then IFA. Accuracy is further improved by projecting a high-contrast pattern onto the object of interest. The rough shape thus obtained is used in a stereo matching algorithm to solve the correspondence problem efficiently. The amount of computation in matching is reduced since the search for correspondence is done in a narrow image region determined by the approximate shape. The knowledge of approximate shape also improves the matching accuracy by minimizing false matches due to occlusion. The method for integrating IDA, IFA, and SIA is presented in detail. The method has been implemented on a vision system named SVIS, and experimental results are presented.
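As a rough sketch of the constrained-matching idea (names and the SAD cost are illustrative, not the SVIS implementation), the correspondence search can be limited to a narrow disparity band around the value implied by the approximate IDA/IFA shape:

```python
def match_with_prior(left_row, right_row, x, d_approx, half_window=2, patch=1):
    """Match pixel x of the left scanline against the right scanline,
    searching only a narrow disparity band around d_approx (the disparity
    implied by the rough IDA/IFA shape) instead of the whole line."""
    best_d, best_cost = None, float("inf")
    for d in range(max(0, d_approx - half_window), d_approx + half_window + 1):
        if x - d - patch < 0 or x - d + patch >= len(right_row):
            continue
        # sum of absolute differences over a (2*patch+1)-pixel window
        cost = sum(abs(left_row[x + k] - right_row[x - d + k])
                   for k in range(-patch, patch + 1))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Right scanline is the left one shifted by a true disparity of 3
left_row = [(i * i) % 17 for i in range(20)]
right_row = left_row[3:] + [0, 0, 0]
found = match_with_prior(left_row, right_row, 10, d_approx=4)
```

Restricting the search to a few candidate disparities is what reduces both the computation and the chance of false matches.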
The reconstruction of three-dimensional (3D) information from defocused image data is formulated as an inverse problem that is solved through a regularization technique. The technique is based on modeling the sensing of defocused images in a camera system using a three-dimensional (3D) point spread function (PSF). Many images are acquired at different levels of defocus. The difference (mean-square error) between this acquired image data and the estimated image data corresponding to an initial solution for 3D shape is minimized. The initial solution for 3D shape is obtained from a focus and defocus analysis approach. A regularization approach that uses a smoothness constraint is proposed to improve this initial solution iteratively. The performance of this approach is compared with two other approaches: (1) gradient descent based on a planar surface patch approximation, and (2) local error minimization based on a limited search. We exploit constraints unique to this problem, such as the positivity of image brightness, in the optimization procedure. Our experiments show that the regularization approach performs better than the other two and that high accuracy is attainable with relatively moderate computation. Experimental results are demonstrated for a geometric-optics model of the 3D PSF on simulated image data.
Testing has shown a significant dependence between the length of the scale artifact and the achievable system uncertainty for high-accuracy industrial videogrammetry on high-aspect-ratio objects. Shop practice traditionally allows scale artifacts under 1/5 the object length. This practice can lead to higher-than-expected uncertainties because the uncertainty of the metric defining the physical scale must be multiplied by the ratio of object length to scale artifact length. This relationship is incorporated into the U95 uncertainty relationship. Test cases validating the uncertainty model are also presented. A network of scale reference points can also be established on an object using a laser tracking interferometer. Test results show a significant reduction in total uncertainty when using this network to define the scale for videogrammetry applications. Measuring a 500-inch (12.7-meter) object and scaling the survey with a 130-inch (3.3-meter) scale bar produced an uncertainty of 22 ppm. When the 500-inch object survey was scaled with laser tracker data, the system yielded an uncertainty of 9 ppm.
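The stated scaling rule is simple to make concrete: the scale artifact's uncertainty contribution is magnified by the ratio of object length to artifact length. A trivial sketch (this is only the ratio term; the full U95 budget, which is not reproduced in the abstract, adds other sources):

```python
def scaled_uncertainty_ppm(u_scale_ppm, object_length, artifact_length):
    """Contribution of the scale artifact to system uncertainty, magnified
    by the object-length / artifact-length ratio (lengths in same units)."""
    return u_scale_ppm * (object_length / artifact_length)

# 500-inch object scaled by a 130-inch bar: the bar's own uncertainty
# is magnified by a factor of 500/130, roughly 3.85x.
factor = 500.0 / 130.0
```

This magnification is why the short scale bar produced 22 ppm while the distributed laser tracker network reached 9 ppm.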
All-optical-fiber phase-shifting electronic speckle pattern interferometry (ESPI) is studied in this paper. It has the following advantages: (1) low cost; (2) reduction of the unreliable factors introduced by separate optical components; (3) simplification of the optical configuration; (4) great reduction in volume; (5) flexibility, as it can easily be designed into different structures to suit inaccessible environments such as pipeline cavities. All-optical-fiber ESPI inspection systems sensitive to in-plane and out-of-plane displacement are presented, and practical measurements have been carried out in defect testing of carbon fiber material and in crack testing. The results are satisfactory.
Heating and cooling air for an aircraft interior is transported through metal ducts. These ducts vary in size from a few centimeters to 20 centimeters in diameter. In the assembly of aircraft components, a coupling is swaged onto the ducts. To ensure the mechanical dies are operating properly, the crimp mark is checked. The current method of visual inspection and checking with calipers does not allow implementation of statistical process control methods. In an effort to improve this process check, a new measurement method is being developed. A feasibility study indicated that a structured light laser system would be a good approach. Key requirements were that the system must be portable for use at different locations within the fabrication area, fast, easy for the mechanic/inspector to use, accurate, and nondestructive. Due to the mechanical configuration of the tube and coupling, a camera with magnification optics is used. The bump height to be measured is at most 50.8 microns (0.0020 inch). The system uses computer vision and custom software written in C++, along with a low-cost frame grabber. This paper shows the final production prototype system and its configuration for factory testing, and discusses the design and testing of the system.
The dynamic grid projection method for measuring 3-dimensional surface profiles utilizes a rotating grid as the structured light source. This technique has several advantages over other triangulation methods, such as the ability to determine the fringe order (and therefore the range) of each image pixel independently, with no relative motion between the sensor and test object. Because of the high data rate required to sample the signal from a rotating grid, dynamic fringe projection methods have typically been limited to single-point measurements using photodiode sensors. Also, the complex signal analysis algorithms used have limited the resolution and speed of the technique. This paper describes a prototype system developed with a photodiode sensor array imager and simplified digital signal processing algorithms. The use of a photodiode array gives the imaging system sufficient bandwidth to measure significant regions of the surface simultaneously. By extracting only the essential elements of the data from each pixel's output, such as the number of fringes crossed and the crossing with the maximum duration, an accurate surface measurement can be made with a simple search algorithm. The theoretical resolution of the system is limited by the speed of the imaging array, the accuracy of the grid's rotation, and the imaging optics.
A structured-lighting reflection technique was developed to detect and measure small waviness and curvature defects on specular free-form surfaces. It can reconstruct the 3D relief of specular free-form surfaces and display the curvature at each point. A calibrated camera observes the reflection of a retro-illuminated LCD panel in the free-form surface. The use of a coded lighting technique and knowledge of the setup geometry make it possible to locate each observed point on the LCD panel. Using the principle of inverse ray tracing, a surface modelled with Bezier polynomials is fitted to the observed data. Unlike structured-lighting projection techniques, which are directly sensitive to the topography of the surface to be inspected, structured-lighting reflection techniques are essentially sensitive to the gradient and thus enable the detection and measurement of curvature defects that are imperceptible to the projection techniques.
Rapid three-dimensional profilometry with resolution similar to that of mechanical coordinate measuring machines has long been a goal of vision system developers. Some success has been achieved using structured light projectors, flying-spot scanners, and the like. However, these techniques are limited by restrictions on the height variation and stability of the target object. This paper describes a phase-measuring projected fringe interferometer that overcomes many of these problems. Using data from a high-speed megapixel-class camera viewing high-precision spatial modulation of a periodic illumination pattern on the target, new software unwraps the surface phase information and rapidly computes a true three-dimensional surface. An important capability of the software is avoiding errors due to islands of missing data or high-slope regions on the target. Interchangeable camera lenses permit measurements of a wide range of object sizes, with a height resolution ratio on the order of 20 microns per meter of test-piece size. The current application of the instrument is measurement of structural deflections of hypersonic aircraft structural components. In previous work, the technique has successfully measured the profiles of coins and jet engine turbine blades and the curvature of a human spine. We summarize the special qualities of this instrument that make it well suited to such a wide range of measurements. Finally, we discuss some preliminary experimental results and compare them with typical accuracy requirements.
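The unwrapping step itself is not detailed in the abstract; a minimal 1D sketch of the standard approach (the paper's software necessarily works in 2D, with the extra path-choosing logic needed to survive missing-data islands and steep slopes) is:

```python
import math

def unwrap_1d(phases):
    """Unwrap a 1D sequence of wrapped phases (radians in (-pi, pi]).

    Whenever consecutive samples jump by more than pi, a multiple of
    2*pi is added or subtracted so the output varies smoothly."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        if d > math.pi:
            offset -= 2 * math.pi
        elif d < -math.pi:
            offset += 2 * math.pi
        out.append(cur + offset)
    return out
```

A linear phase ramp wrapped into (-pi, pi] is recovered exactly by this scheme, provided the true phase never changes by more than pi between samples.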
In this paper a laser triangulation scanning method for the non-contact measurement of scratches on aircraft skins and window surfaces is described. Based on analyses of the diffusing and reflecting properties of the measured surface, the correct setting of the laser triangulation probe is given. A probe-posture adjustment method and a surface-spreading method have been developed to improve the measurement accuracy. A scratch depth measuring system (SDMS) has been built, characterized by a working distance of 38 mm, a measuring range of 5 mm, a simple structure, and low cost. Experimental results are given.
In this paper an integrated scanning laser sensor system with a large working distance for 3-D free-form measurements is described. The sensor system consists of one diode laser light source and four position sensitive device (PSD) detectors. A fundamental analysis is performed using the Lambertian model for a diffusing surface. The working distance is 150-200 mm and the measuring range is 80 mm; scanning measurement can therefore be realized almost without Z-axis tracking. At the same time, the surface normal direction, that is, the inclination angles about the X and Y axes, can be calculated. Based on the inclination data, the measurement error can be compensated using a verification database. Computer simulation and preliminary experimental results are also given in this paper.
Structured light illumination has been used for several decades to extract three-dimensional information from surface topology. Most of the research and development has been in the light structuring methodology and the electronic processing, while a unified theoretical description has been lacking. With the advent of programmable spatial light modulators having high frame rates, structured light illumination methods using spatial and temporal patterns are practical. We present structured light systems using spatial light modulation as communication systems and use communications theory in their description. This theory is applied to the specific method of successive binary light striping, and the tradeoffs between surface encoding quality and processing speed are discussed. Shannon's theorem of channel capacity provides an objective measure to evaluate some of these tradeoffs and compare a variety of different approaches to structured light illumination. Another result of our analysis is the unification of structured light projection with pattern recognition. Methods of image recognition using Fourier expansion via orthogonal pattern projection are presented. The results of this analysis establish some physical limitations which guide us to effectively utilize both the methodology and the technology applicable to both 3-D data acquisition and pattern recognition. Both numerical and experimental results are presented to demonstrate these concepts.
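As an illustration of treating pattern projection as a communication channel, successive binary striping assigns each pixel a codeword, one bit per projected frame. A common variant (not necessarily the authors' choice) uses Gray coding so adjacent stripes differ in a single bit, halving the effect of stripe-edge errors; decoding such a codeword back to a stripe index might look like:

```python
def gray_to_index(bits):
    """Decode a Gray-code bit list (MSB first), one bit per projected
    pattern, into the stripe index seen by that pixel."""
    index = 0
    prev = 0
    for b in bits:
        prev ^= b              # Gray -> binary: b_i = g_i XOR b_{i-1}
        index = (index << 1) | prev
    return index

# With 4 projected patterns, 16 distinct stripes can be labeled per pixel.
```

In channel terms, each extra pattern adds one bit of depth-code capacity, which is where Shannon's theorem gives a handle on the encoding/speed tradeoff.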
This paper describes a method for the treatment of non-full-field images by means of the Fourier fringe analysis (FFA) technique. With images of this type, the borders of the data appear as high frequencies in the Fourier domain, while areas of uniform intensity (background) appear as low-frequency components. This information interferes with the signal peak and leads to large errors in the phase result throughout the image. This paper proposes a two-dimensional mapping operation to overcome the problem. By mapping the original image into a square, the image can be processed by conventional FFA. The final result is obtained by applying the inverse of the mapping function.
The control of optical distortion is useful in the design of a variety of optical systems, especially those used for laser scanning. The optics used for focusing a laser beam onto a flat image field must satisfy the f-(theta) condition (the image height is proportional to the input field angle) to produce a constant scan velocity across the image plane. We analyze and compare the optical performance produced by several single-surface reflectors used as line-scanning distortion correctors. Our guideline is a distortion of less than 0.1%, as typically required in industry. For a particular reflector we are always able to find the right position for the scanning mirror to produce a correct scan line in the image field. Our results show a linear dependence between the maximum scanning angle for which the error is less than 0.1% and the scanning-beam f-number. At the same time, the mirror f-number is inversely proportional to the scanning-beam f-number; thus, the larger the mirror f/#, the smaller the scanning spot and the scan angle. We provide four graphs that can be used easily and give a complete picture of the performance produced by a single reflecting surface as a distortion corrector.
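The f-(theta) condition can be made concrete by comparing it with the ordinary distortion-free imaging law y = f*tan(theta); the deviation below is what a corrector must suppress to meet the 0.1% guideline (the focal length is a placeholder, since the ratio does not depend on it):

```python
import math

def ftheta_distortion_percent(theta, f=100.0):
    """Percent deviation of an uncorrected lens (y = f*tan(theta)) from
    the f-theta condition (y = f*theta) at field angle theta in radians."""
    return 100.0 * (f * math.tan(theta) - f * theta) / (f * theta)
```

Since tan(theta) - theta grows roughly as theta**3/3, the uncorrected deviation passes 0.1% already near 0.055 rad (about 3 degrees), which is why a corrector is needed at useful scan angles.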
In this communication we propose a design combining the advantages of the space invariance of telecentric triangulation with high relative lateral resolution and a large measuring volume at the same time. Because the scan motion of the laser beam is decoupled from physical transport of the sensor head, fast scanning over a large volume is possible. However, this requires aperture optics as large as the scan area. We used a liquid mirror as the aperture for this scanner. The surface of a spinning reflective liquid takes the shape of a paraboloid that can be used as a mirror. This very old and nearly forgotten concept has recently been revived with success. Low cost, large size, and high optical quality are the main advantages of liquid mirrors. Their main limitation comes from the fact that the optical axis must be aligned vertically and cannot be tilted. The prototype has a stand-off distance of 1.5 meters, a scan length up to 1 meter (telecentric), a depth of view of 1 meter, and a relative depth resolution of 1 mm (which can be reduced). The design is based on the auto-synchronized scanner and is well corrected for field scanning distortion (f-theta).
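The paraboloid formed by a spinning liquid has a focal length set entirely by the rotation rate, f = g / (2*omega^2); a small sketch of this standard relation (textbook physics, not taken from the paper):

```python
import math

def liquid_mirror_focal_length(rpm, g=9.81):
    """Focal length (m) of a spinning-liquid mirror.

    The equilibrium surface is the paraboloid z = omega^2 * r^2 / (2g),
    i.e. z = r^2 / (4f), giving focal length f = g / (2 * omega^2)."""
    omega = rpm * 2.0 * math.pi / 60.0   # rotation rate in rad/s
    return g / (2.0 * omega ** 2)
```

The focal length is tunable simply by changing the spin rate, but because gravity defines the paraboloid, the optical axis must stay vertical, which is exactly the limitation the abstract notes.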
Speckle caused by the use of laser sources is a well-known phenomenon. For some applications, the presence of speckle is used as part of the method, but in others it is purely a source of noise. In the case of line-based laser gages, speckle is typically of considerable concern. Certainly, using white light sources or LEDs with short coherence is one way around this problem, but such methods also lose some of the valuable properties of laser line projection, such as very narrow lines, the ability to create multiple lines by diffraction, and high signal-to-background ratio through the use of bandpass filters to view only the laser wavelength. A number of valuable tools have been introduced that help reduce the problem of speckle from laser sources without giving up all the advantages of the laser itself. This paper reviews the pros and cons of a number of these methods and suggests a set of tools specific to laser line projection.
A vision-driven automatic digitization process for free-form surface reconstruction has been developed for reverse engineering physical models, using a coordinate measuring machine (CMM) equipped with a touch-trigger probe and a CCD camera. The process integrates 3D stereo detection, data filtering, Delaunay triangulation, and adaptive surface digitization into a single surface reconstruction process. With this approach, surface reconstruction can be carried out automatically and accurately. Least-squares B-spline surface models with controlled digitization accuracy can be generated for further application in product design and manufacturing processes. One industrial application indicates that the approach is feasible and that the processing time required in the reverse engineering process can be reduced by more than 85%.
Application of current 3-D laser scanning systems to reverse engineering is limited by two obstacles: the meticulous guidance of the laser scanner over the surface of the object being scanned, and the segmentation of the cloud data collected by the laser scanner. At present, both obstacles are addressed manually: the guidance of the laser scanning sensor at the correct surface-to-sensor distance depends on operator judgement, and the segmentation of the collected data relies on the user manually defining surface boundaries on a computer screen. By applying a 2-D CCD camera, both of these problems can be resolved. Depth information on the location of the object surface can be derived from a pair of stereo images from the CCD camera; using this depth information, the scanner path can be calculated automatically. Segmentation of the object surface can be accomplished by applying a Kohonen neural network to the CCD image. Successful segmentation of the image depends on the locations selected to seed the neural nodes, as well as on preventing the neuron connectors from bleeding onto neighboring patches. Thus the CCD camera allows for automatic path planning of the laser scanner as well as segmentation of the surface into patches defined along its natural boundaries.
We propose a new method for the registration of several views acquired by a range finder, for rapid prototyping applications. The method works well with free-form surfaces and does not need an initial approximate relative positioning of the two data sets. The algorithm proceeds along the following lines. First, approximate each data set with a multi-resolution quadric spline; this is achieved using fast filtering techniques. Second, compute the gross rigid motion through a hypothesis/verification algorithm based on the multi-resolution quadric spline patches; this step minimizes a robust criterion, based on the least median of squares, which tolerates up to 50% false matches. Last, run an iterative closest point procedure; this local iterative scheme refines the gross rigid estimate and gives the final registration. The method is tested on real 3D data: the front and side views of a human face are registered to build a more complete 3D model.
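The final iterative-closest-point refinement can be sketched in 2D (points as complex numbers for brevity; the paper works on 3D range data and precedes this step with the robust least-median-of-squares matching, which this sketch omits):

```python
import cmath

def best_rigid(src, dst):
    """Closed-form 2D rigid motion (unit-complex rotation, complex
    translation) mapping points src onto paired points dst."""
    cs = sum(src) / len(src)
    cd = sum(dst) / len(dst)
    num = sum((d - cd) * (s - cs).conjugate() for s, d in zip(src, dst))
    rot = num / abs(num)
    return rot, cd - rot * cs

def icp(src, dst, iters=20):
    """Minimal iterative closest point: pair each source point with its
    nearest destination point, solve the rigid motion, repeat."""
    cur = list(src)
    for _ in range(iters):
        pairs = [min(dst, key=lambda d: abs(d - p)) for p in cur]
        rot, t = best_rigid(cur, pairs)
        cur = [rot * p + t for p in cur]
    return cur

# Demo: recover a small rotation (0.05 rad) and translation
dst = [0 + 0j, 3 + 0j, 3 + 3j, 0 + 3j, 1.5 + 1.5j]
R0, T0 = cmath.exp(0.05j), 0.1 + 0.05j
src = [(d - T0) / R0 for d in dst]   # a second view of the same points
aligned = icp(src, dst)
```

Plain ICP only converges from a nearby starting pose, which is why the gross rigid motion must be estimated robustly first.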
Shape modeling is a very important issue for many applications, for example, object recognition for robot vision and virtual environment construction. In this paper, a new method for obtaining a polyhedral model from multiview images using genetic algorithms (GAs) is proposed. In this method, a similarity between the model and every input image is calculated, and the model with the maximum similarity is found. To find the model of maximum similarity, genetic algorithms are used as the optimization method. In the genetic algorithm, a sharing scheme is employed for efficient detection of multiple solutions, because some shapes may be represented by multiple shape models. Results of modeling experiments on real multiple images demonstrate that the proposed method can robustly generate models using the GA.
One of the roadblocks on the path to automated reverse engineering has been the extraction of useful data from the copious range data generated by 3-D laser scanning systems. A method to extract the relevant features of a scanned object is presented. A 3-D laser scanner is automatically directed to obtain discrete laser cloud data on each separate patch that constitutes the object's surface. With each set of cloud data treated as a separate entity, primitives are fitted to the data, resulting in a geometric and topological database. Using a feed-forward neural network, the data is analyzed for geometric combinations that make up machining features such as through holes and slots. These features are required for the reconstruction of the solid model by a machinist or by feature-based CAM algorithms, thus completing the reverse engineering cycle.
A new method for 360-degree 3D shape measurement, in which light sectioning and phase shifting techniques are combined, is presented in this paper. A sinusoidal light field is applied to the projected light stripe, and the phase-shifting technique is used to calculate the phases of the light slit. The wrapped phase distribution of the slit is then formed, and unwrapping is carried out using the height information from the light-sectioning method, so that phase measurements with better precision can be obtained. Finally, the target 3D shape data are produced according to the geometric relationships between phase and object height. The principles of this method are discussed in detail and experimental results are shown in this paper.
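Once a coarse height is available from light sectioning, the 2*pi fringe order of the wrapped phase can be fixed pointwise; a minimal sketch (the names and the phase-to-height calibration are hypothetical, since the paper's geometric relationships are not given in the abstract):

```python
import math

def unwrap_with_coarse(phi_wrapped, h_coarse, h_per_fringe):
    """Fix the 2*pi fringe order of a wrapped phase using the coarse
    height from the light-sectioning step.

    h_per_fringe is the height change per 2*pi of phase. h_coarse only
    needs to be accurate to better than half a fringe for the integer
    order k to come out right."""
    phi_coarse = 2.0 * math.pi * h_coarse / h_per_fringe
    k = round((phi_coarse - phi_wrapped) / (2.0 * math.pi))
    return phi_wrapped + 2.0 * math.pi * k
```

The coarse method supplies the absolute fringe order; the phase measurement supplies the sub-fringe precision.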
Classical machine vision is difficult to use for practical 3D vision because depth is lost when 3D objects are projected into 2D images, leaving ambiguity when the 3D object is reconstructed. In contrast, range images, i.e. 3D images, preserve the invariance properties of 3D objects, which makes image processing tasks easier. This paper develops a prototype spatial encoder based on a laser scanning method. Range images are obtained from encoded images produced by our encoder.
When a CCD is used as the sensor, there are at least five methods that can realize high-accuracy laser direction finding: the image matching method, the radiation center method, the geometric center method, the rectangle-envelope center method, and the maximum-run-length center method. The first three can achieve the highest accuracy, but they are too complicated to implement in real time and very expensive. The other two can also achieve high accuracy and are not difficult to implement in real time. Using a single-chip microcomputer and an ordinary CCD camera, a very simple system can obtain the position of a laser beam at a data rate of 50 measurements per second.
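The radiation-center method, for instance, reduces to an intensity-weighted centroid over the spot image; a minimal sketch (illustrative only, not the authors' implementation):

```python
def radiation_center(image):
    """Intensity-weighted centroid ('radiation center') of a laser spot.

    image is a 2D list of grey levels; returns (row, col) with sub-pixel
    resolution. The run-length based methods trade a little of this
    accuracy for real-time simplicity on modest hardware."""
    total = wr = wc = 0.0
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            total += v
            wr += r * v
            wc += c * v
    return wr / total, wc / total

# A symmetric spot centered on pixel (1, 2)
spot = [[0, 0, 1, 0, 0],
        [0, 1, 4, 1, 0],
        [0, 0, 1, 0, 0]]
center = radiation_center(spot)
```

The weighted sum over every pixel is what makes this method accurate but computationally heavy compared with the envelope and run-length variants.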
This paper presents the development of a hardware and software system suitable for the three-dimensional (3-D) digitization and computer modeling of objects intended for manufacture through mold making via CNC machining or rapid tooling systems. The hardware sub-system comprises a three-dimensional (3-D) machine vision sensor integrated with a computer numerically controlled (CNC) machine tool. The software sub-system models very large 3-D data sets (termed cloud data) using a unified, non-redundant triangular mesh; this representation is built from the 3-D data points by a novel triangulation process. A case study is presented that illustrates the efficacy of the technique for rapid manufacture from an initial designer's model.
A new time-of-flight-based imaging lidar was designed to carry out measurements in the 5- to 100-m range. The concept of the device is described, along with an illustration of the design and a presentation of our sensor breadboard. Test results showing sensor performance are also presented and discussed. The device is primarily developed for space applications, but it can also be used in numerous industrial shape and profile measurements.
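Time-of-flight ranging converts a round-trip pulse delay directly to distance; a one-line sketch showing the timing scales involved over the stated 5- to 100-m range:

```python
def tof_range(round_trip_seconds, c=299_792_458.0):
    """Time-of-flight range: the pulse travels out and back, so the
    target distance is half the round-trip optical path."""
    return c * round_trip_seconds / 2.0

# A target at 100 m returns the pulse after about 667 ns; at 5 m, about
# 33 ns. Centimeter-level range resolution therefore demands timing
# electronics that resolve fractions of a nanosecond.
```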