This PDF file contains the front matter associated with SPIE Proceedings Volume 6762, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
We describe a new algorithm for 3D edge detection on composite part surfaces based upon phase shift analysis. Current phase-shift-based algorithms generate 3D surface profiles; they do not directly compute 3D edge information. The proposed algorithm was developed in this context for 3D edge detection. One advantage of this method is its ability to measure smooth 3D edges that cannot be accurately measured using traditional contact techniques. A dense 3D point cloud representing the part edges is computed, and all such edges in view may be computed simultaneously. The inherent accuracy available with phase shift analysis is leveraged to detect the smooth edges with minimal error. Experimental results with some test parts are presented.
Depth From Defocus (DFD) is a depth recovery method that needs only
two defocused images recorded with different camera settings. In
practice, this technique is found to have good accuracy for cameras
operating in normal mode. In this paper, we present new
algorithms to extend the DFD method to cameras working in
macro mode used for very close objects in a distance range of
5 cm to 20 cm. We adopted a new lens position setting suitable for
macro mode to avoid serious blurring. We also developed a new
calibration algorithm to normalize magnification of images captured
with different lens positions. In some range intervals with high
error sensitivity, we used an additional image to reduce the error
caused by drastic change of lens settings. After finding the object
depth, we used the corresponding blur parameter for computing the
focused image through image restoration, which we term
"soft-focusing". Experimental results on a high-end digital camera
show that the new algorithms significantly improve the accuracy of
DFD in the macro mode. In terms of focusing accuracy, the RMS error
is about 15 lens steps out of 1500 steps, which is around 1%.
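The thin-lens geometry behind DFD can be illustrated with a toy forward model. The sketch below assumes the object is nearer than the focused distance (the macro case), with focal length f, sensor distance s, and aperture diameter D in millimeters; all values are illustrative, and the actual algorithms use a calibrated mapping between lens steps and blur rather than this closed form.

```python
def blur_radius(u, f, s, D):
    """Thin-lens blur-circle radius for an object at distance u, focal
    length f, sensor distance s, aperture diameter D (all mm).  Written
    for the macro branch, where the object is nearer than the focused
    distance, so the blur grows as u shrinks."""
    return (D / 2.0) * s * (1.0 / u + 1.0 / s - 1.0 / f)

def depth_from_blur(sigma, f, s, D):
    """Invert the forward model above: recover object distance u from a
    measured blur radius sigma on the near-focus branch."""
    return 1.0 / (1.0 / f - 1.0 / s + 2.0 * sigma / (D * s))
```

For example, with f = 50 mm, s = 60 mm (focused distance 300 mm) and D = 10 mm, an object at 150 mm produces a 1 mm blur circle, and inverting that blur recovers the 150 mm depth exactly.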
Accurate 3D shape measurement via fringe analysis methods requires the determination of the absolute phase map of the
object. In this paper, we present a novel method for absolute phase retrieval developed for use with the Fourier transform
method for fringe analysis. A cross-shaped marker is embedded in the fringe pattern that is projected onto the object. The
position of the marker in the captured fringe image is detected and later used in calculating the absolute phase map. For
phase analysis of the fringe image, the marker is removed and the sinusoidal intensity distribution of the fringe pattern is
restored before the Fourier transform method is applied. This paper focuses on the concept of absolute phase retrieval
from a single fringe pattern as well as techniques for marker detection and removal. Experimental results on absolute
phase retrieval and 3D reconstruction are also presented to show the feasibility of the proposed method.
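The Fourier transform method for fringe analysis can be sketched in one dimension: a fringe signal a + b·cos(2πf0x/n + φ(x)) has a carrier lobe at frequency f0 that is isolated, shifted to baseband, and converted to a wrapped phase. This is a generic illustration of the method, not the paper's marker-embedded variant; the carrier frequency and the width of the band-pass window are assumptions.

```python
import numpy as np

def ft_phase_1d(row, f0):
    """Wrapped phase of one fringe row by the Fourier transform method:
    keep a window around the positive carrier lobe at f0 (cycles per
    row), shift it to baseband, and take the angle."""
    n = row.size
    F = np.fft.fft(row - row.mean())      # remove the DC term
    H = np.zeros(n, complex)
    lo, hi = f0 // 2, 3 * f0 // 2         # assumed band-pass window
    H[lo:hi] = F[lo:hi]                   # keep only the +f0 lobe
    analytic = np.fft.ifft(H)
    x = np.arange(n)
    # demodulate the carrier; what remains is the wrapped phase phi(x)
    return np.angle(analytic * np.exp(-2j * np.pi * f0 * x / n))
```

With a synthetic fringe of constant phase 0.5 rad and carrier f0 = 32 over 256 samples, the recovered phase is 0.5 rad across the row.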
Common image edge detectors identify a mixture of image discontinuities caused by local changes of illumination,
texture, or geometry. This document describes a method to separate depth edges, a special instance of
geometry edges, from all other edges in a single image, without a complicated sensor. A single color camera and
red, green, and blue lights are used for scene illumination. The color shadows produced by the active illumination
provide discriminative features for detecting depth edges. Experimental results demonstrate the discriminative
power of the proposed method, and the performance of the depth-edge detection is studied
analytically for different illumination conditions.
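A minimal sketch of the idea, under the simplifying assumption of a roughly grey scene so that large inter-channel differences come mainly from the colored shadows: a pixel where one channel is much darker than the brightest channel lies in the shadow of the corresponding lamp, i.e., near a depth edge. The normalization and threshold here are hypothetical, not the paper's features.

```python
import numpy as np

def depth_edge_map(rgb, thresh=0.3):
    """Flag pixels where the three color channels disagree strongly.
    With R, G, and B lamps at different positions, a cast shadow darkens
    only the channel of the occluded lamp, so a large per-channel
    deficit relative to the brightest channel marks a shadow border.
    Hypothetical simplification of the paper's discriminative features."""
    rgb = np.asarray(rgb, float)
    m = rgb.max(axis=-1, keepdims=True) + 1e-6
    deficit = (m - rgb) / m               # 0 where channels agree
    return deficit.max(axis=-1) > thresh
```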
This paper discusses issues related to accurate measurement using multiple cameras with phase-shifting techniques. Phase-shifting methods have been widely used in industrial inspection due to their high accuracy and excellent tolerance of surface finish. So far, however, most such systems use only one camera. In our applications, which inspect manufactured parts with complex shapes, one camera cannot capture the whole surface because of occlusions, double-bounced light, and the limited dynamic range of cameras. Multiple cameras have to be used, and the data from the different cameras must be merged together. Because each camera has its own error sources when a part is measured, it is a challenge to obtain the same shape, in the same 3D coordinate system, from all cameras without data manipulation such as iterative registration. This paper addresses this challenge of data registration. The error sources are analyzed and demonstrated, and several paths for error reduction are presented. Experimental results show the significant improvement obtained.
For a 360-deg 3D measurement of an object, the optical 3D sensor scans the object from different positions, and the
resulting single patches have to be transformed into a common global coordinate system so that these point clouds can be
patched together to generate the final complete 3D data set. Here we summarize, and give some system realizations
for, the method we call the "method of virtual landmarks" /1, 2/, which realizes this local-global coordinate
transformation without accurate mechanical sensor handling, sensor tracking, markers fixed on the object, or point-
cloud-based registration techniques.
For this, the calculation of the coordinates and orientation of the sensor, and the local-global coordinate transformation, is
done by bundle adjustment methods, whereby the pixels of the so-called connecting camera form 'virtual landmarks'
for the registration of the single views in order to obtain a complete all-around image. This flexibility makes the
method useful for a wide range of system realizations, which will be shown in the paper, such as robot-guided, handheld
/3/, and tripod-based systems for the flexible measurement of complex and/or large objects.
Three-dimensional data merging is critical for full-field 3-D shape measurement. 3-D range data patches, acquired either
from different sensors or from the same sensor in different viewing angles, have to be merged into a single piece to facilitate
future data analysis. In this research, we propose a novel method for 3-D data merging using Holoimage. Similar to the
3-D shape measurement system using a phase-shifting method, Holoimage is a phase-shifting-based computer synthesized
fringe image. The virtual projector projects the phase-shifted fringe pattern onto the object, the reflected fringe images are
rendered on the screen, and the Holoimage is generated by recording the screen. The 3-D information is retrieved from the
Holoimage using a phase-shifting method. If two patches of 3-D data with overlapping areas are rendered by OpenGL, the
overlapping areas are resolved by the graphics pipeline, i.e., only the front geometry can be visualized. Therefore, the
merging is done once the front geometry information has been obtained. Holoimage obtains the front geometry by projecting
the fringe patterns onto the rendered scene. Unlike in the real world, the virtual camera and projector can be used as orthogonal
projective devices, and the setup of the system can be controlled accurately and easily. Both simulations and experiments
demonstrated the success of the proposed method.
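For the common three-step case with 120° shifts, the phase-shifting retrieval that Holoimage applies to the rendered fringes is a closed-form arctangent. This is the standard three-step formula; the specific shift count used by the system is not stated in the abstract.

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with phase shifts of
    -120 deg, 0 deg, and +120 deg: since
    I_k = I' + I'' cos(phi + delta_k), the combination below reduces to
    atan2(3 I'' sin(phi), 3 I'' cos(phi)) = phi (wrapped to [-pi, pi])."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```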
Our interest is in data registration, object recognition, and object tracking using 3D point clouds. There are three steps to our feature matching system: detection, description, and matching. Our focus is on the feature description step. We describe new rotation-invariant 3D feature descriptors that utilize techniques from the successful 2D SIFT descriptors. We experiment with a variety of synthetic and real data to show how well our newly developed descriptors perform relative to a commonly used 3D descriptor, spin images. Our results show that our descriptors are more distinct than spin images while remaining rotation- and translation-invariant. The improvement in performance in comparison to spin images is most evident when an object has features that are mirror images of each other, due to symmetry.
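For context, the spin image baseline mentioned above maps each neighbor of a basis point into the (α, β) plane, where α is the radial distance from the normal axis and β the signed height along the normal, and histograms the result, integrating out rotation about the normal. A minimal sketch of that construction follows; the bin count and support size are arbitrary choices here.

```python
import numpy as np

def spin_image(points, p, n, bins=8, size=1.0):
    """Spin image at basis point p with unit normal n: each neighbor q
    maps to alpha = distance from the normal axis and beta = signed
    height along the normal, accumulated in a 2D histogram.  Rotation
    about n is integrated out, which gives the rotation invariance the
    descriptors above are compared against."""
    d = points - p
    beta = d @ n
    alpha = np.sqrt(np.maximum(np.einsum('ij,ij->i', d, d) - beta**2, 0.0))
    hist, _, _ = np.histogram2d(alpha, beta, bins=bins,
                                range=[[0.0, size], [-size, size]])
    return hist / max(hist.sum(), 1.0)
```

Rotating the cloud about the normal leaves the descriptor unchanged, which is easy to verify numerically.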
We propose a compact measurement system for surface profilometry using a MEMS scanner. A beam from a laser diode (LD), scanned by a miniaturized MEMS mirror
with a size of 4 mm × 3 mm (or 6 mm × 7 mm), produces an optically sectioned line profile of a sample. Hence, if we scan the beam vertically and horizontally with this two-dimensional MEMS scanner, optical sections of the sample object
are formed, and the scanned result can be captured by a CCD camera and stored for analysis on a PC. A key feature of this MEMS scanner is that the mirror is magnetically driven at its resonant frequency. Therefore, owing to the resonance effect, even this small mirror achieves a large scanning angle, high-speed scanning, low noise, and low power consumption. A miniaturized, lightweight design is also applied to realize the compact measurement system. The measurement principle, triangulation, is very simple, but high accuracy is expected thanks to the recent development of sub-pixel techniques. At this point in time, we are fabricating prototype equipment for experimental use and, in the near future, we will try to attain a compact three-dimensional measurement system using this scanner and a small, bright LED light source.
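The combination of triangulation with sub-pixel peak detection can be sketched as follows. The parallel-axis geometry and all parameter values are illustrative, not the prototype's calibration; the parabolic peak fit is one common sub-pixel technique of the kind the abstract alludes to.

```python
import numpy as np

def subpixel_peak(profile):
    """Sub-pixel location of the brightest laser-line pixel, found by
    fitting a parabola through the peak sample and its two neighbors."""
    p = np.asarray(profile, float)
    i = int(np.argmax(p))
    if 0 < i < len(p) - 1:
        a, b, c = p[i - 1], p[i], p[i + 1]
        denom = a - 2.0 * b + c
        if denom != 0.0:
            return i + 0.5 * (a - c) / denom
    return float(i)

def depth_from_offset(x_px, f_px, baseline):
    """Triangulated range for a laser axis parallel to the optical axis
    at lateral offset `baseline`: a spot at range z images at
    x = f * baseline / z, so z = f * baseline / x.  Illustrative
    geometry, not the prototype's calibration."""
    return f_px * baseline / x_px
```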
Inner profile measurement is in strong demand for applications in the mechanical industry and even in the medical and dental fields. We propose a method for measuring the inner diameters of pipes and/or holes using a ring beam device, which consists of a conical mirror and a laser diode. The method is based on optical sectioning of the inner wall. The optically sectioned profile is analyzed to calculate the inner diameter and/or profile. Here, an optical instrument with a simple and compact configuration is reported for inner profile measurement. As experimental results, we show the performance of the instrument and some examples of the inspection of mechanical components.
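One plausible way to turn the optically sectioned ring into an inner diameter is an algebraic least-squares circle fit; the abstract does not specify the estimator, so the Kåsa fit below is only an illustration.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: rewrite
    (x-a)^2 + (y-b)^2 = r^2 as the linear system
    x^2 + y^2 = 2*a*x + 2*b*y + c, solve for (a, b, c), and recover
    r = sqrt(c + a^2 + b^2).  The inner diameter is then 2*r."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a**2 + b**2)
```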
Here a new setup of a 3D scanning system for CAD/CAM in the dental industry is proposed. The system is designed for direct scanning of dental preparations within the mouth. The measuring process is based on a phase correlation technique combined with fast fringe projection in a stereo arrangement. The novelty of the approach is characterized by the following features: a phase correlation between the phase values of the images of two cameras is used for the coordinate calculation. This is in contrast to using only phase values (phasogrammetry) or classical triangulation (phase values and camera image coordinates) to determine the coordinates. The main advantage of the method is that the absolute value of the phase at each point does not directly determine the coordinate, so errors in the determination of the coordinates are prevented. Furthermore, using the epipolar geometry of the stereo-like arrangement, the phase unwrapping problem of fringe analysis can be solved.
The endoscope-like measurement system contains one projection channel and two camera channels for illumination and observation of the object, respectively. The new system has a measurement field of nearly 25 mm × 15 mm, so the user can measure two or three teeth at one time. The system can thus be used for scanning anything from a single tooth up to bridge preparations. In the paper, the first realization of the intraoral scanner is described.
In order to effectively map fine surface structure, ranging from surface finish at the sub-micron level to surface defects that can be millimeter sized, methods are needed that provide sub-micron resolution but also have sufficient measurement range to see much larger features. In the past, this niche has been addressed with white-light interferometry that can be mechanically scanned in depth to provide mappings of structures on a very fine scale. However, such methods are limited to lab situations due to stability requirements and are not fast enough to support shop-floor decisions. We propose a system that uses a hybrid of classical laser interferometry for the fine structure, but adds phase-shifted structured light for a coarser measurement within the same data set. We will explore the pros and cons of this approach, and the limitations imposed on the overall system by each method.
This study investigates the feasibility of remote quality control using a host of advanced automation equipment with Internet accessibility. The recent emphasis on product quality and waste reduction stems from a dynamic, globalized, and customer-driven market, which brings opportunities and threats to companies depending on their response speed and production strategies. Current trends in industry also include the wide spread of distributed manufacturing systems, where design, production, and management facilities are geographically dispersed. This situation mandates not only accessibility to remotely located production equipment for monitoring and control, but also efficient means of responding to a changing environment to counter process variations and diverse customer demands. To compete in such an environment, companies are striving to achieve 100%, sensor-based, automated inspection for zero-defect manufacturing. In this study, the Internet-based quality control scheme is referred to as "E-Quality for Manufacturing," or "EQM" for short. By definition, EQM refers to a holistic approach to designing and embedding efficient quality control functions in the context of network-integrated manufacturing systems. Such a system lets designers located far from the production facility monitor, control, and adjust the quality inspection processes as the production design evolves.
Recent time-of-flight (TOF) cameras allow for real-time acquisition of range maps with good performance.
However, the accuracy of the measured range map may be limited by secondary light reflections. Specifically,
the range measurement is affected by scattering, which consists of parasitic signals caused by multiple reflections inside the camera device. Scattering, which is particularly strong in scenes with large aspect ratios, must be detected and its errors compensated. This paper considers reducing scattering errors by means of image processing methods applied to the output image of the time-of-flight camera. It shows that scattering reduction can be expressed as a deconvolution problem on a complex, two-dimensional signal. The paper investigates several solutions. First, a comparison of image-domain and Fourier-domain processing for scattering compensation is provided. One key element in the comparison is the computational load and the requirement to perform scattering compensation in real time. Then, the paper discusses strategies for improved scattering reduction. More specifically, it treats the problem of optimizing the description of the inverse filter for the best scattering compensation results. Finally, the validity of the proposed scattering reduction method is verified on various examples of indoor scenes.
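Modeling the scattering as a convolution of the complex signal (amplitude times e^(i·phase)) with a scattering point spread function, the Fourier-domain compensation amounts to a regularized inverse filter. The PSF and the regularization constant below are stand-ins, not a calibrated camera model, and the paper's optimized inverse filter would differ.

```python
import numpy as np

def compensate_scattering(meas, psf, eps=1e-3):
    """Fourier-domain deconvolution of a complex TOF signal.  The
    measurement is modeled as the true complex signal convolved with a
    scattering PSF; dividing by the PSF spectrum, regularized by eps
    (a Wiener-style inverse filter), undoes the scattering.  `psf` is
    centered in its array and must match the shape of `meas`."""
    M = np.fft.fft2(meas)
    H = np.fft.fft2(np.fft.ifftshift(psf))   # move PSF center to (0, 0)
    S = M * np.conj(H) / (np.abs(H)**2 + eps)
    return np.fft.ifft2(S)
```

A quick consistency check is to scatter a synthetic complex image with a known PSF and verify that the compensation recovers it.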
To achieve object-matched inverse patterns for the profilometric measurement of the shape of manufactured surfaces, the shape of a faultless reference object must generally be known first. For this purpose, a separate profilometric measurement cycle employing the reference object as the specimen can be executed, provided a reference object of adequate quality is available. Based on this first measurement, an object-matched inverse pattern is calculated, and the shape of an arbitrary specimen can then be compared to the ideal shape of the reference object. This paper proposes an algorithm for finding inverse patterns for analytically known reference objects based on ray tracing techniques, without requiring a physically existing reference. It further provides methods for the automated derivation of setup parameters for a general profilometric measurement setup.
Modern optical methods such as digital shearography have attracted interest not only for laboratory investigations but also for applications on the factory floor because they can be sensitive, accurate, non-tactile, and non-destructive. Optical inspection and measurement systems are increasingly used throughout the entire manufacturing process. Shearography, as a coherent optical method, has been widely accepted as a useful NDT tool. It is a robust interferometric method for determining locations of maximum stress on various material structures. However, this technique is limited by bulky equipment components, the interpretation of the complex shearographic result images, and the barely solvable challenge of working with difficult surfaces such as dark, absorbing or bright, reflecting materials. We report a mobile shearography system that was specially designed for investigations of aircraft structures. The great advantage of this system is the adjusted balance of all single elements in a complete measurement procedure integrated in a handy body. Only by coordinating all of the involved parameters, such as loading, laser source, sensor unit, and software, is it feasible to obtain optimal measurement results.
This paper describes a complete mobile shearographic procedure, including loading and image processing facilities, for structural testing and flaw recognition on aircraft. The mobile system was successfully tested, e.g., with the up-to-date EADS multi-role combat aircraft, the Eurofighter.