This PDF file contains the front matter associated with SPIE Proceedings Volume 12524, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
The need for robust equipment capable of identifying defects or measuring small displacements in harsh environments has long been a requirement in the aeronautical and, above all, the oil and gas industries. In this field, digital speckle pattern interferometry (DSPI) and shearography have been the most widely used optical techniques. Recently, advances in the processing of phase images through multiple carrier frequencies have enabled compact optical configurations that combine multiple acquisitions, or even multiple techniques, in a simple capture process. This article shows the different applications, versatility, and compactness afforded by the use of carrier frequencies through the multiple aperture principle.
The tricky part in deploying AI models to production is often training, which has two important prerequisites: first, the training data must be representative of the data the algorithm will see in later use; and second, the training data must be properly labeled, usually by hand. Algorithms for automated optical inspection face a further problem: what if there are only a few examples of specific defect types?
We tackled these problems with different strategies when developing our ARGOS system for scratch-dig inspection. We will present real-world examples of how AI algorithms can be used for defect detection and classification without large training databases.
Reliably detecting or tracking 3D features is challenging. It often requires preprocessing and filtering stages, along with fine-tuned heuristics, for reliable detection. Alternatively, artificial-intelligence-based strategies have recently been proposed; however, these typically require many manually labeled images for training. We introduce a method for 3D feature detection that uses a convolutional neural network and a single 3D image obtained by fringe projection profilometry. We cast 3D feature detection as an unsupervised detection problem: the goal is a neural network that learns to detect specific features in 3D images from a single unlabeled image. To this end, we implemented a deep-learning method that exploits inherent symmetries to detect objects with little training data and without ground truth. Subsequently, by rescaling each image to be processed in a pyramid scheme, we detect features of different sizes. Finally, we unify the detections with a non-maximum suppression algorithm. Preliminary results show that the method provides reliable detection under different scenarios with a more flexible training procedure than competing methods.
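As a concrete illustration of the final unification step, the sketch below shows a standard non-maximum suppression routine over axis-aligned detection boxes; the box format, scores, and IoU threshold are illustrative assumptions, not the paper's implementation.

```python
# Minimal non-maximum suppression sketch (assumed box format [x1, y1, x2, y2]);
# detections from all pyramid scales would be pooled into `boxes`/`scores`.
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # intersection of the top box with the remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]        # suppress strongly overlapping boxes
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))                      # -> [0, 2]
```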
Our previously proposed pixel-wise structured light system calibration method works well within a limited depth range. Yet it is difficult to employ for large-depth-range calibration because making a calibration target that achieves high calibration accuracy is non-trivial. This paper discusses two possible means to address this challenge: adapting the standard feature detection algorithm, and employing an additional step that creates pixel-wise error functions by measuring a flat surface. Experimental results demonstrate that either method can further improve the measurement accuracy of the pixel-wise calibration method, especially for large depth range measurements.
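A minimal sketch of the flat-surface idea, under the assumption that the residual between a measured flat target and its least-squares plane fit can serve as a per-pixel correction map; the variable names and the simple subtraction model are illustrative, not the paper's formulation.

```python
# Sketch: fit a plane to the measured depth of a flat target and keep the
# per-pixel residual as an error map that is subtracted from later measurements.
import numpy as np

def flat_surface_error_map(z_flat, x, y):
    """z_flat, x, y: 2D arrays for the measured flat target (NaN = invalid pixel)."""
    m = ~np.isnan(z_flat)
    A = np.column_stack([x[m], y[m], np.ones(m.sum())])
    coeff, *_ = np.linalg.lstsq(A, z_flat[m], rcond=None)   # z ~ a*x + b*y + c
    z_plane = coeff[0] * x + coeff[1] * y + coeff[2]
    return z_flat - z_plane                                  # pixel-wise error function

# usage on a later measurement: z_corrected = z_measured - error_map
```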
The depth range that can be captured by structured-light 3D sensors is limited by the depth of field of the lenses used. Focus stacking is a common approach to extending the depth of field. However, focus variation drastically reduces the measurement speed of pattern-projection-based sensors, hindering their use in high-speed applications such as in-line process control. Moreover, the complexity of the lenses is increased by electromechanical components, e.g., when electronically tunable lenses are applied. In this contribution, we introduce chromatic focus stacking, an approach that allows a very fast focus change by designing the axial chromatic aberration of an objective lens such that the depth-of-field regions of selected wavelengths adjoin each other. To evaluate our concept experimentally, we determine the distance-dependent 3D modulation transfer function at a tilted edge and present the 3D measurement of a printed circuit board with comparatively tall structures.
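To make the adjoining depth-of-field idea concrete, here is a toy thin-lens calculation (all numbers, including the focal shifts, f-number, and circle of confusion, are assumptions, not the lens design in the paper): each selected wavelength has a slightly different focal length, so with a fixed sensor position each wavelength is sharp at a different object distance, and the approximate depth-of-field intervals can be designed to adjoin.

```python
# Toy model of chromatic focus stacking: thin lens with wavelength-dependent
# focal length f(lambda), sensor fixed at image distance s_i behind the lens.
import numpy as np

N = 8.0          # f-number (assumed)
c = 5e-6         # circle of confusion in m (assumed, roughly one pixel)
s_i = 0.0515     # image distance in m (assumed fixed sensor position)

# Hypothetical chromatic focal shift: focal length per selected wavelength (m)
focal = {450e-9: 0.05000, 550e-9: 0.05005, 650e-9: 0.05010}

for lam, f in focal.items():
    s_o = 1.0 / (1.0 / f - 1.0 / s_i)        # in-focus object distance (thin lens)
    dof = 2.0 * N * c * s_o**2 / f**2        # approximate total depth of field
    print(f"{lam*1e9:.0f} nm: focus at {s_o*1e3:.0f} mm, "
          f"DOF [{(s_o - dof/2)*1e3:.0f}, {(s_o + dof/2)*1e3:.0f}] mm")
```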
As the demand for faster and more accurate 3D shape measurement with portable devices increases due to the growing interest in augmented reality (AR) and virtual reality (VR), there is a need to explore the potential use of mobile phones for this purpose. While some phones already use lidar for 3D shape measurement, the resolution is still too low to enable meaningful AR or VR. Among all 3D shape measurement techniques, structured light systems offer high resolution and accuracy. In this research, we utilize a MOTO Z model mobile phone, which includes both a high-resolution camera and a digital light processing (DLP) projector. The DLP projector used for digital fringe projection provides highly accurate 3D geometry of the object being measured.
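For readers unfamiliar with digital fringe projection, the sketch below generates N phase-shifted sinusoidal fringe patterns of the kind a DLP projector displays and recovers the wrapped phase with the standard least-squares N-step formula; the resolution, fringe pitch, and number of steps are arbitrary example values, not the settings used in this research.

```python
# Sketch of digital fringe projection basics: generate N phase-shifted fringe
# patterns and recover the wrapped phase with the least-squares N-step formula.
import numpy as np

H, W, N, pitch = 480, 640, 3, 24                     # example values
x = np.arange(W)
phi_true = 2 * np.pi * x / pitch                     # projector phase along x
deltas = 2 * np.pi * np.arange(N) / N                # equally spaced phase shifts

patterns = [np.tile(0.5 + 0.5 * np.cos(phi_true + d), (H, 1)) for d in deltas]

num = -sum(I * np.sin(d) for I, d in zip(patterns, deltas))
den = sum(I * np.cos(d) for I, d in zip(patterns, deltas))
phi_wrapped = np.arctan2(num, den)                   # wrapped phase in (-pi, pi]
```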
This paper presents a novel method for measuring the size of standard cylinders with the LiDAR and RGB sensors embedded in iPhones. First, we reconstruct 3D points of the cylindrical surface from the LiDAR data. From this point cloud, we fit the cylinder orientation using the center pixels. Since the LiDAR offers 3D points with neither high resolution nor high accuracy, we select a segment of the point cloud and take the average depth of its pixels as the distance from the cylinder to the camera. We then compute the diameter from the geometric relationship of each point on the cylinder. Finally, we improve the measurement accuracy by applying an estimation function. Experimental results show that, at distances from 0.3 m to 2 m and at different tilt angles, the proposed method achieves 3 cm diameter measurement accuracy for cylinders with diameters of 8 cm and 14 cm, and 5 cm accuracy for a cylinder with a diameter of 22 cm.
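As one illustrative way to obtain a diameter from such data (a Kåsa algebraic circle fit applied to a horizontal cross-section of the point cloud), here is a small sketch; it is not necessarily the geometric relationship or estimation function used in the paper.

```python
# Sketch: estimate a cylinder diameter from one horizontal cross-section of the
# LiDAR point cloud by fitting a circle to the (x, z) points (Kåsa fit).
import numpy as np

def circle_diameter(x, z):
    """x, z: 1D arrays of points on a roughly circular arc (meters)."""
    A = np.column_stack([x, z, np.ones_like(x)])
    b = -(x**2 + z**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]   # x^2 + z^2 + D*x + E*z + F = 0
    r = np.sqrt(D**2 / 4 + E**2 / 4 - F)
    return 2 * r

theta = np.linspace(-0.6, 0.6, 50)                    # visible arc of an 8 cm cylinder
x = 0.04 * np.sin(theta)
z = 1.0 - 0.04 * np.cos(theta) + np.random.normal(0, 1e-3, theta.size)
print(circle_diameter(x, z))                          # ~0.08 m
```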
Achieving a decarbonized energy sector by 2050 will require the development of cost-effective technologies beyond today's commercial concentrating solar-thermal power (CSP) technologies. Meeting the 2030 target will depend heavily on reducing the cost of heliostats while improving their technical performance. There are several opportunities for metrology in addressing heliostat optical error and thus increasing overall heliostat efficiency. In addition, there are opportunities to reduce the costs of manufacturing, assembly, calibration, and operations and maintenance.
We propose a scatterometry solution to the sidewall angle measurement problem in high-aspect-ratio semiconductor structures such as through-silicon vias, deep holes, and deep trenches.
Low-coherence interferometry is a proven and powerful tool for precise measurement of the optical path length between optical surfaces, and thus of center thickness and air spacing. An advanced measurement setup allows measurements that are independent of refractive index information. After an initial calibration, the geometrical thickness of single optical components is measured directly.
Furthermore, this technique can also be used to determine the (group) refractive index during production, e.g., of molded lenses, for batch testing of optical materials, or to evaluate the effect of coatings on low-Tg materials, in order to confirm consistent refractive index data and monitor manufacturing conditions. It can also be used for an inverse search of the glass catalogue to identify unknown materials.
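A minimal sketch of the two uses described above, with clearly hypothetical numbers (the optical path length, group index, and catalogue values are placeholders, not instrument data): geometric thickness follows from the measured optical path length divided by the group index, and the inverse catalogue search is a nearest-neighbour lookup on the group index.

```python
# Sketch (assumed workflow, not the instrument's software): convert a measured
# optical path length to geometric thickness and do an inverse catalogue search.
opl = 2.289e-3                 # measured optical path length in m (placeholder)
n_group = 1.526                # group refractive index at the measurement wavelength

thickness = opl / n_group      # geometric center thickness
print(f"geometric thickness: {thickness*1e3:.4f} mm")

# Hypothetical catalogue of group indices at the instrument wavelength
catalogue = {"glass_A": 1.5265, "glass_B": 1.6125, "glass_C": 1.7415}
best = min(catalogue, key=lambda name: abs(catalogue[name] - n_group))
print(f"closest catalogue match: {best}")
```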
A configuration for measuring thickness changes in materials through one-shot digital speckle pattern interferometry (DSPI) was developed. The phase maps were calculated by adding carrier fringes via the multiple aperture principle and applying the Fourier transform method (FTM). With this setup, the interferometric configurations verified that simultaneous and instantaneous visualization of two opposite faces of a sample is possible. In addition, combining the simultaneous results obtained from both sides of the material makes it possible to determine displacements with greater sensitivity or to identify changes in thickness. Validation and demonstration tests were carried out with a 1-mm-thick aluminum plate with a coated 5-mm-diameter through hole. Thickness changes of up to 2 μm were measured.
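A minimal numpy sketch of the Fourier transform method used here, assuming a horizontal carrier and a simple rectangular band-pass window; the real processing and the multiple-aperture carrier geometry are more involved than this illustration.

```python
# Sketch of the Fourier transform method (FTM): isolate the carrier sideband of
# a fringe pattern in the frequency domain, shift it to DC, and take the angle.
import numpy as np

H, W, f0 = 256, 256, 16                       # example size and carrier (cycles/frame)
y, x = np.mgrid[0:H, 0:W]
phi_true = 3 * np.exp(-((x - W/2)**2 + (y - H/2)**2) / (2 * 40**2))  # test phase
I = 1 + 0.5 * np.cos(2 * np.pi * f0 * x / W + phi_true)              # fringes + carrier

S = np.fft.fftshift(np.fft.fft2(I))
cy, cx = H // 2, W // 2 + f0                  # carrier peak location (assumed known)
mask = np.zeros_like(S)
mask[cy - 8:cy + 8, cx - 8:cx + 8] = 1        # rectangular band-pass around the carrier
sideband = np.roll(S * mask, -f0, axis=1)     # shift the carrier to zero frequency
phi_wrapped = np.angle(np.fft.ifft2(np.fft.ifftshift(sideband)))
```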
The challenges and limitations of using laser triangulation gages on optically rough surfaces are well known. Methods such as rotating polarizers, laser oscillation to reduce coherence, and surface movement have been demonstrated, with limited success and added constraints. New LEDs with high power per unit area have opened up new possibilities, but the measurements can still pick up non-coherent surface texture characteristics. This paper attempts to quantify how these surface texture effects impact triangulation measurements and explores methods to further reduce their influence on the measurements.
This paper presents a non-contact continuous respiration monitoring system based on fringe projection profilometry (FPP). The aim is to answer the question of whether an FPP system is suitable for continuous respiration monitoring in terms of accuracy and robustness. An FPP 3D sensor suitable for measuring the respiratory motion of the chest wall and abdomen was designed and implemented. The sensor prototype was evaluated with respect to its temporal and spatial repeatability and resolution in the context of respiratory variations. The resulting 3D images from the sensor were further analyzed to extract a respiration rate (RR) and a continuous respiration signal. The FPP sensor showed a high signal-to-noise ratio (SNR) of 37 dB when evaluated with an ideal sinusoidal respiration signal. Furthermore, a high mean correlation of 0.95 between the measured respiration signal and the reference signal was achieved across different scenarios.
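One simple way to turn such 3D frames into a respiration signal and rate (an illustrative pipeline, not necessarily the analysis used in this study) is to average the depth over a chest region of interest per frame and take the dominant spectral peak in the physiological band.

```python
# Sketch: mean chest-ROI depth per frame -> continuous respiration signal;
# dominant FFT peak in the 0.1-0.7 Hz band -> respiration rate in breaths/min.
import numpy as np

def respiration_rate(depth_frames, roi_mask, fps):
    """depth_frames: (T, H, W) array, roi_mask: (H, W) boolean, fps: frame rate."""
    signal = depth_frames[:, roi_mask].mean(axis=1)        # chest displacement signal
    signal = signal - signal.mean()                         # remove DC offset
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    band = (freqs >= 0.1) & (freqs <= 0.7)                  # plausible breathing band
    f_peak = freqs[band][np.argmax(spec[band])]
    return 60.0 * f_peak, signal                            # breaths/min, raw signal

# usage with synthetic 0.2 Hz (12 breaths/min) chest motion, 30 s at 10 fps:
t = np.arange(300) / 10.0
frames = 1.0 + 0.005 * np.sin(2*np.pi*0.2*t)[:, None, None] * np.ones((300, 32, 32))
rr, _ = respiration_rate(frames, np.ones((32, 32), bool), fps=10.0)
print(rr)                                                   # 12.0
```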
Improving the accuracy of structured light calibration methods has led to the development of pixel-wise calibration models built on top of conventional pinhole-camera models. Because phase encodes both depth and transversal information, pixel-wise methods provide high flexibility in mapping phase to XYZ coordinates. However, there are different approaches to producing the phase-to-coordinate mapping, and there is no consensus on the most appropriate one. In this study, we highlight the current limitations, especially in depth range and accuracy, of several recent pixel-wise calibration methods, along with experimental performance verifications. The results show that there are opportunities to further improve these methods so that they overcome the limitations of conventional calibration methods, particularly for low-cost hardware.
This paper presents a novel phase unwrapping algorithm based on a depth-from-focus method for microscopic fringe projection profilometry. The proposed method uses the fringe contrast to estimate rough depth information and determines the fringe orders through geometric constraint relationships. As a result, it requires no extra patterns or images, enabling a higher 3D imaging speed. Experimental results demonstrate that the proposed method successfully realizes phase unwrapping in microscopic fringe projection profilometry.
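The fringe-order step itself can be written in a few lines. Here is a sketch under the assumption that the geometric constraint yields a rough absolute-phase estimate `phi_est` per pixel; how that estimate is obtained from the fringe contrast is the paper's contribution and is not reproduced here.

```python
# Sketch: phase unwrapping given a rough per-pixel absolute-phase estimate.
import numpy as np

def unwrap_with_estimate(phi_wrapped, phi_est):
    """phi_wrapped in (-pi, pi]; phi_est: rough absolute phase from depth/contrast."""
    k = np.round((phi_est - phi_wrapped) / (2 * np.pi))   # integer fringe order
    return phi_wrapped + 2 * np.pi * k                    # absolute phase

phi_true = np.linspace(0, 40, 1000)                        # several fringe periods
phi_w = np.angle(np.exp(1j * phi_true))                    # wrapped version
phi_abs = unwrap_with_estimate(phi_w, phi_true + 0.8)      # estimate with some error
print(np.allclose(phi_abs, phi_true))                      # True
```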
This paper presents a calibration method that aligns the coordinate systems of the cameras in a multi-camera-projector system using a flat surface object. The method identifies the geometric relationship between each camera's coordinate system using the physical feature points of the calibration target's 3D model that share the minimum absolute bi-directional phase difference. Experimental results show that pixel-level accuracy of 3D model alignment can be achieved.
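Once corresponding feature points between the cameras have been identified through the minimum absolute bi-directional phase difference, aligning the coordinate systems reduces to a standard rigid-body fit. The sketch below shows only that last step (the Kabsch/SVD solution), not the phase-based correspondence search, which is specific to this paper.

```python
# Sketch: least-squares rigid transform between two sets of corresponding 3D points.
import numpy as np

def rigid_transform(P, Q):
    """Rotation R and translation t such that Q ~ R @ P + t.
    P, Q: (3, N) arrays of corresponding 3D points from two cameras."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                  # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```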
This research presents a novel method for calibrating a camera-projector 3D reconstruction system. Although digital fringe projection is the representative 3D reconstruction method using active illumination, many researchers still struggle with the complex projector calibration process. To overcome this complexity, the proposed approach uses an auxiliary camera, added only for the calibration procedure, to assist the process. Based on the 3D geometry reconstructed with a phase-based stereo matching algorithm, the 3D coordinates can be expressed as polynomial functions of the absolute phase. Experimental results demonstrate that this method delivers the same quality of 3D reconstruction while greatly simplifying the calibration process.
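A minimal per-pixel illustration of expressing a coordinate as a polynomial of the absolute phase (the third-order fit, sample counts, and numbers are assumptions for illustration; the paper's polynomial form and calibration data layout may differ):

```python
# Sketch: for one camera pixel, fit z (and similarly x, y) as a polynomial of the
# absolute phase using calibration samples at several known depths, then evaluate.
import numpy as np

phase_samples = np.array([10.2, 14.8, 19.1, 23.7, 28.3])   # absolute phase at one pixel
z_samples = np.array([400.0, 425.0, 450.0, 475.0, 500.0])  # corresponding depths (mm)

coeff = np.polyfit(phase_samples, z_samples, deg=3)         # z(phi) polynomial
z_at_phase = np.polyval(coeff, 16.5)                        # depth for a new measurement
print(z_at_phase)
```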
In our paper, we discuss applications of projected-fringe and structured-light surface metrology for measuring the topography of specular and diffuse surfaces in semiconductor manufacturing. We report recent progress in projected-fringe metrology applied to the packaging of crystalline solar cells and to heat sinks for power devices. In addition, we discuss the progress of metrology for the highly reflective surfaces encountered in thin-film solar cell manufacturing and cell phone production. We also discuss factors limiting the accuracy and speed of these methods.
A system for determining the distance from the robot to the scene is useful for object tracking, and 3-D reconstruction is desirable for many manufacturing and robotic tasks. While the robot is processing materials, e.g., welding, milling, or drilling parts, fragments of material land on the camera installed on the robot. This introduces spurious information into the depth map and creates new missing areas, which leads to incorrect determination of object sizes. The resulting incorrect sections of the depth map, caused by erroneous distance estimates, reduce the accuracy of motion trajectory planning. We present a two-stage approach combining defect detection and depth reconstruction algorithms. The first stage detects image defects using a convolutional auto-encoder (U-Net) and a deep feature fusion network (DFFN-Net). The second stage reconstructs the depth map using exemplar-based and anisotropic-gradient concepts. The proposed modified block fusion algorithm uses a local image descriptor obtained by the auto-encoder for image reconstruction, extracting image features and depth maps with a decoding network. Our technique quantitatively outperforms state-of-the-art methods in reconstruction accuracy on an RGB-D benchmark for evaluating manufacturing vision systems.
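As a rough illustration of the second stage only, the sketch below fills missing depth pixels with a neighbour average whose weights decay across strong image edges (an anisotropic-gradient flavour). It is a toy stand-in for intuition, not the modified block fusion algorithm described above; the `sigma` parameter and NaN-based mask are assumptions.

```python
# Toy edge-aware hole filling: invalid (NaN) depth pixels are filled iteratively
# with a 4-neighbour average weighted by gray-level similarity, so the filling
# does not propagate across strong image edges.
import numpy as np

def fill_depth(depth, gray, iters=200, sigma=0.1):
    d = depth.copy()
    h, w = d.shape
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(iters):
        invalid = np.argwhere(np.isnan(d))
        if invalid.size == 0:
            break
        for y, x in invalid:
            vals, wts = [], []
            for dy, dx in offsets:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not np.isnan(d[ny, nx]):
                    wgt = np.exp(-abs(gray[ny, nx] - gray[y, x]) / sigma)
                    vals.append(d[ny, nx]); wts.append(wgt)
            if wts:
                d[y, x] = np.average(vals, weights=wts)
    return d
```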
This work presents a workflow to automatically compute optimized scanning trajectories for an industrial robot system by estimating a surface reflectance model of the 3-D shape and its parameters from multiple views. To determine the views for non-Lambertian surfaces, a 3-D reconstruction algorithm based on a convolutional neural network is proposed. In the first step, an encoder is trained to produce a descriptor of the input image. In the second step, a fully connected neural network is added to the encoder and trained by regression to choose the best views. The encoder is trained with a generative adversarial methodology so that the descriptor stores spatial information and information about the optical properties of the surface in different areas of the image. The encoder-decoder network is trained to recover the defect map (which depends directly on sensor and scene properties) from the RGB image. As a result, the method exploits non-Lambertian properties and can compensate for triangulation reconstruction errors caused by view-dependent reflections. Experimental results on both synthetic and real objects show that the proposed method automatically finds trajectories that enable 3-D reconstruction with a significant reduction in scanning time.
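A minimal PyTorch sketch of the two-step structure as described above (the layer sizes, training details, and the `ViewScorer` name are assumptions, not the authors' architecture): a convolutional encoder produces an image descriptor, and a fully connected head regresses a view-quality score used to rank candidate views.

```python
# Sketch: encoder descriptor + fully connected regression head for view scoring.
import torch
import torch.nn as nn

class ViewScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(          # step 1: descriptor of the input image
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(             # step 2: regression head for view quality
            nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, rgb):                    # rgb: (B, 3, H, W)
        return self.head(self.encoder(rgb))    # (B, 1) predicted view score

scores = ViewScorer()(torch.rand(4, 3, 128, 128))   # score 4 candidate views
best_view = scores.argmax().item()                   # index of the best view
```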