Image stitching is a cost-effective way to expand the field of view of an imaging system. Traditional homography-based image stitching applies a single global homography to transform the images, which is stable but works well only for planar scenes, relatively distant scenes, or scenes captured by a camera undergoing pure rotation. The As-Projective-As-Possible and Content-Preserving-Warping methods, which are realized by mesh optimization, improve the stitching result to a certain degree, but obvious ghosting remains in near scenes and in images with relatively large parallax. In this paper, an image stitching method that utilizes depth information and mesh optimization is proposed. Feature points are detected and then clustered, and the depth information is used to assign a weight to each mesh cell so that a homography is computed for each cell individually. Experiments show that the proposed method produces better results than the compared methods.
Proc. SPIE. 11455, Sixth Symposium on Novel Optoelectronic Detection Technology and Applications
KEYWORDS: Clouds, 3D modeling, Distance measurement, Computer programming, 3D metrology, 3D acquisition, Vegetation, Remote sensing, Data acquisition, Databases
Accurate 3D point cloud acquisition of plant leaves is widely used in vegetation structure modeling, which in turn is critical in quantitative remote sensing. Owing to occlusion between plant leaves and the limited performance of 3D data acquisition sensors, the acquired leaf point cloud may be incomplete, so the partial leaf point cloud must be completed by some means. Existing point cloud completion methods include registration-based, geometry-based, and database-based methods, which are time consuming and less effective. This paper proposes a method of plant leaf point cloud completion using a deep encoder-decoder framework. The encoder maps an incomplete plant leaf point cloud to a shape feature vector, and the decoder is trained to predict the complete leaf point cloud. The loss function consists of a forward loss and a backward loss. To support this study, a leaf point cloud dataset is established. Data enrichment is performed by random rotation, random occlusion, and random permutation of the point sequence, so that the dataset is more representative. The experimental results show that missing regions of leaf point clouds can be well completed. Meanwhile, the proposed method operates directly on raw point clouds with little computation and is robust to noisy point clouds.
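The forward and backward losses above compare the predicted and ground-truth point sets in both directions. One common instantiation of such a bidirectional loss is the Chamfer distance; the NumPy sketch below illustrates that idea only, and is not the paper's training code:

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Bidirectional Chamfer distance between point sets of shape
    (N, 3) and (M, 3): forward term (pred -> gt) plus backward
    term (gt -> pred)."""
    d2 = np.sum((pred[:, None, :] - gt[None, :, :]) ** 2, axis=-1)
    forward = d2.min(axis=1).mean()    # each predicted point to its nearest gt point
    backward = d2.min(axis=0).mean()   # each gt point to its nearest predicted point
    return forward + backward

cloud = np.random.rand(64, 3)
# identical clouds give zero loss; shifted clouds give a positive loss
zero = chamfer_distance(cloud, cloud)
positive = chamfer_distance(cloud, cloud + 1.0)
```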
Proc. SPIE. 11205, Seventh International Conference on Optical and Photonic Engineering (icOPEN 2019)
KEYWORDS: Clouds, 3D modeling, Data modeling, 3D image processing, Feature extraction, RGB color model, 3D metrology, Data fusion, Image fusion, Inspection
Surface defect recognition is used to test product quality. The current approach is traditional 2D image-based recognition, but a 2D image lacks 3D information, which leads to false and missed inspections and has become a bottleneck of current classification models. Thanks to the recent rapid development of 3D measurement technology, 3D data can be applied in surface defect detection to improve defect recognition. We propose a new convolutional network model to identify surface defects that realizes deep feature fusion of 3D point clouds and 2D images. In this work, we introduce an attention network that extracts features from a 3D point cloud to generate a 2D attention mask. A high-quality feature map is produced by combining the 2D attention mask with the 2D image. We further merge the attention network and the classification network into a single network; the attention network indicates which parts of the image the classification network should attend to. Therefore, mutual learning of 2D and 3D data is realized during training, which reduces the dependence on the number of samples and enhances the generalization performance of the model. Experiments on the defect dataset verify that our method improves the classification performance of the model.
An automatic part dimension inspection method based on a scanned point cloud is proposed in this paper. First, the point cloud and the CAD model are registered in the CAD model coordinate system by using the Fast Global Registration algorithm. Then, with a dedicated inspection program developed on top of the 3D modeling software, the dimensions of the CAD model and the features associated with these dimensions are retrieved, including edges, planes, cylinders, spheres, etc.; each dimension also carries information such as its type, symbol, and references. Next, guided by the dimension information extracted from the CAD model, the corresponding features in the point cloud are extracted using the Random Sample Consensus algorithm. Finally, the dimensions associated with the extracted features are calculated by fitting the point cloud features to geometric elements and computing the corresponding distances. The whole inspection procedure is accomplished without human interaction. The feasibility and accuracy of the proposed method are verified by experiments measuring industrial parts.
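As a rough illustration of the feature extraction step, the sketch below uses RANSAC to pull a planar feature out of a noisy point cloud; the function name, tolerance, and test data are our own, not taken from the paper:

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.01, seed=0):
    """Find the dominant plane in a point cloud with RANSAC;
    returns a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                      # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - p0) @ n) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# 900 points on the z = 0 plane plus 100 floating outliers
rng = np.random.default_rng(1)
pts = np.vstack([np.c_[rng.random((900, 2)), np.zeros(900)],
                 rng.random((100, 3)) + [0.0, 0.0, 0.5]])
mask = ransac_plane(pts)
```

The returned inlier subset would then be fitted to the corresponding geometric element (plane, cylinder, sphere) for the dimension calculation.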
The point spread function (PSF) of the projector plays an important role in coaxial projection-imaging profilometry and binary defocusing fringe projection profilometry. In the proposed method, a single-pixel imaging (SPI) based PSF measurement method is used to obtain the PSF of the projector point by point. A camera captures the SPI patterns projected by the projector onto a white plane. By treating each camera pixel as a single-pixel detector, we apply SPI to the camera pixels and acquire the light transport coefficients between the object points on the white plane and the image points of the projector, i.e., the spatially varying blur. Owing to the characteristics of SPI, the proposed method obtains the spatially varying blur of every pixel directly. Experiments also verify that the proposed method provides a more accurate blur kernel than the traditional Gaussian kernel for fitting the blur model of the camera lens.
In image-based industrial inspection, the imaging distance between the scene and the camera is relatively short, and the field of view of the imaging system is too small to meet detection requirements, so a close-range image stitching method is needed to obtain high-quality, large field-of-view images. The traditional image stitching method applies a single global homography, which is stable but suitable only for planar scenes, remote scenes, or scenes captured by a camera undergoing pure rotation. The As-Projective-As-Possible and Content-Preserving-Warping methods, which are realized by mesh optimization, improve the stitching result to a certain degree, but obvious ghosting remains for close-range scenes and images with relatively large parallax. In this paper, an image stitching method that utilizes depth information and mesh optimization is proposed. Feature points are detected and clustered, and the depth information and grouped points are used to assign weights to each mesh cell so that a homography is computed for each cell individually. Compared with other state-of-the-art methods, the proposed method achieves better results.
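The per-mesh homography computation can be sketched as a weighted direct linear transform (DLT), as popularized by APAP: each mesh cell solves the DLT system with per-correspondence weights, which in the proposed method would come from depth and feature grouping. The NumPy sketch below shows only the weighted-DLT core, with the weights supplied as an input; how they are derived from depth is the paper's contribution and is not reproduced here:

```python
import numpy as np

def weighted_homography(src, dst, w):
    """Weighted DLT: each correspondence's two linear equations are
    scaled by its weight before solving for the homography H."""
    rows = []
    for (x, y), (u, v), wi in zip(src, dst, w):
        rows.append(wi * np.array([-x, -y, -1, 0, 0, 0, u * x, u * y, u]))
        rows.append(wi * np.array([0, 0, 0, -x, -y, -1, v * x, v * y, v]))
    _, _, Vt = np.linalg.svd(np.stack(rows))
    H = Vt[-1].reshape(3, 3)      # null vector of the stacked system
    return H / H[2, 2]

# with uniform weights this reduces to the classic global DLT
H_true = np.array([[1.0, 0.1, 5.0], [0.05, 1.0, 3.0], [0.0, 0.0, 1.0]])
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [1, 2]], dtype=float)
mapped = np.c_[src, np.ones(6)] @ H_true.T
dst = mapped[:, :2] / mapped[:, 2:]
H = weighted_homography(src, dst, np.ones(6))
```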
The blur of the optical system can cause inevitable degradation of acquired images. In this paper we present a novel method to measure the spatially-varying blur of the camera lens. We obtained the Discrete Cosine Transform (DCT) coefficients of the blur kernels by applying DCT single-pixel imaging to all the camera pixels. The spatially-varying blur kernels are then reconstructed by applying inverse DCT to the acquired coefficients. Experimental results show that the proposed method can acquire a more accurate blur kernel compared to the traditional Gaussian kernel.
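The measurement principle can be illustrated in simulation: orthonormal DCT basis patterns are displayed, the inner product of each pattern with the (unknown) blur kernel is recorded as a single-pixel measurement, and the kernel is reconstructed by an inverse DCT, i.e., a weighted sum of the same patterns. This toy NumPy sketch assumes a noise-free 8×8 kernel and is only a schematic of the acquisition chain:

```python
import numpy as np

N = 8
def dct_basis(u, x):
    """1D orthonormal DCT-II basis function of frequency u."""
    c = np.sqrt(1 / N) if u == 0 else np.sqrt(2 / N)
    return c * np.cos(np.pi * (2 * x + 1) * u / (2 * N))

x = np.arange(N)
# B[u, v] is the (u, v) 2D DCT pattern the projector would display
B = np.array([[np.outer(dct_basis(u, x), dct_basis(v, x))
               for v in range(N)] for u in range(N)])

# ground-truth blur kernel at one camera pixel (simulated Gaussian)
g = np.exp(-((x - 3.5) ** 2) / 4)
kernel = np.outer(g, g)
kernel /= kernel.sum()

# each "single-pixel" measurement is the inner product with one pattern
coeffs = np.tensordot(B, kernel, axes=([2, 3], [0, 1]))
# inverse DCT: weighted sum of the same orthonormal patterns
recon = np.tensordot(coeffs, B, axes=([0, 1], [0, 1]))
```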
Traditional optical 3D shape measurement methods, such as light stripe triangulation, binary coding, and fringe projection, cannot acquire complete and correct 3D measurement results in the presence of interreflections. In this research, a 3D shape measurement method based on light stripe triangulation that works in the presence of interreflections is presented. The wrong measurement results caused by interreflections are excluded by geometric constraints introduced by an additional camera: each 3D point reconstructed by light stripe triangulation is projected onto the image plane of the additional camera to determine whether it is a correct measurement result. Experimental results demonstrate that the proposed method can measure 3D shape in the presence of interreflections.
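The geometric constraint amounts to a reprojection test. A minimal sketch, with a hypothetical projection matrix and pixel tolerance of our own choosing:

```python
import numpy as np

def is_consistent(X, P, observed_uv, tol=1.0):
    """Project a triangulated 3D point X through the additional camera's
    3x4 projection matrix P and accept it only if it lands within `tol`
    pixels of the stripe actually observed in that camera."""
    x = P @ np.append(X, 1.0)          # homogeneous image point
    uv = x[:2] / x[2]
    return np.linalg.norm(uv - observed_uv) < tol

# hypothetical normalized camera at the origin (P = [I | 0]) for illustration
P = np.hstack([np.eye(3), np.zeros((3, 1))])
ok = is_consistent(np.array([1.0, 2.0, 4.0]), P, np.array([0.25, 0.5]))
bad = is_consistent(np.array([1.0, 2.0, 4.0]), P, np.array([5.0, 5.0]))
```

Points caused by interreflections fail the test because the spurious triangulation does not reproject onto the stripe seen by the second camera.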
In modern industrial manufacturing, effectively obtaining 3D data of a part's profile is key to precision testing and subsequent analysis. A light-duty design scheme for an optical vision probe, which can be installed on a PH10T motorized probe head of a CMM, is discussed in this paper. The optical probe overcomes several defects of the traditional CMM measurement mode, such as poor efficiency and sparse point clouds, so the problem of 3D measurement and quality analysis for complicated parts can be solved. To splice data from different fields of view, a registration method using a newly designed artifact is proposed. Experiments demonstrate the feasibility of the designed non-contact CMM integrated with the optical 3D probe for precise 3D shape measurement. The measurement uncertainty of the optical probe reaches 0.012 mm within a measuring volume 200 mm wide, and the uncertainty of the global 3D measurement is less than 0.03 mm over 1500 mm.
Proc. SPIE. 11053, Tenth International Symposium on Precision Engineering Measurements and Instrumentation
KEYWORDS: 3D metrology, 3D acquisition, Parallel computing, Heterodyning, Clouds, Phase measurement, Cameras, 3D image processing, 3D displays, Image acquisition
When fringe projection profilometry is applied to real-time 3D shape measurement, several problems remain, such as the sensitivity of multi-wavelength heterodyne phase unwrapping to motion and the high computation cost. In this paper, a real-time 3D shape measurement method with optimized multi-wavelength heterodyne phase unwrapping and GPU parallel computing is proposed. Experimental results demonstrate that the proposed method acquires 3D shape at 40 fps. Dynamic objects with discontinuities can be measured, and phase unwrapping mistakes are eliminated by smoothing the beat-frequency phase during multi-wavelength heterodyne phase unwrapping.
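The heterodyne step can be sketched for the two-wavelength case: the beat of two wrapped phases has a much longer synthetic wavelength, which resolves the fringe order of the shorter wavelength. The NumPy sketch below shows the standard formulation on noise-free synthetic data; the paper's contribution, smoothing the beat-frequency phase, is omitted:

```python
import numpy as np

def heterodyne_unwrap(phi1, phi2, lam1, lam2):
    """Two-wavelength heterodyne unwrapping: the beat of the two wrapped
    phases has synthetic wavelength lam12 = lam1*lam2/(lam2 - lam1), long
    enough to resolve the fringe order k of the shorter wavelength."""
    lam12 = lam1 * lam2 / (lam2 - lam1)
    phi12 = np.mod(phi1 - phi2, 2 * np.pi)          # beat (synthetic) phase
    k = np.round((phi12 * lam12 / lam1 - phi1) / (2 * np.pi))
    return phi1 + 2 * np.pi * k

# synthetic data: fringe wavelengths of 20 px and 21 px give a 420 px beat
x = np.linspace(0, 399, 1000)
phi1 = np.mod(2 * np.pi * x / 20, 2 * np.pi)
phi2 = np.mod(2 * np.pi * x / 21, 2 * np.pi)
absolute = heterodyne_unwrap(phi1, phi2, 20.0, 21.0)
```

With noisy data the rounding of `k` is exactly where unwrapping mistakes occur, which motivates smoothing the beat phase first.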
Point clouds are an important data type for representing the geometric characteristics of industrial elements. Due to their irregular format, most researchers transform such data into regular 3D voxel grids or collections of images and apply existing mature deep learning frameworks to them. In this paper, we propose a deep metric learning based network that projects point sets into an embedding space, pulling intra-class samples closer and pushing inter-class samples apart. To further facilitate future research on this problem, a new dataset (Industry Element 8) containing point clouds of 8 industrial elements is built. Experimental results demonstrate the superior performance of our proposed network.
Defect detection of high-speed train bogies by image processing has the advantages of being non-contact, fast, and highly precise. Small defects can be identified and located quickly by comparing the currently captured image with a previously saved defect-free image. In practice, the viewing angles and positions of images taken at the same location at different times usually differ, so image registration is needed before the comparison. This paper proposes an image registration method based on binocular stereo matching: two images taken from different viewing angles or positions are treated as a stereo pair. The proposed method uses the results of image feature matching and the corresponding essential matrix to obtain the 3D coordinates of feature points, and the 3D coordinates of non-feature points are estimated from those of adjacent feature points. In this way, every pixel of the current image can be re-projected into the normal (defect-free) image, accomplishing the registration. Compared with traditional methods, the resulting image of the proposed approach is more accurate.
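Once the essential matrix yields the relative pose of the two views, the 3D coordinates of each matched feature can be recovered by linear triangulation. A minimal DLT-style sketch, with hypothetical normalized cameras rather than the paper's exact pipeline:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point observed at pixel uv1 in
    camera P1 and uv2 in camera P2 (P1, P2 are 3x4 projection matrices)."""
    # each observation contributes two rows (u*P[2] - P[0]) . X = 0, etc.
    A = np.stack([uv1[0] * P1[2] - P1[0],
                  uv1[1] * P1[2] - P1[1],
                  uv2[0] * P2[2] - P2[0],
                  uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # homogeneous solution
    return X[:3] / X[3]

# two hypothetical normalized cameras, the second translated along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = triangulate(P1, P2, (0.125, 0.05), (-0.125, 0.05))
```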
Image stitching creates a high-quality, wider-viewing-angle image from a series of images with overlapping regions, and it is one of the most important fields of image processing. The traditional global homography method, such as AutoStitch, fails when the scene is not planar or the views differ by more than a pure rotation. Local homography warping based on grid optimization, such as as-projective-as-possible (APAP) warping, achieves better results than the global homography method but relies heavily on the quality and quantity of matching points. In this paper, a new method for stitching low-texture scenes is proposed that combines point features and line features to compute the local warping matrices, so enough features can be gathered in low-texture regions. Our results are compared with the APAP and AutoStitch methods and show less ghosting and deformation.
Single-pixel imaging (SPI) is a method of obtaining an image using a detector without spatial resolution. Owing to its excellent noise resistance and high signal-to-noise ratio, SPI is applied to detect and locate target regions under weak illumination. In most previous target detection and location approaches, the original target must first be imaged; however, image reconstruction in SPI takes much longer than conventional imaging, which makes target region location with SPI inefficient. In this paper, we propose a target region location method based on Fourier single-pixel imaging that locates the target without retrieving the target image. The proposed method uses Fourier single-pixel imaging to obtain a few Fourier coefficients of the target image; the target region is then located via the central slice theorem and an edge detection algorithm. Experiments show that the proposed method has low time consumption and can effectively locate the target region.
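The central slice theorem step can be demonstrated in simulation: the zero-frequency row of an image's 2D Fourier spectrum is the 1D spectrum of the image projected onto the horizontal axis, so a single measured row already locates the target's horizontal extent. A toy NumPy sketch, with a simple threshold standing in for the edge detector:

```python
import numpy as np

# simulated 64x64 scene with a bright target occupying columns 20..39
img = np.zeros((64, 64))
img[10:30, 20:40] = 1.0

# in Fourier SPI these coefficients would be measured directly with a few
# sinusoidal patterns; here we take them from the full spectrum instead
row = np.fft.fft2(img)[0, :]

# central slice theorem: the zero-frequency row of the 2D spectrum is the
# 1D spectrum of the image projected onto the horizontal axis
proj_x = np.fft.ifft(row).real
cols = np.nonzero(proj_x > 0.5 * proj_x.max())[0]
```

Repeating the same computation on the zero-frequency column bounds the target vertically, so the region is located from two slices of coefficients rather than a full image reconstruction.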
Machine vision plays an important part in industrial online inspection. Owing to nonuniform illumination and variable working distances, the captured image tends to be over-exposed or under-exposed; as a result, for tasks such as crack inspection, algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of captured images, whose dynamic range is limited. Inevitably, camera shake causes ghosting, which blurs the synthesized image to some extent, yet existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured of the same static scene; these assumptions limit their application. The widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time consuming. To rapidly obtain a high-quality, ghost-free HDR image, we devise an efficient low dynamic range (LDR) image capturing approach and propose a registration method based on Oriented FAST and Rotated BRIEF (ORB) features and histogram equalization, which eliminates the illumination differences between the LDR images. Fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and produces ghost-free HDR images by registering and fusing four multi-exposure images.
Standard parts are necessary components of mechanical structures such as bogies and connectors; these structures can shatter or loosen if standard parts are lost, so real-time standard parts inspection systems are essential to guarantee safety. Researchers favor inspection systems based on deep learning because they work well on images with complex backgrounds, which are common in standard parts inspection. A typical detection system contains two basic components: a feature extractor and an object classifier. For the classifier, the Region Proposal Network (RPN) is one of the most essential architectures in most state-of-the-art object detection systems. However, in the basic RPN architecture, the Region of Interest (ROI) proposals have fixed sizes (9 anchors per pixel); they are effective but waste considerable computing resources and time. In standard parts detection, the parts have known sizes, so the anchor sizes can be chosen from the ground truths through machine learning. Experiments show that 2 anchors achieve almost the same accuracy and recall. Our standard parts detection system reaches 15 fps on an NVIDIA GTX1080 GPU while achieving a detection accuracy of 90.01% mAP.
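Choosing anchor sizes from the ground-truth boxes can be done by clustering the annotated widths and heights, similar in spirit to the anchor clustering of YOLOv2 (which uses an IoU distance; the Euclidean variant below is a simplification). The paper does not specify its exact procedure, so this sketch is one plausible reading:

```python
import numpy as np

def anchor_sizes(gt_wh, k=2, n_iters=50, seed=0):
    """Cluster ground-truth box (width, height) pairs with k-means and
    use the k cluster centers as RPN anchor sizes."""
    rng = np.random.default_rng(seed)
    centers = gt_wh[rng.choice(len(gt_wh), k, replace=False)]
    for _ in range(n_iters):
        d = np.linalg.norm(gt_wh[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        centers = np.array([gt_wh[labels == j].mean(axis=0) for j in range(k)])
    return centers

# simulated annotations: small parts near 30x30 px, large parts near 80x80 px
rng = np.random.default_rng(1)
gt = np.vstack([30 + rng.normal(0, 1, (50, 2)),
                80 + rng.normal(0, 1, (50, 2))])
anchors = anchor_sizes(gt)
anchors = anchors[np.argsort(anchors[:, 0])]
```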
In the trend toward intelligent manufacturing, fringe projection profilometry is one of the most widely used techniques for obtaining three-dimensional (3D) point clouds of parts. However, on parts with strongly reflective surfaces this technique can suffer from interreflections, which lead to phase calculation and 3D point reconstruction mistakes. In this paper we propose an adaptive regional projection method to measure the interreflection areas of strongly reflective parts. For a part to be measured, we first detect the surfaces on which interreflections occur and determine their poses in the optical measurement system; the system then measures the interreflection areas surface by surface. As an experiment, we measure the complete point cloud of a strongly reflective part, which illustrates that our method is feasible.
Fringe projection technology is widely used in 3D measurement. However, when the technology is applied to translucent objects, subsurface scattering and absorption always degrade the measurement accuracy. The aim of this paper is to propose a dual-direction fringe projection method that obtains more accurate measurements of translucent objects while changing the measurement system little and not reducing the measurement speed. The paper mainly includes three parts: (1) the principle of the dual-direction fringe projection method and different forms of dual-direction fringes; (2) an analysis of how different factors affect the measurement accuracy; (3) experiments on an artificial tooth with various dual-direction fringes, with accuracy analysis. The experimental results show that this method can improve the measurement accuracy for translucent objects.
Nowadays, 3D measurement and reconstruction technologies are widely used not only in industry but also in the appreciation and study of ancient architecture and historical relics. Many methods exist for large-scale architectural measurement, but they run into difficulties with architectural details or delicate historical relics; relic objects with specular or complex sculptural surfaces cannot be measured by traditional methods. Addressing these problems, this paper proposes a 3D measurement technique with two levels of measurement. First, when measuring ancient architecture at large scale, laser scanning and photometry methods are used. Then, when measuring architectural details, a fast and adaptive 3D measurement system is used; multi-view registration is also applied to measure the hollowed-out structures of sculptural relics. The experiments indicate that the system can achieve 3D measurement and reconstruction of different types of ancient architecture and historical relics.
Large-scale separated surfaces are very common in modern manufacturing, and measuring their flatness is one of the most important procedures in evaluating manufacturing quality. Usually the measurement must be accomplished in an in-situ, non-contact way. Conventional approaches such as autocollimators, capacitive displacement sensors, and even CMMs cannot meet the needs of separated-surface measurement, either because they are contact-based or because they are inapplicable to separated surfaces. A non-contact flatness measurement device for large-scale separated surfaces utilizing a laser beam and a laser distance sensor (LDS) is proposed. The laser beam is rotated to form an optical reference plane. The LDS measures the distance between the surface and the sensor accurately. A position sensitive detector (PSD) is mounted rigidly with the LDS to determine the distance between the LDS and the reference plane; the distance between the surface and the reference plane is then obtained by subtracting the two distances. The device can easily be mounted on a machine-tool spindle and moved to measure all the separated surfaces, and all the collected data are used to evaluate their flatness. The accuracy analysis, the corresponding flatness evaluation algorithm, the prototype construction, and experiments are also discussed. The proposed approach and device feature high accuracy, in-situ usage, and a high degree of automation, and can be used in areas that call for non-contact measurement of separated surfaces.
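The flatness evaluation can be sketched with a least-squares reference plane, a common approximation of the minimum-zone flatness defined by the standards (the paper's own evaluation algorithm may differ): fit a plane to the collected points and take the peak-to-valley deviation from it.

```python
import numpy as np

def flatness(points):
    """Flatness as the peak-to-valley deviation of the points from a
    least-squares reference plane fitted through them."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered)
    dist = centered @ Vt[-1]          # signed distance along the plane normal
    return dist.max() - dist.min()

# points on an ideal plane, then the same points with one 0.1 mm bump
rng = np.random.default_rng(0)
xy = rng.random((200, 2))
flat = np.c_[xy, 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 3.0]
bumped = flat.copy()
bumped[0, 2] += 0.1
```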
In dental restoration, it is important to obtain a high-accuracy digital impression. Most existing intraoral measurement systems can only measure a tooth from a single view; therefore, to acquire the complete data of a tooth, scans from multiple directions and data stitching based on surface features are needed, which increases the measurement duration and affects the measurement accuracy. In this paper, we introduce a fringe-projection-based multi-view intraoral measurement system. It acquires 3D data of the occlusal, buccal, and lingual surfaces of a tooth synchronously using a sensor with three mirrors aimed at the three surfaces respectively, thus expanding the measuring area. The fixed relationship among the three mirrors is calibrated before measurement and helps stitch the point clouds acquired through the different mirrors accurately, so the system obtains the 3D data of a tooth without measuring it repeatedly from different directions. Experiments proved the availability and reliability of this miniaturized measurement system.
Proc. SPIE. 9276, Optical Metrology and Inspection for Industrial Applications III
KEYWORDS: Clouds, Vegetation, Ions, Feature extraction, Image registration, 3D metrology, 3D modeling, Data modeling, Principal component analysis, Remote sensing
Measuring the 3D structure of a vegetation canopy is of great significance for validating remote sensing data and for vegetation radiative transfer modeling. When laser triangulation is used, the limited field of view of the measuring system requires multi-view measurement followed by registration of the measured point clouds, and most existing registration methods cannot be applied directly to this task. This paper presents a registration method based on leaf profile matching. First, the point cloud is segmented into subsets corresponding to single leaves. Then the profile of every single leaf is extracted and fitted with splines. Finally, by calculating and matching the spline parameters, the profiles of the same leaf in different views are registered, achieving registration of the multi-view point clouds. Experiments on measured data show the feasibility of the proposed method.
A three-dimensional shape measurement system based on fiber-optic image bundles was proposed to measure the three-dimensional shape of objects in confined spaces. Fiber-optic image bundles have the advantage of flexibility. Firstly, based on the principle of phase shifting and the advantages of fiber-optic image bundles, the mathematical model of the measurement system was established, and the hardware and software platform of the system was set up. Then, the problems of calibration and poor image quality brought by fiber-optic image bundles were analyzed, after which a viable solution was proposed. Finally, experiments on objects in confined spaces were performed using the three-dimensional shape measurement system. As the transmission media of the system, fiber-optic image bundles enable flexible acquisition and projection of images. The three-dimensional shape of the object was reconstructed after processing the image data. Experimental results indicated that the system was miniature and flexible enough to measure the three-dimensional shape of objects in confined spaces, expanding the application range of the structured-light three-dimensional shape measurement technique.
Proc. SPIE. 8769, International Conference on Optics in Precision Engineering and Nanotechnology (icOPEN2013)
KEYWORDS: 3D metrology, 3D scanning, 3D acquisition, 3D image processing, Computing systems, 3D modeling, Phase shifts, Aluminum, Cameras, Manufacturing
With the development of the manufacturing industry, in-situ 3D measurement of machined workpieces on CNC machine tools is regarded as the new trend in efficient measurement. We introduce a 3D measurement system based on stereovision and the phase-shifting method combined with CNC machine tools, which can measure the 3D profile of workpieces between key machining processes. The measurement system uses high dynamic range fringe acquisition to solve the saturation induced by specular light reflected from shiny surfaces such as aluminum alloy or titanium alloy workpieces. We measured two aluminum alloy workpieces on a CNC machine tool to demonstrate the effectiveness of the developed measurement system.
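The phase-shifting step can be illustrated with the standard four-step algorithm, in which four fringe images with π/2 phase shifts yield the wrapped phase; this is a generic formulation, not necessarily the exact variant used in the paper:

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe images I_k = A + B*cos(phi + k*pi/2),
    k = 0..3: phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(I4 - I2, I1 - I3)

# synthetic fringes: the recovered phase matches the ground truth
phi = np.linspace(-3.0, 3.0, 100)
A, B = 100.0, 50.0
I = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
wrapped = four_step_phase(*I)
```

The arctangent cancels the background intensity A and modulation B, which is why saturation from specular highlights (clipped intensities) corrupts the phase and motivates the HDR fringe acquisition described above.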
The antenna used in space for satellite control, command communication, and data transmission is a key unit for a satellite to work properly and accomplish its tasks. Accurately measuring the antenna reflector's shape and distortion shortly after the antenna is manufactured or assembled on the satellite is very important to ensure that the antenna functions well. Considering the constraints during measurement, an antenna reflector shape and distortion measuring system based on close-range photogrammetry is proposed. The system configuration, measuring principles, calibration and measuring procedures, data processing, experiment configuration and results, as well as error analysis, are discussed in the paper. The system was constructed and tested in a laboratory environment. The experimental results show that the system can accurately measure the shape of the reflector, and the distortion of the reflector surface can then be derived from the shape data. The average measurement error over about 240 points on a 600 mm antenna reflector is less than 0.015 mm (1σ).