Three-dimensional (3D) reconstruction based on digital speckle correlation has gained widespread use in dynamic scenarios owing to its ability to achieve single-frame reconstruction. Traditional binocular speckle structured-light systems often face challenges such as insufficient surface-texture richness and excessive curvature, which limit the accuracy of corresponding-point localization during matching. Furthermore, partial point cloud loss caused by varying viewpoints can degrade the precision and completeness of 3D reconstruction. This paper introduces a multi-view speckle-correlation-based approach for 3D reconstruction. By establishing multi-view stereo camera epipolar rectification, the complex speckle matching problem is transformed into a one-dimensional search along unified epipolar lines, simplifying the matching process and enhancing efficiency. Additionally, a digital speckle correlation computational model and a subpixel interpolation algorithm based on Newton-Raphson iteration are employed to realize multi-view subpixel-level corresponding-point localization. Finally, the reconstructed point cloud is obtained by three-ray least-squares intersection. Experimental results demonstrate that the method achieves higher accuracy and better model completeness in 3D reconstruction.
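The three-ray least-squares intersection step above admits a closed-form solution: the point minimizing the summed squared perpendicular distances to the rays satisfies a small 3×3 linear system. A minimal sketch (an illustration of the standard formulation, not the paper's implementation):

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares intersection of N rays.
    origins: (N,3) ray origins; directions: (N,3) direction vectors.
    Returns the 3D point minimizing the sum of squared perpendicular
    distances to all rays, via sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```

With three (or more) non-parallel camera rays, the system matrix is invertible and the solve is exact.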
In near-field photometric stereo vision, accurate calibration of the light source position directly affects the precision of the reconstruction results. Traditional calibration techniques rely on a highly reflective sphere and exploit the specular reflection properties at highlight points to determine the light source position. However, in real-world scenarios, non-uniform lighting conditions often lead to errors in extracting the sphere's image edges, affecting the accuracy of light source position calibration. We therefore propose a light source position calibration method based on a novel target. The method detects planar reference points that are less sensitive to lighting conditions and leverages the pose relationship they provide to effectively overcome the adverse effects of lighting on light source position calibration. Experimental results demonstrate that this method significantly enhances the precision of light source position calibration under non-uniform lighting conditions, removing the constraint that calibration be performed in specific lighting environments.
Fourier ptychographic microscopy (FPM) is a recently developed microscopic imaging technique that can break the limitation of the space-bandwidth product and realize wide-field, high-resolution, quantitative phase imaging. In recent years, applications of deep learning in FPM have effectively improved the imaging resolution of both amplitude and phase. However, these methods always require enormous datasets to train the network. To overcome this problem, an untrained deep neural network (DNN) based on the physical model of FPM is proposed. Unlike traditional deep learning methods, this method dispenses with thousands of labeled data. According to the known frequency shifts, the DNN output is virtually imaged through the FPM forward model, and the resulting series of low-resolution images, simply fused, is compared against the experimental data to update the network from the errors between them. For a small reconstruction task, the proposed network can be treated as an iterative phase-retrieval procedure, and the amplitude and phase can be well retrieved with untrained parameters. Simulation results verify the feasibility of the proposed physics-driven DNN. Compared with traditional deep learning methods, this method discards thousands of labeled data at the cost of only limited resolution loss.
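The physics layer at the heart of such a network is the FPM forward model: crop a sub-spectrum of the high-resolution object at the LED-dependent frequency shift, apply the pupil, and record the low-resolution intensity. A minimal sketch under standard assumptions (centered spectrum, square pupil array; not the paper's exact code):

```python
import numpy as np

def fpm_forward(obj, pupil, shift):
    """Sketch of the FPM forward model: for one LED, take the sub-spectrum
    of the high-resolution complex object `obj` at the given (row, col)
    frequency `shift`, filter it by the (n x n) `pupil`, and return the
    low-resolution intensity the camera would record."""
    F = np.fft.fftshift(np.fft.fft2(obj))          # high-res object spectrum
    n = pupil.shape[0]
    cy, cx = np.array(F.shape) // 2 + np.array(shift)
    sub = F[cy - n//2: cy + n//2, cx - n//2: cx + n//2] * pupil
    low = np.fft.ifft2(np.fft.ifftshift(sub))
    return np.abs(low) ** 2                        # camera records intensity only
```

In an untrained-DNN setting, the loss is the discrepancy between such simulated low-resolution stacks and the measured ones, so no labels are needed.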
The 2D raw image captured by a light field (LF) camera needs to be decoded into 4D LF data for representation and processing. In decoding, the main lens is usually modeled as a thin lens while the micro-lens array is modeled as a pinhole array. Although this model takes main-lens distortion into account, it still cannot accurately characterize the complex imaging relationship of the LF camera. To obtain more accurate 4D LF data, this paper proposes an LF camera decoding method based on a two-plane ray model. By calibrating the ray corresponding to each pixel of the LF camera, the mapping between real object points and image points is established to obtain a new two-plane LF model; then, by interpolating and resampling on these two planes, the 4D LF data are corrected. Compared with traditional methods, our method improves the decoding accuracy of the LF camera and provides new ideas and methods for studying LF imaging.
A single-shot dual-wavelength lensless digital holography system based on a dichroic mirror is presented to achieve quantitative phase imaging. The lensless digital holography is designed as a wavefront-division transmission configuration with only a plane mirror and a beam splitter. By merely adding a dichroic mirror between the plane mirror and the beam splitter, the propagation directions of the two reference waves at different wavelengths can be adjusted separately by the dichroic mirror and the plane mirror. Therefore, a multiplexed hologram with different fringe directions for the two wavelengths can be obtained simultaneously. Our technique is capable of real-time wavelength multiplexing with minimal optical elements and system modification.
In this paper, a three-dimensional (3D) depth sensing system based on active structured light field imaging (ALF) is proposed. In light field imaging, one of the most commonly used methods for depth estimation is based on the Epipolar Plane Image (EPI), in which the slope of the line features is related to parallax and is inversely proportional to the depth of the measured object. However, it is difficult to extract the line features accurately from the captured texture information alone, especially in cases of weak texture, repeated texture, and noise. Therefore, an active phase feature provided by phase-shifting fringe projection is introduced into this system, with which the line features in the EPI can be extracted by simply searching for correspondence points with the same phase value. To obtain a metrically accurate depth map, a metric calibration method is proposed to establish the quantitative relationship between line slope and depth. In addition, because of distortions in the light field camera (LFC), the correspondence points in the EPI do not fit a linear distribution well, so a further calibration based on the LFC imaging model and Bundle Adjustment (BA) is implemented to correct distortions in the EPI, which reduces the fitting errors of the line features. Experimental results prove that the calibration methods described above are effective and that the built ALF sensor works well for 3D depth estimation.
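Since the abstract states the EPI line slope is inversely proportional to depth, the metric calibration can be sketched as fitting a model of the form s = a/Z + b (a hypothetical but common parameterization; the paper's actual model may differ) from calibration pairs, then inverting it at measurement time:

```python
import numpy as np

def calibrate_slope_depth(slopes, depths):
    """Metric-calibration sketch under the assumed model s = a/Z + b
    (EPI slope inversely proportional to depth): the model is linear
    in 1/Z, so (a, b) follow from an ordinary least-squares fit."""
    A = np.column_stack([1.0 / np.asarray(depths, float),
                         np.ones(len(depths))])
    a, b = np.linalg.lstsq(A, np.asarray(slopes, float), rcond=None)[0]
    return a, b

def slope_to_depth(s, a, b):
    """Invert the fitted model to map an EPI line slope to metric depth."""
    return a / (s - b)
```

Calibration targets at known depths supply the (slope, depth) pairs; after fitting, every extracted line slope maps directly to a metric depth value.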
Fourier ptychographic microscopy has the advantages of large field of view, high resolution, and quantitative phase imaging, effectively resolving the contradiction between resolution and field of view. In the traditional reconstruction process, the spectrum is updated piecewise, step by step, which can cause error accumulation. To improve reconstruction precision, this paper proposes a global iterative optimization method, based on the working principle of FPM, that updates the spectrum holistically. Experimental results demonstrate its better performance and effectiveness.
Fourier ptychographic microscopy is a newly developed method to extend the resolution beyond the conventional limit defined by the microscope optics. The positions of the LED sources strongly determine the quality of the reconstructed result. In this paper, we propose a new positional misalignment correction method based on the distribution of the incident LED intensity. When the LED matrix panel is displaced along the x-axis or y-axis, the incident LED intensity distribution propagated to the sample plane changes. An optimization method to correct the positional misalignment is introduced, together with a light intensity correction. Simulations have been performed to verify the effectiveness of the proposed method, demonstrating that the reconstructed result has better quality.
KEYWORDS: Clouds, 3D scanning, Image registration, 3D modeling, Calibration, 3D acquisition, Machine vision, Photogrammetry, Remote sensing, 3D image processing
In this paper, we propose a real-time point cloud registration method for flexible hand-held 3D scanning. The registration problem to be solved is divided into fine registration and coarse registration with either small or large overlap. The fine registration problem is solved by a point-to-projection algorithm to ensure high efficiency. In addition, we solve the two types of coarse registration by exhaustive screening with different sampling strategies. To employ the sampling-screening algorithm, we first establish multiple candidate matching relationships between two range images using sampled point pairs drawn from the sampling sets of the respective 3D point clouds. We then propose a pose evaluation algorithm (PEA), inspired by ICP, to screen out the optimal matching relationship as the coarse registration result. The PEA is designed as a separate kernel function combined with GPU parallel computing to achieve real-time performance. A back-projection calibration technique that is robust to system distance errors solves the problem of the pose rejection criterion. The algorithm is highly versatile and robust, since no feature information of the 3D point clouds is extracted or utilized. The proposed method has been applied to our hand-held 3D scanners and tested on extensive real measured data to demonstrate its effectiveness.
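The screening idea above needs a per-candidate score: apply a candidate pose to the sampled source points and measure how well they land on the destination cloud. A brute-force, CPU-only sketch of such a PEA-style score (the name and exact metric are assumptions; the paper runs this as a GPU kernel):

```python
import numpy as np

def pose_error(src, dst, R, t):
    """Hypothetical PEA-style score: transform the sampled source points
    by candidate pose (R, t) and return the mean nearest-neighbour
    distance to the destination cloud. In exhaustive screening, the
    candidate pose with the lowest score wins."""
    moved = src @ R.T + t                                # (N,3) candidate alignment
    d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)  # all pairwise distances
    return np.sqrt(d2.min(axis=1)).mean()
```

The O(N·M) distance matrix is exactly the kind of embarrassingly parallel work that maps well onto a GPU kernel, one candidate pose per thread block.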
At present, fringe projection profilometry is limited by a trade-off between speed and accuracy. To achieve high-accuracy measurement, phase-shifting and phase-unwrapping operations are typically used for phase correspondence; however, phase unwrapping does not improve the phase accuracy but merely distinguishes phase steps. To further reduce the number of projected patterns required for phase unwrapping, we propose a novel method for phase correspondence in a bi-camera system without phase unwrapping. Phase-to-3D mapping structures are used to obtain candidate correspondences and eliminate the ambiguities of the wrapped phase, implemented efficiently without time-consuming phase-correspondence searching. Experiments on both static and dynamic scenes verify a 3D reconstruction speed of 120 fps using overlapped 3-step phase-shifting patterns.
Fourier ptychographic microscopy (FPM) is an imaging technology developed in recent years that can achieve high-resolution imaging with a large field of view (FOV). In FPM, the retrieval quality depends on the exact position of the LED array, yet position errors of the LED array exist in practice. To obtain accurate positions, this paper proposes a vision-assisted localization method to determine the coordinates of the LED array, which provides accurate LED ray directions for improving FPM. Additionally, a multi-resolution reference is built to resolve the inconsistent FOV between the FPM system and the vision-assisted system. Experiments demonstrate the method's efficiency and its flexibility in application, since no LED array alignment needs to be considered.
Hand-eye calibration, which aims to determine the relationship between the robot hand and the vision sensor mounted on it, is an important technique in robot applications, including automatic 3D measurement, visual servoing, and sensor placement planning. Generally, the key issue of hand-eye calibration is equivalent to solving for the homogeneous transformation matrix X from an equation of the form AX = XB. In this paper, we develop an accurate hand-eye calibration method by establishing a global objective function in which the errors of camera calibration and robot movements are both considered. It is constructed by minimizing the projection error from the target benchmarks to the camera image plane over all robot motions. Experimental results prove that the proposed algorithm accurately solves the hand-eye calibration problem. Meanwhile, we set up an automatic 3D measurement system based on a robot and a rotary table, and developed a calibration scheme for the system to achieve multi-view, fully automatic 3D data acquisition using a fringe projection 3D sensor.
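For context, the classical linear treatment of AX = XB, which such global methods refine, can be sketched in a few lines: since R_A = R_X R_B R_X^T, the rotation axes of paired motions satisfy a_i = R_X b_i, giving R_X by a Kabsch fit, after which t_X solves a stacked linear system. This is a minimal textbook sketch, not the paper's projection-error objective:

```python
import numpy as np

def rot_axis(R):
    """Unnormalized rotation axis from the skew-symmetric part of R."""
    return np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def hand_eye(As, Bs):
    """Linear AX = XB sketch: align motion rotation axes (Kabsch) to get
    R_X, then solve the stacked system (R_Ai - I) t_X = R_X t_Bi - t_Ai.
    As, Bs are lists of paired 4x4 homogeneous motions."""
    a = np.array([rot_axis(A[:3, :3]) for A in As])
    b = np.array([rot_axis(B[:3, :3]) for B in Bs])
    U, _, Vt = np.linalg.svd(b.T @ a)            # Kabsch: find R with a ~ R b
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # enforce a proper rotation
    Rx = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    M = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    v = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx = np.linalg.lstsq(M, v, rcond=None)[0]
    X = np.eye(4); X[:3, :3] = Rx; X[:3, 3] = tx
    return X
```

At least two motions with non-parallel rotation axes are required; a global objective over reprojection errors, as in the paper, then reduces the sensitivity of this closed-form seed to calibration and motion noise.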
Portable 3D scanning systems are increasingly used in many applications because of their high flexibility, portability, and efficiency. The iterative closest point method is widely used to register multi-view measurement results. However, portable systems face many restrictions: alignment often depends on landmarks or features on the object surface, and in some applications this may not yield satisfactory results. In this paper, we propose to perform the registration based on pose estimation from a low-cost inertial sensor, which increases measurement effectiveness. Test results demonstrate that the method is feasible. With attitude information available inside the system, the measurement device needs no external supporting information and has good application prospects.
Nonlinear intensity response, namely the gamma effect, of the projector-camera setup introduces phase error in phase-shifting profilometry. This paper presents a comparison of three phase error compensation methods: active, passive, and adaptive, using a universal phase error model. The active method calibrates a gamma factor to modify the projected fringe patterns; the passive method implements an iterative procedure to work out an optimal phase map; the adaptive method compensates the phase error based on the Hilbert transform without any auxiliary conditions. Comparative experiments were carried out with three and four phase-shifting steps, demonstrating that the active method performs excellently regardless of the number of phase-shifting steps, whereas the passive method may fail when the phase error is large; the adaptive method performs on a par with the passive method with four phase-shifting steps.
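The active method above is the simplest to illustrate: once the gamma factor is calibrated, the fringe pattern is pre-distorted by the inverse gamma so that the projector-camera response restores an ideal sinusoid. A minimal sketch under the common power-law response model I_out = I_in^gamma (an assumption; real responses may need a fuller model):

```python
import numpy as np

def precorrect_fringe(width, periods, phase_shift, gamma):
    """Active gamma compensation sketch: generate one line of a
    phase-shifted fringe pattern in [0, 1] and pre-distort it by the
    inverse of the calibrated `gamma`, so that the assumed response
    I_out = I_in**gamma reproduces the ideal sinusoid."""
    x = np.arange(width)
    ideal = 0.5 + 0.5 * np.cos(2 * np.pi * periods * x / width + phase_shift)
    return ideal ** (1.0 / gamma)  # projector input; gamma maps it back to `ideal`
```

Because the captured fringes are then sinusoidal, the standard N-step phase-shifting formula recovers the phase without gamma-induced harmonics, independent of the step count, which matches the reported behavior of the active method.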