Publisher’s Note: This paper, originally published on 17 September 2019, was replaced with a corrected/revised version on 17 February 2021. If you downloaded the original PDF but are unable to access the revision, please contact SPIE Digital Library Customer Service for assistance.
Optical 3D measurement using active pattern projection is well known for its high precision and high 3D point density. Recently, increasing the reconstruction frame rate and the number of sensors operating simultaneously and continuously in sensor networks has become more important. Traditionally, light modulators such as LCoS, DMD, or GOBO (GOes Before Optics) have been used, which generate the projected pattern by blocking the light in the dark areas of the pattern. To further increase the measurement speed and/or the number of time-sequentially and continuously active sensors, brighter light sources must be chosen to achieve sufficiently short exposure times. Alternatively, as we show in this paper, a more efficient pattern modulator can be used. By applying an optical freeform element to generate an aperiodic sinusoidal fringe pattern, up to 100 % of the available light can be utilized. In our prototype, we show how to employ a freeform element moved in a linear bearing to create a compact, low-cost, high-speed projection unit. Furthermore, to reduce the computational burden of processing numerous simultaneous image streams, we have implemented the rectification step of the 3D reconstruction pipeline in the field programmable gate array (FPGA) of the sensor module. Both approaches enable us to use structured-light sensors for continuous high-speed 3D measurement tasks in industrial quality control. The presented prototype utilizes a single irritation-free near-infrared (NIR) LED to illuminate and reconstruct a measurement field of approximately 300 mm × 300 mm at a measurement distance of 500 mm.
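As a rough illustration of the kind of pattern such a freeform element produces, the sketch below generates one row of an aperiodic sinusoidal fringe using a linear chirp. The chirp model and all parameter values are assumptions for illustration only, not the authors' actual freeform design.

```python
import math

def aperiodic_fringe(width, f0=0.05, chirp=0.0002):
    """One row of an aperiodic (linearly chirped) sinusoidal fringe.

    phase(x) = 2*pi*(f0*x + 0.5*chirp*x**2), so the instantaneous
    frequency f0 + chirp*x grows along the row and no two periods
    are identical.  Intensities are normalized to [0, 1].
    """
    return [0.5 + 0.5 * math.cos(2 * math.pi * (f0 * x + 0.5 * chirp * x * x))
            for x in range(width)]

# Example: one 1280-pixel-wide row (width chosen to match the camera
# resolution mentioned below; purely illustrative)
row = aperiodic_fringe(1280)
```

Because the local period varies continuously across the field, each image column sees a unique intensity sequence, which is what makes aperiodic fringes suitable for unambiguous correspondence search.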
Lens undistortion and image rectification is a common pre-processing step, e.g., in active or passive stereo vision, to reduce the complexity of the search for matching points. The undistortion and rectification is implemented in a field programmable gate array (FPGA). The algorithm operates pixel by pixel. The challenges of the implementation are the synchronization of the data streams and the limited memory bandwidth. Due to the memory constraints, the algorithm utilizes a pre-computed lossy compression of the rectification maps with a compression ratio of eight. The compressed maps occupy less space by omitting the pixel indexes, sub-sampling both maps, and reducing repeated information within a row by storing differences to adjacent pixels. To validate the compression, undistorted and rectified images are computed once with the full and once with the compressed transformation maps; the deviation between the two results is minimal and negligible. The functionality of the hardware module, the decompression algorithm, and the processing pipeline are described. The algorithm is validated on a Xilinx Zynq-7020 SoC. The stereo setup has a baseline of 46 mm and non-converging optical axes between the cameras. The cameras are configured at 1.3 Mpix @ 60 fps, and distortion correction and rectification are performed in real time during image capture. With a camera resolution of 1280 pixels × 960 pixels and a maximum vertical shift of ± 20 pixels, the efficient hardware implementation utilizes 12 % of the available block RAM resources.
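The map-compression scheme described above can be sketched in Python as follows. This is an illustrative model of the three ideas (offsets instead of absolute pixel indexes, sub-sampling, row-wise delta coding), not the authors' exact on-chip format or bit layout.

```python
def compress_row(row, sub=2):
    """Compress one row of a rectification map (illustrative model).

    Three steps from the text: store offsets relative to the identity
    mapping instead of absolute coordinates, keep only every `sub`-th
    sample, and delta-encode the remaining values along the row.
    """
    offsets = [v - x for x, v in enumerate(row)]      # drop pixel indexes
    samples = offsets[::sub]                          # sub-sample the map
    # first value absolute, the rest as differences to the left neighbor
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def decompress_row(deltas, sub=2, width=None):
    """Reverse the compression, interpolating back to full resolution."""
    samples = [deltas[0]]
    for d in deltas[1:]:                              # undo delta coding
        samples.append(samples[-1] + d)
    width = width if width is not None else len(samples) * sub
    out = []
    for x in range(width):
        i, frac = divmod(x, sub)
        if i + 1 < len(samples):                      # linear interpolation
            off = samples[i] + (samples[i + 1] - samples[i]) * frac / sub
        else:                                         # hold the last sample
            off = samples[-1]
        out.append(x + off)                           # restore pixel index
    return out

# Round trip on a smooth, distortion-like map
row = [x + 0.001 * x * x for x in range(16)]
restored = decompress_row(compress_row(row), width=16)
```

Because real lens distortion varies smoothly across the image, the offsets change slowly along a row, so sub-sampling and delta coding lose very little information; this is why the reconstructed images deviate only negligibly from those computed with the full maps.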
This paper proposes an architecture for a phase measuring profilometry system that can be efficiently implemented on a Xilinx Zynq-7000 SoC. After a brief system overview, the paper starts at the very beginning of such a task: camera calibration. A calibration procedure using OpenCV functions is outlined, and the calculation of compressed rectification maps is described in more detail. The compressed rectification maps are used for lens undistortion and rectification to reduce the memory load. The hardware-accelerated part of the system comprises image acquisition, lens undistortion and image rectification, phase accumulation followed by phase unwrapping, phase matching, and 3D reconstruction. For phase unwrapping, a multi-frequency approach is used that can be easily implemented on the given architecture. The interfacing of the hardware modules follows a fully pipelined implementation scheme so that the image processing can be done in real time.
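The multi-frequency unwrapping step can be illustrated with a minimal two-frequency sketch. The function name, the two-level hierarchy, and the frequency ratio below are assumptions for illustration, not the paper's exact parameterization.

```python
import math

def unwrap_fine(phi_coarse, phi_fine_wrapped, ratio):
    """Two-frequency temporal phase unwrapping (illustrative).

    `phi_coarse` is the unambiguous low-frequency phase, the fine
    pattern has `ratio` times its frequency, and `phi_fine_wrapped`
    is the fine phase wrapped to [0, 2*pi).  The fringe order k is
    obtained by rounding, giving the absolute fine phase.
    """
    k = round((phi_coarse * ratio - phi_fine_wrapped) / (2 * math.pi))
    return phi_fine_wrapped + 2 * math.pi * k

# Example: a true fine phase of 25.1 rad with a frequency ratio of 8
phi_true = 25.1
phi_abs = unwrap_fine(phi_true / 8, phi_true % (2 * math.pi), 8)
```

Since the scheme needs only a scaling, a subtraction, and a rounding per pixel, it maps naturally onto a fully pipelined FPGA datapath, which fits the architecture outlined above.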