Range imagers provide useful information for part inspection, robot control, and human safety applications in industrial environments. However, some applications may require more information than range data from a single viewpoint can provide, so multiple range images must be combined to create a three-dimensional representation of the scene. Although simple in principle, this operation is not straightforward to implement in industrial systems, since each range image is affected by noise. In this paper, we present two specific applications where merging of range images must be performed. We use the same processing pipeline for both applications: conversion of range images to point clouds, elimination of the degrees of freedom between the different clouds, and validation of the merged result. Nevertheless, each step in this pipeline requires dedicated algorithms for our example applications. The first application is high-resolution inspection of large parts, where many range images are acquired sequentially and merged in a post-processing step, making it possible to create a virtual model of the observed part, typically larger than the instrument's field of view. The key requirement in this application is high accuracy in the merging of multiple point clouds. The second application discussed is human safety in a human/robot environment: range images are used to ensure that no human is present in the robot's zone of operation, and can trigger the robot's emergency shutdown when needed. In this case, range image merging is required to avoid uncertainties due to occlusions. The key requirement here is real-time operation: the merging operation should not introduce significant latency in the data processing pipeline. For both application cases, the improvements brought by merging multiple range images are clearly illustrated.
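The abstract does not specify which algorithm eliminates the degrees of freedom between clouds. As a minimal sketch, assuming point correspondences between two clouds are already available, the least-squares rigid transform can be recovered with the Kabsch (SVD) method; the function name and interface below are illustrative, not the paper's implementation.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid transform (R, t) mapping source onto target.

    source, target: (N, 3) arrays of corresponding 3D points.
    Returns a 3x3 rotation matrix R and a translation vector t.
    """
    # Center both clouds on their centroids.
    mu_s = source.mean(axis=0)
    mu_t = target.mean(axis=0)
    S = source - mu_s
    T = target - mu_t
    # The SVD of the cross-covariance gives the optimal rotation (Kabsch).
    H = S.T @ T
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Merging then amounts to transforming one cloud and concatenating:
# R, t = rigid_align(source, target)
# merged = np.vstack([target, (R @ source.T).T + t])
```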
Industrial inspection of micro-devices is often a very challenging task, especially when those devices are produced
in large quantities using micro-fabrication techniques. In the case of microlenses, millions of lenses are produced
on the same substrate, thus forming a dense array. In this article, we investigate a possible automation of
the microlens array inspection process. First, two image processing methods are considered and compared:
reference subtraction and blob analysis. The criteria chosen to compare them are the reliability of the defect detection, the required processing time, and the sensitivity to image acquisition conditions such as varying illumination and focus. Tests performed on a real-world database of microlens array images led to the selection of the blob analysis method. Based on the selected method, an automated inspection software module was then successfully implemented. Its good performance makes it possible to dramatically reduce both the inspection time and the human intervention required in the inspection process.
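Blob analysis admits a compact sketch. The following is a minimal illustration, assuming defects appear as connected bright regions after thresholding; the threshold and area limits are hypothetical parameters, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def detect_defects(image, threshold=0.2, min_area=5, max_area=500):
    """Find defect candidates in a microlens array image via blob analysis.

    image: 2D float array, normalized to [0, 1].
    Returns a list of (area, centroid) for blobs whose size suggests a defect.
    """
    # Binarize: keep pixels deviating strongly from the expected intensity.
    mask = image > threshold
    # Label connected components (the "blobs").
    labels, n = ndimage.label(mask)
    defects = []
    for i in range(1, n + 1):
        area = int((labels == i).sum())
        # Blobs far smaller or larger than a lens defect are ignored.
        if min_area <= area <= max_area:
            centroid = ndimage.center_of_mass(mask, labels, i)
            defects.append((area, centroid))
    return defects
```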
Recent time-of-flight (TOF) cameras allow for real-time acquisition of range maps with good performance.
However, the accuracy of the measured range map may be limited by secondary light reflections. Specifically, the range measurement is affected by scattering: parasitic signals caused by multiple reflections inside the camera device. Scattering, which is particularly strong in scenes with large aspect ratios, must be detected and the resulting errors compensated. This paper considers reducing scattering errors by means of image processing methods applied to the output image of the time-of-flight camera. It shows that scattering reduction can be expressed as a deconvolution problem on a complex, two-dimensional signal. The paper investigates several solutions. First, a comparison of image-domain and Fourier-domain processing for scattering compensation is provided. One key element in the comparison is the computational load, given the requirement to perform scattering compensation in real time. Then, the paper discusses strategies for improved scattering reduction. More specifically, it treats the problem of optimizing the description of the inverse filter for best scattering compensation results. Finally, the validity of the proposed scattering reduction method is verified on various examples of indoor scenes.
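As an illustration of the Fourier-domain formulation, the sketch below treats the TOF output as a complex image (amplitude and phase) and applies a naive inverse filter. The scattering point spread function h, the additive scattering model, and the regularization constant are assumptions made for the example; they do not represent the paper's optimized inverse filter.

```python
import numpy as np

def compensate_scattering(amplitude, phase, h, eps=1e-3):
    """Fourier-domain scattering compensation on a TOF measurement.

    amplitude, phase: 2D arrays (TOF camera output).
    h: assumed scattering point spread function, same shape, centered.
    Returns corrected amplitude and phase.
    """
    # Represent the measurement as a complex image: A * exp(j * phi).
    z = amplitude * np.exp(1j * phase)
    # Assumed model: measured = true convolved with (delta + h),
    # i.e. Z_meas = Z_true * (1 + H) in the Fourier domain.
    H = np.fft.fft2(np.fft.ifftshift(h))  # move PSF center to the origin
    # Naive regularized inverse filter (eps avoids division blow-up).
    z_true = np.fft.ifft2(np.fft.fft2(z) / (1.0 + H + eps))
    return np.abs(z_true), np.angle(z_true)
```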
The paper provides considerations on the application of 3D vision methods and shares some lessons learnt in this respect, by presenting four 3D vision tasks and discussing the selection of vision sensing devices suited to solving each task. After a short reminder of the 3D vision methods of interest for optical range imaging in microvision and macrovision applications, the paper enumerates and comments on some aspects which contribute to finding a good solution. Then, it presents and discusses the following four tasks: 3D sensing for people surveillance, measurement of stamping burrs, sorting of burred stamping parts and, finally, a hole-filling algorithm.
Machine vision plays an important role in automated assembly. However, present vision systems are not adequate for robot control in an assembly environment where individual components have sizes in the range of 1 to 100 micrometers: current systems do not provide sufficient resolution over the whole workspace when they are fixed, and they are too bulky to be brought close enough to the components. A small-size 3D vision system is expected to provide two decisive advantages: high accuracy and high flexibility. The presented work aims to develop a 3D vision sensor that is easily embedded in a micro-assembly robot. The paper starts with a screening of 3D sensing methods, performed in order to identify the best candidates for miniaturization, which results in the selection of the multifocus principle (which elegantly avoids the depth-of-field problem encountered, for example, in stereo vision). Here, depth is measured by determining sharpness maxima in a stack of images acquired at different elevations. The paper then presents a preliminary system configuration which delivers images of a 1300 × 1000 micrometer field of view with a lateral resolution better than 5 micrometers and a vertical resolution better than 20 micrometers. Finally, future steps in the development of a real-time embedded multifocus sensor are presented, with a discussion of the most critical tradeoffs.
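The multifocus principle lends itself to a short sketch: per pixel, a focus measure is evaluated across the image stack, and the elevation that maximizes sharpness is taken as the depth. The version below assumes a Laplacian-energy focus measure; the function name and window size are illustrative, not the sensor's actual processing chain.

```python
import numpy as np
from scipy import ndimage

def multifocus_depth(stack, elevations):
    """Depth map from a focus stack (multifocus / shape-from-focus).

    stack: (K, H, W) array of images taken at K different elevations.
    elevations: length-K array of the corresponding z positions.
    Returns an (H, W) depth map.
    """
    # Focus measure: local energy of the Laplacian (high where the image is sharp).
    sharpness = np.stack([
        ndimage.uniform_filter(ndimage.laplace(img) ** 2, size=9)
        for img in stack
    ])
    # Per pixel, pick the elevation at which sharpness peaks.
    best = np.argmax(sharpness, axis=0)
    return np.asarray(elevations)[best]
```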