This article describes the development of a newly designed Time-of-Flight (ToF) image sensor for underwater applications. The sensor is developed as part of the project UTOFIA (underwater time-of-flight image acquisition), funded by the EU within the Horizon 2020 framework. The project aims to develop a camera based on range gating that extends the visible range by a factor of 2 to 3 compared to conventional cameras and delivers real-time range information by means of a 3D video stream. The principle of underwater range gating as well as the concept of the image sensor are presented. Based on measurements on a test image sensor, the pixel structure that best suits the requirements has been selected. An extensive underwater characterization demonstrates the capability of distance measurement in turbid environments.
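To make the range-gating principle concrete, the following minimal Python sketch converts between gate delay and imaged distance. The constants and function names are illustrative assumptions, not taken from the UTOFIA sensor itself; the key point is that light travels at c/n in water and covers the out-and-back path, so each gate delay selects a distance slice while suppressing near-field backscatter.

```python
# Minimal sketch of the range-gating geometry (illustrative, not the
# UTOFIA sensor's actual timing logic): the camera emits a short laser
# pulse and opens the sensor gate only after a programmable delay, so
# each delay selects a distance slice and rejects near-field backscatter.

C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33             # approximate refractive index of water (assumed)

def gate_delay_to_range(delay_s: float) -> float:
    """Distance (m) of the slice seen when the gate opens `delay_s`
    seconds after the laser pulse; light travels out and back, hence /2."""
    return (C_VACUUM / N_WATER) * delay_s / 2.0

def range_to_gate_delay(distance_m: float) -> float:
    """Inverse mapping: gate delay (s) needed to image a slice at `distance_m`."""
    return 2.0 * distance_m * N_WATER / C_VACUUM

if __name__ == "__main__":
    # A slice at 10 m in water needs the gate to open ~88.7 ns after the pulse.
    print(f"{range_to_gate_delay(10.0) * 1e9:.1f} ns")
```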
Range imagery provided by time-of-flight (TOF) cameras has been shown to facilitate robot navigation in several applications. Visual navigation for autonomous pipeline inspection robots is a special case of such a task, where the cramped operating environment detrimentally influences the range measurements. The imaging system also has several inherent defects that lead to a smearing of range measurements. This paper sketches an approach for using TOF cameras as a visual navigation aid in pipelines, and addresses the challenges posed by the inherent defects of the imaging system and the impact of the operating environment.
New results on our previously proposed strategy for detecting and tracking possible landmarks and obstacles in pipelines are presented. We consider an explicit model for correcting lens distortions, and use this to explain why the cylindrical pipe is perceived as a cone. A simplified model, which implicitly handles the combined effects of the environment and the camera on the measured ranges by adjusting for the conical shape, is used to map the robot's environment into an along-axis-view relative to the pipe, which facilitates obstacle traversal. Experiments using a model pipeline and a prototype camera rig are presented.
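The conical appearance of the pipe can be made concrete with a short sketch: a TOF pixel reports distance along its viewing ray rather than depth along the optical axis, so back-projecting each radial range through a pinhole model recovers Cartesian points and, from these, the along-axis view. This is a simplified illustration with hypothetical intrinsics fx, fy, cx, cy, not the paper's exact correction model.

```python
# Minimal sketch (not the paper's model) of why the pipe appears conical:
# a TOF pixel measures distance along its viewing ray, not depth along
# the optical axis. Back-projecting each radial range through a pinhole
# model with assumed intrinsics recovers Cartesian points.

import numpy as np

def radial_range_to_xyz(rng, fx, fy, cx, cy):
    """rng: HxW array of per-pixel radial ranges; returns HxWx3 points."""
    h, w = rng.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    xn = (u - cx) / fx              # normalized image coordinates
    yn = (v - cy) / fy
    norm = np.sqrt(1.0 + xn**2 + yn**2)
    z = rng / norm                  # depth along the optical axis
    return np.dstack([xn * z, yn * z, z])
```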
Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. Such a system should be able to recognize and locate objects of a predefined shape and estimate their pose with the precision necessary for a gripping robot to pick them up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images; instead, it searches through pairs of 2D manifolds to localize candidate object matches. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments show the utility of the method on a general random bin-picking problem, exemplified here by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable to a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
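As a hedged illustration of the structured-light capture the system relies on (the manifold-based candidate search itself is paper-specific and not reproduced here), the following sketch triangulates a single pixel by intersecting its camera ray with a decoded projector light plane. The calibration set-up and names are assumptions, not details from the paper.

```python
# Minimal sketch of structured-light triangulation under assumed
# calibration: a camera at the origin and a projector whose decoded
# stripe index gives, per pixel, a known light plane n.X = d.
# Intersecting the pixel's viewing ray with that plane yields a 3D point.

import numpy as np

def triangulate_pixel(xn, yn, plane_n, plane_d):
    """xn, yn: normalized camera coordinates of the pixel.
    plane_n (3,), plane_d: decoded projector light plane n.X = d.
    Returns the 3D point on the camera ray X = t * [xn, yn, 1]."""
    ray = np.array([xn, yn, 1.0])
    t = plane_d / np.dot(plane_n, ray)   # solve n.(t*ray) = d for t
    return t * ray
```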
Automatic picking of parts is an important challenge in factory automation, because it removes tedious manual work and saves labor costs. One such application involves parts that arrive with random position and orientation on a conveyor belt. The parts should be picked off the conveyor belt and placed systematically into bins. We describe a
system that consists of a structured light instrument for capturing 3D data and robust methods for aligning an input 3D
template with a 3D image of the scene. The method uses general and robust pre-processing steps based on geometric
primitives that allow the well-known Iterative Closest Point algorithm to converge quickly and robustly to the correct
solution. The method has been demonstrated for localization of car parts with random position and orientation. We
believe that the method is applicable to a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
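A minimal sketch of the refinement stage, assuming the primitive-based pre-processing has already produced a coarse pose `init`, could use a stock point-to-point ICP such as Open3D's. This is an illustrative stand-in for the alignment step, not the authors' implementation; the function and parameter names below are hypothetical.

```python
# Minimal sketch of ICP refinement with Open3D's stock registration;
# the abstract's geometric-primitive pre-processing is assumed to have
# produced the coarse transform `init` and is not reproduced here.

import numpy as np
import open3d as o3d

def refine_pose(template_pts: np.ndarray, scene_pts: np.ndarray,
                init: np.ndarray, max_dist: float = 5.0) -> np.ndarray:
    """Point-to-point ICP; template_pts/scene_pts are Nx3 arrays,
    init is a 4x4 coarse pose, max_dist the correspondence cutoff."""
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(template_pts)
    dst = o3d.geometry.PointCloud()
    dst.points = o3d.utility.Vector3dVector(scene_pts)
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation   # refined 4x4 template-to-scene pose
```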
Recently, Range Imaging (RIM) cameras have become available that capture high-resolution range images at video rate. Such cameras measure the distance to the scene for each pixel independently, based upon a measured time of flight (TOF). Some cameras, such as the SwissRanger(tm) SR-3000, measure the TOF based on the phase shift of reflected light from a modulated light source. Such cameras are shown to be susceptible to severe distortions in the measured range due to light scattering within the lens and camera. Earlier work used a simplified Gaussian point spread function and inverse filtering to compensate for such distortions. In this work, a method is proposed for identifying and using generally shaped empirical models of the point spread function to obtain a more accurate compensation. The otherwise difficult inverse problem is solved by applying the forward model iteratively, according to well-established procedures from image restoration. Each iteration proceeds from the brightest parts of the image to the least bright, with each step subtracting the estimated scattering effects from the measurements. This approach gives faster and more reliable convergence of the compensation. An average reduction of the error by more than 60% is demonstrated on real images. The computational load corresponds to one or two convolutions of the measured complex image with a real filter of the same size as the image.
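The iterative use of the forward model can be sketched as follows. This is a simplified fixed-point variant in which the whole image is updated at once, whereas the paper's scheme sweeps sequentially from the brightest parts to the least bright; the PSF `h` is assumed to be an empirical scattering kernel with its central direct-response tap zeroed out.

```python
# Simplified sketch of compensation by iterating the forward scattering
# model Z = D + conv(D, h): a Jacobi-style fixed-point update, standing
# in for the paper's brightest-first sequential sweep. Z is the measured
# complex image (amplitude * exp(i*phase)); h is an empirical scattering
# PSF whose central direct-response tap has been zeroed.

import numpy as np
from scipy.signal import fftconvolve

def compensate(Z: np.ndarray, h: np.ndarray, n_iter: int = 5) -> np.ndarray:
    """Estimate the direct (unscattered) complex image D from Z."""
    D = Z.copy()
    for _ in range(n_iter):
        # subtract the scattering predicted by the current estimate
        D = Z - fftconvolve(D, h, mode="same")
    return D   # corrected range follows from the phase of D, as before
```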
A flexible and highly configurable 3D vision system targeted at in-line product inspection is presented. The system includes a low-cost 3D camera based on structured light and a set of flexible software tools that automate the measurement process. The measurement tasks are specified in an initial manual step: the user selects regions of the point cloud to analyze and specifies primitives to be characterized within these regions. Once all measurement tasks have been specified, measurements can be carried out on successive parts automatically and without supervision. As a test case, a measurement cell for inspection of a V-shaped car component has been developed. The car component consists of two steel tubes attached to a central hub, and each tube has an additional bushing clamped to its end. A measurement is performed in a few seconds and results in an ordered point cloud with 1.2 million points. The software is configured to fit cylinders to each of the steel tubes as well as to the inside of the bushings of the car part. The size, position and orientation of the fitted cylinders allow us to measure and verify a series of dimensions specified on the CAD drawing of the component with sub-millimetre accuracy.
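As an illustration of one such measurement primitive, the sketch below fits a cylinder to a selected point-cloud region using a principal-axis estimate and a mean radial distance. The product's actual fitting routine is not described in the abstract; a production fit would refine this initial estimate by nonlinear least squares.

```python
# Minimal sketch of a cylinder fit to a user-selected point-cloud region
# (illustrative; not the product's actual routine). The axis is taken as
# the principal direction of the points and the radius as the mean
# distance from that axis.

import numpy as np

def fit_cylinder(points: np.ndarray):
    """points: Nx3 region of the cloud. Returns (center, axis, radius)."""
    center = points.mean(axis=0)
    # principal direction = right singular vector of the centered points
    _, _, vt = np.linalg.svd(points - center, full_matrices=False)
    axis = vt[0]
    # radial distance of each point from the axis line through `center`
    rel = points - center
    radial = rel - np.outer(rel @ axis, axis)
    radius = np.linalg.norm(radial, axis=1).mean()
    return center, axis, radius
```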