We propose a multi-kHz Single-Photon Counting (SPC) space LIDAR that exploits low-energy pulses at a high pulse repetition frequency (PRF). The high PRF overcomes the low-signal limitation, as many return shots can be collected from nearly the same scattering area. The ALART space instrument features a multi-beam design, providing height retrieval over a wide area as well as terrain slope measurements. This novel technique, which works at low SNRs, allows multiple beams to be generated with a single laser, limiting mass and power consumption. Since the receiver has a certain probability of detecting multiple photons from different levels of the canopy, a histogram is constructed and used to retrieve the properties of the target tree by means of a modal decomposition of the reconstructed waveform. A field demonstrator of the ALART space instrument is currently being developed by a European consortium led by cosine | measurement systems and funded by ESA under the TRP program. The demonstrator requirements have been derived to be representative of the target instrument, and it will be tested from an instrumented tower in woodland areas in the Netherlands. The employed detectors are state-of-the-art CMOS Single-Photon Avalanche Diode (SPAD) matrices with 1024 pixels. Each pixel is equipped with an independent integrated Time-to-Digital Converter (TDC) whose timing resolution is much finer than the SPAD dead time, yielding a distance resolution in the centimeter range. The instrument emits nanosecond laser pulses with energies of several μJ at a PRF of ~10 kHz and projects a three-beam pattern on the ground. An extensive field measurement campaign will validate the employed technologies and the algorithms for vegetation height retrieval.
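The histogram-and-decomposition step described above can be sketched in a small simulation. Everything here is illustrative, not the instrument's actual processing chain: the arrival-time statistics, mode count, and the two-Gaussian-plus-background model are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Simulated single-photon arrival times (ns) accumulated over many low-energy
# shots: a canopy return near 100 ns, a ground return near 120 ns, plus
# uniform background counts. All numbers are illustrative.
arrivals = np.concatenate([
    rng.normal(100.0, 1.5, 4000),   # canopy mode
    rng.normal(120.0, 0.8, 6000),   # ground mode
    rng.uniform(80.0, 140.0, 500),  # background noise
])

counts, edges = np.histogram(arrivals, bins=300, range=(80.0, 140.0))
centers = 0.5 * (edges[:-1] + edges[1:])

def two_modes(t, a1, m1, s1, a2, m2, s2, b):
    """Two Gaussian modes plus a constant background level."""
    g = lambda a, m, s: a * np.exp(-0.5 * ((t - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2) + b

p0 = [200.0, 99.0, 2.0, 600.0, 121.0, 1.0, 2.0]   # rough initial guess
popt, _ = curve_fit(two_modes, centers, counts, p0=p0)

canopy_ns, ground_ns = sorted([popt[1], popt[4]])
# Mode separation -> vegetation height (two-way travel, c ~ 0.3 m/ns):
height_m = 0.5 * 0.3 * (ground_ns - canopy_ns)
print(f"estimated vegetation height: {height_m:.1f} m")  # ~3 m here
```

With more canopy levels, additional modes would be added to the model or the number of modes selected from the data; the two-mode case simply shows how mode separation maps to height.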
In this paper we investigate the determination of relative camera orientation in videos from a time-of-flight (ToF) range imaging camera. The relative orientation is estimated by fusing range-flow and optical-flow constraints, which integrates the range and intensity channels in a single framework. We demonstrate our approach on ToF camera videos involving translational and rotational camera motion and compare the results with ground-truth data. Furthermore, we distinguish camera motion from an independently moving object using a robust adjustment.
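A minimal sketch of the constraint-fusion idea, under standard local brightness- and range-constancy assumptions: each pixel contributes one optical-flow equation (intensity channel) and one range-flow equation (range channel), and stacking both over a window yields a joint least-squares problem for the local 3D motion. The function name and synthetic data are illustrative, not the paper's implementation.

```python
import numpy as np

def fuse_flow(Ix, Iy, It, Zx, Zy, Zt):
    """Solve for local 3D motion (u, v, w) from stacked constraints:
       optical flow:  Ix*u + Iy*v + It = 0
       range flow:    Zx*u + Zy*v + w + Zt = 0
    Inputs are flattened spatial/temporal derivatives over one window."""
    n = Ix.size
    A = np.vstack([
        np.column_stack([Ix, Iy, np.zeros(n)]),  # optical-flow rows
        np.column_stack([Zx, Zy, np.ones(n)]),   # range-flow rows
    ])
    b = -np.concatenate([It, Zt])
    motion, *_ = np.linalg.lstsq(A, b, rcond=None)
    return motion

# Synthetic window with known motion (u, v, w) = (0.5, -0.2, 0.1):
rng = np.random.default_rng(1)
Ix, Iy = rng.normal(size=50), rng.normal(size=50)
Zx, Zy = rng.normal(size=50), rng.normal(size=50)
u, v, w = 0.5, -0.2, 0.1
It = -(Ix * u + Iy * v)
Zt = -(Zx * u + Zy * v + w)
print(fuse_flow(Ix, Iy, It, Zx, Zy, Zt))  # ~ [0.5, -0.2, 0.1]
```

In practice the per-window motions would feed a robust adjustment so that pixels on an independently moving object, which violate the camera-motion model, can be downweighted as outliers.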
During the past decade, small-footprint full-waveform lidar systems, especially airborne ones, have become increasingly available. The primary output of these systems is high-resolution topographic information in the form of three-dimensional point clouds over large areas. Recording the temporal profile of the transmitted laser pulse and of its echoes makes it possible to detect more echoes per pulse than with discrete-return lidar systems, resulting in a higher point density over complex terrain. Furthermore, full-waveform instruments also allow radiometric information about the scanned surfaces to be retrieved, commonly as an amplitude value and an echo width stored together with the 3D coordinates of the individual points. However, the radiometric information needs to be calibrated in order to merge datasets acquired at different altitudes and/or with different instruments, so that it becomes an object property independent of the flight mission and instrument parameters. State-of-the-art radiometric calibration techniques for full-waveform lidar data are based on Gaussian decomposition to overcome the ill-posedness of the inherent inversion problem, i.e. deconvolution. However, these approaches make strong assumptions about the temporal profile of the transmitted laser pulse and the physical properties of the scanned surfaces, represented by the differential backscatter cross-section. In this paper, we present a novel approach for radiometric calibration using uniform B-splines. These functions allow a linear inversion without constraining the temporal shape of the modeled signals. The theoretical derivation is illustrated with examples recorded with a Riegl LMS-Q560 and an Optech ALTM 3100 system, respectively.
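The linearity argument can be illustrated with a small simulation. Assuming a known Gaussian system waveform and a uniform cubic B-spline model for the differential backscatter cross-section (both assumptions of this sketch, not the paper's calibration procedure), the recorded waveform is linear in the spline coefficients and can be inverted by ordinary least squares:

```python
import numpy as np
from scipy.interpolate import BSpline

t = np.linspace(0.0, 40.0, 401)              # time axis (ns), 0.1 ns sampling
S = np.exp(-0.5 * ((t - 5.0) / 1.0) ** 2)    # assumed known system waveform

# Uniform cubic B-spline basis: shifted copies of one basis element.
degree, h, n_coef = 3, 2.0, 18               # knot spacing h in ns
basis = []
for j in range(n_coef):
    kn = (np.arange(degree + 2) + j) * h     # uniform knots of element j
    bj = BSpline.basis_element(kn, extrapolate=False)(t)
    basis.append(np.nan_to_num(bj))          # zero outside the support

# Convolution is linear in the coefficients: one design-matrix column per
# basis function convolved with the system waveform.
A = np.stack([np.convolve(b, S)[:t.size] for b in basis], axis=1)

# Synthetic "true" cross-section (two surface contributions) and the
# resulting recorded waveform with a little noise:
c_true = np.zeros(n_coef)
c_true[6], c_true[10] = 1.0, 0.6
w = A @ c_true + np.random.default_rng(2).normal(0.0, 1e-3, t.size)

# Linear least-squares inversion -- no Gaussian shape assumption needed.
c_hat, *_ = np.linalg.lstsq(A, w, rcond=None)
print(np.round(c_hat[[6, 10]], 2))           # close to [1.0, 0.6]
```

The key point is that the design matrix depends only on the basis and the system waveform, so the inversion stays linear however irregular the cross-section is; a Gaussian decomposition would instead have to fit nonlinear shape parameters.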
Range cameras and terrestrial laser scanners provide 3D geometric information by directly measuring the range
from the sensor to the object. Calibration of the ranging component has not been studied systematically yet,
and this paper provides a first overview. The proposed approaches differ in the object space features used for
calibration, the calibration models themselves, and possibly required environmental conditions. A number of
approaches are reviewed within this framework and discussed. For terrestrial laser scanners, an improvement in
accuracy by a factor of up to two is typical, whereas range camera calibration still lacks a proper model, and large
systematic errors typically remain.
This article concentrates on the integrated self-calibration of both the interior orientation and the distance
measurement system of a time-of-flight range camera (photonic mixer device). Unlike other approaches that
investigate individual distortion factors separately, in the presented approach all calculations are based on the
same data set that is captured without auxiliary devices serving as high-order reference, but with the camera being
guided by hand. Flat, circular targets with known positions, attached to a planar whiteboard, are automatically
tracked throughout the amplitude layer of long image sequences. These image observations are introduced into
a bundle block adjustment, which on the one hand results in the determination of the interior orientation.
Capitalizing on the known planarity of the imaged board, the reconstructed exterior orientations furthermore allow
reference values for the actual distance observations to be derived. Facilitated by the automatic reconstruction
of the camera's trajectory and attitude, comprehensive statistics are generated, which are accumulated into a
5-dimensional matrix in order to be manageable. The marginal distributions of this matrix are inspected for the
purpose of system identification, whereupon its elements are introduced into another least-squares adjustment,
finally leading to clear range correction models and parameters.
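The final least-squares step might look like the following sketch. The specific correction model (offset, linear scale, and one cyclic "wiggling" term, a common ToF error pattern) and the modulation wavelength are assumed for illustration; they are not the models identified in the paper.

```python
import numpy as np

def fit_range_correction(d_obs, d_ref, mod_wavelength=7.5):
    """Linear least-squares fit of the range error d_ref - d_obs with an
    assumed model: offset + linear scale + one cyclic term.
    mod_wavelength: assumed scale of the cyclic error (m), illustrative."""
    phase = 4.0 * np.pi * d_obs / mod_wavelength
    A = np.column_stack([np.ones_like(d_obs), d_obs,
                         np.sin(phase), np.cos(phase)])
    coef, *_ = np.linalg.lstsq(A, d_ref - d_obs, rcond=None)
    return coef

def apply_correction(d_obs, coef, mod_wavelength=7.5):
    """Apply the fitted correction model to raw distance observations."""
    phase = 4.0 * np.pi * d_obs / mod_wavelength
    A = np.column_stack([np.ones_like(d_obs), d_obs,
                         np.sin(phase), np.cos(phase)])
    return d_obs + A @ coef

# Synthetic check: true error = 3 cm offset plus a 2 cm cyclic term.
rng = np.random.default_rng(3)
d_true = rng.uniform(1.0, 6.0, 2000)                 # reference distances (m)
d_obs = d_true - 0.03 - 0.02 * np.sin(4.0 * np.pi * d_true / 7.5)
coef = fit_range_correction(d_obs, d_true)
resid = apply_correction(d_obs, coef) - d_true
print(f"RMS residual after correction: {resid.std() * 100:.2f} cm")
```

In the paper's pipeline, `d_ref` would come from intersecting each pixel's ray with the board plane reconstructed by the bundle block adjustment, and system identification on the accumulated statistics would decide which terms the model actually needs.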
Conference Committee Involvement (7)
Videometrics, Range Imaging, and Applications
26 June 2017 | Munich, Germany
Videometrics, Range Imaging, and Applications XIII
22 June 2015 | Munich, Germany
Videometrics, Range Imaging, and Applications XII
14 May 2013 | Munich, Germany
Videometrics, Range Imaging, and Applications
25 May 2011 | Munich, Germany
3D Imaging Metrology
24 January 2011 | San Francisco Airport, California, United States
Videometrics, Range Imaging, and Applications X
2 August 2009 | San Diego, California, United States
3D Imaging Metrology
19 January 2009 | San Jose, California, United States