In this paper, we examine crosstalk effects that can arise in multi-LiDAR configurations, and we present a data-based approach to mitigate these effects. Owing to their ability to acquire precise 3D data of the environment, LiDAR-based sensor systems (sensors based on “Light Detection and Ranging”, e.g., laser scanners) are increasingly finding their way into various applications, e.g., in the automotive sector. However, with a growing number of LiDAR sensors operating in close vicinity, the problem of potential crosstalk between these devices arises. “Crosstalk” describes the following effect: in a typical LiDAR-based sensor, short laser pulses are emitted into the scene and the distance between sensor and object is derived from the time measured until an “echo” is received. If multiple laser pulses of the same wavelength are emitted at the same time, the detector may not be able to distinguish between correct and false matches of laser pulses and echoes, resulting in erroneous range measurements and 3D points. During operation of our own multi-LiDAR sensor system, we observed crosstalk effects in the acquired data. Having compared different spatial filtering approaches for the elimination of erroneous points in the 3D data, we propose a data-based spatio-temporal filter and show its results, which may be sufficient depending on the application. Nevertheless, technical solutions are desirable for future LiDAR sensors.
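A minimal sketch of such a data-based spatio-temporal filter: a point is kept only if it has enough spatial neighbours in the current scan and a counterpart near the same position in the previous scan, so that isolated, transient points (typical for crosstalk) are discarded. Function name and thresholds are illustrative, not the paper's implementation:

```python
import numpy as np

def spatio_temporal_filter(curr, prev, radius=0.3, min_support=2):
    """Keep points of `curr` (N,3) that have at least `min_support`
    spatial neighbours within `radius` in the current scan AND at least
    one neighbour within `radius` in the previous scan `prev` (M,3).
    Isolated, transient points -- typical for crosstalk -- are removed."""
    keep = []
    for p in curr:
        d_curr = np.linalg.norm(curr - p, axis=1)
        has_spatial = np.sum(d_curr < radius) - 1 >= min_support  # -1: exclude p itself
        has_temporal = np.any(np.linalg.norm(prev - p, axis=1) < radius)
        keep.append(bool(has_spatial and has_temporal))
    return curr[np.array(keep)]
```

A real-time version would use a spatial index (e.g., a k-d tree) instead of the brute-force distance computation shown here.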
The number of reported incidents caused by small UAVs, intentional as well as accidental, is rising. To avoid such incidents in the future, it is essential to be able to detect UAVs. LiDAR sensors (e.g., laser scanners) are well known to be adequate sensors for object detection and tracking.

In this paper, we extend our existing LiDAR-based approach for the detection and tracking of (low) flying small objects such as commercial mini/micro UAVs. We show that UAVs can be detected by the proposed methods, as long as the movements of the UAVs correspond to the LiDAR sensor’s capabilities in scanning performance, range and resolution. The trajectory of the tracked object can further be analyzed to support the classification, meaning that UAVs and non-UAV objects can be distinguished by identifying typical movement patterns. Stable tracking of the UAV is achieved by a precise prediction of its movement. In addition to this precise prediction of the target’s position, the object detection, tracking and classification have to be achieved in real time.

For the algorithm development and a performance analysis, we analyzed LiDAR data that we acquired during a field trial. Several different mini/micro UAVs were observed by a system of four 360° LiDAR sensors mounted on a car. Using this specific sensor system, the results show that UAVs can be detected and tracked by the proposed methods, allowing protection of the car against UAV threats within a radius of up to 35 m.
The number of reported incidents caused by UAVs, intentional as well as accidental, is rising. To avoid such incidents in the future, it is essential to be able to detect UAVs. LiDAR systems are well known to be adequate sensors for object detection and tracking. In contrast to the detection of pedestrians or cars in traffic scenarios, the challenges of UAV detection lie in the small size, the various shapes and materials, and the high speed and volatility of the movement. Due to the small size of the object and the limited sensor resolution, a UAV can hardly be detected in a single frame; it rather has to be spotted by its motion in the scene. In this paper, we present a fast approach for the detection and tracking of (low) flying small objects such as commercial mini/micro UAVs. Unlike the typical track-after-detect sequence, we start by looking for clues, i.e., minor 3D details in the 360° LiDAR scans of the scene. If these clues are detectable in consecutive scans (possibly including a movement), the probability of an actual UAV detection rises. For the algorithm development and a performance analysis, we collected data during a field trial with several different UAV types and several different sensor types (acoustic, radar, EO/IR, LiDAR). The results show that UAVs can be detected by the proposed methods, as long as the movements of the UAVs correspond to the LiDAR sensor’s capabilities in scanning performance, range and resolution. Based on the data collected during the field trial, the paper shows first results of this analysis.
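To illustrate the confirm-by-persistence idea, a toy tracker might keep a hit counter per clue and increase it whenever the clue reappears within a gating distance in the next scan; a clue with enough hits is then treated as a UAV detection. All names, gating values and the greedy matching below are hypothetical:

```python
import numpy as np

def update_clue_confidence(tracks, detections, gate=1.0):
    """tracks: list of dicts {'pos': (3,) array, 'hits': int}.
    detections: (M,3) array of minor 3D details found in the new scan.
    A track matched within `gate` metres gains a hit; unmatched
    detections start new tracks; unmatched tracks are dropped.
    Returns the updated track list."""
    updated, used = [], set()
    for t in tracks:
        if len(detections):
            d = np.linalg.norm(detections - t['pos'], axis=1)
            j = int(np.argmin(d))
            if d[j] < gate and j not in used:
                used.add(j)
                updated.append({'pos': detections[j], 'hits': t['hits'] + 1})
    for j, det in enumerate(detections):
        if j not in used:
            updated.append({'pos': det, 'hits': 1})
    return updated
```

A detection would be declared once a track's hit count exceeds a threshold, e.g. three consecutive scans.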
Fusion of information in heterogeneous multi-modal sensor networks has been proven to enhance the sensing capabilities of ground troops to detect and track small unmanned aerial vehicles flying at low altitude. Nevertheless, the area coverage of a static sensor network can be permanently or temporarily impacted by geographic topologies or moving obstacles, which may reduce the local sensing probabilities. An additional moving sensor platform can be used to temporarily enhance the sensing capabilities. First theoretical analyses and experimental field trials are presented, using a static sensor network consisting of an acoustic antenna array, a stationary FMCW RADAR and a passive/active optical sensor unit. Additionally, a measurement vehicle equipped with passive/active optical sensing devices was deployed. While the sensor network was used to monitor a stationary area with a sensor-dependent sensing coverage, the measurement vehicle was used to obtain additional information outside the sensing range of the network or behind obstacles. A fusion of these data sets can provide increased situational awareness. Limitations and improvements of this approach are discussed.
Today it is easy to generate dense point clouds of the sensor environment using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects such as persons or vehicles are of particular interest, for instance in automatic collision avoidance for mobile sensor platforms or in surveillance tasks.

In the literature, there are several approaches for automated person detection in point clouds. While most techniques show acceptable results in object detection, the computation time is often critical. The runtime can be problematic, especially due to the amount of data in the panoramic 360° point clouds. On the other hand, most applications need object detection and classification in real time.

The paper presents a proposal for a fast, real-time-capable algorithm for person detection, classification and tracking in panoramic point clouds.
The detection of objects or persons is a common task in the fields of environment surveillance, object observation and threat defense. There are several approaches for automated detection with conventional imaging sensors as well as with LiDAR sensors, but for the latter, real-time detection is hampered by the scanning character, and therefore by the data distortion, of most LiDAR systems.
The paper presents a solution for real-time data acquisition with a flash LiDAR sensor, with synchronous raw-data analysis, point cloud calculation, object detection, calculation of the next best view, and steering of the sensor’s pan-tilt head. As a result, the attention is always focused on the object, independent of its behavior. Even for highly volatile and rapid changes in the direction of motion, the object is kept in the field of view.
The experimental setup used in this paper is realized with an elementary person detection algorithm at medium distances (20 m to 60 m) to show the efficiency of the system for objects with a high angular speed. The detection part can easily be replaced by any other object detection algorithm, making it easy to track nearly any object, for example a car, a boat or a UAV at various distances.
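As a sketch of the steering step, the pan and tilt commands that centre the field of view on a predicted target position follow from simple trigonometry; a constant-velocity prediction is one common choice. The frame convention and function names below are illustrative, not the paper's implementation:

```python
import math

def predict_position(pos, vel, dt):
    """Constant-velocity prediction of the target position after dt seconds."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def pan_tilt_from_position(x, y, z):
    """Pan/tilt angles (degrees) that centre the field of view on a target
    at (x, y, z) in the sensor frame (x forward, y left, z up)."""
    pan = math.degrees(math.atan2(y, x))
    tilt = math.degrees(math.atan2(z, math.hypot(x, y)))
    return pan, tilt
```

In a closed loop, the predicted position for the next frame, rather than the last measured one, would be sent to the pan-tilt head, so that fast objects stay centred despite actuation latency.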
The detection and classification of small surface targets at long ranges is a growing need for naval security. This paper presents an overview of a measurement campaign which took place in the Baltic Sea in November 2014. The purpose was to test active and passive EO sensors (10 different types) for the detection, tracking and identification of small sea targets. The passive sensors covered the visual, SWIR, MWIR and LWIR regions. Active sensors operating at 1.5 μm collected data in 1D, 2D and 3D modes. Supplementary sensors included a weather station and a scintillometer, as well as sensors for positioning and attitude determination of the boats.

Three boats in the 4-9 m class were used as targets. After registration of the boats at close range, they were sent out to distances of 5-7 km from the sensor site. At the different ranges, the target boats were directed to have different aspect angles relative to the direction of observation.

Staff from Fraunhofer IOSB in Germany and from Selex (through DSTL) in the UK took part in the tests, alongside FOI, who arranged the trials. A summary of the trial and examples of data and imagery are presented.
The growing interest in unmanned surface vehicles, accident avoidance for naval vessels and automated maritime surveillance leads to a growing need for automatic detection, classification and pose estimation of maritime objects at medium and long ranges. Laser radar imagery is a well-proven tool for near to medium range, but up to now, at longer distances, neither the sensor range nor the sensor resolution has been satisfactory. Given these limitations of laser radar imagery, the potential of laser-illuminated gated viewing for automated classification and pose estimation was investigated. The paper presents new techniques for segmentation, pose estimation and model-based identification of naval vessels in gated viewing imagery, in comparison with the corresponding results for long-range data acquired with a focal plane array laser radar system. The pose estimation in the gated viewing data is directly connected with the model-based identification, which makes use of the outline of the object. By setting a sufficiently narrow gate, the distance gap between the upper part of the ship and the background leads to an automatic segmentation. Moreover, setting the gate means the distance to the object is roughly known. With this distance and the imaging properties of the camera, the width of the object perpendicular to the line of sight can be calculated. For each ship in the model library, a set of possible 2D appearances at the known distance is calculated, and the resulting contours are compared with the measured 2D outline. The result is a match error for each reasonable orientation of each model in the library. The result gained from the gated viewing data is compared with the results of target identification by laser radar imagery of the same maritime objects.
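The width computation described above reduces to small-angle geometry: with the gate distance roughly known, the object's angular extent in pixels times the instantaneous field of view per pixel gives its width. A minimal sketch with illustrative parameter names (pixel pitch and focal length define the IFOV):

```python
def object_width(range_m, n_pixels, pixel_pitch_um, focal_length_mm):
    """Width (m) of an object perpendicular to the line of sight, from the
    roughly known gate distance and the camera's imaging geometry.
    IFOV = pixel pitch / focal length (small-angle approximation)."""
    ifov = (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)  # rad per pixel
    return range_m * n_pixels * ifov
```

For example, an object spanning 100 pixels at a 5 km gate, seen through a 500 mm lens with 10 μm pixels, would be about 10 m wide.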
Automatic change detection in 3D environments requires the comparison of multi-temporal data. By comparing current
data with past data of the same area, changes can be automatically detected and identified. Volumetric changes in the scene
hint at suspicious activities like the movement of military vehicles, the application of camouflage nets, or the placement
of IEDs, etc. In contrast to broad research activities in remote sensing with optical cameras, this paper addresses the topic
using 3D data acquired by mobile laser scanning (MLS). We present a framework for immediate comparison of current
MLS data to given 3D reference data. Our method extends the concept of occupancy grids known from robot mapping,
which incorporates the sensor positions in the processing of the 3D point clouds. This allows extracting the information
that is included in the data acquisition geometry. Each single range measurement implies that an object reflects laser pulses at the measured distance, i.e., space is occupied at that 3D position, and that space is empty along the line of sight between the sensor and the reflecting object. Everywhere else, the occupancy of
space remains unknown. This approach handles occlusions and changes implicitly, such that the latter are identifiable by
conflicts of empty space and occupied space. The presented concept of change detection has been successfully validated
in experiments with recorded MLS data streams. Results are shown for test sites at which MLS data were acquired at
different time intervals.
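The occupancy bookkeeping can be sketched as follows: each ray marks traversed voxels as empty and the end voxel as occupied, voxels never touched stay unknown, and a change is a conflict between the states of two epochs. This is a heavily simplified illustration (hypothetical voxel size, uniform ray sampling instead of a proper grid traversal), not the paper's framework:

```python
import numpy as np

EMPTY, OCCUPIED = 0, 1  # absent voxels are implicitly "unknown"

def ray_voxels(sensor, hit, voxel=0.5):
    """Voxels traversed by the line of sight from sensor to hit, plus the
    end voxel (uniform sampling; a DDA traversal would be used in practice)."""
    dist = np.linalg.norm(hit - sensor)
    steps = max(int(dist / (voxel * 0.5)), 1)
    ts = np.linspace(0.0, 1.0, steps + 1)
    pts = sensor + ts[:, None] * (hit - sensor)
    free = {tuple(np.floor(p / voxel).astype(int)) for p in pts[:-1]}
    end = tuple(np.floor(hit / voxel).astype(int))
    return free, end

def update_grid(grid, sensor, hits, voxel=0.5):
    """Mark traversed voxels as empty and hit voxels as occupied."""
    for hit in hits:
        free, end = ray_voxels(sensor, hit, voxel)
        for v in free - {end}:
            if grid.get(v) != OCCUPIED:
                grid[v] = EMPTY
        grid[end] = OCCUPIED
    return grid

def changes(ref, cur):
    """Voxels whose occupancy state conflicts between the two epochs."""
    return {v for v in set(ref) & set(cur) if ref[v] != cur[v]}
```

A new object placed in front of a previously observed wall produces exactly such a conflict: its voxel was traversed as empty in the reference epoch but is occupied in the current one.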
Today, the civil market provides quite a number of different 3D sensors covering ranges of up to 1 km. Typically, these sensors are based on single-element detectors, which suffer from limited spatial resolution at larger distances. Tasks demanding reliable object classification at long ranges can only be fulfilled by sensors consisting of detector arrays, which ensure sufficient frame rates and high spatial resolution. Worldwide, there are many efforts in developing 3D detectors based on two-dimensional arrays.
This paper presents first results on the performance of a recently developed 3D imaging laser radar sensor working in the short-wave infrared (SWIR) at 1.5 μm. It consists of a novel Cadmium Mercury Telluride (CMT) linear APD detector array with 384x1 elements at a pitch of 25 μm, developed by AIM Infrarot Module GmbH. The APD elements are designed to work in the linear (non-Geiger) mode. Each pixel provides a time-of-flight measurement and, due to the linear detection mode, allows the detection of up to three successive echoes. The resolution in depth is 15 cm, and the maximum repetition rate is 4 kHz. We discuss various sensor concepts regarding possible applications and their dependence on system parameters such as field of view, frame rate, spatial resolution and range of operation.
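For orientation, the stated 15 cm depth resolution corresponds to a round-trip timing resolution of about 1 ns, since range follows from the time of flight as r = c·t/2. A trivial conversion for the up-to-three echoes per pixel (illustrative only, not the sensor's actual processing chain):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def echo_ranges(times_ns):
    """Ranges (m) for up to three successive echoes, given their
    round-trip times of flight in nanoseconds: r = c * t / 2."""
    return [C * t * 1e-9 / 2.0 for t in times_ns[:3]]
```

A 1 ns echo separation thus maps to roughly 0.15 m in range, matching the quoted depth resolution.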
The paper presents new techniques for automatic segmentation, classification, and generic pose estimation of ships and
boats in laser radar imagery. Segmentation, which primarily involves elimination of water reflections, is based on
modeling surface waves and comparing the expected water reflection signature to the ladar intensity image. Shape
classification matches a parametric shape representation of a generic ship hull with parameters extracted from the range
image. The extracted parameter vector defines an instance of a geometric 3D model which can be registered with the
range image for precise pose estimation. Results show that reliable automatic acquisition, aim point selection and real-time tracking of maritime targets are feasible even for erratic sensor and target motions, temporary occlusions, and evasive maneuvers.
The paper presents new techniques and processing results for automatic segmentation, shape classification, generic pose
estimation, and model-based identification of naval vessels in laser radar imagery. The special characteristics of focal
plane array laser radar systems such as multiple reflections and intensity-dependent range measurements are incorporated
into the algorithms. The proposed 3D model matching technique is probabilistic, based on the range error distribution,
correspondence errors, the detection probability of potentially visible model points and false alarm errors. The match
algorithm is robust against incomplete and inaccurate models, each model having been generated semi-automatically
from a single range image. A classification accuracy of about 96% was attained, using a maritime database with over
8000 flash laser radar images of 146 ships at various ranges and orientations together with a model library of 46 vessels.
Applications include military maritime reconnaissance, coastal surveillance, harbor security and anti-piracy operations.
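In spirit, a probabilistic match score of the kind described combines a Gaussian range-error likelihood for matched points with penalties for potentially visible model points that were not detected and for scene points the model does not explain. The following toy scorer only illustrates that structure; all parameter values are made up and are not those used in the paper:

```python
import math

def match_log_likelihood(residuals_m, n_missed, n_false,
                         sigma=0.15, p_detect=0.9, p_false=0.05):
    """Toy log-likelihood for a model-to-range-image match.
    residuals_m: range errors (m) of matched model points.
    n_missed: potentially visible model points without a detection.
    n_false: scene points not explained by the model."""
    ll = 0.0
    for r in residuals_m:
        # Gaussian range-error term for a matched, detected model point.
        ll += -0.5 * (r / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
        ll += math.log(p_detect)
    ll += n_missed * math.log(1.0 - p_detect)  # missed-detection penalty
    ll += n_false * math.log(p_false)          # false-alarm penalty
    return ll
```

Identification would then select the model and pose with the highest score; a well-fitting model with small residuals and few misses beats a poorly fitting one.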