Today, dense point clouds of a sensor's environment can easily be generated with 360° LiDAR (Light Detection and Ranging) sensors, which have been available for several years. Interpreting these data is far more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects such as persons or vehicles are of particular interest, for instance for automatic collision avoidance, mobile sensor platforms, or surveillance tasks.
The literature offers several approaches to automated person detection in point clouds. While most techniques achieve acceptable detection results, computation time is often the limiting factor, particularly given the amount of data in panoramic 360° point clouds. Yet most applications require object detection and classification in real time.
This paper proposes a fast, real-time-capable algorithm for person detection, classification, and tracking in panoramic point clouds.
The detection of objects, or of persons in particular, is a common task in environment surveillance, object observation, and threat defense. Several approaches exist for automated detection with conventional imaging sensors as well as with LiDAR sensors, but for the latter, real-time detection is hampered by the scanning character, and thus the data distortion, of most LiDAR systems.
The paper presents a solution for real-time data acquisition with a flash LiDAR sensor, with synchronous raw data analysis, point cloud calculation, object detection, calculation of the next best view, and steering of the sensor's pan-tilt head. As a result, the sensor's attention remains focused on the object, regardless of the object's behavior. Even during highly volatile and rapid changes in the direction of motion, the object is kept in the field of view.
The experimental setup in this paper uses an elementary person detection algorithm at medium distances (20 m to 60 m) to demonstrate the efficiency of the system for objects with a high angular speed. The detection stage can easily be replaced by any other object detection algorithm, so that nearly any object, for example a car, a boat, or a UAV, can be tracked at various distances.
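The steering of the pan-tilt head described above reduces, at its core, to converting a detected object's 3D position in the sensor frame into pan and tilt angles. A minimal sketch of this conversion (the coordinate convention and function name are illustrative assumptions, not taken from the paper):

```python
import math

def pan_tilt_to_target(x, y, z):
    """Pan/tilt angles (degrees) that center a target located at
    sensor-frame coordinates (x, y, z): x forward, y left, z up.
    Illustrative convention; real systems differ."""
    pan = math.degrees(math.atan2(y, x))                   # rotation about the vertical axis
    tilt = math.degrees(math.atan2(z, math.hypot(x, y)))   # elevation above the horizon
    return pan, tilt

# A target 50 m ahead, 10 m to the left, 2 m above the sensor:
pan, tilt = pan_tilt_to_target(50.0, 10.0, 2.0)
```

In a closed loop, these angles would be recomputed at the sensor frame rate and sent to the pan-tilt head, which is what keeps a rapidly maneuvering object in the field of view.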
The detection and classification of small surface targets at long ranges is a growing need in naval security. This paper presents an overview of a measurement campaign that took place in the Baltic Sea in November 2014. The purpose was to test active and passive EO sensors (10 different types) for the detection, tracking, and identification of small sea targets. The passive sensors covered the visual, SWIR, MWIR, and LWIR regions. Active sensors operating at 1.5 μm collected data in 1D, 2D, and 3D modes. Supplementary sensors included a weather station, a scintillometer, and sensors for positioning and attitude determination of the boats.
Three boats in the 4-9 m class were used as targets. After registration at close range, the boats were sent out to distances of 5-7 km from the sensor site. At the different ranges, the target boats were directed so as to present different aspect angles relative to the direction of observation.
Staff from Fraunhofer IOSB in Germany and from Selex (through DSTL) in the UK took part in the tests, alongside FOI, which arranged the trials. A summary of the trial and examples of data and imagery are presented.
The growing interest in unmanned surface vehicles, accident avoidance for naval vessels, and automated maritime surveillance leads to a growing need for automatic detection, classification, and pose estimation of maritime objects at medium and long ranges. Laser radar imagery is a well-proven tool at near to medium range, but until now, neither the sensor range nor the sensor resolution has been satisfactory at longer distances. Given these limitations of laser radar imagery, the potential of laser-illuminated gated viewing for automated classification and pose estimation was investigated.

The paper presents new techniques for segmentation, pose estimation, and model-based identification of naval vessels in gated viewing imagery, compared against the corresponding results for long-range data acquired with a focal plane array laser radar system. Pose estimation in the gated viewing data is directly connected with model-based identification, which makes use of the outline of the object. By setting a sufficiently narrow gate, the distance gap between the upper part of the ship and the background leads to an automatic segmentation. Setting the gate also yields an approximate distance to the object. With this distance and the imaging properties of the camera, the width of the object perpendicular to the line of sight can be calculated. For each ship in the model library, a set of possible 2D appearances at the known distance is calculated, and the resulting contours are compared with the measured 2D outline. The result is a match error for each reasonable orientation of each model in the library. The result obtained from the gated viewing data is compared with the results of target identification using laser radar imagery of the same maritime objects.
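The width calculation mentioned above follows directly from the gate distance and the camera's angular resolution: under a small-angle approximation, the metric width is the object's extent in pixels times the per-pixel instantaneous field of view times the range. A minimal sketch (the function name and the numbers in the example are illustrative assumptions, not values from the paper):

```python
def object_width(pixel_extent, distance_m, ifov_mrad):
    """Width (m) of an object perpendicular to the line of sight, from its
    extent in pixels, the gate distance, and the camera's instantaneous
    field of view per pixel (small-angle approximation)."""
    return pixel_extent * ifov_mrad * 1e-3 * distance_m

# A hypothetical vessel spanning 120 pixels at a 5 km gate, 0.05 mrad IFOV:
w = object_width(120, 5000.0, 0.05)   # 30 m
```

This measured width is what constrains which 2D model appearances are plausible before the contour comparison is carried out.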
Automatic change detection in 3D environments requires the comparison of multi-temporal data. By comparing current
data with past data of the same area, changes can be automatically detected and identified. Volumetric changes in the scene
hint at suspicious activities like the movement of military vehicles, the application of camouflage nets, or the placement
of IEDs, etc. In contrast to broad research activities in remote sensing with optical cameras, this paper addresses the topic
using 3D data acquired by mobile laser scanning (MLS). We present a framework for immediate comparison of current
MLS data to given 3D reference data. Our method extends the concept of occupancy grids known from robot mapping,
which incorporates the sensor positions in the processing of the 3D point clouds. This allows exploiting the information contained in the data acquisition geometry. Each range measurement shows that an object reflected the laser pulse at the measured distance, i.e., space is occupied at that 3D position, and that space is empty along the line of sight between the sensor and the reflecting object. Everywhere else, the occupancy of space remains unknown. This approach handles occlusions and changes implicitly, such that changes become identifiable as conflicts between empty and occupied space. The presented concept of change detection has been successfully validated
in experiments with recorded MLS data streams. Results are shown for test sites at which MLS data were acquired at
different time intervals.
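The occupancy-grid reasoning described above can be sketched compactly: each measurement marks the voxel at the measured range as occupied, the voxels along the line of sight as empty, and leaves everything else unknown; changes then show up as empty/occupied conflicts between a reference grid and a current grid. The grid layout, voxel size, and ray stepping below are illustrative simplifications, not the authors' implementation:

```python
import numpy as np

UNKNOWN, EMPTY, OCCUPIED = 0, 1, 2

def build_grid(sensor, points, shape, voxel=0.5):
    """Occupancy grid from one scan: voxels along each line of sight are
    EMPTY, the voxel at the measured range is OCCUPIED, the rest UNKNOWN.
    Assumes all coordinates fall inside the positive grid volume."""
    grid = np.full(shape, UNKNOWN, dtype=np.uint8)
    for p in points:
        ray = p - sensor
        n = int(np.linalg.norm(ray) / voxel)
        for t in np.linspace(0.0, 1.0, max(n, 2), endpoint=False):
            idx = tuple(((sensor + t * ray) / voxel).astype(int))
            grid[idx] = EMPTY                       # free space along the ray
        grid[tuple((p / voxel).astype(int))] = OCCUPIED  # reflecting surface
    return grid

def changes(reference, current):
    """Conflicts of empty vs. occupied space indicate changes."""
    appeared = (reference == EMPTY) & (current == OCCUPIED)
    vanished = (reference == OCCUPIED) & (current == EMPTY)
    return appeared, vanished
```

Because unknown voxels never produce a conflict, occlusions are handled implicitly, which is the key property the abstract describes.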
Today, the civil market offers quite a number of 3D sensors covering ranges up to 1 km. Typically, these sensors are based on single-element detectors, which suffer from limited spatial resolution at larger distances. Tasks demanding reliable object classification at long ranges can only be fulfilled by sensors built on detector arrays, which ensure sufficient frame rates and high spatial resolution. Worldwide, there are many efforts to develop 3D detectors based on two-dimensional arrays.
This paper presents first results on the performance of a recently developed 3D imaging laser radar sensor, working in
the short wave infrared (SWIR) at 1.5 μm. It consists of a novel Cadmium Mercury Telluride (CMT) linear array APD
detector with 384x1 elements at a pitch of 25 μm, developed by AIM Infrarot Module GmbH. The APD elements are
designed to work in the linear (non-Geiger) mode. Each pixel provides a time-of-flight measurement and, due to the linear detection mode, allows the detection of three successive echoes. The depth resolution is 15 cm; the maximum repetition rate is 4 kHz. We discuss various sensor concepts regarding possible applications and their
dependence on system parameters like field of view, frame rate, spatial resolution and range of operation.
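The time-of-flight principle behind such a sensor is simple to state: the one-way range is half the round-trip time multiplied by the speed of light, so the quoted 15 cm depth resolution corresponds to a timing resolution of roughly 1 ns. A minimal sketch of this arithmetic (function name is illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(t_seconds):
    """One-way range from a round-trip time of flight."""
    return C * t_seconds / 2.0

# A 15 cm depth resolution implies ~1 ns timing resolution:
dt = 2 * 0.15 / C            # ≈ 1.0e-9 s
# A 1 µs round trip corresponds to ~150 m range:
r = range_from_tof(1e-6)
```

The same relation is why detecting three successive echoes per pixel requires the receiver electronics to resolve closely spaced return pulses in time.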
The paper presents new techniques for automatic segmentation, classification, and generic pose estimation of ships and
boats in laser radar imagery. Segmentation, which primarily involves elimination of water reflections, is based on
modeling surface waves and comparing the expected water reflection signature to the ladar intensity image. Shape
classification matches a parametric shape representation of a generic ship hull with parameters extracted from the range
image. The extracted parameter vector defines an instance of a geometric 3D model which can be registered with the
range image for precision pose estimation. Results show that reliable automatic acquisition, aim point selection, and real-time tracking of maritime targets are feasible even under erratic sensor and target motions, temporary occlusions, and evasive maneuvers.
The paper presents new techniques and processing results for automatic segmentation, shape classification, generic pose
estimation, and model-based identification of naval vessels in laser radar imagery. The special characteristics of focal
plane array laser radar systems such as multiple reflections and intensity-dependent range measurements are incorporated
into the algorithms. The proposed 3D model matching technique is probabilistic, based on the range error distribution,
correspondence errors, the detection probability of potentially visible model points and false alarm errors. The match
algorithm is robust against incomplete and inaccurate models, each model having been generated semi-automatically
from a single range image. A classification accuracy of about 96% was attained, using a maritime database with over
8000 flash laser radar images of 146 ships at various ranges and orientations together with a model library of 46 vessels.
Applications include military maritime reconnaissance, coastal surveillance, harbor security and anti-piracy operations.
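The probabilistic matching described above combines several likelihood terms: Gaussian range errors for matched points, a penalty for potentially visible model points that went undetected, and a term for unexplained (false-alarm) returns. The following is a hedged sketch of how such terms might be combined into a log-likelihood score; it is an illustrative formulation, not the authors' actual model:

```python
import math

def match_log_likelihood(range_residuals, sigma, n_missed, p_detect,
                         n_false, p_false):
    """Illustrative match score: Gaussian range errors for matched points,
    a penalty for visible-but-undetected model points, and a false-alarm
    term. All parameters are assumptions for the sketch."""
    ll = 0.0
    for r in range_residuals:  # range residuals of matched model/data points
        ll += -0.5 * (r / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
        ll += math.log(p_detect)
    ll += n_missed * math.log(1.0 - p_detect)  # undetected visible points
    ll += n_false * math.log(p_false)          # unexplained measurements
    return ll

# A model with small residuals should outscore one with large residuals:
good = match_log_likelihood([0.1, -0.05, 0.2], 0.15, 1, 0.9, 0, 0.01)
bad = match_log_likelihood([0.8, -0.9, 1.1], 0.15, 1, 0.9, 0, 0.01)
```

Scoring every library model this way and taking the maximum is one plausible route to the kind of classification accuracy the abstract reports; the robustness to incomplete models comes from the explicit detection-probability and false-alarm terms rather than from requiring every model point to match.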