Time-correlated single-photon-counting (TCSPC) lidar provides very high resolution range measurements. This makes the technology interesting for three-dimensional imaging of complex scenes with targets behind foliage or other obscurations. TCSPC is a statistical method that demands integration of multiple measurements toward the same area to resolve objects at different distances within the instantaneous field-of-view. Point-by-point scanning will demand significant overhead for the movement, increasing the measurement time. Here, the effect of continuously scanning the scene row-by-row is investigated and signal processing methods to transform this into low-noise point clouds are described. The methods are illustrated using measurements of a characterization target and an oak and hazel copse. Steps between different surfaces of less than 5 cm in range are resolved as two surfaces.
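The histogram-and-peak-extraction idea behind TCSPC ranging can be sketched as follows. This is a minimal illustration with assumed numbers (50 ps timing jitter, 25 ps bins, 5000 photons per surface), not the authors' actual processing chain:

```python
import numpy as np

# Two surfaces 5 cm apart give photon arrival peaks ~333 ps apart in
# round-trip time; accumulating many events into a histogram resolves them.
c = 3e8                       # speed of light (m/s)
sep = 0.05                    # assumed 5 cm step between surfaces
dt = 2 * sep / c              # round-trip delay difference

rng = np.random.default_rng(0)
jitter = 50e-12               # assumed 50 ps system timing jitter (1 sigma)
arrivals = np.concatenate([rng.normal(0.0, jitter, 5000),
                           rng.normal(dt, jitter, 5000)])

bin_w = 25e-12                # 25 ps histogram bins
edges = np.arange(arrivals.min(), arrivals.max() + bin_w, bin_w)
hist, edges = np.histogram(arrivals, bins=edges)

# Threshold, then split the above-threshold bins into contiguous clusters;
# the maximum of each cluster gives one resolved surface.
mask = hist > 0.2 * hist.max()
idx = np.flatnonzero(mask)
clusters = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
centers = [edges[cl[np.argmax(hist[cl])]] + 0.5 * bin_w for cl in clusters]
ranges_m = [0.5 * c * t for t in centers]
print("resolved surfaces:", len(ranges_m))
```

With these assumed numbers the two peaks are well separated in the histogram even though a single return could not distinguish them.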
The purpose of this study is to present and evaluate the benefits and capabilities of high resolution 3D data from unmanned aircraft, especially in conditions where existing methods (passive imaging, 3D photogrammetry) have limited capability. Examples of applications are detection of obscured objects under vegetation, change detection, detection in dark or shadowed environments, and immediate geometric documentation of an area of interest. Applications are exemplified with experimental data from our small UAV test platform 3DUAV with an integrated rotating laser scanner, and with ground truth data collected with a terrestrial laser scanner. We process lidar data combined with inertial navigation system (INS) data for generation of a highly accurate point cloud. The combination of INS and lidar data is achieved in a dynamic calibration process that compensates for the navigation errors from the low-cost and light-weight MEMS-based (microelectromechanical systems) INS. This system allows for studies of the whole data collection-processing-application chain and also serves as a platform for further development. We evaluate the applications in relation to system aspects such as survey time, resolution and target detection capabilities. Our results indicate that several target detection/classification scenarios are feasible within reasonable survey times, from a few minutes (cars, persons and larger objects) to about 30 minutes for detection and possibly recognition of smaller targets.
A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution
and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor
on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and
recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over
the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing
parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the
accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height
accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E
lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point
cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with
lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the
navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch,
roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS-based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
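The "local surface smoothness on planar surfaces" metric can be sketched as the RMS of out-of-plane residuals after a least-squares plane fit. The code below is a hypothetical illustration with a simulated wall patch and an assumed 1 cm noise level, not the paper's exact evaluation:

```python
import numpy as np

# Fit a plane to points sampled from a nominally planar surface and
# report the RMS of the out-of-plane residuals as a smoothness measure.
def plane_rms(points):
    """RMS distance of Nx3 points to their least-squares plane."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residuals = centered @ normal
    return np.sqrt(np.mean(residuals ** 2))

rng = np.random.default_rng(1)
# Simulated wall patch: 2 m x 2 m plane with 1 cm Gaussian sensor noise.
xy = rng.uniform(-1.0, 1.0, (2000, 2))
z = rng.normal(0.0, 0.01, 2000)          # assumed 1 cm noise level
pts = np.column_stack([xy, z])
print(f"smoothness (RMS, m): {plane_rms(pts):.4f}")
```

A lower RMS on known-planar surfaces indicates better-compensated navigation errors.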
The detection and classification of small surface targets at long ranges is a growing need for naval security. This paper
will present an overview of a measurement campaign which took place in the Baltic Sea in November 2014. The purpose
was to test active and passive EO sensors (10 different types) for the detection, tracking and identification of small sea
targets. The passive sensors were covering the visual, SWIR, MWIR and LWIR regions. Active sensors operating at 1.5
μm collected data in 1D, 2D and 3D modes. Supplementary sensors included a weather station, a scintillometer, as well
as sensors for positioning and attitude determination of the boats.
Three boats in the 4-9 meter class were used as targets. After registration at close range, the boats were sent out to distances of 5-7 km from the sensor site. At the different ranges the target boats were directed to present different aspect angles relative to the direction of observation.
Staff from Fraunhofer IOSB in Germany and from Selex (through DSTL) in the UK took part in the tests, besides FOI, who arranged the trials. A summary of the trial and examples of data and imagery will be presented.
This paper summarizes on-going work on 3D sensing and imaging with laser sensors carried by unmanned aerial vehicles (UAVs). We study sensor concepts, UAVs suitable for carrying the sensors, and signal processing for mapping and target detection applications. We also perform user studies together with the Swedish armed forces to evaluate usage in their mission cycle, and conduct interviews to clarify how to present data.
Two ladar sensor concepts for mounting on UAVs are studied. The discussion is based on known performance of commercial ladar systems today and predicted performance in future UAV applications. The small UAV is equipped with a short-range scanning ladar. This system is aimed at quick situational analysis of small areas and at documentation of a situation. The large UAV is equipped with a high-performing photon counting ladar with a matrix detector. Its purpose is to support large-area surveillance, intelligence and mapping operations. Based on these sensors and their performance, signal and image processing support for data analysis is analyzed. Generated data amounts are estimated, and demands on data storage capacity and data transfer are analyzed.
We have tested the usage of 3D mapping together with military rangers, using 3D mapping both in the planning phase and as a last-minute intelligence update of the target. Feedback from these tests will be presented. We are performing interviews with various military professions to get a better understanding of how 3D data are used and interpreted. We discuss approaches for how to present data from a 3D imaging sensor to a user.
Some results from a low light trial in Porton Down, UK, are described. The purpose was to compare imaging performance for active and passive sensors in the visible, NIR, SWIR, MWIR and LWIR bands concerning detection and identification of humans carrying certain handheld objects and performing associated activities. This paper will concentrate on results from active and passive NIR and SWIR only. Both NIR and SWIR sensors provided passive imagery down to illumination levels of 1-10 lux, corresponding to conditions from overcast sunset to moonlight. The active mode gave usable imagery out to 2-3 km at much lower light levels. NIR and SWIR sensor images are compared concerning target-to-background contrast, cloth recognition and the detection of humans, activities and handheld objects. The target-to-background contrast was often somewhat better in the SWIR than in the NIR wavelength region. The contrast between different types of clothing was in general more discriminative in the NIR than in the SWIR. This was especially true for the active sensing modes. The recognition of large weapons could be done out to the 600-1000 m range and handguns out to the 300-600 m range. We found that activities could be detected and recognized out to at least 1400 m, but this depends on the contrast between the person and the background.
Small UAVs (Unmanned Aerial Vehicles) are currently in an explosive technical development phase. The performance of UAV-system components such as inertial navigation sensors, propulsion, control processors and algorithms is gradually improving. Simultaneously, lidar technologies are continuously developing in terms of reliability, accuracy, as
well as speed of data collection, storage and processing. The lidar development towards miniature systems with high data
rates has, together with recent UAV development, a great potential for new three dimensional (3D) mapping capabilities.
Compared to lidar mapping from manned full-size aircraft, a small unmanned aircraft can be cost-efficient over small
areas and more flexible for deployment. An advantage with high resolution lidar compared to 3D mapping from passive
(multi angle) photogrammetry is the ability to penetrate through vegetation and detect partially obscured targets. Another
advantage is the ability to obtain 3D data over the whole survey area, without the limited performance of passive
photogrammetry in low contrast areas. The purpose of our work is to demonstrate 3D lidar mapping capability from a
small multirotor UAV. We present the first experimental results and the mechanical and electrical integration of the
Velodyne HDL-32E lidar on a six-rotor aircraft with a total weight of 7 kg. The rotating lidar is mounted at an angle of
20 degrees from the horizontal plane, giving a vertical field-of-view of 10-50 degrees below the horizon in the aircraft's forward direction. For absolute positioning of the 3D data, accurate positioning and orientation of the lidar sensor is of
high importance. We evaluate the lidar data position accuracy both based on inertial navigation system (INS) data, and
on INS data combined with lidar data. The INS sensors consist of accelerometers, gyroscopes, GPS, magnetometers, and
a pressure sensor for altimetry. The lidar range resolution and accuracy is documented as well as the capability for target
surface reflectivity estimation based on measurements on calibration standards. Initial results of the general mapping
capability, including detection through partly obscured environments, are demonstrated through field data collections.
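The georeferencing step that turns sensor-frame lidar returns into absolutely positioned 3D data can be sketched as a rotation by the INS attitude followed by a translation by the GPS/INS position. This is a hedged illustration with invented pose values, not the authors' actual processing chain:

```python
import numpy as np

# A lidar return in the sensor frame is rotated by the platform attitude
# (yaw-pitch-roll) and translated by the navigation position.
def rot_zyx(yaw, pitch, roll):
    """Rotation matrix from yaw (z), pitch (y), roll (x), radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# Illustrative pose: platform at (100, 200, 50) m, level, heading 90 deg.
position = np.array([100.0, 200.0, 50.0])
R = rot_zyx(np.pi / 2, 0.0, 0.0)
p_sensor = np.array([30.0, 0.0, 0.0])    # return 30 m ahead of the sensor
p_world = position + R @ p_sensor
print("georeferenced point:", p_world)
```

Errors in any of the six pose parameters propagate directly into the point cloud, which is why the dynamic calibration of the MEMS INS matters.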
An improvised explosive device (IED) is a bomb constructed and deployed in a non-standard manner. Improvised means that the bomb maker used whatever he could get his hands on, making it very hard to predict and detect. Nevertheless, the manner in which IEDs are deployed and used, for example as roadside bombs, follows certain patterns. One possible approach for early warning is to record the surroundings when it is safe and use this as reference data for change detection. In this paper a LADAR-based system for IED detection is presented. The idea is to measure the area in front of the vehicle while driving and compare it to the previously recorded reference data. By detecting new, missing or changed objects the system can make the driver aware of probable threats.
There exist several tools and methods for camera resectioning, i.e. geometric calibration for the purpose of estimating intrinsic and extrinsic parameters. The intrinsic parameters represent the internal properties of the camera such as focal length, principal point and distortion coefficients. The extrinsic parameters relate the camera's position to the world, i.e. how the camera is positioned and oriented in the world. With both sets of parameters known it is possible to relate a pixel in one camera to the world or to another camera. This is important in many applications, for example in stereo vision. The existing methods work well for standard visual cameras in most situations. Intrinsic parameters are usually estimated by imaging a well-defined pattern from different angles and distances. Checkerboard patterns are very often used for calibration since they form a well-defined pattern with easily detectable features. The intersections between the black and white squares form high-contrast points which can be estimated with sub-pixel accuracy. Knowing the precise dimension and structure of the pattern enables calculation of the intrinsic parameters. Extrinsic calibration can be performed in a similar manner if the exact position and orientation of the pattern is known. A common method is to distribute markers in the scene and to measure their exact locations. The key to good calibration is well-defined points and accurate measurements. Thermal cameras are a subset of infrared cameras that work with long wavelengths, usually between 9 and 14 microns. At these wavelengths all objects above absolute zero temperature emit radiation, making thermal cameras ideal for passive imaging in complete darkness and widely used in military applications. The issue that arises when trying to perform a geometric calibration of a thermal camera is that the checkerboard emits more or less the same amount of radiation in the black squares as in the white.
In other words, the calibration board that is optimal for calibration of visual cameras might be completely useless for thermal cameras. A calibration board for thermal cameras should ideally be a checkerboard with high contrast at thermal wavelengths. (It is of course possible to use other sorts of objects or patterns, but since most tools and software expect a checkerboard pattern this is by far the most straightforward solution.) Depending on the application it should also be more or less portable and work in both indoor and outdoor scenarios. In this paper we present several years of experience with calibration of thermal cameras in various scenarios. Checkerboards with high contrast both for indoor and outdoor scenarios are presented, as well as different markers suitable for extrinsic calibration.
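What the intrinsic parameters recovered by such a calibration actually do can be shown with a minimal pinhole projection of checkerboard corners into pixels. All values below (focal length, principal point, board geometry) are illustrative, not from any real thermal camera:

```python
import numpy as np

# Minimal pinhole-camera sketch: the intrinsic matrix K maps camera-frame
# 3D points to pixel coordinates (distortion omitted for brevity).
K = np.array([[800.0,   0.0, 320.0],    # fx, skew, cx
              [  0.0, 800.0, 240.0],    # fy, cy
              [  0.0,   0.0,   1.0]])

def project(points_cam, K):
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    p = points_cam @ K.T
    return p[:, :2] / p[:, 2:3]

# 3x3 corner grid of an assumed 40 mm checkerboard, 2 m from the camera.
g = np.arange(3) * 0.04
X, Y = np.meshgrid(g, g)
corners = np.column_stack([X.ravel(), Y.ravel(), np.full(9, 2.0)])
px = project(corners, K)
print(px[0], px[-1])    # pixel positions of first and last corner
```

Calibration inverts this relation: given detected pixel positions of known board corners, it solves for K (and the distortion terms left out here), which is why a board with detectable corners at thermal wavelengths is essential.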
This paper describes a data collection on passive and active imaging and the preliminary analysis. It is part of ongoing work on active and passive imaging for target identification using different wavelength bands. We focus on data collection at NIR-SWIR wavelengths but we also include the visible and the thermal region. Active imaging in the NIR-SWIR will support the passive imaging by eliminating shadows during day-time and allow night operation. Among the applications that are most likely for active multispectral imaging, we focus on long range human target identification. We also study the combination of active and passive sensing. The target scenarios of interest include persons carrying different objects and their associated activities. We investigated laser imaging for target detection and classification up to 1 km, assuming that another cueing sensor – passive EO and/or radar – is available for target acquisition and detection. Broadband or multispectral operation will reduce the effects of target speckle and atmospheric turbulence. Longer wavelengths will improve performance in low visibility conditions due to haze, clouds and fog. We are currently performing indoor and outdoor tests to further investigate the target/background phenomena that are emphasized at these wavelengths. We also investigate how these effects can be used for target identification and image fusion. Performed field tests and the results of preliminary data analysis are reported.
One of the main threats for armed forces in conflict areas are attacks by improvised explosive devices (IED). After
an IED attack a forensic investigation of the site is undertaken. In many ways military forensic work is similar to the
civilian counterpart. There are the same needs to acquire evidence in the crime scene, such as fingerprints, DNA, and
samples of the remains of the IED. Photos have to be taken and the geometry of the location shall be measured,
preferably in 3D. A main difference between the military and the civilian forensic work is the time slot available for
the scene investigation. The military must work under the threat of fire assault, e.g. snipers. The short time slot puts
great demands on the forensic team and the equipment they use. We have done performance measurements of the
Mantis-Vision F5 sensor and evaluated the usefulness in military forensic applications. This paper will describe
some applications and show possibilities and also limitations of using a handheld laser imaging sensor for military forensic applications.
For use in the development of synthetic environment models, the Swedish Defence Research Agency (FOI) bought two laser scanners in the beginning of this millennium, one from the Austrian company Riegl and one from the Canadian Optech. This was the start of over a decade of use of commercial laser range sensors at FOI. The laser scanners have so far been used for different applications such as point cloud algorithm development (detection, classification and reconnaissance of targets), phenomenology studies, reflectance measurements, and environment and ground truth measurements. This paper presents different laser scanner technologies (pulsed Time of Flight, Continuous Wave (CW), Flash and distributed light) and compares advantages and limitations of the technologies. The paper also includes some examples of the use of laser scanning in applications and presents methods for laser range sensor performance evaluation used and developed at FOI.
The European Defence Agency (EDA) launched the Active Imaging (ACTIM) study to investigate the potential of active
imaging, especially that of spectral laser imaging. The work included a literature survey, the identification of promising
military applications, system analyses, a roadmap and recommendations.
Passive multi- and hyper-spectral imaging allows discriminating between materials. But the measured radiance in the
sensor is difficult to relate to spectral reflectance due to the dependence on e.g. solar angle, clouds, shadows... In contrast,
active spectral imaging offers a complete control of the illumination, thus eliminating these effects. In addition it allows
observing details at long ranges, seeing through degraded atmospheric conditions, penetrating obscurants (foliage,
camouflage...) or retrieving polarization information. In its 3D form, it is suited to producing numerical terrain models and to performing geometry-based identification. Hence, fusing the knowledge of ladar and passive spectral imaging will result
in new capabilities.
We have identified three main application areas for active imaging, and for spectral active imaging in particular: (1) long
range observation for identification, (2) mid-range mapping for reconnaissance, (3) shorter range perception for threat
detection. We present the system analyses that have been performed for confirming the interests, limitations and
requirements of spectral active imaging in these three prioritized applications.
This paper investigates the prospects of "seeing around the corner" using active imaging. A monostatic active imaging
system offers interesting capabilities in the presence of glossy reflecting objects. Examples of such surfaces are windows
in buildings and cars, calm water, signs and vehicle surfaces. During daylight it might well be possible to use mirror-like reflection with the naked eye or a CCD camera for non-line-of-sight imaging. However, the advantage of active imaging is that one controls the illumination. This will not only allow for low-light and night utilization but also for use in cases
where the sun or other interfering lights limit the non-line-of-sight imaging possibility. The range resolution obtained by time gating will reduce disturbing direct reflections and allow simultaneous viewing in several directions using range gating. Measurements and theoretical considerations in this report support the idea of using a laser to "see around the corner".
Examples of images and reflectivity measurements will be presented together with examples of potential system applications.
A Bayesian approach for data reduction based on spatial filtering is proposed that enables detection of targets partly occluded by natural forest. The framework aims at creating a synergy between terrain mapping and target detection. It is demonstrated how spatial features can be extracted and combined in order to detect target samples in cluttered environments. In particular, it is illustrated how a priori scene information and assumptions about targets can be translated into algorithms for feature extraction. We also analyze the coupling between features and assumptions, because it gives knowledge about which features are general enough to be useful in other environments and which are tailored for a specific situation. Two types of features are identified: non-target indicators and target indicators. The filtering approach is based on a combination of several features. A theoretical framework for combining the features into a maximum likelihood classification scheme is presented. The approach is evaluated using data collected with a laser-based 3-D sensor in various forest environments with vehicles as targets. Over 70% of the target points are detected at a false-alarm rate of <1%. We also demonstrate how selecting different feature subsets influences the results.
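A maximum likelihood combination of several per-point features can be sketched as a naive-Bayes style rule under a feature-independence assumption. The feature likelihood models below (a "planarity" and an "echo count" feature with invented Gaussian parameters) are purely illustrative, not the paper's actual indicators:

```python
import numpy as np

# Each 3D point gets a value for every feature; the class whose combined
# log-likelihood is higher wins (assumes independent features).
def classify(features, target_pdfs, clutter_pdfs):
    log_t = sum(np.log(pdf(f) + 1e-12)
                for pdf, f in zip(target_pdfs, features))
    log_c = sum(np.log(pdf(f) + 1e-12)
                for pdf, f in zip(clutter_pdfs, features))
    return "target" if log_t > log_c else "clutter"

def gauss(mu, s):
    """Gaussian likelihood model for one feature."""
    return lambda x: np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Toy features: local planarity (vehicles: high) and echo count
# (vegetation gives many returns, hard surfaces give few).
target_pdfs = [gauss(0.9, 0.1), gauss(1.2, 0.5)]
clutter_pdfs = [gauss(0.3, 0.2), gauss(3.0, 1.0)]

print(classify([0.85, 1.0], target_pdfs, clutter_pdfs))   # planar, one echo
print(classify([0.25, 3.5], target_pdfs, clutter_pdfs))   # rough, many echoes
```

The abstract's distinction between target indicators and non-target indicators corresponds to features whose likelihood ratio pushes toward one class or the other.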
This paper describes the development of a high resolution waveform recording laser scanner and presents results
obtained with the system. When collecting 3-D data on small objects, high range and transverse resolution is needed. In
particular, if the objects are partly occluded by sparse materials such as vegetation, multiple returns from a single laser
pulse may limit the image quality. The ability to resolve multiple echoes depends mainly on the laser pulse width and the
receiver bandwidth. With the purpose of achieving high range resolution for multiple returns, we have developed a high
performance 3-D LIDAR, called HiPer, with a short pulse fibre laser (500 ps), fast detectors (70 ps rise time) and a 20
GS/s oscilloscope for fast sampling. HiPer can acquire the full waveform, which can be used for off-line processing. This
paper will describe the LIDAR system and present some image examples. The signal processing will also be described,
with some examples from the off-line processing and the benefit of using the complete waveform.
This paper will describe ongoing work from an EDA initiated study on Active Imaging with emphasis on using multi- or broadband spectral lasers and receivers. Present laser-based imaging and mapping systems are mostly based on fixed-frequency lasers. On the other hand, great progress has recently occurred in passive multi- and hyperspectral imaging
with applications ranging from environmental monitoring and geology to mapping, military surveillance, and
reconnaissance. Data bases on spectral signatures allow the possibility to discriminate between different materials in the
scene. Present multi- and hyperspectral sensors mainly operate in the visible and short wavelength region (0.4-2.5 μm)
and rely on solar radiation, giving shortcomings due to shadows, clouds, illumination angles and lack of night operation. Active spectral imaging, however, will largely overcome these difficulties by a complete control of the
illumination. Active illumination enables spectral night and low-light operation beside a robust way of obtaining
polarization and high resolution 2D/3D information.
Recent development of broadband lasers and advanced imaging 3D focal plane arrays has led to new opportunities for
advanced spectral and polarization imaging with high range resolution. Fusing the knowledge of ladar and passive
spectral imaging will result in new capabilities in the field of EO-sensing, as will be shown in the study. We will present an overview of technology, systems and applications for active spectral imaging and propose future activities in connection with some prioritized applications.
In many laser radar systems the strength of the intensity value depends on the reflectivity of the measured surface. High intensity values are necessary for accurate range measurements. When measuring a low-reflectivity surface the returning laser intensity will be low. This, in turn, results in high uncertainty in the range estimate. In this paper, an approach to correct the intensity values is presented, which results in more accurate range estimates even on low-reflecting surfaces.
Examples are shown using the ASC (Advanced Scientific Concepts) FLASH 3D LADAR sensor. During a data
collection several series of measurements characterizing the shape of the return laser pulse from surfaces with
known reflectivity were performed. Using principal components analysis (PCA), variation in pulse shape as a
function of laser intensity was determined. These data were then used to identify a parametric model that described
the variation in intensity values relative to the surface's reflectivity. Based on the model, an approach to
systematically adjust intensity values has been developed. In this paper we will also examine laser timing jitter,
intensity, and their effects on range estimation.
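The idea of an intensity-dependent range correction can be sketched with a simple "range walk" model: calibrate a parametric bias model on surfaces of known range and reflectivity, then subtract the predicted bias. The bias model and all numbers below are invented for illustration and are not the ASC sensor's actual behaviour or the paper's PCA-based model:

```python
import numpy as np

# Assumed walk model: weak returns appear biased in range (metres).
rng = np.random.default_rng(2)
true_bias = lambda i: 0.30 / np.sqrt(i)

# Calibration set: known-reflectivity panels -> (intensity, observed bias).
intensity = rng.uniform(1.0, 100.0, 200)
bias_obs = true_bias(intensity) + rng.normal(0, 0.005, 200)

# Fit the bias as a polynomial in 1/sqrt(intensity).
x = 1.0 / np.sqrt(intensity)
coeffs = np.polyfit(x, bias_obs, 1)

def correct_range(raw_range, i):
    """Subtract the calibrated intensity-dependent range bias."""
    return raw_range - np.polyval(coeffs, 1.0 / np.sqrt(i))

raw = 50.0 + true_bias(4.0)              # weak return, ~15 cm walk
print(f"raw {raw:.3f} m -> corrected {correct_range(raw, 4.0):.3f} m")
```

The paper's approach is richer (pulse shapes characterised via PCA rather than a scalar bias curve), but the calibrate-then-correct structure is the same.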
The development of new asymmetric threats to civilian and naval ships has been a relatively recent occurrence. The
bombing of the USS Cole is one example and the pirate activities outside Somalia another. There is a need to recognize
targets at long ranges and possibly also their intentions to prepare for counteractions. Eye safe laser imaging at 1.5 μm
offers target recognition at long ranges during day and night. The 1.5 μm wavelength is suitable for observing small
targets at the sea surface such as boats and swimmers due to the low reflectivity of water compared to potential targets.
Turbulence and haze limit the sensor performance, and their influence is estimated for some cases of operational
interest. For comparison, passive EO images have been recorded with the same camera to investigate the difference
between sun illuminated and laser illuminated images. Examples of laser images will be given for a variety of targets
and external conditions. Image segmentation for future automated recognition development is described and exemplified. Examples of relevant 1.5 μm laser reflectivities of small naval targets are also presented. Finally, a discussion of system aspects is given.
As a part of the project agreement between the Swedish Defence Research Agency (FOI) and the United States of
American's Air Force Research Laboratory (AFRL), a joint field trial was performed in Sweden during two weeks in
January 2009. The main purpose of this trial was to characterize AFRL's latest version of the ASC (Advanced Scientific Concepts) FLASH 3D LADAR sensor. The measurements were performed mainly in FOI's optical hall, whose 100 m indoor range offers measurements under controlled conditions, minimizing effects such as atmospheric turbulence.
Data were also acquired outdoor in both forest and urban scenarios, using vehicles and humans as targets, with the
purpose of acquiring data from more dynamic platforms to assist in further algorithm development. This paper shows
examples of the acquired data and presents initial results.
In this paper we study the potential of using deconvolution techniques on full-waveform laser radar data for pulse
detection in cluttered environments, e.g. when a land-mine is partly occluded by vegetation. A pulse width greater than
the distance between the reflecting surfaces within the footprint results in a signal that is composed by overlapping
reflections that may be very difficult to analyze successfully with standard pulse detection techniques. We demonstrate
that deconvolution improves the chance of successful decomposition of waveform signals into the components
corresponding to the reflecting objects in the path of the laser beam. Experimental data were analyzed in terms of pulse
extraction capability and distance accuracy. It was found that deconvolution increases the pulse extraction performance,
but that surfaces closer than about 40% of the laser pulse width are still very difficult to detect, and that a number of spurious, erroneously extracted points is the price to pay for increased pulse detection probability.
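The deconvolution idea can be sketched with a Wiener inverse filter: two returns closer than the pulse width merge into one hump in the raw waveform, and dividing by the known system pulse in the frequency domain (with regularization) separates them again. The pulse shape, separation and noise level below are assumed for illustration and this is not the paper's exact algorithm:

```python
import numpy as np

n = 512
tp = np.arange(81)
pulse = np.exp(-0.5 * ((tp - 40) / 8.0) ** 2)   # assumed system pulse shape
truth = np.zeros(n)
truth[200] = truth[214] = 1.0                    # two close surfaces

rng = np.random.default_rng(3)
waveform = np.convolve(truth, pulse, mode="same") + rng.normal(0, 0.002, n)

# Zero-lag-centered pulse kernel and Wiener inverse filter.
kernel = np.zeros(n)
kernel[:81] = pulse
P = np.fft.fft(np.roll(kernel, -40))
W = np.fft.fft(waveform)
est = np.real(np.fft.ifft(W * np.conj(P) / (np.abs(P) ** 2 + 1e-3)))

# Threshold and split into contiguous clusters -> one echo per cluster.
idx = np.flatnonzero(est > 0.5 * est.max())
clusters = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
echoes = sorted(cl[np.argmax(est[cl])] for cl in clusters)
print("extracted echoes at samples:", echoes)
```

The regularization constant plays the role the paper identifies experimentally: too little regularization amplifies noise into spurious echoes, which is exactly the trade-off noted above.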
One of the major advantages of laser sensors compared to passive optronic sensors is the capability to penetrate sparse vegetation. Therefore, the most limiting performance issue is the portion of laser "shots" being absorbed by the foliage. This issue is the main focus of this paper, and an analysis of the effect of forest vegetation of Nordic type is presented. The conclusions are based on laser scanner measurements as well as photos. While the analysis covers several
elevation angles, the evaluation focuses on ground-to-ground measurements.
This article describes measurements of snow reflection using laser radar. There seem to be few publications on this subject. This article reports measurements on reflection from different kinds of snow, including the angular reflection properties. Reflectance information obtained from two commercial scanning laser radars working at the wavelengths 0.9 and 1.5 µm is shown and discussed. Data are mainly presented at the eye-safe wavelength 1.5 µm, but some measurements were also performed at 0.9 µm. We measured snow reflection during part of a winter season, which gave us opportunities to investigate different types of snow and different meteorological conditions. The reflection values tend to decrease during the first couple of hours after a snowfall. The structure of the snow seems to be more important for the reflection than its age. In general the reflection at 1.5 µm is rather low; the reflectivity can vary between 0.5% and 10% for oblique incidence, depending on the structure of the snow, which in turn depends on its age, the air temperature, the humidity, etc. The reflectivity at the 0.9-µm wavelength is much higher: more than 50% for fresh snow. Images of snow-covered scenes are shown together with reflection data, including bidirectional reflectance distribution functions.
The objective of this paper is to present the Swedish land mine and UXO detection project "Multi Optical Mine Detection System", MOMS. The goal for MOMS is to provide knowledge and competence for fast detection of mines, especially surface-laid mines. The first phase, with a duration of 2005-2009, is essentially a feasibility study which focuses on the possibilities and limitations of a multi-sensor system with both active and passive EO-sensors. Sensor concepts used, alone or in different combinations, include 3-D imaging, retro-reflection detection, multi-spectral imaging, thermal imaging, polarization and fluorescence. The aim of the MOMS project is presented and the research and investigations carried out during the first years are described.
In this paper, we present techniques related to registration and change detection using 3D laser radar data. First, an experimental evaluation of a number of registration techniques based on the Iterative Closest Point algorithm is presented. As an extension, an approach for removing noisy points prior to the registration process by keypoint detection is also proposed. Since the success of accurate registration is typically dependent on a satisfactorily accurate starting estimate, coarse registration is an important functionality. We address this problem by proposing an approach for coarse 2D registration, which is based on detecting vertical structures (e.g. trees) in the point sets and then finding the transformation that gives the best alignment. Furthermore, a change detection approach based on voxelization of the registered data sets is presented. The 3D space is partitioned into a cell grid and a number of features for each cell are computed. Cells for which features have changed significantly (statistical outliers) then correspond to significant changes.
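The voxel-based change detection step can be sketched as follows: partition space into a cell grid, compute a per-cell feature (here simply the point count) for each registered epoch, and flag cells whose feature changed as statistical outliers. All sizes and thresholds below are assumed for illustration, not the paper's settings:

```python
import numpy as np

# Count points per voxel over a fixed extent (assumed 10 m cube, 0.5 m cells).
def voxel_counts(points, cell=0.5, extent=10.0):
    n = int(extent / cell)
    idx = np.clip((points / cell).astype(int), 0, n - 1)
    counts = np.zeros((n, n, n))
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    return counts

rng = np.random.default_rng(4)
epoch1 = rng.uniform(0, 10, (20000, 3))            # background points
box = rng.uniform(0, 1, (500, 3)) + [4.0, 4.0, 0]  # new 1 m object at (4,4,0)
epoch2 = np.vstack([rng.uniform(0, 10, (20000, 3)), box])

c1, c2 = voxel_counts(epoch1), voxel_counts(epoch2)
diff = c2 - c1
# Flag statistical outliers: cells more than 5 sigma from the mean change.
changed = np.argwhere(np.abs(diff - diff.mean()) > 5 * diff.std())
print("number of changed cells:", len(changed))
```

In the paper several features per cell are computed rather than a raw count, but the grid-then-outlier structure is the same.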
As a part of the Swedish mine detection project MOMS, an initial field trial was conducted at the Swedish EOD and Demining Centre (SWEDEC). The purpose was to collect data on surface-laid mines, UXO, submunitions, IEDs, and background with a variety of optical sensors, for further use in the project. Three terrain types were covered: forest, gravel road, and an area which had recovered after total removal of all vegetation some years before. The sensors used in the field trial included UV, VIS, and NIR sensors as well as thermal, multi-spectral, and hyper-spectral sensors, 3-D laser radar and polarization sensors. Some of the sensors were mounted on an aerial work platform, while others were placed on tripods on the ground. This paper describes the field trial and presents some initial results.
Rapid and efficient detection of surface mines, IEDs (Improvised Explosive Devices) and UXO (Unexploded Ordnance) is of high priority in military conflicts. High-range-resolution laser radars combined with passive hyper/multispectral sensors offer an interesting concept to help solve this problem. This paper reports on laser radar data collection of various surface mines in different types of terrain.
In order to evaluate the capability of 3D imaging for detecting and classifying the objects of interest, a scanning laser radar was used to scan mines and surrounding terrain with high angular and range resolution. These data were then fed into a laser radar model capable of generating range waveforms for a variety of system parameters and combinations of different targets and backgrounds. We can thus simulate a potential system by downsampling to relevant pixel sizes and laser/receiver characteristics. Data, simulations and examples will be presented.
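The waveform-generation step described above can be illustrated in a simple form: collect the high-resolution return ranges that fall inside one simulated (larger) receiver pixel, histogram them, and convolve with a Gaussian transmit pulse. This is a hedged sketch of the general technique, not the project's actual model; the pulse width, bin size, and names are assumptions:

```python
import numpy as np

def range_waveform(ranges, pulse_fwhm=0.3, bin_size=0.05):
    """Build a synthetic range waveform (in metres) from the point returns
    inside one simulated pixel: histogram of return ranges convolved with
    a Gaussian pulse. Illustrative parameters, not system-specific."""
    ranges = np.asarray(ranges, dtype=float)
    lo, hi = ranges.min() - 2 * pulse_fwhm, ranges.max() + 2 * pulse_fwhm
    bins = np.arange(lo, hi, bin_size)
    hist, _ = np.histogram(ranges, bins=bins)
    sigma = pulse_fwhm / 2.355                 # convert FWHM to std dev
    t = np.arange(-4 * sigma, 4 * sigma + bin_size, bin_size)
    pulse = np.exp(-0.5 * (t / sigma) ** 2)
    waveform = np.convolve(hist, pulse, mode="same")
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, waveform

# Two surfaces (e.g. a mine top at 30.0 m and ground at 31.5 m) separated
# by more than the pulse width produce a two-peaked waveform.
r, wf = range_waveform([30.0] * 20 + [31.5] * 20)
```

Downsampling to a coarser pixel then simply means pooling more high-resolution returns into each simulated waveform before the convolution.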
One of the more exciting capabilities foreseen for future 3-D imaging laser radars is to see through vegetation and camouflage nettings. We have used ground-based and airborne scanning laser radars to collect data on various types of terrain and vegetation. On some occasions reference targets were used to collect data on reflectivity and to evaluate penetration. The data contain reflectivity and range distributions and were collected at 1.5 and 1.06 μm wavelength with range accuracies in the 1-10 cm range. The seasonal variations for different types of vegetation have been studied. A preliminary evaluation of part of the data set was recently presented at another SPIE conference. Since then the data have been analyzed in more detail with emphasis on testing algorithms and future system performance by simulation of different sensors and scenarios. Evaluation methods will be discussed and some examples of data sets will be presented.
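Evaluating foliage penetration from such range distributions typically involves separating ground returns from vegetation returns in each horizontal footprint. A toy sketch of one common approach (lowest return per cell as ground candidate); the cell size, clearance threshold, and all names are hypothetical, not the paper's evaluation method:

```python
import numpy as np

def split_ground_vegetation(points, cell=0.5, clearance=0.3):
    """For each horizontal grid cell, treat the lowest return as the ground
    estimate; returns more than `clearance` metres above it are labelled
    vegetation. `points` is an N x 3 array (x, y, z). Toy illustration."""
    keys = [tuple(k) for k in np.floor(points[:, :2] / cell).astype(int)]
    ground = {}
    for key, z in zip(keys, points[:, 2]):
        if key not in ground or z < ground[key]:
            ground[key] = z                      # lowest return in the cell
    veg_mask = np.array([z > ground[key] + clearance
                         for key, z in zip(keys, points[:, 2])])
    return ground, veg_mask

# Three returns in one 0.5 m cell: two near the ground, one in the canopy.
pts = np.array([[0.1, 0.1, 0.0], [0.2, 0.2, 5.0], [0.3, 0.1, 0.05]])
ground, veg = split_ground_vegetation(pts)
```

Penetration can then be quantified per cell, e.g. as the fraction of footprints in which a ground return was actually recorded beneath the canopy.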
Exciting development is taking place in 3-D sensing laser radars. Scanning systems are well established for mapping from airborne and ground sensors. 3-D sensing focal plane arrays (FPAs) enable a full range and intensity image to be captured in one laser shot. Gated viewing systems also produce 3-D target information. Many applications for 3-D laser radars are found in robotics, rapid terrain visualization, augmented vision, reconnaissance and target recognition, weapon guidance including aim-point selection, and others. Network-centric warfare will demand high-resolution geo-data for a common description of the environment. At FOI we have a measurement program to collect data relevant for 3-D laser radars using airborne and tripod-mounted equipment. Data collection spans from single-pixel waveform collection (1-D), over 2-D using range-gated imaging, to full 3-D imaging using scanning systems. This paper will describe 3-D laser data from different campaigns with emphasis on range distribution and reflection properties for targets and background during different seasonal conditions. Examples of the use of the data for system modeling, performance prediction and algorithm development will be given. Different metrics to characterize the data set will also be discussed.
This paper presents our ongoing research activities on target recognition from data generated by 3-D imaging laser radar. In particular, we focus on future full flash-imaging 3-D sensors. Several techniques for laser range imaging are applied for modelling and simulation of data from this kind of 3-D sensor system. Firstly, data from an experimental gated viewing system is used. Processed data from this system is useful in assisting an operator in the target recognition task. Our recent work on target identification at long ranges, using range data from the gated viewing system, provides techniques to handle turbulence, platform motion and illumination variations from scintillation and speckle noise. Moreover, the range data is expanded into 3-D by using a gating technique that provides reconstruction of the target surface structure. This is shown at distances out to 7 km. Secondly, 3-D target data is acquired at short ranges by using different scanning laser radar systems. This provides high-resolution 3-D data from scanning a target from a single view. However, several scans from multiple viewing angles can also quite easily be merged for more detailed target representations. This is, for example, very useful for recognizing targets in vegetation. In this way, we obtain simulated 3-D sensor data from both short and long ranges (100 meters out to 7 km) at various spatial resolutions. Thirdly, real data from the 3-D flash imaging system of the US Air Force Research Lab (AFRL/SNJM), Wright-Patterson Air Force Base, has recently been made available to FOI and has also been used as input in the development of aided target recognition methods. High-resolution 3-D target models are used in the identification process and compared to the 3-D target data (point clouds) from the various laser radar systems. Finally, we give some examples from our work that clearly show that future 3-D laser radar systems, in cooperation with signal and image analysis techniques, have great potential in the non-cooperative target recognition task and will provide several new and interesting capabilities, for example, to reveal targets hidden in vegetation.
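Comparing a measured target point cloud against a high-resolution 3-D model, as in the identification step above, amounts to scoring cloud-to-model distances. A minimal brute-force sketch of one such score (nearest-neighbour RMS distance); real systems would add a pose search and spatial indexing such as a k-d tree, and this metric is an illustrative assumption rather than the paper's method:

```python
import numpy as np

def model_match_score(scene_points, model_points):
    """Root-mean-square nearest-neighbour distance from each measured
    scene point to the candidate model's points (both N x 3 arrays).
    Lower is a better fit; 0 means every scene point lies on a model point.
    Brute-force O(N*M) pairwise distances, for illustration only."""
    d = np.linalg.norm(scene_points[:, None, :] - model_points[None, :, :],
                       axis=2)                 # pairwise distance matrix
    return float(np.sqrt(np.mean(d.min(axis=1) ** 2)))

# A cloud scores 0 against itself and worse against a shifted copy.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
perfect = model_match_score(pts, pts)
shifted = model_match_score(pts + 0.5, pts)
```

Ranking candidate models by such a score, after pose alignment, is one simple way to turn the point cloud/model comparison into an identification decision.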
This paper will give an overview of 3D laser sensing and related activities at the Swedish Defence Research Agency (FOI) in view of system needs and applications. Our activities include data collection of laser signatures for targets and backgrounds at various wavelengths. We will give examples of such measurements. The results are used in building synthetic environments, modelling of laser radar systems, and as training sets for development of algorithms for target recognition and weapon applications. Present work on rapid environment assessment includes the use of data from airborne lasers for terrain mapping and depth sounding. Methods for automatic target detection and object classification (buildings, trees, man-made objects etc.) have been developed together with techniques for visualisation. This will be described in more detail in a separate paper. The ability to find and correctly identify "difficult" targets, being either at very long ranges, hidden in vegetation, behind windows or under camouflage, is one of the top priorities for any military force. Examples of such work will be given using range-gated imagery and 3D scanning laser radars. Different kinds of signal processing approaches have been studied and will be presented in more detail in two separate papers. We have also developed modelling tools for both 2D and 3D laser imaging. Finally, we will discuss the use of 3D laser radars in some system applications in the light of new component technology, processing needs and sensor fusion.