KEYWORDS: Single photon detectors, LIDAR, Sensors, Detection and tracking algorithms, Expectation maximization algorithms, Signal detection, Target detection, Signal to noise ratio, Optical engineering, Signal processing
Time-correlated single-photon counting lidar provides very high-resolution range measurements, making the technology interesting for 3D imaging of objects behind foliage or other obscuration. We study six peak detection approaches and compare their performance from several perspectives: detection of double surfaces within the instantaneous field of view, range accuracy, performance under sparse sampling, and the number of outliers. The results presented are based on reference measurements of a characterization target. Special consideration is given to the possibility of resolving two surfaces closely separated in range within the field of view of a single pixel. An approach based on fitting a linear combination of impulse response functions to the collected data showed the best overall performance.
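As an illustration of the best-performing approach, the sketch below fits a two-component linear combination of instrument response functions to a photon-count histogram. It is a minimal sketch, assuming a Gaussian IRF of fixed, known width; the paper's actual IRF model and optimizer may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def irf_mixture(t, *params):
    """Sum of K instrument response functions (approximated here as
    Gaussians) plus a constant background.
    params = [background, a1, t1, ..., aK, tK]."""
    sigma = 0.05  # IRF width in ns; an assumed value, not from the paper
    y = np.full_like(t, params[0])
    for a, t0 in zip(params[1::2], params[2::2]):
        y += a * np.exp(-0.5 * ((t - t0) / sigma) ** 2)
    return y

def fit_two_surfaces(t, counts, t_guess):
    """Fit a two-component IRF mixture to a histogram; t_guess holds rough
    peak positions, e.g. from local maxima of the smoothed histogram."""
    p0 = [counts.min(), counts.max(), t_guess[0], counts.max(), t_guess[1]]
    popt, _ = curve_fit(irf_mixture, t, counts, p0=p0)
    return popt[2], popt[4]  # estimated arrival times of the two surfaces
```

Resolving two closely separated surfaces then amounts to comparing the two fitted arrival times against the fit with a single component.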
Experiences from recent conflicts show the strong need for smart sensor suites comprising different multi-spectral imaging sensors as core elements as well as additional non-imaging sensors. Smart sensor suites should be part of a smart sensor network – a network of sensors, databases, evaluation stations and user terminals. Its goal is to optimize the use of various information sources for military operations such as situation assessment, intelligence, surveillance, reconnaissance, target recognition and tracking. Such a smart sensor network will enable commanders to achieve higher levels of situational awareness.
Within the study at hand, an open system architecture was developed in order to increase the efficiency of sensor suites. The open system architecture for smart sensor suites, based on a system-of-systems approach, enables combining different sensors in multiple physical configurations, such as distributed sensors, co-located sensors combined in a single package, tower-mounted sensors, sensors integrated in a mobile platform, and trigger sensors. The architecture was derived from a set of system requirements and relevant scenarios. Its mode of operation is adaptable to a series of scenarios with respect to relevant objects of interest, activities to be observed, available transmission bandwidth, etc.
The presented open architecture is designed in accordance with the NATO Architecture Framework (NAF). The architecture allows smart sensor suites to be part of a surveillance network, linked e.g. to a sensor planning system and a C4ISR center, and to be used in combination with future RPAS (Remotely Piloted Aircraft Systems) for supporting a more flexible dynamic configuration of RPAS payloads.
Time-correlated single-photon-counting (TCSPC) lidar provides very high resolution range measurements. This makes the technology interesting for three-dimensional imaging of complex scenes with targets behind foliage or other obscurations. TCSPC is a statistical method that demands integration of multiple measurements toward the same area to resolve objects at different distances within the instantaneous field-of-view. Point-by-point scanning will demand significant overhead for the movement, increasing the measurement time. Here, the effect of continuously scanning the scene row-by-row is investigated and signal processing methods to transform this into low-noise point clouds are described. The methods are illustrated using measurements of a characterization target and an oak and hazel copse. Steps between different surfaces of less than 5 cm in range are resolved as two surfaces.
The purpose of this study is to present and evaluate the benefits and capabilities of high-resolution 3D data from unmanned aircraft, especially in conditions where existing methods (passive imaging, 3D photogrammetry) have limited capability. Examples of applications are detection of obscured objects under vegetation, change detection, detection in dark or shadowed environments, and immediate geometric documentation of an area of interest. Applications are exemplified with experimental data from our small UAV test platform 3DUAV with an integrated rotating laser scanner, and with ground truth data collected with a terrestrial laser scanner. We process lidar data combined with inertial navigation system (INS) data for generation of a highly accurate point cloud. The combination of INS and lidar data is achieved in a dynamic calibration process that compensates for the navigation errors from the low-cost and light-weight MEMS-based (microelectromechanical systems) INS. This system allows for studies of the whole data collection-processing-application chain and also serves as a platform for further development. We evaluate the applications in relation to system aspects such as survey time, resolution and target detection capabilities. Our results indicate that several target detection/classification scenarios are feasible within reasonable survey times, from a few minutes (cars, persons and larger objects) to about 30 minutes for detection and possibly recognition of smaller targets.
A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution
and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor
on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and
recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over
the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing
parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the
accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height
accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E
lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point
cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with
lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the
navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch,
roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS-based (microelectromechanical systems) INS equipment with a dynamic calibration process can achieve significantly improved
accuracy compared to processing based solely on INS data.
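The local-surface-smoothness measure used to quantify accuracy on planar surfaces can be realized as the RMS residual of a local plane fit. The sketch below is one such measure, assuming unorganized points and an illustrative neighborhood radius; it is not necessarily the exact metric used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_smoothness(points, radius=0.2):
    """Per-point RMS plane-fit residual over a spherical neighborhood.
    points: (N, 3) array; radius in metres (an assumed value)."""
    tree = cKDTree(points)
    rms = np.full(len(points), np.nan)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 5:
            continue  # too few neighbors for a stable plane fit
        nb = points[idx] - points[idx].mean(axis=0)
        # Smallest eigenvalue of the local covariance is the variance
        # normal to the best-fit plane; its root is the RMS residual.
        w = np.linalg.eigvalsh(nb.T @ nb / len(idx))
        rms[i] = np.sqrt(max(w[0], 0.0))
    return rms
```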
This paper summarizes on-going work on 3D sensing and imaging with laser sensors carried by unmanned aerial vehicles (UAVs). We study sensor concepts, UAVs suitable for carrying the sensors, and signal processing for mapping and target detection applications. We also perform user studies together with the Swedish armed forces to evaluate usage in their mission cycle, and interviews to clarify how to present data.
Two ladar sensor concepts for mounting in UAVs are studied. The discussion is based on known performance in commercial ladar systems today and predicted performance in future UAV applications. The small UAV is equipped with a short-range scanning ladar. The system is aimed at quick situational analysis of small areas and at documentation of a situation. The large UAV is equipped with a high-performing photon-counting ladar with a matrix detector. Its purpose is to support large-area surveillance, intelligence and mapping operations. Based on these sensors and their performance, signal and image processing support for data analysis is analyzed. Generated data amounts are estimated, and demands on data storage capacity and data transfer are analyzed.
We have tested the usage of 3D mapping together with military rangers, both in the planning phase and as a last-minute intelligence update of the target. Feedback from these tests will be presented. We are performing interviews with various military professions to gain a better understanding of how 3D data are used and interpreted. We discuss approaches for how to present data from a 3D imaging sensor to a user.
We present algorithm evaluations for ATR of small sea vessels. The targets are at km distance from the sensors, which
means that the algorithms have to deal with images affected by turbulence and mirage phenomena. We evaluate
previously developed algorithms for registration of 3D-generating laser radar data. The evaluations indicate that turbulence- and mirage-induced uncertainties can be handled with some robustness by our probabilistic registration method.
We also assess methods for target classification and target recognition on these new 3D data.
An algorithm for detecting moving vessels in infrared image sequences is presented; it is based on optical flow
estimation. Detection of a moving target with an unknown spectral signature in a maritime environment is a challenging
problem due to camera motion, background clutter, turbulence and the presence of mirage. First, the optical flow caused
by the camera motion is eliminated by estimating the global flow in the image. Second, connected regions containing significant motion that differs from the camera motion are extracted. It is assumed that motion caused by a moving vessel is
more temporally stable than motion caused by mirage or turbulence. Furthermore, it is assumed that the motion caused
by the vessel is more homogeneous with respect to both magnitude and orientation than motion caused by mirage and
turbulence. Sufficiently large connected regions with a flow of acceptable magnitude and orientation are considered
target regions. The method is evaluated on newly collected sequences of SWIR and MWIR images, with varying targets,
target ranges and background clutter.
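A minimal sketch of such a flow-based detector is given below, using OpenCV's Farnebäck optical flow. The global-flow model (a per-component median), the magnitude threshold, and the minimum region size are illustrative assumptions, and the temporal-stability and homogeneity checks described above are omitted for brevity.

```python
import cv2
import numpy as np

def detect_moving_regions(prev_gray, gray, min_area=50):
    """Estimate dense flow, remove the global (camera) component, and keep
    large connected regions with significant residual motion."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Global flow approximated by the per-component median (camera motion).
    residual = flow - np.median(flow.reshape(-1, 2), axis=0)
    mag = np.linalg.norm(residual, axis=2)
    mask = (mag > 1.0).astype(np.uint8)  # 1 px/frame threshold, assumed
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask,
                                                           connectivity=8)
    return [stats[i, :4] for i in range(1, n)          # (x, y, w, h) boxes
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```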
Finally we discuss a concept for combining passive and active imaging in an ATR process. The main steps are passive
imaging for target detection, active imaging for target/background segmentation and a fusion of passive and active
imaging for target recognition.
Simultaneous localization and mapping (SLAM) is a well-known positioning approach in GPS-denied environments
such as urban canyons and inside buildings. Autonomous/aided target detection and recognition (ATR) is commonly
used in military applications to detect threats and targets in outdoor environments. This paper presents approaches to
combine SLAM with ATR in ways that compensate for the drawbacks in each method. The methods use physical objects
that are recognizable by ATR as unambiguous features in SLAM, while SLAM provides the ATR with better position
estimates. Landmarks in the form of 3D point features based on normal aligned radial features (NARF) are used in
conjunction with identified objects and 3D object models that replace landmarks when possible. This leads to a more
compact map representation with fewer landmarks, which partly compensates for the introduced cost of the ATR.
We analyze three approaches to combining SLAM and 3D data: point-to-point matching ignoring NARF features, point-to-point matching using the set of points selected by NARF feature analysis, and matching of NARF features using nearest-neighbor analysis. The first two approaches are similar to the common iterative closest point (ICP) algorithm. We
propose an algorithm that combines EKF-SLAM and ATR based on rectangle estimation. The intended application is to
improve the positioning of a first responder moving through an indoor environment, where the map offers localization
and simultaneously helps locate people, furniture and potentially dangerous objects such as gas canisters.
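For the third approach, matching NARF descriptors by nearest-neighbor analysis might look like the sketch below, which adds a ratio test and a mutual-consistency check. Descriptor extraction itself (e.g. PCL's NARF implementation) is assumed to have been done elsewhere; this is not necessarily the paper's exact matcher.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor matching of 3D feature descriptors with a Lowe-style
    ratio test and a mutual-consistency check.
    desc_a, desc_b: (N, D) and (M, D) descriptor arrays."""
    tree_b = cKDTree(desc_b)
    d, nn_ab = tree_b.query(desc_a, k=2)   # two nearest neighbors in B
    tree_a = cKDTree(desc_a)
    _, nn_ba = tree_a.query(desc_b, k=1)   # nearest neighbor back in A
    return [(i, j) for i, (j, dd) in enumerate(zip(nn_ab[:, 0], d))
            if dd[0] < ratio * dd[1] and nn_ba[j] == i]
```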
The detection and classification of small surface targets at long ranges is a growing need for naval security. Simulations of a laser radar at 1.5 μm aimed at search, detection, and recognition of small maritime targets will be discussed. The data for the laser radar system will be based on present and realistic future technology. The simulated data generate signal waveforms for every pixel in the sensor field-of-view. From these we can also generate two-dimensional (2-D) and three-dimensional (3-D) range and intensity images. The simulations will incorporate typical target movements at different sea states, vessel courses, effects of atmospheric turbulence, and also different beam jitter. The laser pulse energy and repetition rate, as well as the receiver and detector parameters, have been kept the same during the simulations. We have also used a high-resolution (sub-centimeter) laser radar based on time-correlated single-photon counting to acquire examples of range profiles from different small model ships. The collected waveforms are compared with simulated waveforms based on 3-D models of the ships. A discussion of the classification potential based on information in 1-D, 2-D, and 3-D data, separately and in combination, is made versus different environmental conditions and system parameters.
In the electro-optical sensors and processing in urban operations (ESUO) study we pave the way for the European Defence Agency (EDA) group of Electro-Optics experts (IAP03) for a common understanding of the optimal distribution of processing functions between the different platforms. Combinations of local, distributed and centralized processing are proposed. In this way one can match processing functionality to the required power, and available communication systems data rates, to obtain the desired reaction times. In the study, three priority scenarios were defined. For these scenarios, present-day and future sensors and signal processing technologies were studied. The priority scenarios were camp protection, patrol and house search. A method for analyzing information quality in single and multi-sensor systems has been applied. A method for estimating reaction times for transmission of data through the chain of command has been proposed and used. These methods are documented and can be used to modify scenarios, or be applied to other scenarios. Present day data processing is organized mainly locally. Very limited exchange of information with other platforms is present; this is performed mainly at a high information level. Main issues that arose from the analysis of present-day systems and methodology are the slow reaction time due to the limited field of view of present-day sensors and the lack of robust automated processing. Efficient handover schemes between wide and narrow field of view sensors may however reduce the delay times. The main effort in the study was in forecasting the signal processing of EO-sensors in the next ten to twenty years. Distributed processing is proposed between hand-held and vehicle based sensors. This can be accompanied by cloud processing on board several vehicles. Additionally, to perform sensor fusion on sensor data originating from different platforms, and making full use of UAV imagery, a combination of distributed and centralized processing is essential. There is a central role for sensor fusion of heterogeneous sensors in future processing. The changes that occur in the urban operations of the future due to the application of these new technologies will be the improved quality of information, with shorter reaction time, and with lower operator load.
This paper describes a data collection on passive and active imaging and the preliminary analysis. It is part of an ongoing work on active and passive imaging for target identification using different wavelength bands. We focus on data collection at NIR-SWIR wavelengths but we also include the visible and the thermal region. Active imaging in NIR-SWIR will support the passive imaging by eliminating shadows during day-time and allow night operation. Among the applications that are most likely for active multispectral imaging, we focus on long-range human target identification. We also study the combination of active and passive sensing. The target scenarios of interest include persons carrying different objects and their associated activities. We investigated laser imaging for target detection and classification up to 1 km, assuming that another cueing sensor – passive EO and/or radar – is available for target acquisition and detection. Broadband or multispectral operation will reduce the effects of target speckle and atmospheric turbulence. Longer wavelengths will improve performance in low-visibility conditions due to haze, clouds and fog. We are currently performing indoor and outdoor tests to further investigate the target/background phenomena that are emphasized in these wavelengths. We also investigate how these effects can be used for target identification and image fusion. Performed field tests and the results of preliminary data analysis are reported.
The detection and classification of small surface targets at long ranges is a growing need for naval security. This paper
will discuss simulations of a laser radar at 1.5 μm aimed at search, detection and recognition of small maritime targets.
The data for the laser radar system will be based on present and realistic future technology.
The simulations will incorporate typical target movements at different sea states, vessel courses, effects of the atmosphere and, for given laser system parameters, different beam jitter. The laser pulse energy and repetition rate, as well as the receiver and detector parameters, have not been changed during the simulations.
A discussion of the classification potential based on information in 1D, 2D and 3D data separately and in combination
will be made versus different environmental conditions and system parameters. System issues when combining the laser radar with IR/TV and a range-Doppler radar will also be commented on.
Tomographic signal processing is used to transform multiple one-dimensional range profiles of a target from different
angles to a two-dimensional image of the object. The range profiles are measured by a time-correlated single-photon
counting (TCSPC) laser radar system with approximately 50 ps range resolution and a field of view that is wide
compared to the measured objects. Measurements were performed in a lab environment with the targets mounted on a rotation stage. We show successful reconstruction of 2D projections along the rotation axis of a boat model and removal of artefacts using a mask based on the convex hull. The independence of spatial resolution and the high sensitivity make this, at first glance, an interesting technology for very long range identification of passing objects such as high-altitude UAVs and orbiting satellites, but also for the opposite problem of ship identification from high-altitude platforms. To obtain an image with useful information, measurements from a large angular sector around the object are needed, which is hard to achieve in practice. Examples of reconstructions using 90 and 150° sectors are given. In addition, the projection of the final image is along the rotation axis of the measurement, and if this is not aligned with a major axis of the target the image information is limited. There are also practical problems to solve, for example that the distance from the sensor to the rotation centre needs to be known with an accuracy corresponding to the measurement resolution. The conclusion is that laser radar tomography is useful only when the sensor is fixed and the target rotates around its own axis.
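The tomographic transformation itself is standard filtered back-projection: the range profiles form a sinogram with one column per view angle. Below is a minimal sketch using scikit-image, assuming the sensor-to-rotation-centre distance has already been compensated so that each profile is centred on the rotation axis.

```python
import numpy as np
from skimage.transform import iradon

def reconstruct(profiles, angles_deg):
    """Filtered back-projection of TCSPC range profiles.
    profiles: (n_range_bins, n_angles) sinogram, one column per view.
    angles_deg: rotation-stage angles in degrees. The range-bin width
    sets the pixel size of the output image."""
    return iradon(profiles, theta=angles_deg,
                  filter_name='ramp', circle=False)
```

A convex-hull mask, as described above, can then be applied to the returned image to suppress back-projection artefacts outside the object.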
The new generation of laser-based imaging sensors enables collection of range images at video rate at the expense of
somewhat low spatial and range resolution. Combining several successive range images, instead of having to analyze
each image separately, is a way to improve the performance of feature extraction and target classification. In the robotics
community, occupancy grids are commonly used as a framework for combining sensor readings into a representation
that indicates passable (free) and non-passable (occupied) parts of the environment. In this paper we demonstrate how
3D occupancy grids can be used for outlier removal, registration quality assessment and measuring the degree of
unexplored space around a target, which may improve target detection and classification. Examples using data from a
maritime scene, acquired with a 3D FLASH sensor, are shown.
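One of the occupancy-grid uses named above is outlier removal: returns that fall in voxels hit only a few times across the accumulated frames are likely noise. A minimal sketch, with illustrative voxel size and threshold rather than the paper's values:

```python
import numpy as np

def occupancy_filter(points, voxel=0.2, min_hits=3):
    """Keep points whose voxel accumulated at least min_hits returns
    over the combined frames. points: (N, 3) array in metres."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv, counts = np.unique(keys, axis=0,
                               return_inverse=True, return_counts=True)
    return points[counts[inv] >= min_hits]
```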
The new generation of laser-based FLASH 3D imaging sensors enables data collection at video rate. This opens up for real-time data analysis but also sets demands on the signal processing. In this paper the possibilities and challenges with this
new data type are discussed. The commonly used focal plane array based detectors produce range estimates that vary
with the target's surface reflectance and target range, and our experience is that the built-in signal processing may not
compensate fully for that. We propose a simple adjustment that can be used even if some sensor parameters are not
known. The cost for the instantaneous image collection is, compared to scanning laser radar systems, lower range accuracy.
By gathering range information from several frames the geometrical information of the target can be obtained. We
also present an approach for how range data can be used to remove foreground clutter in front of a target. Further, we illustrate how range data enables target classification in near real-time and that the results can be improved if several frames are co-registered. Examples using data from forest and maritime scenes are shown.
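The "simple adjustment" could, for example, take the form of an intensity-dependent range-bias correction fitted on a flat reference surface. The sketch below is a guess at this kind of correction rather than the authors' exact method; the polynomial model and its degree are assumptions.

```python
import numpy as np

def fit_range_bias(intensity, range_error, deg=2):
    """Fit a polynomial model of range error vs. received intensity,
    using returns from a flat calibration surface at known range."""
    return np.polyfit(intensity, range_error, deg)

def correct_ranges(ranges, intensity, coeffs):
    """Subtract the predicted intensity-dependent bias from scene data."""
    return ranges - np.polyval(coeffs, intensity)
```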
Personnel positioning is important for safety in e.g. emergency response operations. In GPS-denied environments,
possible positioning solutions include systems based on radio frequency communication, inertial sensors, and cameras.
Many camera-based systems create a map and localize themselves relative to that. The computational complexity of
most such solutions grows rapidly with the size of the map. One way to reduce the complexity is to divide the visited
region into submaps. This paper presents a novel method for merging conditionally independent submaps (generated
using e.g. EKF-SLAM) by the use of smoothing. Using this approach it is possible to build large maps in close to linear
time. The method is demonstrated in two indoor scenarios, where data was collected with a trolley-mounted stereo vision system.
A Bayesian approach for data reduction based on spatial filtering is proposed that enables detection of targets partly occluded by natural forest. The framework aims at creating a synergy between terrain mapping and target detection. It is demonstrated how spatial features can be extracted and combined in order to detect target samples in cluttered environments. In particular, it is illustrated how a priori scene information and assumptions about targets can be translated into algorithms for feature extraction. We also analyze the coupling between features and assumptions, because it gives knowledge about which features are general enough to be useful in other environments and which are tailored for a specific situation. Two types of features are identified: non-target indicators and target indicators. The filtering approach is based on a combination of several features. A theoretical framework for combining the features into a maximum likelihood classification scheme is presented. The approach is evaluated using data collected with a laser-based 3-D sensor in various forest environments with vehicles as targets. Over 70% of the target points are detected at a false-alarm rate of <1%. We also demonstrate how selecting different feature subsets influences the results.
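The maximum likelihood combination can be sketched as a naive-Bayes-style sum of per-feature log-likelihoods under a target model and a clutter model. The per-feature densities and the prior are assumed to be fitted and chosen elsewhere; this is a structural sketch, not the paper's exact scheme.

```python
import numpy as np

def classify(features, logpdf_target, logpdf_clutter, prior=0.5):
    """features: dict name -> (N,) array of feature values per sample.
    logpdf_target / logpdf_clutter: dict name -> callable returning the
    per-sample log-likelihood under each hypothesis.
    Returns a boolean array: True where the target posterior is larger."""
    lt = sum(logpdf_target[k](features[k]) for k in features) \
        + np.log(prior)
    lc = sum(logpdf_clutter[k](features[k]) for k in features) \
        + np.log(1.0 - prior)
    return lt > lc
```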
Laser-based 3D sensors measure range with high accuracy and allow for detection of objects behind various types of
occlusion, e.g., tree canopies. Range information is valuable for detection of small objects that are typically represented
by 5-10 pixels in the data set. Range information is also valuable in tracking problems when the tracked object is
occluded during parts of its movement and when there are several objects in the scene. In this paper, on-going work on detection and tracking is presented. Detection of partly occluded vehicles is discussed. To detect partly occluded
objects we take advantage of the range information for removing foreground clutter. The target detection approach is
based on geometric features, for example local surface detection, shadow analysis and height-based detection. Initial
results on tracking of humans are also presented. The benefits of range information are discussed. Results are
illustrated using outdoor measurements with a 3D FLASH LADAR sensor and a 3D scanning LADAR.
In several laser radar applications, detection of targets with high resolution and range accuracy is of importance. Time-of-flight time-correlated single-photon counting (TCSPC) provides a method to accomplish range profiling at longer ranges with high accuracy and resolution. The performance of a TCSPC system used for optical range profiling suffers from the influence of atmospheric turbulence effects causing perturbations of the registered time histograms. This is mostly manifested in propagation paths close to the ground. In this work, a TCSPC system based on a monostatic transmitter/receiver head, a picosecond laser operating at a high pulse-repetition frequency, a single-photon detector, and acquisition electronics with high timing resolution was used to study the influence of atmospheric turbulence on registered pulse responses from test targets. The turbulence conditions were monitored during the experiments, and the influence of turbulence effects on the pulse response is discussed. The experimental results are considered in relation to existing turbulence models. Implications on system performance for a TCSPC time-of-flight range profiling system are illustrated.
Laser-based 3D sensors measure range with high accuracy and allow for detection of several reflecting surfaces for each
emitted laser pulse. This makes them particularly suitable for sensing objects behind various types of occlusion, e.g.
camouflage nets and tree canopies. Nevertheless, automatic detection and recognition of targets in forested areas is a
challenging research problem, especially since foreground objects often cause targets to appear as fragmented.
In this paper we propose a sequential approach for detection and recognition of man-made objects in natural forest
environments using data from laser-based 3D sensors. First, ground samples and samples too far above the ground (that
cannot possibly originate from a target) are identified and removed from further processing. This step typically results in
a dramatic data reduction. Possible target samples are then detected using a local flatness criterion, based on the
assumption that targets are among the most structured objects in the remaining data. The set of samples is reduced
further through shadow analysis, where any possible target locations are found by identifying regions that are occluded
by foreground objects. Since we anticipate that targets appear as fragmented, the remaining samples are grouped into a
set of larger segments, based on general target characteristics such as maximal dimensions and generic shape. Finally,
the segments, each of which corresponds to a target hypothesis, undergo automatic target recognition in order to find the
best match from a model library. The approach is evaluated in terms of ROC on real data from scenes in forested areas.
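The local flatness criterion can be realized with the eigenvalues of the local covariance: planar patches have two large and one small eigenvalue. The sketch below shows one such measure; the exact criterion and thresholds used in the paper may differ.

```python
import numpy as np

def flatness(neighbourhood):
    """Eigenvalue-based flatness of a local point neighbourhood.
    neighbourhood: (N, 3) array of points around the query point.
    Returns a value near 1 for planar patches, lower for volumetric
    clutter such as foliage."""
    c = np.cov((neighbourhood - neighbourhood.mean(axis=0)).T)
    w = np.linalg.eigvalsh(c)            # ascending: w[0] <= w[1] <= w[2]
    return 1.0 - w[0] / max(w[1], 1e-12)
```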
In object/target reconstruction and recognition based on laser radar data, the range value's accuracy is important. The range data accuracy depends on the accuracy in the laser radar's detector, especially the algorithm used for time-of-flight estimation. In this paper, a general direct-detection laser radar system applicable for hard-target measurements is modeled. The time- and range-dependent laser radar cross sections are derived for some simple geometric shapes (plane, cone, sphere, and paraboloid). The cross-section models are used, in simulations, to find the proper statistical distribution of uncertainties in time-of-flight range estimations. Three time-of-flight estimation algorithms are analyzed: peak detection, constant-fraction detection, and matched filter. The detection performance for various shape conditions and signal-to-noise ratios is analyzed. Two simple shape reconstruction examples are shown, and the detectors' performance is compared with the Cramér-Rao lower bound. The performance of the peak detection and the constant-fraction detection is more dependent on the shape and noise level than that of the matched filter. For line fitting the matched filter performs close to the Cramér-Rao lower bound.
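The three analyzed estimators can be sketched as follows for a single sampled waveform. The constant-fraction step shown here is the simple threshold-at-a-fraction-of-peak variant, and the matched filter is a discrete cross-correlation with the expected pulse shape; interpolation details are illustrative.

```python
import numpy as np

def tof_estimates(t, w, template, frac=0.5):
    """Time-of-flight from one received waveform w sampled at times t.
    template: expected pulse shape for the matched filter;
    frac: constant-fraction threshold level."""
    # 1. Peak detection: time of the maximum sample.
    t_peak = t[np.argmax(w)]
    # 2. Constant-fraction detection: first leading-edge crossing of
    #    frac * max, linearly interpolated between samples.
    thr = frac * w.max()
    i = max(int(np.argmax(w >= thr)), 1)
    t_cfd = np.interp(thr, [w[i - 1], w[i]], [t[i - 1], t[i]])
    # 3. Matched filter: lag maximising correlation with the template.
    corr = np.correlate(w, template, mode='full')
    lag = int(corr.argmax()) - (len(template) - 1)
    t_mf = t[0] + lag * (t[1] - t[0])
    return t_peak, t_cfd, t_mf
```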
This paper presents the Swedish land mine and UXO detection project "Multi Optical Mine Detection System," MOMS, and the research carried out so far. The goal for MOMS is to provide knowledge and competence for fast detection of mines, especially surface laid mines, by the use of both active and passive optical sensors. A main activity was to collect information and gain knowledge about phenomenology; i.e. features or characteristics that can give a detectable signature or contrast between object and background, and to carry out a phenomenology assessment. A large effort has also been put into a scene description to support phenomenology assessment and provide a framework for further experimental campaigns. Also, some preliminary experimental results are presented and discussed.
In this paper, we present techniques related to registration and change detection using 3D laser radar data. First, an experimental evaluation of a number of registration techniques based on the Iterative Closest Point algorithm is presented. As an extension, an approach for removing noisy points prior to the registration process by keypoint detection is also proposed. Since the success of accurate registration is typically dependent on a satisfactorily accurate starting estimate, coarse registration is an important functionality. We address this problem by proposing an approach for coarse 2D registration, which is based on detecting vertical structures (e.g. trees) in the point sets and then finding the transformation that gives the best alignment. Furthermore, a change detection approach based on voxelization of the registered data sets is presented. The 3D space is partitioned into a cell grid and a number of features for each cell are computed. Cells for which features have changed significantly (statistical outliers) then correspond to significant changes.
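A minimal sketch of the voxel-based change detection: count points per cell in each registered data set and flag cells whose count difference is a statistical outlier. The cell feature (point count), voxel size, and outlier rule are illustrative stand-ins for the richer feature set described above.

```python
import numpy as np

def voxel_counts(points, voxel=0.5):
    """Point count per occupied voxel, keyed by integer voxel index."""
    counts = {}
    for k in map(tuple, np.floor(points / voxel).astype(int)):
        counts[k] = counts.get(k, 0) + 1
    return counts

def changed_cells(before, after, k=3.0):
    """Cells whose count difference deviates from the mean difference
    by more than k standard deviations (statistical outliers)."""
    cells = sorted(set(before) | set(after))
    d = np.array([after.get(c, 0) - before.get(c, 0) for c in cells])
    flags = np.abs(d - d.mean()) > k * d.std()
    return [c for c, f in zip(cells, flags) if f]
```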
As a part of the Swedish mine detection project MOMS, an initial field trial was conducted at the Swedish EOD and
Demining Centre (SWEDEC). The purpose was to collect data on surface-laid mines, UXO, submunitions, IEDs, and
background with a variety of optical sensors, for further use in the project. Three terrain types were covered: forest,
gravel road, and an area which had recovered after total removal of all vegetation some years before. The sensors used in
the field trial included UV, VIS, and NIR sensors as well as thermal, multi-spectral, and hyper-spectral sensors, 3-D laser
radar and polarization sensors. Some of the sensors were mounted on an aerial work platform, while others were placed
on tripods on the ground. This paper describes the field trial and presents some initial results obtained from the collected data.
A fusion approach in a query-based information system is presented. The system is designed for querying multimedia databases, and is here applied to target recognition using heterogeneous data sources. The recognition process is coarse-to-fine, with an initial attribute estimation step and a following matching step. Several sensor types and algorithms are involved in each of these two steps. The matching results are observed to be independent of the origin of the estimation results. This allows for distribution of data between algorithms in an intermediate fusion step, without risk of data incest, and increases the overall chance of recognising the target. An implementation of the system is described.
This paper will give an overview of 3D laser sensing and related activities at the Swedish Defence Research Agency (FOI) in view of system needs and applications. Our activities include data collection of laser signatures for targets and backgrounds at various wavelengths. We will give examples of such measurements. The results are used in building synthetic environments, modelling of laser radar systems, and as training sets for development of algorithms for target recognition and weapon applications. Present work on rapid environment assessment includes the use of data from airborne lasers for terrain mapping and depth sounding. Methods for automatic target detection and object classification (buildings, trees, man-made objects, etc.) have been developed together with techniques for visualisation. This will be described in more detail in a separate paper. The ability to find and correctly identify "difficult" targets, being either at very long ranges, hidden in the vegetation, behind windows or under camouflage, is one of the top priorities for any military force. Examples of such work will be given using range-gated imagery and 3D scanning laser radars. Different kinds of signal processing approaches have been studied and will be presented in more detail in two separate papers. We have also developed modelling tools for both 2D and 3D laser imaging. Finally, we will discuss the use of 3D laser radars in some system applications in the light of new component technology, processing needs and sensor fusion.
Over the years, imaging laser radar systems have been developed for both military and civilian (topographic) applications. Among the applications, 3D data is used for environment modeling and object reconstruction and recognition. The data processing methods are mainly developed separately for military or topographic applications, seldom with both application areas in mind. In this paper, an overview of methods from both areas is presented. First, some of the work on ground surface estimation and classification of natural objects, for example trees, is described. Once natural objects have been detected and classified, we review some of the extensive work on reconstruction and recognition of man-made objects. Primarily we address the reconstruction of buildings and recognition of vehicles. Further, some methods for evaluation of measurement systems and algorithms are described. Models of some types of laser radar systems are reviewed, based on both physical and statistical approaches, for analysis and evaluation of measurement systems and algorithms. The combination of methods for reconstruction of natural and man-made objects is also discussed. By combining methods originating from civilian and military applications, we believe that the tools to analyze a whole scene become available. In this paper we show examples where methods from both application fields are used to analyze a scene.
We present an approach to a general decision support system. The aim is to cover the complete process for automatic
target recognition, from sensor data to the user interface. The approach is based on a query-based information
system, and includes tasks like feature extraction from sensor data, data association, data fusion and situation
analysis. Currently, we are working with data from laser radar, infrared cameras, and visual cameras, studying target
recognition from cooperating sensors on one or several platforms. The sensors are typically airborne and at low altitude.
The processing of sensor data is performed in two steps. First, several attributes are estimated from the (unknown
but detected) target. The attributes include orientation, size, speed, temperature etc. These estimates are
used to select the models of interest in the matching step, where the target is matched with a number of target models,
returning a likelihood value for each model. Several methods and sensor data types are used in both steps.
The user communicates with the system via a visual user interface, where, for instance, the user can mark an
area on a map and ask for hostile vehicles in the chosen area. The user input is converted to a query in ΣQL, a query
language developed for this type of application, and an ontological system decides which algorithms should be
invoked and which sensor data should be used. The output from the sensors is fused by a fusion module and answers
are given back to the user. The user does not need to have any detailed technical knowledge about the sensors
(or which sensors are available), and new sensors and algorithms can easily be plugged into the system.
Over the years imaging laser radar systems have been developed for military and civilian applications. Among the applications we note collection of 3D data for terrain modeling and object recognition. One part of the object recognition process is to estimate the size and orientation of the object. This paper concerns a vehicle size and orientation estimation process based on scanning laser radar data. Methods for estimation of length and width of vehicles are proposed. The work is based on the assumption that from a top view most vehicles' edges are approximately of rectangular shape. Thus, we have a rectangle fitting problem. The first step in the process is sorting of data into lists containing object data and data from the ground closest to the object. Then a rectangle with minimal area is estimated based on object data only. We propose an algorithm for estimation of the minimum rectangle area containing the convex hull of the object data. From the rectangle estimate, estimates of the length and width of the object can be retrieved. The first rectangle estimate is then improved using least squares methods based on both object and ground data. Both linear and nonlinear least squares methods are described. These improved estimates of the length and width are less biased compared to the initial estimates. The methods are applied to both simulated and real laser radar data. The use of the minimum rectangle estimator to retrieve initial parameters for fitting of more complex shapes is discussed.
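The minimum-area rectangle containing the convex hull can be found by testing only the orientations of the hull edges, since the optimal rectangle is flush with one of them. Below is a minimal sketch of this initial estimation step; the subsequent least-squares refinement using ground data is omitted.

```python
import numpy as np
from scipy.spatial import ConvexHull

def min_area_rectangle(xy):
    """Minimum-area bounding rectangle of 2D (top-view) object samples.
    xy: (N, 2) array. Returns (length, width) estimates."""
    hull = xy[ConvexHull(xy).vertices]                 # hull vertices, CCW
    edges = np.diff(np.vstack([hull, hull[:1]]), axis=0)
    angles = np.arctan2(edges[:, 1], edges[:, 0])
    best = (np.inf, 0.0, 0.0)                          # (area, length, width)
    for a in angles:
        # Rotate the hull so the candidate edge is axis-aligned, then
        # measure the axis-aligned bounding box.
        c, s = np.cos(-a), np.sin(-a)
        r = hull @ np.array([[c, -s], [s, c]]).T
        w, h = r.max(axis=0) - r.min(axis=0)
        if w * h < best[0]:
            best = (w * h, max(w, h), min(w, h))
    return best[1], best[2]
```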
Gated viewing using short-pulse lasers and fast cameras offers many new possibilities in imaging compared with passive EO imaging. Among these we note ranging capability, large target-to-background contrast also in low visibility, and good penetration capability through obscurants and vegetation as well as through shadows in buildings, cars, etc. We also note that short-wavelength laser systems have better angular resolution than long-wave infrared systems of the same aperture size. This gives an interesting potential for combined IR and laser systems for target detection and classification. Besides military applications, civilian applications of gated viewing for search and rescue, vehicle enhanced vision, and other uses are in progress. This presentation investigates the performance of gated viewing systems during different atmospheric conditions, including obscurants, and gives examples of experimental data. The paper also deals with signal processing of gated viewing images for target detection. This is performed in two steps. First, image frames containing information of interest are found. In a second step, those frames are investigated further to evaluate if man-made objects are present. In this step, a sequence of images (video frames) is set up as a 3-D volume to incorporate spatial information. The object will then be detected using a set of quadrature filters operating on the volume.
Laser radar images differ generally from traditional contrast images in that they can be regarded as 3D geometric images. For this reason, methods for classification of man-made objects in laser radar images must be concerned not only with the 3D geometry of the objects but also with the uncertainties present in the images. These uncertainties are mainly due to deviations between registered and real positions, but also to missing data points. It will be demonstrated how the problem with the uncertainties in the images can be overcome with a technique that is primarily qualitative. The objects are first extracted from the image, followed by a step in which their edges are determined. In a last step, this representation is transformed into a formal qualitative representation used for classification of the objects through a matching process where the target objects are matched against objects in an object library. Finally, a way to calculate a possibility value that indicates our belief in the result of the match of a certain object is described.