We address the problem of estimating the degree of polarization in active polarimetric images acquired under laser illumination. This technique provides two images of the same scene, both perturbed by speckle noise. Because the reflected intensity is not homogeneous, it can be preferable to estimate the degree of polarization from the "Orthogonal State Contrast Image". It has also been shown that a simple nonlinear transformation of this image leads to data perturbed by additive symmetrical noise, to which simple and efficient estimation and detection techniques can be applied. In this paper we analyse the estimation properties of the degree of polarization in these different imaging configurations by comparing the Cramér-Rao bounds for unbiased estimation. From this analysis we deduce some useful prescriptions for exploiting polarimetric data from such imaging systems.
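As a purely illustrative numerical sketch of the estimation problem (the model below — single-look fully developed speckle, giving exponentially distributed channel intensities with means proportional to (1 ± P)/2 — is a standard textbook assumption, not a result taken from the paper), the degree of polarization can be estimated from the two channel means, the same combination of intensities that underlies the Orthogonal State Contrast image:

```python
import numpy as np

# Hedged sketch: single-look fully developed speckle makes each channel's
# intensity exponentially distributed; for a true degree of polarization P,
# the two channel means are proportional to (1 + P)/2 and (1 - P)/2.
rng = np.random.default_rng(0)
P_true = 0.6
n = 4096                                     # pixels in a homogeneous region
I1 = rng.exponential((1 + P_true) / 2, n)    # parallel-polarized channel
I2 = rng.exponential((1 - P_true) / 2, n)    # orthogonal channel

# Moment-based estimator built on the channel means (the same intensity
# combination that defines the Orthogonal State Contrast at image level).
P_hat = (I1.mean() - I2.mean()) / (I1.mean() + I2.mean())
print(round(P_hat, 3))
```

Repeating this over many noise realizations gives an empirical estimator variance that can be compared against a Cramér-Rao bound, which is the kind of comparison the paper carries out analytically.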
High resolution ground mapping is of interest for survey and management of long linear features such as roads, railways and pipelines, and for georeferencing of areas such as flood plains for hydrological purposes. ATLAS (Airborne Topographic Laser System) is an active linescan system operating at the eyesafe wavelength of 1.5 μm. Built for airborne survey, it is currently certified for use on a Twin Squirrel helicopter for operation from low levels to heights above 500 feet, allowing commercial survey in built-up areas. The system operates at a pulse repetition frequency of 56 kHz with a line completed in 15 ms, giving 36 points/m² at the surface at the design flight speed. At each point the range to the ground is measured together with the scan angle of the system. These data are combined with a system attitude measurement from an integrated inertial navigation system and with system position derived from differential GPS data aboard the platform. A recording system captures the data with a synchronised time-stamp to enable post-processed reconstruction of a cloud of data points giving a three-dimensional representation of the terrain, with the points located in absolute Earth-referenced coordinates to a precision of 5 cm in three axes. This paper summarises the design, harmonisation, evaluation and performance of the system, and shows examples of survey data.
We describe a new project (acronym LISATNAS), approved by the Lithuanian Research Council in 2003, devoted to the development of differential absorption lidar (DIAL) and stationary spectrometric systems based on a mid-infrared tunable Optical Parametric Oscillator (OPO) pumped by compact Q-switched lasers. The purpose of the project is to construct a mobile infrared lidar, mounted in a truck, for selective pollutant analysis, with a spatial resolution of a few meters over distances ranging from hundreds of meters to a few kilometers. A reliable cascade mid-IR generation scheme was developed. Pulse energies up to a millijoule in the mid-IR have already been obtained using a nonlinear AgGaSe2 crystal. Optoacoustic and multipass cells were constructed for the stationary spectrometers. Preliminary detection results for CO2, CH4, H2O and other gases in the ppm concentration range show good sensitivity. Special pollutants were synthesized by the project's chemical group for the spectrometric experiments: polyatomic nitrocompounds such as trinitrotoluene (TNT, trotyl), dinitrotoluene (DNT), mononitrotoluene (MNT) and RDX (hexahydro-1,3,5-triazine). The mobile DIAL system, based on a laser tunable in the 8-12 μm region, a 10" gold-mirror telescope and a cooled MCT detector with control electronics, is under construction and should be finished in 2005.
A double panoramic lens with an ideal equi-distance projection scheme has been designed and fabricated with a view to realizing a vision-based panoramic rangefinder. The vertical field of view is 110°, extending from the nadir (-90°) to 20° above the horizon.
Potential approaches for the simulation of the effect of atmospheric turbulence and target speckle on active imaging systems are considered. In particular, computationally tractable methods of applying representative degradation to simulated burst illumination laser imagery are investigated. This is motivated by the fact that the traditional phase screens approach for the simulation of the effect of a turbulent atmosphere on light propagation can require large computing resources to implement parameter sets approaching those appropriate for realistic scenarios. This is undesirable for scene simulation applications where there are typically already considerable demands on computing resources. This is the context in which the various options considered are assessed.
Exciting developments are taking place in 3D-sensing laser radars. Scanning systems are well established for mapping from airborne and ground sensors. 3D-sensing focal plane arrays (FPAs) enable a full range and intensity image to be captured in one laser shot. Gated viewing systems also produce 3D target information. Many applications for 3D laser radars are found in robotics, rapid terrain visualization, augmented vision, reconnaissance and target recognition, weapon guidance including aim-point selection, and others. Network-centric warfare will demand high resolution geo-data for a common description of the environment. At FOI we have a measurement program to collect data relevant for 3D laser radars using airborne and tripod-mounted equipment. Data collection spans from single-pixel waveform collection (1D), through 2D range-gated imaging, to full 3D imaging using scanning systems. This paper describes 3D laser data from different campaigns, with emphasis on range distributions and reflection properties of targets and backgrounds under different seasonal conditions. Examples of the use of the data for system modeling, performance prediction and algorithm development are given. Different metrics to characterize the data sets are also discussed.
This paper describes the experimental research efforts performed at the Swedish Defence Research Agency (FOI) concerning fundamental characterization of different wall and clothing materials as well as through-the-wall imaging.
Results from on-going activities at FOI concerning material characterization in the millimeter wave range are presented. Wide-band measurements of five building materials have been carried out in two different ways. In the frequency range 0.04-40 GHz a vector network analyzer was used and the samples were positioned in waveguides. In the 2-120 GHz region a scalar network analyzer was used and transmission measurements of the materials were performed in free space. Transmission measurements in free space of two clothing materials were also performed.
Results from measurements of a human target standing behind an inner wall are presented.
We propose to review two concepts that can be used for target detection and identification in optronic systems: lidar-radar and multispectral polarimetric active imaging.
The lidar-radar concept uses an optically pre-amplified, intensity-modulated lidar whose modulation frequency lies in the microwave domain (1-10 GHz). Such a system combines the directivity of laser beams with mature radar processing. An intensity-modulated or dual-frequency laser beam is directed onto a target; the backscattered intensity is collected by an optical system, passes through an optical preamplifier, and is detected on a high-speed photodiode in a direct detection scheme. Radar-type processing then extracts the range, speed and profile of the target for identification purposes. The combination of a spatially multimode amplifier and direct detection provides low sensitivity to atmospheric turbulence and a large field of view. We present here the analysis of a lidar-radar that uses a radar waveform dedicated to range resolution. Preliminary experimental results are presented and discussed.
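The radar-type processing step can be mimicked digitally. The sketch below (all parameter values invented for illustration; the optical preamplifier and photodiode chain are abstracted away) puts a microwave chirp on the intensity envelope, delays it by a round trip, and recovers range by matched filtering, which is the standard radar waveform approach for range resolution:

```python
import numpy as np

# Hedged sketch of radar-style range extraction from an intensity-modulated
# lidar return.  Chirp parameters and target range are illustrative only.
c = 3e8                        # speed of light (m/s)
fs = 8e9                       # sample rate (Hz)
T = 0.5e-6                     # chirp duration (s)
f0, f1 = 1e9, 3e9              # chirp sweep, within the 1-10 GHz microwave band
t = np.arange(int(T * fs)) / fs
k = (f1 - f0) / T
ref = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))   # intensity envelope

R_true = 150.0                                        # target range (m)
delay = int(round(2 * R_true / c * fs))               # round-trip delay, samples
echo = np.zeros(len(ref) + delay)
echo[delay:] = ref                                    # delayed return

# Matched filtering (correlation with the reference) locates the delay.
corr = np.correlate(echo, ref, mode="valid")
R_est = np.argmax(corr) / fs * c / 2
print(R_est)
```

The same correlation output would also carry Doppler and target-profile information in a more complete processing chain.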
For the multispectral polarimetric active imaging concept, the acquisition, at different wavelengths, of images coded in intensity and in degree of polarization yields information about the spectral signatures of targets as well as their polarization properties. A theoretical analysis and an experimental validation of this technique are presented. Preliminary experiments using a monostatic configuration will also be presented.
The requirement for realistic simulation of military scenarios arises from a dearth of suitable and accessible measured data. Furthermore, measurement campaigns are restricted by the trial locality and the availability of appropriate targets. Targets located in and around tree-lines are of particular interest, as they present scenarios that conventional broadband sensor systems find problematic. Utilising the spectral component of scenes, through the use of multi- or hyperspectral technologies, can be beneficial in detecting these difficult targets.
In this paper we describe the use of a Monte Carlo ray-tracing model (FLIGHT) to simulate forest scenes. This model is capable of calculating the BRDF properties specific to forests. Targets are also incorporated in these simulations, and we describe contrast discrimination of the target from the background. This technique has application for targets in deep hide as well as at the forest edge (i.e., in a tree-line).
Assessment methods that can be applied to simulated hyperspectral imagery are investigated, to determine how realistic these scenes are in comparison to measurement. This is of key importance in ensuring that simulated imagery, as well as measured data, can be used to assess algorithmic techniques to detect and discriminate targets. Statistical assessment measures are discussed that utilise the spatial and spectral properties of the image.
Most target detection algorithms employed in hyperspectral remote sensing rely on a measurable difference between the spectral signatures of the target and background. Matched filter techniques that use a set of library spectra as filters for target detection are often found unsatisfactory because of material variability and atmospheric effects in field data. The aim of this paper is to report an algorithm which extracts features directly from the scene to act as matched filters for target detection. Methods based on spectral unmixing using geometric simplex volume maximisation (SVM) and independent component analysis (ICA) were employed to generate features of the scene. Target-like and background-like features are then differentiated, and automatically selected, from the endmember set of the unmixing result according to their statistics. Anomalies are then detected from the selected endmember set, and their corresponding spectral characteristics are extracted from the scene to serve as a bank of matched filters for detection. This method, given the acronym SAFED, has a number of advantages for target detection compared to previous techniques which use the orthogonal subspace of the background features. This paper reports the detection capability of the new technique on an example simulated hyperspectral scene. Similar results using hyperspectral military data show high detection accuracy with negligible false alarms. Further potential applications of this technique, for false alarm rate (FAR) reduction via multiple approach fusion (MAF) and as a means of thresholding the anomaly detection technique, are outlined.
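The final step — using a scene-derived spectrum, rather than a library spectrum, as a matched filter — can be illustrated in miniature. The sketch below is not the SAFED algorithm: the scene, the "extracted" target spectrum `s`, and the background-whitened filter form s^T C^-1 (x - m) are invented stand-ins for the general idea:

```python
import numpy as np

# Toy matched-filter detection with a scene-derived target spectrum.
# All data are synthetic; the filter is the classical background-whitened
# matched filter, not the paper's SAFED pipeline.
rng = np.random.default_rng(1)
bands, n_pix = 30, 2000
scene = rng.normal(0.5, 0.05, (n_pix, bands))    # flat-ish background
s = np.linspace(0.2, 0.9, bands)                 # "extracted" target spectrum
scene[1234] = 0.3 * s + 0.7 * scene[1234]        # subpixel target at index 1234

m = scene.mean(axis=0)                           # background mean
C = np.cov(scene, rowvar=False)                  # background covariance
Ci = np.linalg.inv(C + 1e-6 * np.eye(bands))     # regularized inverse
w = Ci @ (s - m)                                 # whitened matched filter
scores = (scene - m) @ w                         # per-pixel detection score
print(int(np.argmax(scores)))   # → 1234
```

Thresholding `scores` rather than taking the argmax would give a detection map with a controllable false alarm rate.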
This paper reports the results of a study of how atmospheric correction techniques (ACT) enhance target detection in hyperspectral remote sensing, using different sets of real data. Based on the data employed in this study, it has been shown that ACT can reduce the masking effect of the atmosphere and effectively improve spectral contrast. Using the standard K-means cluster-based unsupervised classifier, the accuracy of classification obtained from the atmospherically corrected data is almost an order of magnitude better than that achieved using the radiance data. This enhancement is entirely due to the improved separability of the classes in the atmospherically corrected data. Moreover, it has been found that intrinsic information concerning the nature of the imaged surface can be retrieved from the atmospherically corrected data; this has been done to within an error of 5% using the model-based atmospheric correction package ATCOR.
The majority of anomaly detection processes used for hyperspectral image data are based on pixel-by-pixel whitening and thresholding operations using local area statistics. This paper discusses an alternative approach to anomaly detection in which a mixture model is fitted to the whole of the image. This mixture model may be used to segment the image into component memberships and these may, in turn, be used for anomaly detection.
In this study the mixture model is generated for the whole scene using the stochastic expectation maximization (SEM) algorithm. This is parameterized such that mixture components consisting of small numbers of pixels are eliminated. The maximum a posteriori (MAP) mixture component for each pixel is then determined. The pixel may then be examined using a conventional statistical hypothesis test to see whether it is plausible that it was drawn from the distribution of the identified component, at a given significance level.
This anomaly detection process has been examined using both synthetic and real hyperspectral imagery and results are presented here for real data containing no known military targets and for synthesized imagery which includes military target pixels. A range of results is presented for different parameterizations of the SEM algorithm and significance test. These results include the component map of the imagery and anomalous pixel maps at given significance levels.
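The MAP-component-plus-significance-test idea can be sketched in miniature. The sketch below is a hedged simplification: plain EM stands in for SEM, two 2-D Gaussian components stand in for a hyperspectral mixture, and all data are synthetic:

```python
import numpy as np

# Toy version of the anomaly test: fit a 2-component Gaussian mixture with
# EM, assign each pixel its MAP component, then flag pixels whose squared
# Mahalanobis distance to that component exceeds a chi-square threshold.
rng = np.random.default_rng(2)
a = rng.normal([0, 0], 0.5, (500, 2))       # background component 1
b = rng.normal([4, 4], 0.5, (500, 2))       # background component 2
anomaly = np.array([[2.0, -3.0]])           # pixel from neither component
X = np.vstack([a, b, anomaly])              # anomaly is row index 1000

mu = np.array([[1.0, 1.0], [3.0, 3.0]])
cov = np.array([np.eye(2), np.eye(2)])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibilities (unnormalized Gaussian densities x weights)
    r = np.empty((len(X), 2))
    for k in range(2):
        d = X - mu[k]
        q = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov[k]), d)
        r[:, k] = pi[k] * np.exp(-0.5 * q) / np.sqrt(np.linalg.det(cov[k]))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: weighted means, covariances and mixing proportions
    for k in range(2):
        w = r[:, k]
        mu[k] = w @ X / w.sum()
        d = X - mu[k]
        cov[k] = (w[:, None] * d).T @ d / w.sum()
        pi[k] = w.mean()

# MAP component per pixel, then the hypothesis test: for a 2-D Gaussian,
# P(D2 > t) = exp(-t/2), so significance 1e-3 gives threshold -2 ln(1e-3).
map_k = np.argmax(r, axis=1)
D2 = np.array([(X[i] - mu[k]) @ np.linalg.inv(cov[k]) @ (X[i] - mu[k])
               for i, k in enumerate(map_k)])
anomalous = np.where(D2 > -2 * np.log(1e-3))[0]
print(anomalous)   # the injected pixel (index 1000) should be flagged
```

At a significance level of 10⁻³ over 1000 genuine background pixels, roughly one false alarm is expected, illustrating the trade-off the paper explores across different parameterizations.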
The joint transform correlator (JTC) is one of the two main optical image processing architectures and provides a highly effective way of comparing images in a wide range of applications. Traditionally an optical correlator is used to compare an unknown input scene with a pre-captured reference image library, to detect whether the reference occurs within the input. There is a new class of application in which the JTC is used as an image comparator: no known reference image is available; instead, frames from a video sequence form both the input and the reference. The JTC input plane is formed by combining the current frame with the previous frame of the sequence; if the frames match, there will be a correlation peak. If objects move, the peaks move (tracking), and if something changes dramatically in the scene, the correlation between the two frames is lost. This forms the basis of a very powerful application for the JTC in defense and security: any change in the scene can be recorded and, with the inherent shift invariance of the correlator, any movement of objects in the scene can also be detected. A major limitation of the JTC is its intolerance to rotation and scale changes in images: the strength of the correlation signal decreases as the input object rotates or varies in scale relative to the reference object. We have designed a binary phase-only filter using the direct binary search (DBS) algorithm for rotation-invariant pattern recognition, to be implemented on a JTC and compared to a classical synthetic discriminant function (SDF) filter. Results show that the performance of the DBS filter is better than that of the SDF filter.
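The frame-to-frame comparison can be illustrated with its digital equivalent. A real JTC forms the joint power spectrum optically; the sketch below performs the same Fourier-plane correlation numerically on two made-up frames, recovering object motion from the correlation peak position:

```python
import numpy as np

# Digital stand-in for JTC frame-to-frame comparison: correlate consecutive
# frames via the Fourier plane.  The peak location tracks object motion; a
# collapsed peak would signal a dramatic scene change.  Frames are synthetic.
def frame_with_object(dx, dy, size=64):
    f = np.zeros((size, size))
    f[20 + dy:28 + dy, 20 + dx:28 + dx] = 1.0   # simple square "object"
    return f

prev = frame_with_object(0, 0)
curr = frame_with_object(5, 3)                  # object moved by 5 cols, 3 rows

# Cross-correlation via the frequency domain (circular correlation).
F = np.conj(np.fft.fft2(prev)) * np.fft.fft2(curr)
corr = np.abs(np.fft.ifft2(F))
peak = np.unravel_index(np.argmax(corr), corr.shape)
shift = [(p + 32) % 64 - 32 for p in peak]      # unwrap the circular shift
print(shift)   # → [3, 5]  (row shift, column shift)
```

The shift-invariance property mentioned above is exactly what makes the peak move with the object rather than vanish.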
A technique for recognition of vehicles in terms of direction, distance, and rate of change is presented. This represents very early work on this problem with significant hurdles still to be addressed. These are discussed in the paper. However, preliminary results also show promise for this technique for use in security and defense environments where the penetration of a perimeter is of concern. The material described herein indicates a process whereby the protection of a barrier could be augmented by computers and installed cameras assisting the individuals charged with this responsibility. The technique we employ is called Finite Inductive Sequences (FI) and is proposed as a means for eliminating data requiring storage and recognition where conventional mathematical models don’t eliminate enough and statistical models eliminate too much. FI is a simple idea and is based upon a symbol push-out technique that allows the order (inductive base) of the model to be set to an a priori value for all derived rules. The rules are obtained from exemplar data sets, and are derived by a technique called Factoring, yielding a table of rules called a Ruling. These rules can then be used in pattern recognition applications such as described in this paper.
A portable programmable opto-electronic analogic CNN computer (Laptop-POAC) has been built and used to recognize and track targets. Its kernel processor is a novel type of high performance optical correlator based on the use of bacteriorhodopsin (BR) as a dynamic holographic material. This optical CNN implementation combines the optical computer's high speed, high parallelism (≈10⁶ channels) and large applicable template sizes with the flexible programmability of CNN devices. A unique feature of this optical array computer is that programming templates can be applied either incoherently, by a 2D acousto-optical deflector (templates up to 64x64 pixels), or coherently, by an LCD-SLM (templates up to 128x128 pixels). It can thus work in both a totally coherent and a partially incoherent way, exploiting the advantages of the mode of operation in use. Input images are fed in by a second LCD-SLM of 600x800 pixel resolution. An evaluation of the trade-off between speed and resolution is given. Novel and effective target recognition and multiple-target-tracking algorithms have been developed for the POAC, and tracking experiments are demonstrated. Collision avoidance experiments are being conducted. In the present model a CCD camera records the correlograms; later, a CNN-UM chip and a high-speed CMOS camera will be applied for post-processing.
Three-dimensional sensors based on Laser Radar (LADAR) technology possess vast potential for the future battlefield.
This work presents an algorithm for the recognition of T62 and T72 tanks from 3D imagery.
The algorithm consists of several stages:
a) Pre-processing of LADAR images to remove range noise and to determine ground level.
b) Segmentation to extract regions that fulfill certain pre-defined conditions.
c) Extraction of specific tank features from each region.
d) Applying a fuzzy logic classifier to the feature vector to discriminate between T62 and T72 tanks on the one hand, and other types of targets or natural clutter on the other.
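The first three stages can be caricatured on a toy height grid. This is a hedged sketch only: the grid spacing, thresholds and features below are invented for illustration and are not the paper's actual processing:

```python
import numpy as np

# Schematic version of stages (a)-(c) on a synthetic height grid:
# estimate the ground level, segment cells standing well above it,
# and compute simple geometric features of the extracted region.
rng = np.random.default_rng(4)
z = rng.normal(0.0, 0.15, (40, 40))   # flat ground + 15 cm (1 sigma) range noise
z[10:14, 10:17] += 2.3                # tank-sized block, ~2.3 m high

ground = np.median(z)                 # (a): robust ground-level estimate
mask = z - ground > 1.0               # (b): keep cells well above ground
rows, cols = np.nonzero(mask)
region = z[rows.min():rows.max() + 1, cols.min():cols.max() + 1]

# (c): crude feature vector for the region (1 m grid spacing assumed)
features = {
    "height_m": float(region.max() - ground),
    "length_m": float(cols.max() - cols.min() + 1),
    "width_m": float(rows.max() - rows.min() + 1),
}
print(features)
```

A classifier (fuzzy logic in the paper's case, stage (d)) would then score such feature vectors against tank-like dimension ranges.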
A commercial airborne LADAR sensor was used to acquire images over an area of 40 square kilometers with a measurement density of 20 pixels per square meter and a range noise of 15 cm (1 sigma). The images included more than a hundred man-made objects (tanks, armored personnel carriers, trucks, cranes) along with natural clutter (vegetation and boulders). Among the targets were 18 tanks, two of which were covered with a camouflage net. The algorithm recognized the 16 uncovered tanks with a False Alarm Rate (FAR) of 0.025 per square kilometer. This FAR value is better than the corresponding FAR values reported for 2D imaging where Automatic Target Recognition (ATR) techniques are applied.
These results show promise for automatic recognition of various targets employing LADAR sensors.
We present a two-stage process for target identification and pose estimation. A database of possible target states, i.e. identity and pose, is precomputed by a two-step clustering procedure reflecting the two stages of the identification process. The current database is based on images generated from 3D CAD models of military ground vehicles to which realistic infrared textures have been applied. At the coarse level, the database is divided into a set of clusters, each represented by a small set of eigenimages obtained through principal component analysis (PCA). Classification at this level is achieved by measuring the orthogonal distance between the region of interest (ROI) and the eigenspace of each cluster. Each cluster itself contains a few subclusters, and a support vector machine is employed for pairwise discrimination of subclusters. The likelihood that the target belongs to a particular cluster/subcluster is based on histograms obtained when training the system. In addition to the classification of individual images, it is also possible to handle image sequences in which the pose of the target varies from frame to frame. In this situation the pose is assumed to change according to a first-order Markov process, and the overall probability of each target state is accumulated through recursive Bayesian estimation. The performance of the above procedure has been evaluated through the identification of targets in synthetic image sequences, where the targets are placed in realistic backgrounds. Currently, we are able to correctly identify the targets in more than 80 percent of the image sequences. In about 60 (80) percent of the cases the pose can be estimated to within 10 (20) degrees; the accuracy of the pose estimation is limited by the size of the subclusters.
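The coarse-level step — summarizing each cluster by a few eigenimages and assigning an ROI to the cluster with the smallest orthogonal residual — can be sketched as follows. The data here are random stand-ins for infrared image chips, not the authors' database:

```python
import numpy as np

# Sketch of coarse-level classification by orthogonal distance to cluster
# eigenspaces (PCA).  Cluster contents are synthetic 64-pixel "chips".
rng = np.random.default_rng(5)

def eigenspace(images, n_components=3):
    """Mean and leading PCA basis of a cluster of flattened images."""
    X = images - images.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return images.mean(axis=0), Vt[:n_components]

def orthogonal_distance(roi, mean, basis):
    """Norm of the residual after projecting onto the eigenspace."""
    d = roi - mean
    return np.linalg.norm(d - basis.T @ (basis @ d))

# Two "clusters" of training chips built around distinct prototypes.
proto_a, proto_b = rng.normal(size=(2, 64))
cluster_a = proto_a + 0.1 * rng.normal(size=(20, 64))
cluster_b = proto_b + 0.1 * rng.normal(size=(20, 64))
models = [eigenspace(c) for c in (cluster_a, cluster_b)]

roi = proto_b + 0.1 * rng.normal(size=64)     # ROI drawn near cluster B
dists = [orthogonal_distance(roi, m, B) for m, B in models]
print(int(np.argmin(dists)))   # → 1 (cluster B)
```

The fine level would then discriminate subclusters within the winning cluster, e.g. with pairwise support vector machines as the paper describes.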
We study nonlinear optical correlations implemented with a joint transform correlator that uses phase-only spatial light modulation at the input joint transform plane. Nonlinear optical time-sequential correlations with amplitude input offer better discrimination and noise robustness than conventional linear correlations. These nonlinear correlations are based on decomposing the reference and target into binary slices and adding the contributions of all the linear correlations between them. Such correlations can be implemented with a conventional joint transform correlator; however, since the system has poor efficiency and low cross-correlation peak intensity when working in amplitude mode, we use a phase-transformed input joint transform correlator to increase efficiency and discrimination. We implement optically the phase morphological correlation and the phase sliced orthogonal nonlinear generalized correlation. We have applied the method to images degraded by high levels of substitutive noise and nonoverlapping background noise. The results show that these nonlinear phase-encoded input correlations have high discrimination capability, detecting the target in cases where other well-known methods fail.
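The slice-decomposition idea can be illustrated numerically. The sketch below follows the general morphological-correlation recipe (binary threshold slices of reference and scene, linear correlations of matching slices summed) rather than the paper's optical phase-encoded implementation, and the images are synthetic:

```python
import numpy as np

# Numerical sketch of sliced nonlinear correlation: decompose scene and
# reference into binary threshold slices and accumulate the linear
# (Fourier-domain) correlations of matching slices.
rng = np.random.default_rng(6)

def sliced_correlation(scene, ref, levels=8):
    """Sum of linear correlations between matching binary threshold slices."""
    acc = np.zeros(scene.shape)
    for t in np.linspace(0, 1, levels, endpoint=False)[1:]:
        s = (scene > t).astype(float)
        r = np.zeros(scene.shape)
        r[:ref.shape[0], :ref.shape[1]] = ref > t   # zero-padded reference
        acc += np.abs(np.fft.ifft2(np.fft.fft2(s) * np.conj(np.fft.fft2(r))))
    return acc

scene = rng.uniform(0.0, 0.3, (64, 64))     # dim cluttered background
target = rng.uniform(0.5, 1.0, (8, 8))      # bright textured target
scene[40:48, 24:32] = target                # embed target at (40, 24)

corr = sliced_correlation(scene, target)
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)   # → (40, 24), the embedded target location
```

Because each slice correlation rewards exact gray-level agreement, the accumulated peak discriminates more sharply than a single linear correlation of the raw images.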
We present a method for coding the information of a 3D object into a representation on a unit sphere. The coding is based on mapping the phase of the Fourier transform of the range images of the object onto the sphere. This procedure creates a unique object signature that we call the three-dimensional object orientation map (3DOOM). The correlation between the 3DOOMs of different objects is defined and discussed. The maps permit both detection and estimation of the orientation of an object from a range image, using only partial object information.