Classical Direction of Arrival (DoA) algorithms estimate the time delays associated with the signal received at an array of sensors through phase information. Most existing wideband algorithms decompose the signals received by an antenna array into multiple narrowband frequencies, and the wideband DoAs are then estimated by coherent or incoherent combination of signal- and noise-subspace information at the multiple source frequencies. This study introduces a novel DoA algorithm for wideband chirp sources in which frequency shift, rather than phase shift, is used to estimate the signal time delays between sensors. This eliminates many limitations due to phase ambiguity, such as spatial sampling constraints, leading to finer angular resolution between multiple sources. The proposed algorithm processes the data using the Discrete Chirp Fourier Transform (DCFT), which invokes the exact chirp model of the signal, leading to more precise estimates than general wideband DoA methods that do not exploit the chirp model. Compressed Sensing (CS) enables exploitation of the sparsity of the DCFT-domain data for highly accurate DoA estimation, and the sparsity of the DCFT coefficients allows the CS optimization to operate on a reduced number of measurements. The proposed approach eliminates the need for the correlation, iterations, and time-frequency analysis required by many classical chirp-signal parameter estimation algorithms. A theoretical derivation is given, and simulation results for single and multiple wideband chirp sources show significant performance enhancement even in highly noisy environments.
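As a concrete illustration of the sparsity the abstract relies on, the sketch below implements one common textbook form of the DCFT (an assumption; the paper's exact normalization may differ) and shows that a single chirp concentrates its energy in one (frequency, chirp-rate) bin of the DCFT domain:

```python
import numpy as np

def dcft(x):
    """One common textbook form of the Discrete Chirp Fourier Transform
    (assumed here): X[k, l] = sum_n x[n] * exp(-j*2*pi*(k*n + l*n**2)/N).
    For each chirp-rate bin l, de-chirp and take an ordinary DFT over k."""
    N = len(x)
    n = np.arange(N)
    X = np.empty((N, N), dtype=complex)
    for l in range(N):
        X[:, l] = np.fft.fft(x * np.exp(-2j * np.pi * l * n**2 / N))
    return X

# A unit-amplitude chirp at frequency bin k0 with chirp-rate bin l0
# (an odd N keeps the (k, l) bins unambiguous)
N, k0, l0 = 63, 7, 3
n = np.arange(N)
x = np.exp(2j * np.pi * (k0 * n + l0 * n**2) / N)
X = dcft(x)
k_hat, l_hat = np.unravel_index(np.argmax(np.abs(X)), X.shape)
print(k_hat, l_hat)  # the chirp's energy piles up in the single bin (7, 3)
```

Because almost all of the signal energy lands in one DCFT bin, the transformed data is highly sparse, which is what makes a CS formulation with few measurements attractive.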
The Eigen-Template (ET) based closed-set feature extraction approach is extended to an open-set HRR-ATR framework to develop an Open Set Probabilistic Support Vector Machine (OSP-SVM) classifier. The proposed ET-OSP-SVM is shown to perform open set ATR on HRR data with 80% PCC for a 4-class MSTAR dataset.
Vibrometry offers the potential to classify a target based on its vibration spectrum. Signal processing is necessary for extracting features from the sensing signal for classification. This paper investigates the effects of fundamental frequency normalization on the end-to-end classification process. The fundamental frequency, assumed to be the engine's firing frequency, has previously been used successfully to classify vehicles [2, 3]. Fundamental frequency normalization attempts to remove the vibration variations due to changes in the engine's revolutions per minute (rpm). Vibration signatures with and without fundamental frequency normalization are converted to ten features that are classified and compared. To evaluate the classification performance, confusion matrices are constructed and analyzed. A statistical analysis of the features is also performed to determine how fundamental frequency normalization affects the features. These methods were studied on three datasets covering three military vehicles and six civilian vehicles. Accelerometer data from each of these data collections is tested with and without normalization.
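A minimal numpy sketch of what such a normalization could look like (the `order_normalize` helper and its parameters are hypothetical, not the paper's implementation): re-sampling the spectrum at multiples of the firing frequency indexes it by engine order, so signatures taken at different rpm become comparable.

```python
import numpy as np

def order_normalize(freqs, spectrum, f0, n_harmonics=10):
    """Hypothetical sketch of fundamental-frequency normalization:
    re-sample the spectrum at integer multiples of the estimated firing
    frequency f0, so features are indexed by engine order rather than by
    absolute (rpm-dependent) frequency."""
    orders = np.arange(1, n_harmonics + 1)
    return np.interp(orders * f0, freqs, spectrum)

# Two "recordings" of the same engine at different rpm:
# harmonics of 20 Hz in one case, 30 Hz in the other
freqs = np.linspace(0, 500, 5001)
def spectrum_with_f0(f0):
    s = np.zeros_like(freqs)
    for h in range(1, 11):
        s += np.exp(-0.5 * ((freqs - h * f0) / 0.5)**2) / h  # decaying harmonics
    return s

a = order_normalize(freqs, spectrum_with_f0(20.0), 20.0)
b = order_normalize(freqs, spectrum_with_f0(30.0), 30.0)
print(np.allclose(a, b, atol=1e-3))  # order spectra match despite the rpm change
```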
In vehicle target classification, contact sensors have frequently been used to collect data to simulate laser vibrometry data. Accelerometer data has been used in numerous studies to train and test classifiers in place of laser vibrometry data. Understanding the key similarities and differences between accelerometer and laser vibrometry data is essential to keep advancing aided vehicle recognition systems. This paper investigates the effect of accelerometer versus laser vibrometer data on classification performance. Research was performed using the end-to-end process previously published by the authors to understand the effects of the different data types on the classification results. The end-to-end process includes preprocessing the data, extracting features drawn from the signal processing literature, using feature selection to determine the most relevant features, and finally classifying and identifying the vehicles. Three datasets were analyzed, including one collection on military vehicles and two recent collections on civilian vehicles. The experiments demonstrated include: (1) training the classifiers on accelerometer data and testing on laser vibrometer data, (2) combining the two data types and classifying the vehicles, and (3) repetitions of these tests with different vehicle states, such as idle or revving, and varying stationary revolutions per minute (rpm).
This paper evaluates and expands upon the existing end-to-end process used for vibrometry target classification and identification. A fundamental challenge in vehicle classification using vibrometry signature data is the determination of robust signal features. The methodology in this paper compares the performance of features taken from automatic speech recognition, seismology, and structural analysis work. These features provide a means to reduce the dimensionality of the data for possibly improved separability. The performances of different groups of features are compared to determine the best feature set for vehicle classification, and standard performance metrics provide a method of evaluation. The contributions of this paper are to (1) thoroughly explain the time-domain and frequency-domain features that have recently been applied to vehicle classification with laser vibrometry data, (2) build an end-to-end classification pipeline for Aided Target Recognition (ATR) with common and easily accessible tools, and (3) apply feature selection methods to the end-to-end pipeline. The end-to-end process used here provides a structured path for accomplishing vibrometry-based target identification. Results are compared with two studies in the public domain, and the techniques are applied to a small in-house database of several different vehicles.
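The pipeline stages described above might be sketched as follows; the specific features and the nearest-centroid classifier are illustrative stand-ins, not the paper's actual feature set or classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(sig, fs):
    """A few generic time- and frequency-domain features of the kind the
    paper surveys (illustrative choices only)."""
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    centroid = np.sum(freqs * spec) / np.sum(spec)      # spectral centroid
    rms = np.sqrt(np.mean(sig**2))                      # time-domain energy
    zcr = np.mean(np.abs(np.diff(np.sign(sig)))) / 2    # zero-crossing rate
    peak_freq = freqs[np.argmax(spec)]                  # dominant frequency
    return np.array([centroid, rms, zcr, peak_freq])

# Synthetic two-class "vehicles" with different dominant vibration frequencies
fs, t = 1000, np.arange(0, 1, 1 / 1000)
def make(f0):
    return np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)

train = [(extract_features(make(f), fs), c)
         for c, f in [(0, 50), (1, 120)] for _ in range(10)]
# A simple nearest-centroid classifier on the feature vectors
cents = {c: np.mean([f for f, cc in train if cc == c], axis=0) for c in (0, 1)}
def classify(sig):
    f = extract_features(sig, fs)
    return min(cents, key=lambda c: np.linalg.norm(f - cents[c]))

print(classify(make(50)), classify(make(120)))  # 50 Hz -> class 0, 120 Hz -> class 1
```

Feature selection would slot in between extraction and classification, keeping only the columns that best separate the classes.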
The UHF band in SAR provides foliage penetration and limited ground penetration capability, while LIDAR scans are capable of providing elevation information for objects on the terrain. In this paper, we integrate the complementary strengths of these two different classes of sensors to locate buried objects with improved precision. The main underlying concept is that buried targets are discernible only in the UHF-SAR space, while LIDAR is rich with above-ground false-alarm information. The LIDAR elevation information at the detected changes and anomalies is exploited to rule out above-ground false alarms in the UHF-SAR domain, thereby isolating the buried IEDs. Definitive proof-of-concept validation of same-day buried object detection capability is given using single-pass SAR anomaly detection with LIDAR fusion. We also demonstrate significant performance improvement with two-pass SAR change detection with LIDAR integration. Detection performance is further enhanced by exploiting multiple polarizations and multiple passes of SAR data. The proposed SAR-LIDAR fusion strategy is shown to detect emplaced buried objects with an order-of-magnitude improvement in detection performance, i.e., a higher PD at a lower PFA compared with SAR-only performance. The proof-of-concept research is demonstrated on simultaneous multisensor UHF-SAR/LIDAR data collected under JIEDDO's HALITE-1 program.
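The fusion concept can be illustrated with a toy sketch (the `fuse` helper and its thresholds are hypothetical, chosen only to make the idea concrete):

```python
import numpy as np

def fuse(sar_score, lidar_height, sar_thresh=3.0, height_thresh=0.5):
    """Sketch of the SAR-LIDAR fusion concept: keep SAR anomaly detections
    only where the LIDAR height map shows no above-ground structure.
    Buried targets give no LIDAR return, while trees, vehicles, and other
    above-ground objects do and can be ruled out as false alarms."""
    sar_det = sar_score > sar_thresh             # SAR-only detections
    above_ground = lidar_height > height_thresh  # LIDAR above-ground mask
    return sar_det & ~above_ground               # fused detections

sar = np.zeros((5, 5))
lidar = np.zeros((5, 5))
sar[1, 1] = 5.0                       # buried target: SAR anomaly, no LIDAR height
sar[3, 3] = 5.0
lidar[3, 3] = 2.0                     # tree: SAR anomaly with LIDAR elevation
det = fuse(sar, lidar)
print(det[1, 1], det[3, 3])           # True False: the above-ground FA is removed
```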
Change Detection (CD) is the process of identifying temporal or spectral changes in signals or images. Detection and
analysis of change provide valuable information of transformations in a scene. Hyperspectral sensors provide spatial and
spectrally rich information that can be exploited for Change Detection. This paper develops and analyzes various CD
algorithms for the detection of changes using single-pass and multi-pass Hyperspectral images. For validation and
performance comparison, the detected changes are compared against the conventional similarity correlation coefficient as well
as traditional change detection algorithms, such as image differencing, image ratioing, and principal component analysis
(PCA) methods. Another main objective is to incorporate kernel-based optimization by using a nonlinear mapping
function. Development of nonlinear versions of linear algorithms allows exploiting nonlinear relationships present in the
data. The nonlinear versions, however, become computationally intensive due to the high dimensionality of the feature
space resulting in part from application of the nonlinear mapping function. This problem is overcome by implementing
these nonlinear algorithms in the high-dimensional feature space in terms of kernels. Kernelization of a similarity
correlation coefficient algorithm for Hyperspectral change detection has been studied. Preliminary work on dismount
tracking using change detection over successive HSI bands has shown promising results. CD between multipass HSI
image cubes elicits the changes over time, whereas changes between spectral bands for the same cube illustrate the
spectral changes occurring in different image regions, and results for both cases are given in the paper.
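A minimal sketch of the linear similarity correlation coefficient applied per pixel, before any kernelization (the cube layout `(rows, cols, bands)` and the change threshold are assumptions for illustration):

```python
import numpy as np

def spectral_corr(cube1, cube2):
    """Per-pixel similarity correlation coefficient between two
    hyperspectral cubes of shape (rows, cols, bands): values near 1
    mean unchanged spectra; low values flag change."""
    a = cube1 - cube1.mean(axis=2, keepdims=True)
    b = cube2 - cube2.mean(axis=2, keepdims=True)
    num = np.sum(a * b, axis=2)
    den = np.sqrt(np.sum(a**2, axis=2) * np.sum(b**2, axis=2))
    return num / den

rng = np.random.default_rng(1)
cube1 = rng.random((4, 4, 30))
cube2 = cube1.copy()
cube2[2, 2] = rng.random(30)            # change one pixel's spectrum
rho = spectral_corr(cube1, cube2)
change_map = rho < 0.9                  # illustrative threshold
print(change_map[2, 2], change_map[0, 0])  # True False
```

The kernel trick replaces the inner products in `num` and `den` with kernel evaluations, which lets the same correlation structure capture nonlinear spectral relationships without explicitly forming the high-dimensional feature space.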
Advances in Hyperspectral imaging (HSI) sensors offer new avenues for the precise detection, identification, and
characterization of materials or targets of military interest. HSI technologies are capable of exploiting tens to hundreds of
images of a scene collected at contiguous or selective spectral bands to seek out mission-critical objects. In this paper,
we develop and analyze several HSI algorithms for detection, recognition and tracking of dismounts, vehicles and other
objects. Preliminary work on the detection, classification, and fingerprinting of dismounts, vehicles, and UAVs has been
performed using visible band HSI data. The results indicate improved performance with HSI when compared to
traditional EO processing. All the detection and classification results reported in this paper are based on a single HSI
pixel used for testing. Furthermore, the close-in Hyperspectral data were collected for the experiments by the authors,
both indoors and outdoors, in different lighting conditions using a visible HSI sensor. The
algorithms studied for performance comparison include PCA, Linear Discriminant Analysis (LDA), a quadratic
classifier, and Fisher's Linear Discriminant; comprehensive results are included in terms of confusion matrices
and Receiver Operating Characteristic (ROC) curves.
The primary goal of this paper is to develop Hyperspectral algorithms for early detection of a readout system used in
conjunction with plants designed to de-green or discolor after detection of explosives, harmful chemicals, and
environmental pollutants. Work in progress is aimed to develop a new class of biosensors or Plant Sentinels that can
serve as inexpensive plant-based biological early-warning systems capable of detecting substances that are harmful to
humans or the environment [LoHe03]. The de-greening circuits in the laboratory plant, Arabidopsis, have been shown to
induce rapid chlorophyll loss, thereby changing color under the influence of synthetic estrogens. However, as of now, the
bio de-greening phenomenon is detectable by human eyes or with a system (chlorophyll fluorescence) that works best in
laboratory conditions. In order to make the plant sentinel system practically viable, we have developed an automated
monitoring scheme for early detection of the de-greening phenomenon. The automated detection capability would lead to
practical applicability and wider usage. This paper presents novel and effective HSI-based algorithms for early detection
of de-greening of plants and vegetation due to explosives or chemical agents. The image-processing-based automated
de-greening detector presented in this paper is capable of 24/7 monitoring of the plant sentinels and of detecting the
minutest possible discoloration of the plant sensors, serving as an early-warning system. We also present preliminary results on
estimating the length of time that the explosive or chemical agent has been present.
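As an illustration of how an HSI pixel could be screened for chlorophyll loss, the index below is a generic NDVI-like proxy; it is an assumption for illustration, not the paper's actual detector. Healthy vegetation reflects strongly in the near-infrared and absorbs red light, so the index drops as chlorophyll is lost.

```python
import numpy as np

def greenness_index(pixel, wavelengths):
    """A simple chlorophyll-sensitive index (NDVI-like, illustrative):
    contrast of mean NIR reflectance against mean red reflectance."""
    red = pixel[(wavelengths >= 650) & (wavelengths <= 680)].mean()
    nir = pixel[(wavelengths >= 780) & (wavelengths <= 800)].mean()
    return (nir - red) / (nir + red)

wl = np.linspace(400, 900, 100)
healthy = np.where(wl > 700, 0.5, 0.05)     # strong NIR, low red reflectance
degreened = np.where(wl > 700, 0.5, 0.30)   # red reflectance rises as green fades
print(greenness_index(healthy, wl) > greenness_index(degreened, wl))  # True
```

Monitoring such an index per pixel over time would flag de-greening well before it is obvious to the eye.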
We propose a novel approach to focus and geolocate moving targets in synthetic aperture radar imagery. The initial step is to detect the position of the target using an automatic target detection algorithm. The next step is to estimate the target cross-range velocity using sequential sub-apertures; this is done by forming low-resolution images and estimating position as a function of sub-aperture, thus yielding an estimate of the cross-range velocity. This cross-range estimate is then used to bound the search range for a bank of focusing filters. Determining the velocity that yields the best-focused target defines one equation for the target velocity; however, both components of the target's velocity cannot be determined from a single equation. Therefore, a second image with a slightly different heading is needed to yield a second focusing velocity; with a system of two equations in two unknowns, a solution can then be obtained. Once the target velocity is known, the proper position can be determined from the range velocity. Synthetic data are used with a point-source target, with both background clutter and noise added. The results support the development of staring radar applications with much larger synthetic aperture integration times than existing SAR modes. The basic idea of this approach is to trade the development of expensive phased-array technology for GMTI applications against the potential development of advanced processing methods capable of handling data over very large aperture integration intervals, obtaining similar GMTI geolocation results with current radar technology.
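The two-equation solution described above can be sketched under a simplified linear measurement model; the projection model `m_i = u_i · v` (each pass constraining one heading-dependent linear combination of the velocity components) is an illustrative assumption, not the paper's exact focusing relation:

```python
import numpy as np

def solve_velocity(u1, m1, u2, m2):
    """Sketch of the two-pass velocity solution: each pass, with its own
    heading, yields one focusing measurement that constrains a single
    linear combination of the target velocity components. Two slightly
    different headings give a well-conditioned 2x2 linear system for
    (vx, vy)."""
    A = np.array([u1, u2], dtype=float)
    return np.linalg.solve(A, np.array([m1, m2], dtype=float))

v_true = np.array([5.0, -2.0])                # unknown target velocity
u1 = np.array([1.0, 0.0])                     # projection direction, heading 1
u2 = np.array([np.cos(0.2), np.sin(0.2)])     # slightly different heading 2
m1, m2 = u1 @ v_true, u2 @ v_true             # the two focusing measurements
print(solve_velocity(u1, m1, u2, m2))         # recovers [ 5. -2.]
```

The closer the two headings are, the worse the conditioning of the 2x2 system, which is why a perceptibly different heading on the second pass matters.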
Our proposed research is to focus and geolocate moving targets in synthetic aperture radar imagery. The first step is to estimate the target cross-range velocity using sequential sub-apertures; this is done by forming low-resolution images and estimating position as a function of sub-aperture, thus yielding an estimate of the cross-range velocity. This cross-range estimate is then used to bound the search range for a bank of focusing filters. Determining the velocity that yields the best-focused target defines one equation for the target velocity; however, both components of the target's velocity cannot be determined from a single equation. Therefore, a second image with a slightly different heading is needed to yield a second focusing velocity; with a system of two equations in two unknowns, a solution can then be obtained. Once the target velocity is known, the proper position can be determined from the range velocity.
This paper extends simulation and target detection results from an investigation entitled "Self-Training Algorithms for Ultra-wideband SAR Target Detection" that was conducted last year and presented at the 2003 SPIE Aerosense Conference on "Algorithms for Synthetic Aperture Radar Imagery." Under this approach, simulated SAR impulse clutter data was generated by modulating a tophat model for the SAR video phase history with K-distributed data models. Targets were synthesized and "instanced" within the SAR image via the application of a dihedral model to represent broadside targets. For this paper, these models are extended and generalized by developing a set of models that approximate major scattering mechanisms due to terrain relief and approximate major scattering mechanisms due to scattering from off-angle targets. Off-angle targets are difficult to detect at typical ultra-wideband radar frequencies and are denoted as "diffuse scatterers." Potential approaches for detecting synthetic off-angle targets that demonstrate this type of "diffuse scattering" are developed and described in the algorithms and results section of the paper. A preliminary set of analysis outputs are presented with synthetic data from the resulting simulation testbed.
Air-to-Ground targeting in a kill chain requires timely information flow and the coordination of a large number of complex and disparate systems and events that are distributed in space and time. Modeling and simulation of such complex systems pose a considerable challenge to system developers. Colored Petri Nets (CPN) provide a well-established, graphically oriented simulation tool with the capability to incorporate design and performance specifications and operational requirements of complex discrete-event systems for verification and validation under a variety of input stimuli. In this paper, we present the results of a preliminary study implementing rudimentary, yet realistic, kill chain modules for Air-to-Ground combat using CPN tools. We have developed top-level functional kill chain modules incorporating its primary functions. It is expected that the modularity of the CPN-based simulation framework will enable us to incorporate further detail and breadth in system complexity in order to study a real-world kill-chain simulation environment as an integral part of DOD's C4ISR architecture.
It is becoming more important for the designer of radar (and other military sensing) systems to be able to provide military commanders and procurement decision makers with a concept of how a new system can enhance warfighting capability. Showing enhanced sensor performance is no longer sufficient to sell a new system. In order to better understand issues relating to sensor employment, we develop a top-level functional architecture of the kill chain for Air-to-Ground targeting. A companion paper constructs an executable model in the form of a Colored Petri Net (CPN) from the architecture. The focus on architecture that we present here aligns well with the new Department of Defense guidance, which requires new acquisition programs to be structured around system architectures. This should provide a common reference system for communication among warfighters, planners, and technologists. The translation to an executable model should allow identification of technology insertion points.
In this paper, a new 1-D hybrid Automatic Target Recognition (ATR) algorithm is developed for sequential High Range Resolution (HRR) radar signatures. The proposed hybrid algorithm combines Eigen-Template based Matched Filtering (ETMF) and Hidden Markov modeling (HMM) techniques to achieve superior HRR-ATR performance. In the proposed hybrid approach, each HRR test profile is first scored by ETMF which is then followed by independent HMM scoring. The first ETMF scoring step produces a limited number of "most likely" models that are target and aspect dependent. These reduced number of models are then used for improved HMM scoring in the second step. Finally, the individual scores of ETMF and HMM are combined using Maximal Ratio Combining to render a classification decision. Classification results are presented for the MSTAR data set via ROC curves.
An ultra-wideband (UWB) synthetic aperture radar (SAR) simulation technique that employs physical and statistical models is developed and presented. This joint physics/statistics based technique generates images that have many of the "blob-like" and "spiky" clutter characteristics of UWB radar data in forested regions while avoiding the intensive computations required for the implementation of low-frequency numerical electromagnetic simulation techniques.
Approaches towards developing "self-training" algorithms for UWB radar target detection are investigated using the results of this simulation process. These adaptive approaches employ some form of modified singular value decomposition (SVD) algorithm in which small blocks of data in the neighborhood of a sliding test window are processed in real time in an effort to estimate localized clutter characteristics. These real-time local clutter models are then used to cancel clutter in the sliding test window. Comparative results from three SVD-based approaches to adaptive and "self-trained" target detection algorithms are reported. These approaches are denoted as "Energy-Normalized SVD", "Condition-Statistic SVD", and "Terrain-Filtered SVD". The results indicate that the "Terrain-Filtered SVD" approach, in which a pre-filter is applied in an effort to eliminate severe clutter discretes that adversely affect performance, appears promising for developing "self-training" algorithms for applications that may require localized "on-the-fly" training due to a lack of accurate off-line training data.
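The core "self-training" SVD idea can be sketched as follows; the rank choice, data model, and block handling are illustrative, not a reproduction of any of the three named variants:

```python
import numpy as np

def svd_clutter_cancel(block, n_clutter=1):
    """Sketch of SVD-based local clutter cancellation: treat the dominant
    singular components of a local data block as clutter estimated
    on-the-fly, subtract them, and keep the residual, where a target
    should stand out (the rank n_clutter is an illustrative choice)."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    clutter = (U[:, :n_clutter] * s[:n_clutter]) @ Vt[:n_clutter]
    return block - clutter

rng = np.random.default_rng(2)
# Low-rank "clutter" background plus noise and a weak point target
clutter = np.outer(rng.standard_normal(32), rng.standard_normal(32)) * 10
block = clutter + 0.1 * rng.standard_normal((32, 32))
block[16, 16] += 3.0                      # the target
residual = svd_clutter_cancel(block)
peak = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
print(peak)  # the target pixel (16, 16) dominates the residual
```

Because the clutter model is re-estimated for every sliding block, no off-line training data is needed, which is the point of the "self-training" framing.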
A number of aspects of ultra-wideband radar target detection analysis and algorithm development are addressed. The first portion of the paper describes a bi-modal technique for modeling ultra-wideband radar clutter. This technique was developed based on an analysis of ultra-wideband radar phenomenology. Synthetic image samples that were generated by this modeling process are presented. This sample set is characterized by a number of physical parameters. The second portion of this paper describes an approach to developing a class of filters, known as rank-order filters, for ultra-wideband radar target detection applications. The development of a new rank-order filter denoted as a discontinuity filter is presented. Comparative target detection results are presented as a function of data model parameters. The comparative results include discontinuity filter performance versus the performance of median filtering and CFAR filtering.
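A generic rank-order detector in the spirit described above can be sketched as follows; the window sizes, rank, and threshold scale are illustrative, and this is a median-based CFAR-style baseline rather than the paper's discontinuity filter:

```python
import numpy as np

def rank_order_detect(img, guard=1, win=3, rank=0.5, scale=3.0):
    """Sketch of a rank-order CFAR-style detector: for each pixel,
    estimate the local background from the cells surrounding a guard
    region using an order statistic (here the median), and declare a
    detection when the cell under test exceeds scale * background."""
    rows, cols = img.shape
    det = np.zeros_like(img, dtype=bool)
    r = guard + win
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            patch = img[i - r:i + r + 1, j - r:j + r + 1].copy()
            patch[win:-win, win:-win] = np.nan  # mask guard cells + cell under test
            bg = np.nanquantile(patch, rank)    # rank-order background estimate
            det[i, j] = img[i, j] > scale * bg
    return det

rng = np.random.default_rng(3)
img = rng.exponential(1.0, (20, 20))  # spiky background, a stand-in for K-distributed clutter
img[10, 10] = 30.0                    # strong target
det = rank_order_detect(img)
print(det[10, 10])  # True: the target exceeds the local rank-order threshold
```

An order statistic is less sensitive to the clutter spikes of a heavy-tailed background than a local mean, which is why this filter family suits UWB clutter.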
This paper presents recent ATR results with High Range Resolution (HRR) profiles used for classification of ground targets. Our previous work demonstrated that effective HRR-ATR performance can be achieved if the templates are formed via Singular Value Decomposition (SVD) of detected HRR profiles and the classification is performed using normalized Matched Filtering (MF) [1, 2]. It had been shown theoretically in [1, 2] that the eigenvectors are the optimal feature-set representation of a collection of HRR profile vectors, and we had proposed to use the dominant range-space eigenvectors as templates, known as Eigen-Templates (ET). However, in [1, 2], HRR-ATR performance using the Eigen-Template Matched Filter (ETMF) combination had been applied only to the forced-decision case, using the XPATCH data sets. In this paper, we demonstrate the effectiveness of the ETMF approach in HRR-ATR performance by incorporating the unknown-target scenario. All results in this paper use the public-release MSTAR data. Furthermore, in our earlier work, HRR testing data was used without any additive noise, where it was found that detected HRR data preprocessed by the Power Transform (PT) can enhance ATR performance. However, the results and analysis presented in this paper demonstrate that PT preprocessing, when applied to noisy observation profiles, tends to obscure the target information in the HRR profiles considerably, which in turn leads to considerable deterioration in HRR-ATR performance. Hence, we argue that PT preprocessing should be avoided in all practical HRR-ATR implementations. Instead, we show that the proposed ETMF, with appropriate alignment and normalization of template and observation profiles, can achieve excellent HRR-ATR performance. Extensive simulation studies have been carried out to validate the proposed approach. Results are presented for different noise levels in terms of Receiver Operating Characteristic (ROC) curves.
The goal of this paper is to demonstrate the benefits of a tracking and identification algorithm that uses a belief data association filter for target recognition. By associating track and ID information, the belief filter accumulates evidence for classifying High-Range Resolution (HRR) radar signatures from a moving target. A track history can be utilized to reduce the search space of targets for a given pose range. The technique follows the work of Mitchell and Westerkamp by processing HRR amplitude and location feature sets. The new aspect of this work is the identification of multiple moving targets of the same type. The conclusion from the work is that moving-target ATR from HRR signatures necessitates a track history for robust target ID.
The problem of estimating the dimension parameters of rectangular and circular cavities from scattering data is addressed in this paper. By incorporating the modal solutions of the waveguides in the complex phase-history signal model, a Maximum-Likelihood Estimation (MLE) criterion is developed for the rectangular cavity. It is shown that the MLE is a multi-dimensional nonlinear optimization problem, and optimization techniques are proposed to address it. The cavity detection problem is also formulated within a multiple-hypothesis testing framework. The proposed ML estimation approaches are verified with simulation studies.
This paper presents ATR results with High Range Resolution (HRR) profiles used for classification. It is shown that effective HRR-ATR performance can be achieved if the templates are formed via Singular Value Decomposition (SVD) of detected HRR profiles. It is demonstrated theoretically that, in the mean-squared sense, the eigenvectors represent the optimal feature set. SVD analysis of a large class of XPATCH and MSTAR HRR data clearly indicates that a significant proportion (> 90%) of the target energy is accounted for by the eigenvectors of the range correlation matrix corresponding to only the largest singular value. The SV decomposition also decouples the range and angle basis spaces. Furthermore, it is shown that significant clutter reduction can be achieved if the HRR data are reconstructed using only the significant eigenvectors. ATR results with eigen-templates are compared with those based on mean templates. Results are included for both XPATCH and MSTAR data using linear least-squares and matched-filter based classifiers.
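A toy version of the eigen-template and normalized matched-filter idea can be sketched as follows; the synthetic profiles, noise levels, and rank-1 template choice are illustrative, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(4)

def eigen_template(profiles, k=1):
    """Sketch of the eigen-template construction: stack detected HRR
    profiles (one per row), take the SVD, and keep the k dominant
    right singular vectors over the range dimension as the template."""
    _, _, Vt = np.linalg.svd(profiles, full_matrices=False)
    return Vt[:k]

def match_score(profile, template):
    """Normalized matched-filter score against an eigen-template
    (sign-invariant, so the SVD's sign ambiguity does not matter)."""
    p = profile / np.linalg.norm(profile)
    return np.linalg.norm(template @ p)

# Two synthetic target classes with different range-bin scattering patterns
sig_a, sig_b = np.zeros(64), np.zeros(64)
sig_a[[10, 30, 45]] = [3.0, 2.0, 1.5]
sig_b[[5, 20, 50]] = [2.5, 2.5, 1.0]
train_a = np.abs(sig_a + 0.2 * rng.standard_normal((100, 64)))
train_b = np.abs(sig_b + 0.2 * rng.standard_normal((100, 64)))
ta, tb = eigen_template(train_a), eigen_template(train_b)

test = np.abs(sig_a + 0.2 * rng.standard_normal(64))  # a noisy class-A profile
print(match_score(test, ta) > match_score(test, tb))  # True: class A wins
```

Keeping only the dominant singular components also performs the clutter reduction mentioned above: the low-energy directions discarded by the truncation are the ones dominated by noise and clutter.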