Hyperspectral imagery (HSI) offers numerous advantages over traditional sensing modalities with its high spectral content that allows for classification, anomaly detection, target discrimination, and change detection. However, this imaging modality produces a huge amount of data, which requires transmission, processing, and storage resources; hyperspectral compression is a viable solution to these challenges. It is well known that lossy compression of hyperspectral imagery can impact hyperspectral target detection. Here we examine lossy compressed hyperspectral imagery from data-centric and target-centric perspectives. The compression ratio (CR), root mean square error (RMSE), signal-to-noise ratio (SNR), and correlation coefficient are computed directly from the imagery and provide insight into how the imagery has been affected by the lossy compression process. With targets present in the imagery, we perform target detection with the spectral angle mapper (SAM) and adaptive coherence estimator (ACE) and evaluate the change in target detection performance by examining receiver operating characteristic (ROC) curves and the target signal-to-clutter ratio (SCR). Finally, we observe relationships between the data- and target-centric metrics for selected visible/near-infrared to shortwave infrared (VNIR/SWIR) HSI data, targets, and backgrounds that motivate potential prediction of change in target detection performance as a function of compression ratio.
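The two detectors named above have well-known closed forms; a minimal numpy sketch follows (variable names and the use of background mean/covariance estimates are our own assumptions, not taken from the paper):

```python
import numpy as np

def sam_score(x, s):
    """Spectral Angle Mapper: angle (radians) between pixel x and target spectrum s."""
    cos = np.dot(x, s) / (np.linalg.norm(x) * np.linalg.norm(s))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def ace_score(x, s, mu, cov_inv):
    """Adaptive Coherence Estimator; mu and cov_inv are background statistics.
    Scores lie in [0, 1], with 1 when (x - mu) is parallel to s."""
    xc = x - mu
    num = (s @ cov_inv @ xc) ** 2
    den = (s @ cov_inv @ s) * (xc @ cov_inv @ xc)
    return num / den
```

SAM is invariant to illumination scaling of the pixel, while ACE whitens by the background covariance, which is why the two can respond differently to compression artifacts.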
In our previous studies, vibrations of vehicle surfaces caused by operating engines and measured by a Laser Doppler Vibrometer (LDV) were effectively exploited to classify vehicles of different types, e.g., vans, 2-door sedans, 4-door sedans, trucks, and buses, as well as different types of engines, such as inline-four engines, V-6 engines, 1-axle diesel engines, and 2-axle diesel engines. These results were achieved with an array of machine learning classifiers, including AdaBoost, random forests, neural networks, and support vector machines. To achieve effective classification performance, we seek a more reliable approach: picking up authentic vibrations of vehicle engines from a trustworthy nearby surface. Compared with vibrations taken directly from uncooperative vehicle surfaces that are rigidly connected to the engines, the vibrations of such nearby surfaces are much weaker in magnitude. In this work we conducted a systematic study on different types of objects. We tested different types of engines, including electric shavers, electric fans, and coffee machines, against different surfaces, such as a whiteboard, a cement wall, and a steel case, to investigate the characteristics of the LDV signals from these surfaces in both the time and spectral domains. Preliminary engine classification results using several machine learning algorithms point toward the appropriate choice of object surfaces to be planted for LDV measurements.
Vibrometry offers the potential to classify a target based on its vibration spectrum. Signal processing is necessary to extract features from the sensed signal for classification. This paper investigates the effects of fundamental frequency normalization on the end-to-end classification process. The fundamental frequency, assumed to be the engine’s firing frequency, has previously been used successfully to classify vehicles <sup>[2, 3]</sup>. Fundamental frequency normalization attempts to remove vibration variations due to changes in the engine’s revolutions per minute (rpm). Vibration signatures with and without fundamental frequency normalization are converted to ten features that are classified and compared. To evaluate classification performance, confusion matrices are constructed and analyzed. A statistical analysis of the features is also performed to determine how fundamental frequency normalization affects them. These methods were studied on three datasets covering three military vehicles and six civilian vehicles. Accelerometer data from each of these data collections is tested with and without normalization.
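One simple way to realize the normalization described above is to locate the dominant low-frequency spectral peak and resample the magnitude spectrum onto a harmonic axis of that peak; the sketch below does this under our own assumptions (peak-picking for f0 estimation, search band, and sampling density are all illustrative, not the paper's method):

```python
import numpy as np

def normalize_by_fundamental(sig, fs, f_lo=5.0, f_hi=100.0, n_harm=10, pts=8):
    """Estimate the fundamental (firing) frequency from the dominant
    low-frequency peak, then resample the magnitude spectrum onto a
    harmonic axis so that rpm changes are factored out."""
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    f0 = freqs[band][np.argmax(spec[band])]        # crude peak-picking estimate
    # sample at fractions/multiples of f0: the harmonic structure lines up
    # regardless of the engine's rpm at collection time
    harm_axis = np.linspace(f0 / pts, n_harm * f0, n_harm * pts)
    return f0, np.interp(harm_axis, freqs, spec)
```

Two recordings of the same engine at different rpm then produce feature vectors whose harmonic peaks fall at the same indices.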
In vehicle target classification, contact sensors have frequently been used to collect data that simulates laser vibrometry data. Accelerometer data has been used in numerous studies to train and test classifiers in place of laser vibrometry data. Understanding the key similarities and differences between accelerometer and laser vibrometry data is essential to continue advancing aided vehicle recognition systems. This paper investigates how the choice between accelerometer and laser vibrometer data affects classification performance. Research was performed using the end-to-end process previously published by the authors to understand the effects of the different data types on classification results. The end-to-end process includes preprocessing the data, extracting features drawn from the signal processing literature, using feature selection to determine the most relevant features, and finally classifying and identifying the vehicles. Three data sets were analyzed, including one collection on military vehicles and two recent collections on civilian vehicles. The experiments demonstrated include: (1) training the classifiers on accelerometer data and testing on laser vibrometer data, (2) combining the two data types and classifying the vehicles, and (3) repetitions of these tests under different vehicle states, such as idle or revving, and varying stationary revolutions per minute (rpm).
Recently, Laser Doppler Vibrometry (LDV) has been widely employed for long-range sensing in military applications, owing to its high spatial and spectral resolutions in vibration measurements, which facilitate effective analysis using signal processing and machine learning techniques. Through the collaboration of The City College of New York and the Air Force Research Laboratory over the last several years, we have developed a bank of algorithms to classify different types of vehicles, such as sedans, vans, pickups, motorcycles, and buses, and to identify various kinds of engines, such as Inline-4, V6, and 1- and 2-axle truck engines. Thanks to the similarity of LDV signals to acoustic and other time-series signals, a large body of existing approaches from the literature has been employed, including speech coding, time series representation, Fourier analysis, pyramid analysis, support vector machines, random forests, neural networks, and deep learning algorithms. We have found the classification results based on some of these methods to be extremely promising. For instance, our vehicle engine classification algorithm, based on pyramid Fourier analysis of the engine vibrations and fundamental frequencies of vehicle surfaces over the data collected by our LDV in the summer of 2014, has consistently attained 96% precision. In laboratory studies or well-controlled environments, vehicle owners permit a great array of high-quality LDV measurement points all over the vehicles, so extensive classifier training can be conducted to effectively capture the innate properties of the surfaces in the spatial and spectral domains. However, in real contested environments, which are of utmost interest and practical importance to military applications, uncooperative vehicles are either fast moving or purposely concealed, and thus few high-quality LDV measurements can be made.
In this work, an intensive study is performed to compare vehicle classification performance under cooperative and uncooperative environments via LDV measurements, based on a content-based indexing approach. The method uses an iterative Fourier analysis and a feed-forward artificial neural network. As our empirical studies suggest, even in uncooperative and contested environments, with an adequate training dataset of similar vehicles, our classification approach can still yield promising recognition rates.
A pixel-level Generalized Likelihood Ratio Test (GLRT) statistic for hyperspectral change detection is developed to mitigate false change caused by image parallax. Change detection, in general, presents the difficult problem of discriminating significant changes from insignificant changes caused by radiometric calibration, image registration issues, and varying view geometries. We assume that the images have been registered and that each pixel pair provides a measurement from the same spatial region in the scene. Although advanced image registration methods exist that can reduce mis-registration to subpixel levels, residual spatial mis-registration can still be incorrectly detected as significant change. Similarly, changes in sensor viewing geometry can lead to parallax error in a cluttered urban scene, where tall structures, such as buildings, appear to move. Our algorithm exploits the inherent relationship between the image views and the theory of stereo vision to mitigate parallax, performing a search along the assumed parallax direction. Mitigation of parallax-induced false alarms is demonstrated on hyperspectral data in the experimental analysis. The algorithm is examined and compared against the existing chronochrome anomalous change detection algorithm to assess performance.
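For reference, the chronochrome baseline used in the comparison above has a compact form: predict the second image from the first with a least-squares linear map, then score each pixel by the Mahalanobis norm of its prediction residual. A minimal numpy sketch (the regularization term `eps` and array shapes are our own assumptions):

```python
import numpy as np

def chronochrome(X, Y, eps=1e-6):
    """Chronochrome anomalous change detection on co-registered pixel pairs.
    X, Y: (n_pixels, n_bands) spectra from time 1 and time 2."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    Cxx = Xc.T @ Xc / len(X) + eps * np.eye(X.shape[1])
    Cyx = Yc.T @ Xc / len(X)
    L = Cyx @ np.linalg.inv(Cxx)           # least-squares predictor of Y from X
    resid = Yc - Xc @ L.T                  # what the linear map cannot explain
    Crr = resid.T @ resid / len(X) + eps * np.eye(Y.shape[1])
    # Mahalanobis norm of each residual = per-pixel anomalous-change score
    return np.einsum('ij,jk,ik->i', resid, np.linalg.inv(Crr), resid)
```

Because the linear map absorbs global radiometric differences between passes, only pixels that break the global relationship score highly, which is exactly the behavior the GLRT statistic is compared against.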
A multi-modal (hyperspectral, multispectral, and LIDAR) imaging data collection campaign was conducted in Avon, NY, just south of Rochester, on September 20, 2012 by the Rochester Institute of Technology (RIT) in conjunction with SpecTIR, LLC, the Air Force Research Lab (AFRL), the Naval Research Lab (NRL), United Technologies Aerospace Systems (UTAS), and MITRE. The campaign was a follow-on to the SpecTIR Hyperspectral Airborne Rochester Experiment (SHARE) from 2010. Data was collected in support of the eleven simultaneous experiments described here. The airborne imagery was collected over four different sites with hyperspectral, multispectral, and LIDAR sensors: Avon, NY; Conesus Lake; Hemlock Lake and forest; and a nearby quarry. Experiments included topics such as target unmixing, subpixel detection, material identification, impacts of illumination on materials, forest health, and in-water target detection. An extensive ground-truthing effort was conducted in addition to the collection of the airborne imagery. The ultimate goal of the data collection campaign is to provide the remote sensing community with a shareable resource to support future research. This paper details the experiments conducted and the data collected during this campaign.
Airborne imaging sensing systems are becoming more prevalent and are producing an ever-increasing volume of data to process, exploit, and disseminate (PED). Successful PED of this data requires file format standardization to aid rapid exploitation, database query, and dissemination. The NITF format leverages the power of the JPEG 2000 standard for exploitation-preferred compression profiles and facilitates rapid dissemination of the processed and exploited imagery to any decision maker or tactical warfighter, even over the most constrained bandwidth, via the JPEG 2000 Interactive Protocol (JPIP). The NITF standard provides recognizable format handling and a documented and regulated means of providing metadata to the community. Adoption of this NITF standard for geographically corrected imagery facilitates data quality, flexibility, and the potential for downstream GIS data fusion for any R&D sensor, and promises the quickest transition to operational status. This paper reviews the PED effort exercising a Commercial-Off-The-Shelf (COTS) software tool-derived preprocessing architecture for automated generation of orthorectified NITF products derived from the Airborne Cueing and Exploitation System Hyperspectral (ACES HY) sensor.
The nature of hyperspectral exploitation systems is such that a set of spectral imagery - and possibly a priori information such as a supplied library of target spectral signatures - is ingested into an algorithm, and a series of responses is output. These responses must be scored for their accuracy against known target locations in the image set, from which algorithm performance is then determined. We propose, implement, and demonstrate a new environment for visualizing this process, which will aid not only the evaluator but also the algorithm developer in better understanding, characterizing, and improving system performance, be it that of an anomaly detection, change detection, or material identification system.
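The scoring step described above typically reduces to sweeping a threshold over the detector responses and tallying detections and false alarms against the ground-truth locations; a minimal sketch (function and variable names are illustrative, not from the environment described):

```python
import numpy as np

def roc_points(scores, truth):
    """Sweep a threshold over detector responses and return (Pfa, Pd) pairs.
    scores: per-pixel detector output; truth: 1 where a target is present."""
    order = np.argsort(scores)[::-1]             # highest response first
    hits = np.asarray(truth, dtype=bool)[order]
    pd = np.cumsum(hits) / hits.sum()            # detection rate at each cut
    pfa = np.cumsum(~hits) / (~hits).sum()       # false-alarm rate at each cut
    return pfa, pd
```

Plotting `pd` against `pfa` gives the ROC curve from which summary metrics for the algorithm under evaluation can be read off.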
Hyperspectral imagery (HSI) is a relatively new technology capable of relaying intensity information gathered from both visible and non-visible ranges of the electromagnetic spectrum. HSI images can contain hundreds of bands, which presents a problem when an image analyst must select the most relevant bands from such an image for visualization, particularly when the bands within the range of human vision are either not present or heavily distorted. It is proposed here that two-dimensional principal component analysis (2DPCA) can aid in the automatic selection of the bands from an HSI image that best reflect visual information. The method requires neither prior knowledge of the image contents nor the association between spectral bands and their center wavelengths.
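The abstract does not spell out the selection rule, but one plausible 2DPCA-based ranking, sketched here entirely under our own assumptions, scores each band image by its energy along the leading 2DPCA projection axis:

```python
import numpy as np

def select_bands_2dpca(cube, k=3):
    """Rank HSI bands via 2DPCA. cube: (n_bands, rows, cols) band images."""
    mean_img = cube.mean(axis=0)
    # image scatter matrix accumulated over band images; 2DPCA works on the
    # 2-D images directly instead of flattening them into long vectors
    G = sum((b - mean_img).T @ (b - mean_img) for b in cube) / len(cube)
    w = np.linalg.eigh(G)[1][:, -1]                 # leading projection axis
    energy = [np.linalg.norm((b - mean_img) @ w) for b in cube]
    return np.argsort(energy)[::-1][:k]             # most informative bands
```

Note the selection depends only on the band images themselves, consistent with the claim that neither image content knowledge nor band center wavelengths are needed.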
Change Detection (CD) is the process of identifying temporal or spectral changes in signals or images. Detection and analysis of change provide valuable information about transformations in a scene. Hyperspectral sensors provide spatially and spectrally rich information that can be exploited for change detection. This paper develops and analyzes various CD algorithms for detecting changes in single-pass and multi-pass hyperspectral images. For validation and performance comparison, the changes obtained are compared against the conventional similarity correlation coefficient as well as traditional change detection algorithms, such as image differencing, image ratioing, and principal component analysis (PCA) methods. Another main objective is to incorporate kernel-based optimization using a nonlinear mapping function. Developing nonlinear versions of linear algorithms allows exploiting nonlinear relationships present in the data. The nonlinear versions, however, become computationally intensive due to the high dimensionality of the feature space resulting from application of the nonlinear mapping function. This problem is overcome by implementing these nonlinear algorithms in the high-dimensional feature space in terms of kernels. Kernelization of a similarity correlation coefficient algorithm for hyperspectral change detection has been studied. Preliminary work on dismount tracking using change detection over successive HSI bands has shown promising results. CD between multi-pass HSI image cubes elicits changes over time, whereas changes between spectral bands of the same cube illustrate the spectral changes occurring in different image regions; results for both cases are given in the paper.
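The (pre-kernel) similarity correlation coefficient mentioned above has a simple per-pixel form; a minimal sketch, assuming (rows, cols, bands) image cubes and using 1 - r as the change score (our convention, not necessarily the paper's):

```python
import numpy as np

def correlation_cd(cube1, cube2):
    """Per-pixel spectral correlation between two registered passes.
    Returns 1 - r per pixel, so larger values indicate more change."""
    a = cube1 - cube1.mean(axis=2, keepdims=True)   # demean each spectrum
    b = cube2 - cube2.mean(axis=2, keepdims=True)
    num = (a * b).sum(axis=2)
    den = np.sqrt((a * a).sum(axis=2) * (b * b).sum(axis=2))
    return 1.0 - num / den
```

The kernelized variant replaces the inner products in `num` and `den` with kernel evaluations, which is what keeps the computation tractable despite the implicit high-dimensional feature space.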
Advances in hyperspectral imaging (HSI) sensors offer new avenues for precise detection, identification, and characterization of materials or targets of military interest. HSI technologies are capable of exploiting tens to hundreds of images of a scene collected at contiguous or selected spectral bands to seek out mission-critical objects. In this paper, we develop and analyze several HSI algorithms for detection, recognition, and tracking of dismounts, vehicles, and other objects. Preliminary work on detection, classification, and fingerprinting of dismounts, vehicles, and UAVs has been performed using visible-band HSI data. The results indicate improved performance with HSI when compared to traditional EO processing. All the detection and classification results reported in this paper were based on a <i>single</i> HSI pixel used for testing. Furthermore, the close-in hyperspectral data for the experiments were collected indoors or outdoors by the authors, under different lighting conditions, using a visible HSI sensor. The algorithms studied for performance comparison include PCA, Linear Discriminant Analysis (LDA), a quadratic classifier, and Fisher's Linear Discriminant; comprehensive results are included in terms of confusion matrices and Receiver Operating Characteristic (ROC) curves.
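Of the classifiers compared, Fisher's Linear Discriminant has the most compact closed form: the projection direction maximizing between-class separation relative to within-class scatter. A two-class numpy sketch (the ridge term and names are our own, not from the paper):

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher's Linear Discriminant direction, w proportional to Sw^-1 (m1 - m0).
    X0, X1: (n_samples, n_features) arrays for the two classes."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    return w / np.linalg.norm(w)
```

Projecting a test pixel's spectrum onto `w` and thresholding yields the class label; sweeping that threshold generates the ROC curves the paper reports.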