In the context of augmented-integrity Inertial Navigation Systems (INS), recent technological developments have focused on landmark extraction from high-resolution synthetic aperture radar (SAR) images in order to retrieve aircraft position and attitude. This article puts forward a processing chain that automatically detects linear landmarks in high-resolution SAR images and can also be exploited in the context of augmented-integrity INS. The chain uses constant false alarm rate (CFAR) edge detectors as the first step of the whole processing procedure. Our studies confirm that the ratio of averages (RoA) edge detector detects object boundaries more effectively than the Student's t-test and the Wilcoxon-Mann-Whitney (WMW) test. Nevertheless, all of these statistical edge detectors are sensitive to violations of the assumptions underlying their theory. In addition to presenting a solution to this problem, we put forward a new post-processing algorithm that removes the main false alarms, selects the most probable edge position, reconstructs broken edges and, finally, vectorizes them. SAR images from the “MSTAR clutter” dataset were used to demonstrate the effectiveness of the proposed algorithms.
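As a rough illustration of the RoA principle only (the window size, the two split orientations and the strength normalization below are illustrative assumptions, not the paper's implementation, which also handles oriented windows and CFAR thresholding):

```python
import numpy as np

def roa_edge_strength(img, half=3):
    """Ratio-of-averages (RoA) edge strength sketch.

    For each pixel, the mean intensity of the half-window above is
    compared with the one below (and left with right); the strength is
    1 - min(mu1/mu2, mu2/mu1), so homogeneous areas give values near 0
    and sharp boundaries give values near 1.
    """
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, half + 1, mode="edge")
    h, w = img.shape
    strength = np.zeros((h, w))
    eps = 1e-12
    for i in range(h):
        for j in range(w):
            pi, pj = i + half + 1, j + half + 1
            # vertical split: window above vs. window below the pixel
            up = pad[pi - half:pi, pj - half:pj + half + 1].mean()
            down = pad[pi + 1:pi + half + 1, pj - half:pj + half + 1].mean()
            # horizontal split: window left vs. window right of the pixel
            left = pad[pi - half:pi + half + 1, pj - half:pj].mean()
            right = pad[pi - half:pi + half + 1, pj + 1:pj + half + 1].mean()
            r_v = min(up / (down + eps), down / (up + eps))
            r_h = min(left / (right + eps), right / (left + eps))
            strength[i, j] = 1.0 - min(r_v, r_h)
    return strength
```

Because the statistic is a ratio of local means, it is invariant to the multiplicative speckle level, which is why RoA-type detectors achieve a constant false alarm rate on homogeneous SAR clutter.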
This paper presents the concept of Synthetic Aperture Radar (SAR) and Interferometric SAR (InSAR) georeferencing algorithms dedicated to the SAR-based augmented Inertial Navigation Architecture (SARINA). SARINA is a novel concept for the Inertial Navigation System (INS) that utilizes SAR as an additional sensor to provide information about the platform trajectory position and to compensate for aircraft drift due to Inertial Measurement Unit (IMU) errors, lack of Global Positioning System (GPS) integrity, etc.
Hyperspectral sensors allow a considerable improvement in the performance of a target recognition process to be achieved. This characteristic is particularly interesting in many military and civilian remote sensing applications, such as automatic target recognition (ATR) and surveillance of wide areas. In this framework, real-time processing of the observed scenario is becoming a key issue, because it allows the operator to obtain an immediate assessment of the surveyed area. A line-by-line real-time implementation of the widely used Constrained Energy Minimization (CEM) target detector has been presented in the literature. However, experimental results show that the CEM filter sometimes produces false alarms (FAs) corresponding to rare objects whose spectra are angularly very different from both the target signature and the natural background classes in the image. A solution to this problem is presented in this work: the proposed strategy is based on the decision fusion of the CEM and Spectral Angle Mapper (SAM) algorithms. Only those pixels that pass the CEM stage are processed by the SAM algorithm; this second stage reduces false alarms while preserving most of the target pixels. The fusion strategy is applied to an experimental hyperspectral data set to recognize a known green target. Detection performance is numerically evaluated and compared with that of the classical CEM detector.
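The two-stage fusion can be sketched as follows; the thresholds, the regularization term and the use of the sample correlation matrix are illustrative assumptions, not the settings used in the paper:

```python
import numpy as np

def cem_sam_fusion(cube, target, cem_thr=0.5, sam_thr=0.1):
    """Two-stage CEM/SAM decision-fusion detector sketch.

    cube: (N, B) array of N pixel spectra with B bands;
    target: (B,) known target signature.
    A pixel is declared a target only if it passes both stages.
    """
    X = np.asarray(cube, dtype=float)
    d = np.asarray(target, dtype=float)
    # CEM stage: filter w = R^-1 d / (d^T R^-1 d), R = sample correlation
    R = X.T @ X / X.shape[0]
    R_inv = np.linalg.inv(R + 1e-6 * np.eye(R.shape[0]))  # regularized
    w = R_inv @ d / (d @ R_inv @ d)
    y_cem = X @ w                       # CEM output (1 for an exact match)
    # SAM stage: spectral angle between each pixel and the target
    cosang = (X @ d) / (np.linalg.norm(X, axis=1) * np.linalg.norm(d) + 1e-12)
    angle = np.arccos(np.clip(cosang, -1.0, 1.0))
    return (y_cem > cem_thr) & (angle < sam_thr)
```

The SAM stage discards CEM hits whose spectra are angularly far from the target signature, which is exactly the kind of rare-object false alarm described above.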
The Maximum Noise Fraction (MNF) transformation is frequently used to reduce multi/hyperspectral data dimensionality. It explores the data to find the most informative features, i.e. the ones explaining the maximum signal-to-noise ratio. However, the MNF requires knowledge of the noise covariance matrix. In practical applications such information is not available a priori; thus, it must be estimated from the image or from dark reference measurements. Many MNF-based techniques have been proposed in the literature to overcome this major disadvantage of the MNF transformation. However, such techniques have limitations or require a priori knowledge that is difficult to obtain. In this paper, a new MNF-based feature extraction algorithm is presented: the technique exploits a multiple linear regression method and a noise variance homogeneity test to estimate the noise covariance matrix. The procedure can be applied directly to the image in an unsupervised fashion. To the best of our knowledge, the MNF is usually performed to remove the noise content from multi/hyperspectral images, while its impact on image classification is not well explored in the literature. Thus, the proposed algorithm is applied to an AVIRIS data set and its impact on classification performance is evaluated. Results are compared with those obtained by the widely used Principal Component Analysis (PCA) and by the Min/Max Autocorrelation Factor (MAF) transformation, an MNF-based technique.
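A minimal sketch of the idea, assuming the regression-residual noise estimate described above (the homogeneity test and the paper's exact estimator are omitted; regressing each band on the remaining ones to approximate the noise is an assumption of this sketch):

```python
import numpy as np

def mnf_transform(X, n_components=3):
    """MNF sketch with a regression-based noise covariance estimate.

    X: (N, B) pixels-by-bands matrix. Each band is regressed on the
    other bands; the residuals approximate the noise. Components are
    returned in order of decreasing signal-to-noise ratio.
    """
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)
    N, B = Xc.shape
    resid = np.empty_like(Xc)
    for b in range(B):
        others = np.delete(Xc, b, axis=1)
        beta, *_ = np.linalg.lstsq(others, Xc[:, b], rcond=None)
        resid[:, b] = Xc[:, b] - others @ beta
    Sn = resid.T @ resid / N          # estimated noise covariance
    Sx = Xc.T @ Xc / N                # data covariance
    # solve the generalized eigenproblem Sx v = lambda Sn v
    # by whitening the noise, then diagonalizing the whitened data
    evals_n, evecs_n = np.linalg.eigh(Sn)
    W = evecs_n / np.sqrt(np.maximum(evals_n, 1e-12))
    evals, evecs = np.linalg.eigh(W.T @ Sx @ W)
    order = np.argsort(evals)[::-1]   # highest SNR first
    A = W @ evecs[:, order[:n_components]]
    return Xc @ A
```

Unlike PCA, which orders components by variance alone, this ordering by signal-to-noise ratio is what makes the MNF family attractive for denoising before classification.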
This paper addresses the problem of sub-pixel target detection in hyperspectral images, assuming that the target spectral signature is deterministic and known. Hyperspectral image pixels are frequently a combination, or mixture, of disparate materials. The need for a quantitative pixel decomposition arises in many civilian and military applications, such as material classification, anomaly detection and target detection. The Linear Mixing Model (LMM) is a widely used model in hyperspectral data analysis. It represents a mixed pixel as the sum of the spectra of known pure materials, called endmembers, weighted by their relative concentrations, called abundance coefficients. However, the LMM does not take into account the natural spectral variability of the endmembers. This variability is well represented by the Stochastic Mixing Model (SMM), which statistically describes both the mixed pixels in the scene and the endmember spectral variations. Modeling the background spectrum as a Gaussian random vector with known mean spectrum and unknown covariance matrix, a novel SMM-based detector (ASMMD) is derived in this paper. The theoretical performance of the ASMMD is evaluated in a case study reflecting actual operating conditions. The analysis is conducted by estimating the ASMMD parameters on an experimental data set acquired by the AVIRIS hyperspectral sensor, and the results are compared with those achieved by the Adaptive Matched Subspace Detector (AMSD), which is based on the LMM.
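For context, the LMM inversion underlying the AMSD baseline can be sketched with an unconstrained least-squares abundance estimate (this is not the ASMMD derivation itself; real analyses usually also impose sum-to-one and non-negativity constraints, omitted here):

```python
import numpy as np

def lmm_unmix(pixel, endmembers):
    """Least-squares abundance estimate under the Linear Mixing Model.

    Model: x = E a + n, with x the (B,) mixed-pixel spectrum, E the
    (B, M) matrix of endmember spectra, and a the (M,) abundances.
    """
    E = np.asarray(endmembers, dtype=float)
    x = np.asarray(pixel, dtype=float)
    a, *_ = np.linalg.lstsq(E, x, rcond=None)
    return a
```

The SMM extends this picture by letting the endmember spectra themselves vary statistically rather than being fixed columns of E.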