We present an efficient and computationally simple approach for synthetic aperture radar (SAR) imaging in cases where the radar data have gaps due to missing pulses and/or notches in the frequency band. Our method is a simple variation of gradient projection, in which the search path in each iteration is obtained by projecting the negative gradient of the L1 norm onto a hyperplane defining solutions consistent with the data. The computations are uncomplicated, since the L1 gradient is simply the sign() of the pixels in the image. Computational efficiency is obtained by incorporating the polar format algorithm, which accomplishes the projection operation using a fast Fourier transform. Sample results are presented using the AFRL Gotcha 2006 radar data set and the Space Dynamics Laboratory FlexSAR system.
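The iteration described in this abstract can be sketched in one dimension as follows; the frequency mask, step size, and plain FFT-based projection (standing in for the polar format algorithm) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def l1_gradient_projection(b, mask, n_iter=50, step=0.05):
    """Sketch of L1 gradient projection for gapped radar data.
    b    : observed Fourier samples (zeros where mask is False)
    mask : boolean array marking which frequency samples were collected
    The data-consistency projection simply resets the observed Fourier
    bins to b, which an FFT accomplishes in O(n log n)."""
    x = np.fft.ifft(b)                    # start from the gap-filled image
    for _ in range(n_iter):
        # Gradient of the L1 norm is the sign of each pixel; applied
        # separately to real and imaginary parts for complex imagery.
        g = np.sign(x.real) + 1j * np.sign(x.imag)
        x = x - step * g                  # step along the negative gradient
        X = np.fft.fft(x)
        X[mask] = b[mask]                 # project onto data-consistent set
        x = np.fft.ifft(X)
    return x
```

After the final projection the reconstruction agrees exactly with the collected frequency samples, while the L1 steps suppress energy in the gaps.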
Traditional ground moving target indicator (GMTI) processing attempts to separate moving objects in the scene from stationary clutter. Techniques such as space-time adaptive processing (STAP) require the use of an unknown covariance matrix of the interference (clutter, jamming, and thermal noise) that must be estimated from the remaining data not currently under test. Many problems exist with estimating the interference covariance, including heterogeneous, contaminated, and/or limited training data. There are many existing techniques for obtaining an interference covariance matrix estimate, most of which incorporate some kind of prior knowledge to improve the estimate. We propose a Bayesian framework that estimates both clutter and movers on a range-by-range basis without the explicit estimation of an interference covariance matrix. The approach incorporates the knowledge of an approximate digital elevation map (DEM), platform kinematics (platform velocity, crab angle, and antenna spacings), and the belief that movers are sparse in the scene. Computation using this Bayesian model is enabled by recent algorithm developments for fast inference on linear mixing models. The signal model and required processing steps are detailed. We test our approach using the KASSPER I dataset and compare the results to other current approaches.
Clutter suppression interferometry (CSI) has received extensive attention due to its multi-modal capability to detect slow-moving targets and concurrently form high-resolution synthetic aperture radar (SAR) images from the same data. The ability to continuously augment SAR images with geo-located ground moving target indicators (GMTI) provides valuable real-time situational awareness that is important for many applications. CSI can be accomplished with minimal hardware and processing resources. This makes CSI a natural candidate for applications where size, weight and power (SWaP) are constrained, such as unmanned aerial vehicles (UAVs) and small satellites. This paper will discuss the theory for optimal CSI system configuration, focusing on sparse time-varying transmit and receive array manifolds due to SWaP considerations. The underlying signal model will be presented and discussed, as well as the potential benefits that a sparse time-varying transmit/receive manifold provides. The high-level processing objectives will be detailed and examined on simulated data. Then actual SAR data collected with the Space Dynamics Laboratory (SDL) FlexSAR radar system will be analyzed. Contrasting the simulated data with actual SAR data helps illustrate the challenges and limitations found in practice versus theory. A novel approach incorporating sparse signal processing is discussed that has the potential to reduce false-alarm rates and improve detections.
A new methodology for geolocating slow moving targets using SAR images at multiple phase centers is shown here along with methods to minimize false targets. In an effort to isolate the true movers from the false targets, a new approach exploiting spatio-temporal connectivity in addition to signal processing algorithms involving imaging and interferometry is proposed here to geolocate the movers in a measured data set.
The paper presents investigations of SAR image statistics and adaptive signal processing for change detection. The investigations show that the amplitude distributions of SAR images with possibly detected changes, retrieved with a linear subtraction operator, can be approximately represented by the probability density function of the Gaussian (normal) distribution. This motivates the use of available adaptive signal processing techniques for change detection. The experiments indicate promising change detection results obtained with an adaptive line enhancer, one such adaptive signal processing technique. The experiments are conducted on data collected by CARABAS, a UWB low-frequency SAR system.
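A minimal LMS-based adaptive line enhancer can be sketched as below; the tap count, delay, and step size are illustrative assumptions, not the settings used in the paper. The filter predicts the current sample from delayed samples, so correlated components (such as change signatures) concentrate in the prediction while uncorrelated noise remains in the error.

```python
import numpy as np

def adaptive_line_enhancer(x, n_taps=16, delay=1, mu=0.01):
    """LMS adaptive line enhancer sketch (parameters assumed).
    Returns (y, e): y is the predicted (enhanced) component, and
    e = x - y is the residual containing uncorrelated noise."""
    w = np.zeros(n_taps)
    y = np.zeros_like(x)
    for n in range(n_taps + delay, len(x)):
        u = x[n - delay - n_taps:n - delay][::-1]  # delayed tap-input vector
        y[n] = w @ u                               # linear prediction
        e = x[n] - y[n]
        w += 2 * mu * e * u                        # LMS weight update
    return y, x - y
```

For a change-detection pipeline, the subtraction image would be fed through the enhancer so that spatially correlated change energy is separated from clutter-like residuals.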
This paper investigates methodologies for predicting the smear signatures in broadside spotlight synthetic aperture radar imagery collections due to surface targets undergoing turning maneuvers. This analysis examines the case of broadside geometry wherein the radar moves with constant speed and heading on a level flight path. The investigation concentrates on moving-target smear issues that yield some defocus in the range direction, although this defocus is much smaller in magnitude than the motion-induced smearing in the radar cross-range direction. This paper focuses on the case of a target that executes a turning maneuver during the SAR collection interval. The SAR simulations are shown to give excellent agreement between the moving target signatures and the predicted shapes of the central contours.
The location of point scatterers in Synthetic Aperture Radar (SAR) data is exploited in several modern analyses, including persistent scatterer tracking, terrain deformation, and object identification. The changes in scatterers over time (pulse-to-pulse, including vibration and movement, or pass-to-pass, including direct follow-on, time of day, and season) can be used to draw more information from the data collection. Multiple-pass and multiple-antenna SAR scenarios have extended these analyses to location in three dimensions: either multiple passes at different elevation angles are flown, or a single pass is performed with an antenna array having an elevation baseline. Parametric spectral estimation in each dimension allows sub-pixel localization of point scatterers, in some cases additionally exploiting the multiple samples in each cross dimension. The accuracy of parametric estimation is increased when several azimuth passes or elevations (snapshots) are summed to mitigate measurement noise. Inherent range curvature across the aperture, however, limits the accuracy in the range dimension to that attained from a single pulse. Unlike the stationary case, where radar returns may be averaged, the movement necessary to create the synthetic aperture is only approximately (to pixel-level accuracy) removed to form SAR images. In parametric estimation, increased accuracy is attained when two dimensions are used to jointly estimate locations. This paper jointly estimates azimuth and elevation to attain increased-accuracy 3D location estimates. In this way the full 2D array of azimuth and elevation samples is used to obtain the maximum possible accuracy. If the dimensions are instead estimated independently, the collection geometry requires choosing whether azimuth or elevation attains the highest accuracy, whereas joint estimation increases accuracy in both dimensions. When maximum parametric estimation accuracy in azimuth is selected, the standard interferometric SAR scenario results.
When maximum estimation accuracy in elevation is selected, the multiple-baseline interferometric SAR scenario results. Use of a 2D parametric estimation method attains the best accuracy possible in both dimensions. In some scenarios, particularly the orbital case where the azimuth dimension is only approximately linear, the full accuracy increase of joint azimuth-elevation estimation is not attained. Images and point-cloud estimates are shown for several linear and orbital SAR scenarios. Images provide a visual representation of the data, while the quantitative point-cloud data is a direct input for the analyses listed earlier.
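As a simple non-parametric baseline for the joint two-dimensional estimation idea (not the estimator used in the paper), a zero-padded 2D FFT peak pick already localizes a single scatterer jointly in azimuth and elevation; the padding factor and the single-scatterer assumption are simplifications for illustration.

```python
import numpy as np

def joint_2d_peak_estimate(data, pad=8):
    """Jointly estimate the two normalized spatial frequencies of a single
    dominant scatterer by picking the peak of a zero-padded 2D FFT.
    data : complex 2D array of (azimuth x elevation) samples."""
    n0, n1 = data.shape
    spec = np.abs(np.fft.fft2(data, s=(pad * n0, pad * n1)))
    k0, k1 = np.unravel_index(np.argmax(spec), spec.shape)
    return k0 / (pad * n0), k1 / (pad * n1)   # normalized frequencies
```

A parametric estimator (e.g. 2D MUSIC or 2D ESPRIT) would refine this grid-limited estimate, but the example shows how using both dimensions at once yields a joint location rather than two independent 1D estimates.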
This paper describes the impact of ground mover motion and windowing on stationary and moving shadows in Synthetic Aperture Radar (SAR) and video SAR mode imagery. The technique provides a foundation for optimizing algorithms that detect ground movers in SAR imagery. The video SAR mode provides a persistent view of a scene centered at the Motion Compensation Point (MCP), with the radar platform following a circular flight path. Detecting a stationary shadow in a SAR image is important because the shadow indicates a detection of an object with a height component near the shadow. Similarly, the detection of a shadow that moves from frame to frame indicates the detection of a ground mover at the location of the moving shadow. An approach is presented that analyzes the impact of windowing on the brightness of a pixel in a stationary, finite-sized shadow region. An extension of the approach describes the pixel brightness for a moving shadow as a function of its velocity. The pixel brightness provides an upper bound on the Probability of Detection (PD) and a lower bound on the Probability of False Alarm (PFA) for a finite-sized, stationary or moving shadow in the presence of homogeneous, ideal clutter. Synthetic data provide shadow characteristics for a radar scenario that lend themselves to detecting a ground mover. The paper presents 2011-2014 flight data collected by General Atomics Aeronautical Systems, Inc. (GA-ASI).
Interest in the use of active electro-optical (EO) sensors for non-cooperative target identification has steadily increased as the quality and availability of EO sources and detectors have improved. A unique and recent innovation has been the development of an airborne synthetic aperture imaging capability at optical wavelengths. To effectively exploit this new data source for target identification, one must develop an understanding of target-sensor phenomenology at those wavelengths. Current high-frequency, asymptotic EM predictors are computationally intractable for such conditions, as their ray density is inversely proportional to wavelength. As a more efficient alternative, we have developed a geometric-optics-based simulation for synthetic aperture ladar that seeks to model the second-order statistics of the diffuse scattering commonly found at those wavelengths, but with a much lower ray density. Code has been developed, ported to high-performance computing environments, and tested on a variety of target models.
Deep architectures for classification and representation learning have recently attracted significant attention within academia and industry, with many impressive results across a diverse collection of problem sets. In this work we consider the specific application of Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) data from the MSTAR public release data set. The classification performance achieved using a Deep Convolutional Neural Network (CNN) on this data set was found to be competitive with existing methods considered to be state-of-the-art. Unlike most existing algorithms, this approach can learn discriminative feature sets directly from training data instead of requiring pre-specification or pre-selection by a human designer. We show how this property can be exploited to efficiently adapt an existing classifier to recognize a previously unseen target, and we discuss potential practical applications.
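A minimal numpy sketch of the forward pass of such a network (with placeholder random weights; in the ATR setting the filters and output weights would be learned from MSTAR training chips) illustrates the conv → ReLU → pool → linear structure the abstract refers to.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_valid(x, k):
    """'Valid' 2D cross-correlation of image x with kernel k."""
    windows = sliding_window_view(x, k.shape)   # (H-kh+1, W-kw+1, kh, kw)
    return np.einsum('ijkl,kl->ij', windows, k)

def tiny_cnn_forward(img, filters, w_out):
    """One conv layer -> ReLU -> global average pool -> linear class scores.
    This is a structural sketch only; a trained CNN would have many layers
    and weights learned by backpropagation."""
    feats = np.array([np.maximum(conv2d_valid(img, f), 0).mean()
                      for f in filters])
    return w_out @ feats                        # one score per class
```

The appeal noted in the abstract is that the filter banks replacing hand-designed SAR features are themselves fit to the training data.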
We study the problem of target identification from Synthetic Aperture Radar (SAR) imagery. Target classification using SAR imagery is a challenging problem due to large variations of the target signature as the target aspect angle changes. Previous work on modeling wide-angle SAR imagery has shown that point features, extracted from scattering center locations, result in a high-dimensional feature vector that lies on a low-dimensional manifold. In this paper we use rich probabilistic models for these target manifolds to analyze classification performance as a function of signal-to-noise ratio (SNR) and bandwidth. We employ Mixture of Factor Analyzers (MoFA) models to approximate the target manifold locally, and use error bounds for the estimation and analysis of classification error performance. We compare our performance predictions with the empirical performance of practical classifiers using simulated wideband SAR signatures of civilian vehicles.
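The manifold-approximation idea can be illustrated with a single factor analyzer per class and maximum-likelihood classification; this single-component stand-in for the MoFA models of the paper (and the `scikit-learn` `FactorAnalysis` estimator) is an illustrative assumption, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def fit_fa_classifier(X_by_class, n_components=2):
    """Fit one factor analyzer per class, locally approximating each
    class's signature manifold with a low-dimensional linear-Gaussian
    model (a one-component stand-in for a Mixture of Factor Analyzers)."""
    return {c: FactorAnalysis(n_components=n_components,
                              random_state=0).fit(X)
            for c, X in X_by_class.items()}

def classify(models, X):
    """Assign each row of X to the class whose model gives it the
    highest log-likelihood."""
    classes = list(models)
    ll = np.column_stack([models[c].score_samples(X) for c in classes])
    return [classes[i] for i in np.argmax(ll, axis=1)]
```

In the paper's setting the per-class likelihoods would additionally feed the error-bound analysis as a function of SNR and bandwidth.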
Multinomial pattern matching (MPM) is an automatic target recognition algorithm developed at Sandia National Laboratories specifically for radar data. The algorithm belongs to a family of algorithms that first quantize pixel values into Nq bins based on pixel amplitude before training and classification. This quantization step reduces the sensitivity of algorithm performance to absolute intensity variation in the data, typical of radar data, where signatures exhibit high variation for even small changes in aspect angle. Our previous work has focused on performance analysis of peaky template matching, a special case of MPM in which binary quantization is used (Nq = 2). Unfortunately, references on these algorithms are generally difficult to locate, so here we revisit the MPM algorithm and illustrate the underlying statistical model and decision rules for two algorithm interpretations: the 1-of-K vector form and the scalar form. MPM can also be used as a detector, and specific attention is given to algorithm tuning, where "peak pixels" are chosen based on their underlying empirical probabilities according to a reward minimization strategy aimed at reducing false alarms in the detection scenario and false positives in a classification capacity. The algorithms are demonstrated using Monte Carlo simulations on the AFRL civilian vehicle dataset for a variety of choices of Nq.
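The quantization step and the multinomial template score can be sketched as follows; the quantile-based bin edges and per-pixel template estimate are a simplified reading of MPM for illustration, not Sandia's implementation.

```python
import numpy as np

def quantize(img, n_q):
    """Quantize pixel amplitudes into n_q equal-probability bins, the first
    step of multinomial pattern matching (MPM). Quantile-derived bin edges
    make the result insensitive to absolute intensity scaling."""
    edges = np.quantile(img, np.linspace(0, 1, n_q + 1)[1:-1])
    return np.digitize(img, edges)           # bin index 0 .. n_q-1

def mpm_log_likelihood(q_img, template_probs):
    """Score a quantized chip against a per-pixel multinomial template:
    template_probs[i, j, k] estimates P(pixel (i, j) falls in bin k),
    learned from training chips. Higher is a better match."""
    i, j = np.indices(q_img.shape)
    return np.sum(np.log(template_probs[i, j, q_img] + 1e-12))
```

A detector or classifier would threshold or compare these log-likelihood scores across target templates; the peaky template matching special case corresponds to n_q = 2.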