This PDF file contains the front matter associated with SPIE Proceedings Volume 7337, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
This paper demonstrates image enhancement for wide-angle, multi-pass three-dimensional SAR applications.
Without sufficient regularization, three-dimensional sparse-aperture imaging from realistic data-collection scenarios
results in poor quality, low-resolution images. Sparsity-based image enhancement techniques may be used
to resolve high-amplitude features in limited aspects of multi-pass imagery. Fusion of the enhanced images across
multiple aspects in an approximate GLRT scheme results in a more informative view of the target. In this paper,
we apply two sparse reconstruction techniques to measured data of a calibration top-hat and of a civilian vehicle
observed in the AFRL publicly-released 2006 Circular SAR data set. First, we employ prominent-point autofocus
in order to compensate for unknown platform motion and phase errors across multiple radar passes. Each
sub-aperture of the autofocused phase history is digitally-spotlighted (spatially low-pass filtered) to eliminate
contributions to the data due to features outside the region of interest, and then imaged with l1-regularized
least squares and CoSaMP. The resulting sparse sub-aperture images are non-coherently combined to obtain a
wide-angle, enhanced view of the target.
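As a rough illustration of the l1-regularized least-squares step mentioned above, the following is a minimal ISTA (iterative shrinkage-thresholding) sketch with a non-coherent magnitude combination; the paper does not specify its solver, and the problem sizes and regularization weight here are illustrative assumptions only.

```python
import numpy as np

def ista_l1(A, y, lam, n_iter=500):
    """Solve min_x 0.5*||A x - y||_2^2 + lam*||x||_1 via iterative
    shrinkage-thresholding (ISTA); complex x is handled by soft-
    thresholding the magnitude while keeping the phase."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of A^H A
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - A.conj().T @ (A @ x - y) / L   # gradient step
        mag = np.abs(g)
        x = np.exp(1j * np.angle(g)) * np.maximum(mag - lam / L, 0.0)
    return x

def noncoherent_combine(sub_images):
    """Non-coherent (magnitude-only) combination of sub-aperture images."""
    return np.max(np.abs(np.stack(sub_images)), axis=0)
```

The soft-threshold level lam/L controls sparsity: larger values suppress more low-amplitude clutter at the cost of biasing strong scatterer amplitudes downward.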
Typically in SAR imaging, there is insufficient data to form well-resolved three-dimensional (3D) images using
traditional Fourier image reconstruction; furthermore, scattering centers do not persist over wide angles. In
this work, we examine 3D non-coherent wide-angle imaging on the GOTCHA Air Force Research Laboratory
(AFRL) data set; this data set consists of multipass complete circular aperture radar data from a scene at AFRL,
with each pass varying in elevation as a result of aircraft flight dynamics. We compare two algorithms capable
of forming well-resolved 3D images over this data set: regularized lp least-squares inversion, and non-uniform
multipass interferometric SAR (IFSAR).
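A regularized lp least-squares inversion of the kind compared above is commonly implemented with iteratively reweighted least squares (IRLS); the sketch below is one such formulation, and the values of p, the smoothing constant, and the regularization weight are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def irls_lp(A, y, p=0.8, lam=0.05, n_iter=50, eps=1e-6):
    """lp-regularized least squares, min ||Ax-y||^2 + lam*||x||_p^p (p<=1),
    via iteratively reweighted least squares: each iteration solves a
    weighted ridge problem whose weights come from the previous iterate."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # min-norm initialization
    for _ in range(n_iter):
        # Smoothed weights |x|^(p-2); small entries get a large penalty.
        w = (np.abs(x) ** 2 + eps) ** (p / 2 - 1)
        H = A.conj().T @ A + (lam * p / 2) * np.diag(w)
        x = np.linalg.solve(H, A.conj().T @ y)
    return x
```

For p < 1 the penalty promotes sparser solutions than the l1 norm, which is why such inversions can resolve point-like scatterers from sparse apertures.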
Sparse data collection geometries represent a significant challenge to high-resolution three-dimensional SAR
imaging. In particular, sparse sampling can lead to significant sidelobe structure in high-resolution reconstructions.
To help constrain volumetric SAR reconstructions, we have introduced a surface model. We justify this
based on physical phenomenology: higher frequency SAR systems exhibit only limited surface penetration. In
our method, we jointly estimate the surface models and reconstructions, significantly reducing sidelobing artifacts
in comparison with traditional reconstructions. Our paper and presentation illustrate reconstructions both
with and without surface models to demonstrate the potential improvement.
In this paper we study the impact of sparse aperture data collection of a SAR sensor on reconstruction quality of
a scene of interest. Different mono and multi-static SAR measurement configurations produce different Fourier
sampling patterns. These patterns reflect different spectral and spatial diversity trade-offs that must be made
during task planning. Compressed sensing theory argues that the mutual coherence of the measurement probes
is related to the reconstruction performance of sparse domains. With this motivation we compare the mutual
coherence and corresponding reconstruction behavior of various mono-static and ultra-narrow band multi-static
configurations, which trade off frequency for geometric diversity. We investigate whether such simple metrics are related
to SAR reconstruction quality in an obvious way.
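The mutual coherence referred to above has a simple closed form: the largest normalized inner product between distinct columns of the measurement matrix. A minimal sketch:

```python
import numpy as np

def mutual_coherence(A):
    """Largest normalized inner product between distinct columns of A;
    lower coherence generally favors sparse reconstruction."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm columns
    G = np.abs(An.conj().T @ An)                        # Gram matrix
    np.fill_diagonal(G, 0.0)
    return G.max()
```

An orthonormal matrix has coherence 0, while a matrix with two proportional columns (two fully redundant measurements) has coherence 1.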
We consider sidelobe reduction and resolution enhancement in synthetic aperture radar (SAR) imaging via an
iterative adaptive approach (IAA) and a sparse Bayesian learning (SBL) method. The nonparametric weighted
least squares based IAA algorithm is a robust and user parameter-free adaptive approach originally proposed
for array processing. We show that it can be used to form enhanced SAR images as well. SBL has been used as
a sparse signal recovery algorithm for compressed sensing. It has been shown in the literature that SBL is easy
to use and can recover sparse signals more accurately than the l1-based optimization approaches, which require
delicate choice of the user parameter. We consider using a modified expectation maximization (EM) based SBL
algorithm, referred to as SBL-1, which is based on a three-stage hierarchical Bayesian model. SBL-1 is not only
more accurate than benchmark SBL algorithms, but also converges faster. SBL-1 is used to further enhance
the resolution of the SAR images formed by IAA. Both IAA and SBL-1 are shown to be effective, requiring
only a limited number of iterations, and have no need for polar-to-Cartesian interpolation of the SAR collected
data. This paper characterizes the achievable performance of these two approaches by processing the complex
backscatter data from both a sparse case study and a backhoe vehicle in free space with different aperture sizes.
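The IAA iteration described above admits a compact single-snapshot sketch: powers are initialized with a matched filter and refined through a model covariance built from the current estimates. The dictionary size, iteration count, and diagonal loading below are illustrative assumptions.

```python
import numpy as np

def iaa(A, y, n_iter=10):
    """Iterative Adaptive Approach: nonparametric, user-parameter-free
    power spectrum estimate from a single snapshot y over the steering
    dictionary A (columns = steering vectors)."""
    m = A.shape[0]
    # Matched-filter (periodogram-like) initialization of the powers.
    p = np.abs(A.conj().T @ y) ** 2 / np.sum(np.abs(A) ** 2, axis=0) ** 2
    for _ in range(n_iter):
        R = (A * p) @ A.conj().T                        # A diag(p) A^H
        R += 1e-6 * np.trace(R).real / m * np.eye(m)    # tiny loading for stability
        Ri_y = np.linalg.solve(R, y)
        Ri_A = np.linalg.solve(R, A)
        num = A.conj().T @ Ri_y                         # a_k^H R^-1 y
        den = np.einsum('ij,ij->j', A.conj(), Ri_A)     # a_k^H R^-1 a_k
        p = np.abs(num / den) ** 2
    return p
```

Because the weighting matrix R is rebuilt from the data at every pass, no user-tuned regularization parameter is needed, which is the property the abstract highlights.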
The Chirp-Scaling Algorithm (CSA) is one of the most widely used synthetic aperture radar (SAR) image
reconstruction methods. However, its applicability is limited to straight flight trajectories and monostatic SAR.
We present a new mathematical treatment of the CSA from the perspective of Fourier Integral Operators theory.
Our treatment leads to a chirp-scaling-based true amplitude imaging algorithm, which places the visible edges of
the scene at the correct locations and directions with the correct strength. Furthermore, it provides a framework
for the extension of the chirp-scaling based approach to non-ideal imaging scenarios as well as other SAR imaging
modalities such as bistatic-SAR and hitchhiker-SAR.
Synthetic Aperture Radar (SAR) image processing platforms have to process increasingly large datasets under hard
real-time deadlines. Upgrading these platforms is expensive. An attractive solution to this problem is to couple high
performance, general-purpose Commercial-Off-The-Shelf (COTS) architectures such as IBM's Cell BE and Intel's Core
with software implementations of SAR algorithms. While this approach provides great flexibility, achieving the requisite
performance is difficult and time-consuming. The reason is the highly parallel nature and general complexity of modern
COTS microarchitectures. To achieve the best performance, developers have to interweave various complex optimizations
including multithreading, the use of SIMD vector extensions, and careful tuning to the memory hierarchy. In this
paper, we demonstrate the computer generation of high performance code for SAR implementations on Intel's multicore
platforms based on the Spiral framework and system. The key is to express SAR and its building blocks in Spiral's formal
domain-specific language to enable automatic vectorization, parallelization, and memory hierarchy tuning through rewriting
at a high abstraction level and automatic exploration of choices. We show that Spiral produces code for the latest Intel
quadcore platforms that surpasses competing hand-tuned implementations on the Cell Blade, an architecture with twice as
many cores and three times the memory bandwidth. Specifically, we show an average performance of 39 Gigaflops/sec for
16-Megapixel and 100-Megapixel SAR images with runtimes of 0.56 and 3.76 seconds respectively.
In recent papers the authors discussed the advantages of forming spotlight-mode SAR imagery from phase history data
via a technique that is rooted in the principles of phased-array beamforming, which is closely related to back-projection.
The application of a traditional autofocus algorithm, such as Phase Gradient Autofocus (PGA), requires some care in this
situation. Specifically, a stated advantage of beamforming is that it easily allows for reconstruction of the SAR image
onto an arbitrary imaging grid. One very useful grid, for example, is a Cartesian grid in the ground plane. Autofocus via
PGA for such an image, however, cannot be performed in a straightforward manner, because in PGA a Fourier transform
relationship is required between the image domain and the range-compressed phase history, and this is not the case for
such an imaging grid. In this paper we propose a strategy for performing autofocus in this situation, and discuss its
limitations. We demonstrate the algorithm on synthetic phase errors applied to real SAR imagery.
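The classical PGA loop referred to above (center-shift, window, phase-gradient estimation, trend removal) can be sketched as follows; this is a generic textbook PGA iteration for illustration, not the modified strategy the paper proposes for arbitrary imaging grids, and the window width is an arbitrary assumption.

```python
import numpy as np

def pga_iteration(img):
    """One Phase Gradient Autofocus iteration on a complex SAR image
    (rows: range bins, columns: azimuth). Returns the estimated
    azimuth phase error, one value per pulse."""
    n_rng, n_az = img.shape
    # 1. Circularly shift the brightest scatterer in each range bin to center.
    shifted = np.empty_like(img)
    for i in range(n_rng):
        shifted[i] = np.roll(img[i], n_az // 2 - np.argmax(np.abs(img[i])))
    # 2. Window around the center to isolate the dominant scatterers.
    w = n_az // 4                                   # assumed window width
    win = np.zeros(n_az)
    win[n_az // 2 - w // 2: n_az // 2 + w // 2] = 1.0
    shifted = shifted * win
    # 3. Back to the range-compressed phase-history (pulse) domain.
    G = np.fft.ifft(shifted, axis=1)
    # 4. Phase-gradient estimate summed over range bins, then integrated.
    dphi = np.angle(np.sum(G[:, 1:] * np.conj(G[:, :-1]), axis=0))
    phi = np.concatenate(([0.0], np.cumsum(dphi)))
    # 5. Remove the linear trend (it only translates the image).
    k = np.arange(n_az)
    phi -= np.polyval(np.polyfit(k, phi, 1), k)
    return phi

def pga_correct(img, phi):
    """Apply the negated phase estimate in the pulse domain."""
    return np.fft.fft(np.fft.ifft(img, axis=1) * np.exp(-1j * phi), axis=1)
```

Note that steps 3 and 4 rely on exactly the Fourier-transform relationship between the image and the range-compressed phase history that the abstract points out is broken by an arbitrary ground-plane grid.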
The image focus quality of multiple-pass 3D SAR imagery strongly depends on the accuracy of the between-pass
coherent image alignment. We present a method for forming a 3D SAR image from multiple coherently
aligned 2D SAR images collected at different viewing geometries, where the joint coherent alignment is performed
by directly optimizing the 3D SAR image entropy. Image entropy based focusing is inherently data-adaptive
and imposes only minimal assumptions on the SAR imagery, such as a non-Gaussian distribution,
as opposed to interferometric based methods which typically rely on the assumption of one or a few bright
scatterers per resolution cell. We will show examples of coherently aligning and focusing 3D SAR imagery using
both simulated as well as measured multiple-pass SAR imagery.
Regularization based image reconstruction algorithms have successfully been applied to the synthetic aperture
radar (SAR) imaging problem. Such algorithms assume that the mathematical model of the imaging
system is perfectly known. However, in practice, it is very common to encounter various types of model
errors. One predominant example is phase errors which appear either due to inexact measurement of the
location of the SAR sensing platform, or due to effects of propagation through atmospheric turbulence. We
propose a nonquadratic regularization-based framework for joint image formation and model error correction.
This framework leads to an iterative algorithm, which cycles through steps of image formation and
model parameter estimation. This approach offers advantages over autofocus techniques that involve post-processing
of a conventionally formed image. We present results on synthetic scenes, as well as the Air Force
Research Laboratory (AFRL) Backhoe data set, demonstrating the effectiveness of the proposed approach.
A novel fractional time-shift operator is presented in this paper. It is based on performing a phase shift in the frequency
domain, which, of course, corresponds to the desired time shift in the time domain. The operations of transforming to
the frequency domain, multiplying by the phase shift, and transforming back to the time domain can be accomplished
efficiently using one matrix. This matrix is Toeplitz, with the terms away from the shift index being near zero. By
approximating these near-zero terms as zero, a banded (truncated) Toeplitz matrix results. This reduces the computational
load and allows an FIR filter realization for the fractional time shift.
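The single-matrix construction described above can be sketched directly: build the DFT matrix, apply the frequency-domain phase ramp, and transform back. The resulting operator is circulant, hence Toeplitz along its diagonals.

```python
import numpy as np

def frac_shift_matrix(n, tau):
    """Matrix implementing a fractional shift by tau samples:
    F^-1 diag(exp(-2j*pi*f*tau)) F. The result is circulant (hence
    Toeplitz), with entries decaying away from the shift lag."""
    f = np.fft.fftfreq(n)                      # cycles per sample
    F = np.fft.fft(np.eye(n), axis=0)          # DFT matrix: F @ x == fft(x)
    Fi = np.conj(F).T / n                      # inverse DFT matrix
    return (Fi * np.exp(-2j * np.pi * f * tau)) @ F
```

Zeroing the near-zero entries far from the shift lag yields the banded Toeplitz matrix, and hence the FIR-filter realization, that the abstract describes.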
Synthetic aperture radar (SAR) images exhibit a fundamental inverse relationship between image quality and
collection range: various metrics and visual inspection clearly indicate that SAR image quality deteriorates as
collection range increases. Standoff constraints typically dictate long-range imaging geometries for operational
use of fielded SAR sensors. At the same time, system validation and data volume considerations typically dictate
short-range imaging geometries for non-operational SAR data collections. This presents a conundrum for the
developers of SAR exploitation applications: despite the fact that a sensor may be used exclusively at long
ranges in operational settings, most or all of the data available for application development and testing may
have been collected at short range. The lack of long-range imagery for development and testing can lead to a
variety of problems, potentially including not only poor robustness to range-induced image-quality degradation,
but even total failure if longer-range imagery invalidates fundamental algorithmic assumptions. We propose
a method for simulating the effects of longer-range collection using shorter-range SAR images. This method
incorporates the predominant contributing factors to range-induced image-quality degradation, including various
signal-attenuation and aperture-decoherence effects. We present examples demonstrating our approach.
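As a toy illustration of the signal-attenuation part of such a simulation only (the paper's method also models aperture decoherence and other effects, which are omitted here), receiver noise can be injected so that output SNR follows the R^4 radar-equation power law; the function and parameter names are illustrative assumptions.

```python
import numpy as np

def attenuate_to_range(img, r_short, r_long, noise_sigma, rng=None):
    """Degrade a short-range complex SAR image to emulate a longer
    collection range: signal amplitude falls as R^-2 (power as R^-4)
    while receiver noise stays fixed, so SNR drops by (r_short/r_long)^4
    in power. Output is rescaled so the mean signal level is unchanged."""
    rng = np.random.default_rng(rng)
    scale = (r_short / r_long) ** 2
    noise = noise_sigma * (rng.standard_normal(img.shape)
                           + 1j * rng.standard_normal(img.shape))
    return img + noise / scale      # equivalent to (scale*img + noise)/scale
```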
Synthetic aperture radar (SAR) is a popular tool for long-range imaging of stationary ground objects. Moving
targets in the imaged scene will have a mismatch to the matched filter in the image formation process, thus
degrading target image quality. In this paper, the impact of uncompensated target motion in SAR imagery is
studied in detail. Bounds on allowable target rotation, and random and deterministic translation are derived to
maintain image interpretability.
This document describes a challenge problem whose scope is the detection, geolocation, tracking
and ID of moving vehicles from a set of X-band SAR data collected in an urban environment. The
purpose of releasing this Gotcha GMTI Data Set is to provide the community with X-band SAR data
that supports the development of new algorithms for SAR-based GMTI. To focus research onto
specific areas of interest to AFRL, a number of challenge problems are defined.
The data set provided is phase history from an AFRL airborne X-band SAR sensor. Some key
features of this data set are two-pass, three phase center, one-foot range resolution, and one
polarization (HH). In the scene observed, multiple vehicles are driving on roads near buildings.
Ground truth is provided for one of the vehicles.
We present an algorithm for extracting 3D canonical scattering features from complex targets observed over sparse
3D SAR apertures. The algorithm begins with complex phase history data and ends with a set of geometrical
features describing the scene. The algorithm provides a pragmatic approach to initialization of a nonlinear
feature estimation scheme, using regularization methods to deconvolve the point spread function and obtain
sparse 3D images. Regions of high energy are detected in the sparse images, providing location initializations for
scattering center estimates. A single canonical scattering feature, corresponding to a geometric shape primitive,
is fit to each region via nonlinear optimization of fit error between the regularized data and parametric canonical
scattering models. Results of the algorithm are presented using 3D scattering prediction data of a simple scene
for both a densely-sampled and a sparsely-sampled SAR measurement aperture.
Non-quadratic regularization based image formation is a recently proposed framework for feature-enhanced radar
imaging. Specific image formation techniques in this framework have so far focused on enhancing one type of feature,
such as strong point scatterers, or smooth regions. However, many scenes contain a number of such features. We develop
an image formation technique that simultaneously enhances multiple types of features by posing the problem as one of
sparse signal representation based on overcomplete dictionaries. Due to the complex-valued nature of the reflectivities in
SAR, our new approach is designed to sparsely represent the magnitude of the complex-valued scattered field in terms of
multiple features, which turns the image reconstruction problem into a joint optimization problem over the representation
of the magnitude and the phase of the underlying field reflectivities. We formulate the mathematical framework needed
for this method and propose an iterative solution for the corresponding joint optimization problem. We demonstrate the
effectiveness of this approach on various SAR images.
In radar imaging, for example Inverse Synthetic Aperture Radar (ISAR) imaging, a target can be modeled as a collection
of scattering centers in the image domain. A method to improve radar image quality through clutter suppression and
localization of scattering centers is presented in this paper. The approach is based on localizing the scattering centers by
enforcing sparsity constraints through random compressive sampling of the measured data. The sparsity constraint ratio is
chosen as a design parameter to achieve the objective. Results show that significant clutter reduction and improvement in
localization of scattering centers are achieved at an optimum sparsity constraint ratio.
Multiple scattering and random interactions among scattering elements and between the scatterers and the background
adversely affect the radar image quality and target detection capability. In the radar image, multiple scattering and
interactions appear as non-physical scattering centers. A method to improve the performance of radar imaging systems
by extracting independent scattering centers is investigated in this study. Independent Component Analysis (ICA) is
applied to returns of a radar system to extract independent scattering centers based on their non-Gaussianity. As an
example, this method of target extraction is implemented in inverse synthetic aperture radar (ISAR) imaging of closely-spaced
targets. Results of this study show that the application of this radar signal processing technique has allowed
extraction of independent scattering centers which are needed in target detection and identification.
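The ICA step described above is typically carried out with a FastICA-style fixed-point iteration; the minimal two-source sketch below (whitening followed by a tanh fixed point with symmetric orthogonalization) is one standard formulation, not necessarily the variant used in the paper.

```python
import numpy as np

def fastica_2(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA for two mixed signals (rows of X):
    center, whiten, then iterate the tanh fixed point with symmetric
    decorrelation, separating sources by maximizing non-Gaussianity."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E / np.sqrt(d)).T @ X                  # whitened data, cov(Z) = I
    rng = np.random.default_rng(seed)
    W = np.linalg.qr(rng.standard_normal((2, 2)))[0]
    for _ in range(n_iter):
        WZ = np.tanh(W @ Z)
        # Fixed-point update: E[z g(w'z)] - E[g'(w'z)] w, row-wise.
        W1 = WZ @ Z.T / Z.shape[1] - np.diag((1 - WZ ** 2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W1)            # symmetric decorrelation
        W = u @ vt
    return W @ Z
```

The recovered components come back in arbitrary order and sign, which is the usual ICA ambiguity; the non-Gaussianity criterion is what lets independent scattering centers be pulled apart, as the abstract describes.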
Proc. SPIE 7337, The spectrum parted linked image test (SPLIT) algorithm for estimating the frequency dependence of scattering center amplitudes, 73370L (29 April 2009); https://doi.org/10.1117/12.819329
This paper presents an algorithm for estimating the frequency dependence of scattering center amplitudes. The
spectrum parted linked image test (SPLIT) algorithm compares the amplitude peaks in two images formed by
splitting the radar signal bandwidth into two parts. The theoretical basis for the algorithm lies in the Geometrical
Theory of Diffraction. This theoretical basis, a description of the algorithm, and experimental results are provided.
The authors recommend the use of this algorithm for high frequency (> 5 GHz), wide-band (> 1 GHz) applications
involving target detection and recognition.
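The core SPLIT comparison above reduces to a two-point fit of the GTD amplitude model A(f) ~ f**alpha: image each half of the band, read off the peak amplitudes, and solve for the exponent. The window choice and the single-scatterer assumption in this sketch are illustrative.

```python
import numpy as np

def split_alpha(freqs, spectrum):
    """SPLIT-style estimate of the GTD frequency-dependence exponent
    alpha (scatterer amplitude ~ f**alpha) for a single scatterer:
    split the band in two, image each half, and compare the peak
    amplitudes at the two sub-band center frequencies."""
    n = len(freqs)
    fc, amp = [], []
    for f, s in ((freqs[:n // 2], spectrum[:n // 2]),
                 (freqs[n // 2:], spectrum[n // 2:])):
        img = np.fft.ifft(s * np.hanning(len(s)))   # windowed sub-band image
        fc.append(f.mean())
        amp.append(np.abs(img).max())
    return np.log(amp[1] / amp[0]) / np.log(fc[1] / fc[0])
```

Under the Geometrical Theory of Diffraction, alpha takes half-integer values (for example -1 for an edge at grazing and +1 for a dihedral), so even a coarse estimate like this can help discriminate scattering mechanisms.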
Existing SAR ATR systems are usually trained off-line with samples of target imagery or CAD models, prior to
conducting a mission. If the training data is not representative of mission conditions, then poor performance may result.
In addition, it is difficult to acquire suitable training data for the many target types of interest. The Adaptive SAR ATR
Problem Set (AdaptSAPS) program provides a MATLAB framework and image database for developing systems that
adapt to mission conditions, meaning less reliance on accurate training data. A key function of an adaptive system is the
ability to utilise truth feedback to improve performance, and it is this feature which AdaptSAPS is intended to exploit.
This paper presents a new method for SAR ATR that does not require off-line training data, relying instead on supervised learning from truth feedback during the mission. This is
achieved by using feature-based classification, and several new shadow features have been developed for this purpose.
These features allow discrimination of vehicles from clutter, and classification of vehicles into two classes: targets,
comprising military combat types, and non-targets, comprising bulldozers and trucks. The performance of the system is
assessed using three baseline missions provided with AdaptSAPS, as well as three additional missions. All performance
metrics indicate a distinct learning trend over the course of a mission, with most third and fourth quartile performance
levels exceeding 85% correct classification. It has been demonstrated that these performance levels can be maintained
even when truth feedback rates are reduced by up to 55% over the course of a mission.
The ability to assess potential automatic target recognition (ATR) performance for a given SAR system, target set and
clutter environment is a key requirement for system procurement and mission planning. A cost-effective solution is to
develop a theoretical model which can provide ATR performance predictions given a parameterisation of the system,
targets and environment. In this paper, a classification scheme based on shadow information is analysed. Consideration
of the statistical accuracy of shadow-based features allows ATR performance to be predicted. Quantitative comparisons
of predicted performance with results obtained via simulation as well as against real data from the MSTAR data set are
presented. It is seen that a reasonable level of agreement is obtained which gives confidence in extending the theoretical
concepts to more complex feature-based ATR schemes.
This paper addresses several fundamental problems that have hindered the development of model-based recognition
systems: (a) The feature-correspondence problem whose complexity grows exponentially with the number
of image points versus model points, (b) The restriction of matching image data points to a point-based model
(e.g., point based features), and (c) The local versus global minima issue associated with using an optimization
approach. Using a convex hull representation for the surfaces of an object, common in CAD models, allows generalizing
the point-to-point matching problem to a point-to-surface matching problem. A discretization of the Euclidean
transformation variables and use of the well-known assignment model of Linear Programming leads to
a multilinear programming problem. Using a logarithmic/exponential transformation employed in geometric
programming, this nonconvex optimization problem can be transformed into a difference of convex functions
(DC) optimization problem which can be solved using a DC programming algorithm.
In this paper, a novel descriptive feature parameter extraction method from synthetic aperture radar (SAR) images is
proposed. The new approach is based on region covariance (RC) method which involves the computation of a covariance
matrix whose entries are used in target detection and classification. In addition, the region co-difference matrix is
introduced. Experimental results of object detection in the MSTAR (moving and stationary target recognition) database are
presented. The RC and region co-difference methods deliver high detection accuracy and low false alarm rates. It is also
experimentally observed that these methods produce better results than the commonly used principal component analysis
(PCA) method when used with the distance metrics introduced.
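The region covariance descriptor referred to above can be sketched compactly: build a per-pixel feature vector over a region, take its covariance, and compare covariances with a generalized-eigenvalue dissimilarity. The particular five-element feature vector below is one common choice, assumed here for illustration.

```python
import numpy as np

def region_covariance(region):
    """Region covariance descriptor: covariance of per-pixel feature
    vectors [x, y, intensity, |Ix|, |Iy|] over an image region."""
    h, w = region.shape
    y, x = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(region.astype(float))
    feats = np.stack([x.ravel(), y.ravel(), region.astype(float).ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats)

def cov_distance(C1, C2):
    """Dissimilarity between covariance descriptors via generalized
    eigenvalues (Forstner-style metric on SPD matrices)."""
    lam = np.real(np.linalg.eigvals(np.linalg.solve(C1, C2)))
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

A covariance matrix is not a point in Euclidean space, which is why a metric on positive-definite matrices (rather than an elementwise distance) is used for matching.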
Performance of Automatic Target Recognition (ATR) algorithms for Synthetic Aperture Radar (SAR) systems relies
heavily on the system performance and specifications of the SAR sensor. A representative multi-stage SAR ATR
algorithm [1, 2] is analyzed across imagery containing phase errors in the down-range direction induced during the
transmission of the radar's waveform. The degradation induced on the SAR imagery by the phase errors is
measured in terms of peak phase error, Root-Mean-Square (RMS) phase error, and multiplicative noise. The ATR
algorithm consists of three stages: a two-parameter CFAR, a discrimination stage to reduce false alarms, and a
classification stage to identify targets in the scene. The end-to-end performance of the ATR algorithm is quantified
as a function of the multiplicative noise present in the SAR imagery through Receiver Operating Characteristic
(ROC) curves. Results indicate that the performance of the ATR algorithm presented is robust over a 3 dB change in multiplicative noise.
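The two-parameter CFAR stage mentioned above can be sketched as a sliding-window detector: each pixel is compared against the mean plus k standard deviations of a training annulus surrounding a guard window. The window sizes and threshold multiplier below are illustrative assumptions.

```python
import numpy as np

def two_param_cfar(img, guard=2, train=6, k=4.0):
    """Two-parameter CFAR: declare a detection where the pixel magnitude
    exceeds the local background mean by k local standard deviations,
    with background statistics estimated from a training band around a
    guard window (guard cells excluded to avoid target self-masking)."""
    a = np.abs(img).astype(float)
    h, w = a.shape
    det = np.zeros((h, w), dtype=bool)
    r = guard + train
    for i in range(r, h - r):
        for j in range(r, w - r):
            win = a[i - r:i + r + 1, j - r:j + r + 1].copy()
            win[train:-train, train:-train] = np.nan   # excise guard + cell
            bg = win[~np.isnan(win)]
            det[i, j] = a[i, j] > bg.mean() + k * bg.std()
    return det
```

Because both the mean and the standard deviation are estimated locally, the false-alarm rate stays approximately constant as the clutter level varies across the scene, hence the name.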
In this paper we consider classification of civilian vehicles using circular synthetic aperture radar. For wide-field
application in which the scene radius is a significant fraction of the flight path radius, vehicle signatures
are spatially variant due to layover. For a ten-class identification task using simulated X-band signatures, we
demonstrate 96% correct classification for single-pass 2D imagery with scene radius 0.4 times the flight radius.
Simulated scattering data include multi-path and material effects. Image signatures are represented by sets of
attributed scattering centers. Dissimilarity between attributed point sets is computed via a minimized partial
Hausdorff distance. Using multidimensional scaling, the distances are represented in a low-dimensional Euclidean
space for both visualization and improved classification. The minimized partial Hausdorff distance, while not
a true distance, empirically shows remarkable fidelity to the triangle inequality. Finally, in a limited two-class
study, we show that three-dimensional imaging of layover points using polarization cues provides improved classification performance.
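The partial Hausdorff dissimilarity used above replaces the maximum of nearest-neighbor distances with a quantile, which tolerates missing or spurious scattering centers; the sketch below computes the directed and symmetric versions (the paper additionally minimizes over alignment transformations, which is omitted here).

```python
import numpy as np

def partial_hausdorff(A, B, frac=0.75):
    """Directed partial Hausdorff distance: the frac-quantile (rather
    than the max) of nearest-neighbor distances from points A to B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    nn = d.min(axis=1)                       # nearest B-point per A-point
    k = max(int(np.ceil(frac * len(A))) - 1, 0)
    return np.sort(nn)[k]

def symmetric_partial_hausdorff(A, B, frac=0.75):
    """Symmetrize by taking the larger of the two directed distances."""
    return max(partial_hausdorff(A, B, frac), partial_hausdorff(B, A, frac))
```

Dropping the top (1 - frac) fraction of distances is what makes the measure robust to outlier scatterers, at the cost of no longer being a true metric, as the abstract notes.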
The performance of coherent (and non-coherent) change detection algorithms is evaluated using complex SAR data that
have been processed with various data compression approaches; the hope is that it may be possible to achieve higher
compression ratios than could be achieved using classical image compression approaches such as BAQ (block adaptive
quantization). BAQ compression is typically applied to raw (I,Q) SAR phase-history data, and our studies show that to
obtain reasonably good coherent change detection (CCD) performance from a baseline CCD algorithm, BAQ compression
requires at least 4-bit quantization for each of the I and Q phase-history data samples; since our original full-precision
data is 8-bits I and 8-bits Q, the best compression ratio (CR) that could be achieved using BAQ compression
was a factor of 2. Our goal is to increase the amount of compression while achieving the same quality of change detection
using more sophisticated wavelet-based approaches such as compressive sensing or set partitioning (SPIHT). This
paper demonstrates a wavelet-based compressive sensing approach that gives CR = 3 with comparable CCD performance;
we also demonstrate a wavelet-based SPIHT approach that gives CR = 4 with comparable CCD performance.
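The BAQ baseline discussed above can be sketched in a few lines: each block of I (and Q) samples is normalized by its own standard deviation and passed through a uniform quantizer. The block length and the +/-3-sigma full-scale choice are illustrative assumptions.

```python
import numpy as np

def baq(x, bits=4, block=128):
    """Block Adaptive Quantization sketch for complex (I,Q) phase-history
    samples: each block of I (and Q) values is scaled by its own standard
    deviation and uniformly quantized over +/-3 sigma. In a real system
    only the integer codes and one gain per block would be stored."""
    half = 2 ** (bits - 1)
    out = np.empty_like(x)
    for s in range(0, len(x), block):
        i, q = x[s:s + block].real, x[s:s + block].imag
        rec = []
        for v in (i, q):
            g = 3.0 * v.std() + 1e-30                   # per-block full scale
            code = np.clip(np.round(v / g * half), -half, half - 1)
            rec.append(code * g / half)                 # dequantized values
        out[s:s + block] = rec[0] + 1j * rec[1]
    return out
```

At 4 bits per component the relative reconstruction error is on the order of a percent, consistent with the abstract's observation that 4-bit BAQ is about the least precision that preserves coherent change detection quality.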