Proc. SPIE. 8746, Algorithms for Synthetic Aperture Radar Imagery XX
KEYWORDS: MATLAB, Detection and tracking algorithms, Sensors, Synthetic aperture radar, Image processing, Digital filtering, Digital imaging, Signal processing, Data centers, Filtering (signal processing)
Legacy synthetic aperture radar (SAR) exploitation algorithms were image-based, designed to exploit
complex and/or detected SAR imagery. To improve the efficiency of these algorithms, image chips, or region
of interest (ROI) chips, containing candidate targets were extracted. These image chips were then used directly by
exploitation algorithms for the purposes of target discrimination or identification. Recent exploitation research
has suggested that performance can be improved by processing the underlying phase history data instead of
standard SAR imagery. Digital Spotlighting takes the phase history data of a large image and extracts the phase
history data corresponding to a smaller spatial subset of the image. In a typical scenario, this spotlighted phase
history data will contain far fewer samples than the original data but will still result in an alias-free image of
the ROI. The Digital Spotlight algorithm can be considered the first stage in a “two-stage backprojection” image
formation process. As the first stage in two-stage backprojection, Digital Spotlighting filters the original phase
history data into a number of “pseudo”-phase histories that segment the scene into patches, each of which contains
a reduced number of samples compared to the original data. The second stage of the imaging process consists
of standard backprojection. The data rate reduction offered by Digital Spotlighting improves the computational
efficiency of the overall imaging process by significantly reducing the total number of backprojection operations.
This paper describes the Digital Spotlight algorithm in detail and provides an implementation in MATLAB.
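The filter-and-decimate idea above can be conveyed in a one-dimensional analog. The paper itself provides MATLAB; the NumPy sketch below is an assumption-laden stand-in (illustrative sizes, a simple truncated-sinc low-pass) that demodulates the phase history so the ROI sits at DC, low-pass filters across the aperture, and decimates, leaving far fewer samples that still image the ROI alias-free.

```python
import numpy as np

# 1-D analog of Digital Spotlighting (illustrative sizes, not from the paper):
# a scatterer at position x appears in the phase history as a complex
# exponential whose rate is proportional to x.
N = 1024                                   # original phase-history samples
n = np.arange(N)                           # sample index across the aperture
targets = [-300, 10, 15, 250]              # scatterer positions (bins)
ph = sum(np.exp(-2j * np.pi * n * x / N) for x in targets)

# Stage 1 (digital spotlight): extract the ROI centered at x0 = 12.
x0, D = 12, 16                             # ROI center and decimation factor
centered = ph * np.exp(2j * np.pi * n * x0 / N)      # shift the ROI to DC
taps = np.sinc(np.arange(-64, 65) / D) / D           # truncated-sinc low-pass
filtered = np.convolve(centered, taps, mode="same")  # reject out-of-ROI energy
spot = filtered[::D]                       # N/D "pseudo" phase-history samples

# Stage 2: image the ROI from the reduced data (a plain FFT stands in for
# backprojection here); only the in-ROI scatterers (10 and 15) survive.
roi_img = np.abs(np.fft.fft(spot))
```

With D = 16 the second stage operates on 64 samples instead of 1024, which is the data-rate reduction that cuts the number of backprojection operations.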
This document defines the three pieces of the challenge hierarchy: a challenge area, a data set, and a challenge
problem. The purpose of a challenge problem is to address a technical research and development area of interest
while promoting quantitative comparison between approaches. This paper brings together nine challenge problem
papers written for SAR exploitation and describes them in terms of the challenge hierarchy.
The Dual Format Algorithm (DFA) is an alternative to the Polar Format Algorithm (PFA) where the image is
formed first to an arbitrary grid instead of a Cartesian grid. The arbitrary grid is specifically chosen to allow for
more efficient application of defocus and distortion corrections that occur due to range curvature. We provide
a description of the arbitrary image grid and show that the quadratic phase errors are isolated along a single
dimension of the image. We describe an application of the DFA to circular SAR data and analyze the image
focus. For an example SAR dataset, the DFA doubles the focused image size of the PFA algorithm with post-processing corrections applied.
We consider two problems in this paper. The first problem is to construct a dictionary of elements without using
synthetic data or a subset of the data collection; the second problem is to estimate the orientation of the vehicle,
independent of the elevation angle. These problems are important to the SAR community because solving them will
alleviate the cost of creating the dictionary and reduce the number of elements in the dictionary needed for
classification. In order to accomplish these tasks, we utilize the glint phenomenology, which is usually viewed as
a hindrance in most algorithms but is valuable information in our research. One way to capitalize on the glint
information is to predict the location of the glint by using the geometry of the single- and double-bounce
phenomenology. After qualitative examination of the results, we were able to deduce that the geometry information
was sufficient for accurately predicting the location of the glint. Another way that we exploited the glint
characteristics was by using it to extract the angle feature that we use for pose estimation. Using this technique
we were able to predict the cardinal heading of the vehicle within ±2°, with 96.6% having 0° error. This research
will have an impact on the classification of SAR images because the geometric prediction will reduce the cost
and time to develop and maintain the database for SAR ATR systems, and the pose estimation will reduce the
computational time and improve the accuracy of vehicle classification.
An airborne circular synthetic aperture radar system captured data for a 5 km diameter area over 31 orbits.
For this challenge problem, the phase history for 56 targets was extracted from the larger data set and placed
on a DVD for public release. The targets include 33 civilian vehicles of which many are repeated models,
facilitating training and classification experiments. The remaining targets include an open area and 22 reflectors
for scattering and calibration research. The circular synthetic aperture radar provides 360 degrees of azimuth
around each target. For increased elevation content, the collection contains two nine-orbit volumetric series,
where the sensor reduces altitude between each orbit. Researchers are challenged to further the art of focusing,
3D imaging, and target discrimination for circular synthetic aperture radar.
An investigation was made into the feasibility of compressing complex Synthetic Aperture Radar (SAR)
images using MatrixView™ compression technology to achieve higher compression ratios than
previously achieved. Complex SAR images contain both amplitude and phase information that are
severely degraded by traditional compression techniques. This phase and amplitude information allows
interferometric analysis to detect minute changes between pairs of SAR images, but it is highly sensitive to
any degradation in image quality. The interferometric process of Coherent Change Detection (CCD) is
acutely sensitive to any quality loss and is therefore a good measure by which to compare the compression
capabilities of different technologies. The best compression that could be achieved by block adaptive
quantization (a classical compression approach) applied to a set of I and Q phase-history samples was a
Compression Ratio (CR) of 2x. Work by Novak and Frost increased this CR to 3-4x using a more
complex wavelet-based Set Partitioning In Hierarchical Trees (SPIHT) algorithm (similar in its core to
JPEG 2000). In each evaluation, degradation in the reconstituted image, measured by the CCD image
coherence, grew as the CR increased. The maximum compression was determined as the point at which the
CCD image coherence remained > 0.9. The same investigation approach, using equivalent sample data
sets, was performed with an emerging technology and product called MatrixView™. This paper
documents preliminary results of MatrixView's compression of an equivalent data set, demonstrating a
CR of 10-12x at an equivalent CCD coherence level of > 0.9: a 300-400% improvement over SPIHT.
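As a concrete illustration of the yardstick used above: the sample coherence of two co-registered complex images f and g is |⟨f, g⟩| / (‖f‖·‖g‖). The sketch below is my own minimal version (real CCD estimates coherence over a sliding local window rather than the whole scene); it shows a mild, compression-like perturbation leaving coherence near 1 while an unrelated image decorrelates.

```python
import numpy as np

def coherence(f, g):
    """Sample coherence |<f, g>| / (||f|| ||g||) of two complex images.
    CCD practice uses a sliding window; a whole-scene estimate suffices here."""
    num = np.abs(np.vdot(f, g))                      # vdot conjugates f
    den = np.sqrt(np.vdot(f, f).real * np.vdot(g, g).real)
    return float(num / den)

rng = np.random.default_rng(0)
shape = (64, 64)
f = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
# Mild degradation standing in for a faithful codec's reconstruction error.
g = f + 0.1 * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))
# An unrelated scene, by contrast, decorrelates almost completely.
h = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
```

Here coherence(f, g) stays well above the 0.9 threshold quoted above, while coherence(f, h) collapses toward 1/√N.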
The polar format algorithm for monostatic synthetic aperture radar imaging is based on a linear approximation of
the differential range to a scatterer, which leads to spatially-variant distortion and defocus in the resultant image.
While approximate corrections may be applied to compensate for these effects, these corrections are ad-hoc in
nature. Here, we introduce an alternative imaging algorithm called the Dual Format Algorithm (DFA) that
provides better isolation of the defocus effects and reduces distortion. Quadratic phase errors are isolated along
a single dimension by allowing image formation to an arbitrary grid instead of a Cartesian grid. This provides
an opportunity for more efficient phase error corrections. We provide a description of the arbitrary image grid
and we show the quadratic phase error correction derived from a second-order Taylor series approximation of
the differential range. The algorithm is demonstrated with a point target simulation.
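The benefit of confining the quadratic phase error to a single dimension can be seen in a toy example. In the sketch below (an illustrative stand-in with made-up constants, not the paper's Taylor-series expression), a separable error exp(j·a·u²) along one image axis is removed by a single 1-D correction applied to every column, which is what makes such corrections efficient.

```python
import numpy as np

# Toy model (illustrative constants, not the paper's derivation): data with
# a quadratic phase error confined to one axis, fixed by a 1-D correction.
M, N = 128, 16
u = np.linspace(-1, 1, M)                  # image dimension carrying defocus
a = 40.0                                   # quadratic phase rate (rad)
ideal = np.ones((M, N), dtype=complex)     # stand-in for focused data
qpe = np.exp(1j * a * u**2)[:, None]       # separable error along one axis
blurred = ideal * qpe                      # defocused data

corrected = blurred * np.conj(qpe)         # one 1-D multiply fixes all columns

# Defocus and recovery are visible in the peak of the spectrum along u.
peak = lambda x: np.abs(np.fft.fft(x, axis=0)).max()
```

Because the error is isolated to one dimension, the correction costs one length-M phase vector rather than a spatially-variant 2-D filter.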
While many synthetic aperture radar (SAR) image formation techniques exist, two of the most intuitive methods
for implementation by SAR novices are the matched filter and backprojection algorithms. The matched filter and
(non-optimized) backprojection algorithms are undeniably computationally complex. However, the backprojection
algorithm may be successfully employed for many SAR research endeavors not involving considerably large
data sets and not requiring time-critical image formation. Execution of both image reconstruction algorithms
in MATLAB is explicitly addressed. In particular, a manipulation of the backprojection imaging equations is
supplied to show how common MATLAB functions, ifft and interp1, may be used for straightforward SAR
image formation. In addition, limits for scene size and pixel spacing are derived to aid in the selection of an
appropriate imaging grid to avoid aliasing. Example SAR images generated through use of the backprojection
algorithm are provided given four publicly available SAR datasets. Finally, MATLAB code for SAR image
reconstruction using the matched filter and backprojection algorithms is provided.
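The ifft/interp1 recipe described above translates directly to other environments. Below is a hedged NumPy analog (the paper provides the authoritative MATLAB code; the geometry, frequencies, and grid sizes here are illustrative choices of mine): each pulse's phase history is inverse-FFT'd to a range profile, and each pixel samples that profile, via interpolation, at its differential range before the matched phase is re-applied.

```python
import numpy as np

c = 299_792_458.0
fc, B, K = 10e9, 500e6, 256                # center frequency, bandwidth, samples
freqs = fc + np.linspace(-B / 2, B / 2, K)
angles = np.deg2rad(np.linspace(-2, 2, 64))  # slow-time aperture angles
R0 = 10e3                                  # standoff range to scene center

# Simulate the phase history of one point target at (tx, ty).
tx, ty = 3.0, -2.0
ph = np.zeros((len(angles), K), dtype=complex)
for i, a in enumerate(angles):
    dr = np.hypot(R0 * np.cos(a) - tx, R0 * np.sin(a) - ty) - R0
    ph[i] = np.exp(-4j * np.pi * freqs * dr / c)

# Backprojection: IFFT each pulse to a range profile, interpolate per pixel.
img_x = img_y = np.linspace(-10, 10, 41)
X, Y = np.meshgrid(img_x, img_y)
df = freqs[1] - freqs[0]
r_axis = np.fft.fftshift(np.fft.fftfreq(K, d=df)) * c / 2  # range-bin axis
img = np.zeros(X.shape, dtype=complex)
for i, a in enumerate(angles):
    prof = np.fft.fftshift(np.fft.ifft(ph[i]))             # range profile
    dr = np.hypot(R0 * np.cos(a) - X, R0 * np.sin(a) - Y) - R0
    # np.interp is real-valued, so interpolate real and imaginary parts.
    re = np.interp(dr.ravel(), r_axis, prof.real)
    im = np.interp(dr.ravel(), r_axis, prof.imag)
    # Re-apply the matched phase at the lowest frequency, then accumulate.
    img += ((re + 1j * im)
            * np.exp(4j * np.pi * freqs[0] * dr.ravel() / c)).reshape(X.shape)
```

The image peak should land on the simulated target at (3, −2) m; the scene-size and pixel-spacing limits derived in the paper govern how large such a grid may grow before aliasing appears.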
In this work, the problem of detecting and tracking targets with synthetic aperture radars is considered. A
novel approach is presented in which prior knowledge of target motion is assumed to be known for small patches
within the field of view. Probability densities are derived as priors on the moving target signature within
backprojected SAR images, based on the work of Jao.1 Furthermore, detection and tracking algorithms are presented
to take advantage of the derived prior densities. It was found that pure detection suffered from a high false
alarm rate as the number of targets in the scene increased. Thus, tracking algorithms were implemented through a
particle filter based on the Joint Multi-Target Probability Density (JMPD) particle filter2 and the unscented
Kalman filter (UKF)3 that could be used in a track-before-detect scenario. It was found that the particle filter
was superior to the UKF and was able to track 5 targets at 0.1 second intervals with a tracking error of
0.20 ± 1.61 m (95% confidence).
This document describes a challenge problem whose scope is two-fold. The first aspect is to develop SAR CCD
algorithms that are applicable for X-band SAR imagery collected in an urban environment. The second aspect relates to
effective data compression of these complex SAR images, where quality SAR CCD is the metric of performance.
A set of X-band SAR imagery is being provided to support this development. To focus research onto specific areas of
interest to AFRL, a number of challenge problems are defined.
The data provided is complex SAR imagery from an AFRL airborne X-band SAR sensor. Some key features of this data
set are: 10 repeat passes, single phase center, and single polarization (HH). In the scene observed, there are multiple
buildings, vehicles, and trees. Note that the imagery has been coherently aligned to a single reference.
This document describes a challenge problem whose scope is the detection, geolocation, tracking
and ID of moving vehicles from a set of X-band SAR data collected in an urban environment. The
purpose of releasing this Gotcha GMTI Data Set is to provide the community with X-band SAR data
that supports the development of new algorithms for SAR-based GMTI. To focus research onto
specific areas of interest to AFRL, a number of challenge problems are defined.
The data set provided is phase history from an AFRL airborne X-band SAR sensor. Some key
features of this data set are two-pass, three phase center, one-foot range resolution, and one
polarization (HH). In the scene observed, multiple vehicles are driving on roads near buildings.
Ground truth is provided for one of the vehicles.
The polar format algorithm (PFA) is a well-known method for forming imagery in both the radar community and the
medical imaging community. PFA is attractive because it has low computational cost, and it partially compensates for
phase errors due to a target's motion through resolution cells (MTRC). Since the imaging scenarios for remote sensing
and medical imaging are traditionally different, the PFA implementation is different between the communities. This
paper describes the differences in PFA implementation. The performance of two illustrative implementations is
compared using synthetic radar and medical imagery.
This paper describes a challenge problem whose scope is the 2D/3D imaging of stationary targets from a volumetric data
set of X-band Synthetic Aperture Radar (SAR) data collected in an urban environment. The data for this problem was
collected at a scene consisting of numerous civilian vehicles and calibration targets. The radar operated in circular SAR
mode and completed 8 circular flight paths around the scene with varying altitudes. Data consists of phase history data,
auxiliary data, processing algorithms, processed images, as well as ground truth data. Interest is focused on mitigating
the large side lobes in the point spread function. Due to the sparse nature of the elevation aperture, traditional imaging
techniques introduce excessive artifacts in the processed images. Further interests include the formation of
high-resolution 3D SAR images with single-pass data and feature extraction for 3D SAR automatic target recognition
applications. The purpose of releasing the Gotcha Volumetric SAR Data Set is to provide the community with X-band
SAR data that supports the development of new algorithms for high-resolution 2D/3D imaging.
It has recently become apparent that dismount tracking from non-EO-based sources will have a
large positive impact on urban operations. EO/camera imaging is subject to line-of-sight and
weather conditions, which makes it a non-robust source for dismount tracking. Other sensors
exist (e.g., radar) to track dismount targets; however, little radar dismount data exists. This paper
examines the capability to generate synthetic and measured dismount data sets for radio
frequency (RF) processing. For synthetic data, we used the Poser™ program to generate 500
facet models of a human dismount walking. We then used these facet models with Xpatch to
generate synthetic wideband radar data. For measured dismount data, we used a multimode (X-Band
and Ku-Band) radar system to collect RF data of volunteer human (dismount) targets.
The convolution backprojection algorithm is an accurate synthetic aperture radar imaging technique, but it has seen limited use in the radar community due to its high computational cost. Therefore, significant research has been conducted on fast backprojection algorithms, which surrender some image quality for increased computational efficiency. This paper describes an implementation of both a standard convolution backprojection algorithm and a fast backprojection algorithm optimized for use on a Linux cluster and a field-programmable gate array (FPGA) based processing system. The performance of the different implementations is compared using synthetic ideal point targets and the SPIE XPatch Backhoe dataset.
ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable high confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target.
This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public-release data set, which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process, and the resulting image.
A unified way of detecting and tracking moving targets with a SAR radar called SAR-MTI is presented. SAR-MTI differs from STAP or DPCA in that it is a generalization of SAR processing and can work with only a single phase center. SAR-MTI requires formation of a series of images assuming different sensor ground speeds, from v_s − v_tmax to v_s + v_tmax, where v_s is the actual sensor ground speed and v_tmax is the maximum target speed of interest. Each image will capture a different set of target velocities, and the complete set of images will focus all target speeds less than a desired maximum speed regardless of direction and target location. Thus the 2-dimensional SAR image is generalized to a 3-dimensional cube or stack of images. All linear moving targets less than the desired speed will be focused somewhere in the cube. The third dimension represents the along-track velocity of the mover, which is a piece of information not available to standard airborne MTI. A mover will remain focused at the same place within the cube as long as the motion of the mover and the sensor remain linear. Because stationary targets also focus within the detection cube, move-stop-move targets are handled smoothly and without changing waveforms or modes. Another result of this fact is that SAR-MTI has no minimum detectable velocity.
SAR-MTI has an inherent ambiguity because the four-dimensions of target parameters (two dimensions in both velocity and position) are mapped into a three-dimensional detection space. This ambiguity is characterized and methods for resolving the ambiguity for geolocation are discussed. The point spread function in the detection cube is also described.
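A toy 1-D experiment conveys the velocity-stack idea: re-imaging under a swept assumed speed refocuses a mover exactly at the slice matching its along-track velocity. Everything below (the quadratic-defocus model, the constants, the sharpness metric) is an illustrative assumption of mine, not the paper's formulation.

```python
import numpy as np

# Toy azimuth model (all constants illustrative): a mover's along-track
# velocity vt adds a quadratic phase to its slow-time history, and imaging
# under an assumed velocity offset v_hat removes it only when v_hat == vt.
N = 512
t = np.linspace(-0.5, 0.5, N)              # slow time (s)
vt = 12.0                                  # true along-track velocity (m/s)
k = 200.0                                  # assumed defocus rate per (m/s)
sig = np.exp(1j * np.pi * k * vt * t**2)   # defocused mover history

def sharpness(v_hat):
    """Peak of the azimuth image formed under assumed offset v_hat."""
    refocused = sig * np.exp(-1j * np.pi * k * v_hat * t**2)
    return np.abs(np.fft.fft(refocused)).max()

# Sweep the stack, i.e. offsets about the sensor speed up to +/- vtmax.
v_grid = np.arange(-20.0, 20.5, 0.5)
best = v_grid[np.argmax([sharpness(v) for v in v_grid])]
# The slice where the mover focuses reveals its along-track velocity,
# the third dimension of the detection cube.
```

The stack slice with the sharpest response sits at the mover's along-track velocity (12 m/s here), which is exactly the extra piece of information the cube's third dimension carries.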
Our proposed research is to focus and geolocate moving targets in synthetic aperture radar imagery. The first step is to estimate the target cross-range velocity using sequential sub-apertures; this is done by forming low-resolution images and estimating position as a function of sub-aperture, thus yielding an estimate of the cross-range velocity. This cross-range estimate is then used to bound the search range for a bank of focusing filters. Determining the proper velocity that yields the best-focused target defines one equation for the target velocity; however, both components of the target's velocity cannot be determined from a single equation. Therefore, a second image with a slightly different heading is needed to yield a second focusing velocity, giving a system of two equations in two unknowns from which a solution can be obtained. Once the target velocity is known, the proper position can be determined from the range velocity.
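Under a simple (assumed) linear model, the two-heading solve reduces to a 2×2 system: if each pass's focusing velocity is taken as the projection of the unknown target velocity onto that pass's along-track unit vector, the two measurements determine both components. This is my own minimal rendering of the "two equations, two unknowns" step, not the authors' exact equations.

```python
import numpy as np

def heading_unit(deg):
    """Unit vector along a pass's (hypothetical) along-track direction."""
    a = np.deg2rad(deg)
    return np.array([np.cos(a), np.sin(a)])

# Assumed measurement model: each focusing velocity is the projection of
# the unknown target velocity onto that pass's heading.
v_true = np.array([8.0, -3.0])                    # target velocity (m/s)
h1, h2 = heading_unit(0.0), heading_unit(10.0)    # slightly different headings
u = np.array([h1 @ v_true, h2 @ v_true])          # two focusing velocities

A = np.vstack([h1, h2])                           # 2x2; invertible because
v_est = np.linalg.solve(A, u)                     # the headings differ
```

The headings must differ for the system to be solvable at all, and the smaller the heading difference, the worse the conditioning of the solve, which is why a second pass on a slightly different heading is required rather than an identical repeat.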