This study investigates vehicle detection performance for an airborne video SAR over increasingly wide collection angles. Using parameterized video SAR generation, it is possible to observe the effects of various frame-generation methods and apertures on detection. A cell-averaging CFAR detection algorithm is applied to each frame, searching for vehicles. Results show that increasing the azimuth extent of the video SAR improves detection performance. With some imaging methods, a larger azimuth extent helps to suppress background speckle. A larger azimuth extent also improves the chances of imaging a vehicle from a more informative cardinal angle.
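As a concrete illustration of the detector named above, here is a minimal one-dimensional cell-averaging CFAR sketch. The window sizes, scale factor, and toy data are assumptions for illustration only; the study applies CFAR to full two-dimensional video SAR frames.

```python
# Minimal 1-D cell-averaging CFAR sketch (illustrative only; the study
# applies CFAR to 2-D video SAR frames). The threshold is
# alpha * (mean power of training cells around the cell under test),
# with guard cells excluded to keep target energy out of the estimate.

def ca_cfar(power, num_train=8, num_guard=2, alpha=5.0):
    """Return indices whose power exceeds the local CA-CFAR threshold."""
    n = len(power)
    detections = []
    half = num_train // 2 + num_guard
    for i in range(half, n - half):
        # Training cells on each side, skipping the guard cells.
        train = (power[i - half : i - num_guard]
                 + power[i + num_guard + 1 : i + half + 1])
        noise_est = sum(train) / len(train)
        if power[i] > alpha * noise_est:
            detections.append(i)
    return detections

# Toy frame row: unit-power background with two strong returns.
row = [1.0] * 64
row[20] = 30.0   # vehicle-like return
row[45] = 25.0   # second return
print(ca_cfar(row))  # -> [20, 45]
```

The guard cells matter: without them, energy from an extended target leaks into the noise estimate and raises the threshold against the target itself.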
Proc. SPIE. 8746, Algorithms for Synthetic Aperture Radar Imagery XX
KEYWORDS: MATLAB, Detection and tracking algorithms, Sensors, Synthetic aperture radar, Image processing, Digital filtering, Digital imaging, Signal processing, Data centers, Filtering (signal processing)
Legacy synthetic aperture radar (SAR) exploitation algorithms were image-based, designed to exploit
complex and/or detected SAR imagery. In order to improve the efficiency of the algorithms, image chips, or region
of interest (ROI) chips, containing candidate targets were extracted. These image chips were then used directly by
exploitation algorithms for the purposes of target discrimination or identification. Recent exploitation research
has suggested that performance can be improved by processing the underlying phase history data instead of
standard SAR imagery. Digital Spotlighting takes the phase history data of a large image and extracts the phase
history data corresponding to a smaller spatial subset of the image. In a typical scenario, this spotlighted phase
history data will contain far fewer samples than the original data but will still result in an alias-free image of
the ROI. The Digital Spotlight algorithm can be considered the first stage in a “two-stage backprojection” image
formation process. As the first stage in two-stage backprojection, Digital Spotlighting filters the original phase
history data into a number of “pseudo”-phase histories that segment the scene into patches, each of which contains
a reduced number of samples compared to the original data. The second stage of the imaging process consists
of standard backprojection. The data rate reduction offered by Digital Spotlighting improves the computational
efficiency of the overall imaging process by significantly reducing the total number of backprojection operations.
This paper describes the Digital Spotlight algorithm in detail and provides an implementation in MATLAB.
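The operation-count savings can be illustrated with a back-of-envelope cost model. This is a sketch under simplifying assumptions (each patch's pseudo phase history is taken to be decimated by roughly the patch factor, and the cost of the spotlight filtering itself is ignored), not the paper's exact accounting; the paper's reference implementation is in MATLAB.

```python
# Rough cost model for direct vs. two-stage backprojection.
# Assumptions (not the paper's exact accounting): the scene is split
# into P x P patches, each patch's pseudo phase history is decimated by
# roughly a factor of P, and the filtering cost itself is ignored.

def direct_cost(num_pulses, n_pixels_side):
    # One backprojection operation per (pulse, pixel) pair.
    return num_pulses * n_pixels_side**2

def two_stage_cost(num_pulses, n_pixels_side, patches_per_side):
    P = patches_per_side
    pulses_per_patch = num_pulses // P        # assumed decimation factor
    pixels_per_patch = (n_pixels_side // P) ** 2
    return P * P * pulses_per_patch * pixels_per_patch

Np, N, P = 2048, 4096, 8   # made-up pulse count, image size, patch factor
print(direct_cost(Np, N) // two_stage_cost(Np, N, P))  # -> 8 (i.e., ~P)
```

Under these assumptions the savings grow linearly with the patch factor P, which is why reducing the per-patch data rate dominates the overall imaging cost.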
A recent circular synthetic aperture radar data collection contained various vehicles and calibration targets placed
throughout a 5 km scene. Across multiple orbits of the radar, the measured down-range distances to scattering features drift noticeably, on the order of 2 m from orbit to orbit. The large scene contained
14 quad-trihedral calibration targets with radar cross sections that are similar to point targets in the elevation
range of the scene. This paper presents an algorithm that uses the quad-trihedrals to generate global range
focusing parameters and phase error corrections to the complex range profile. Qualitative and quantitative
results show the focusing provides a significant improvement to wide-angle image registration and vehicle target
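One ingredient of such a calibration scheme can be sketched as follows: estimating the per-orbit range drift of a known calibration target from its range-profile peak, then shifting the profile to re-register it. The single target, integer-bin shifts, and toy profiles are assumptions for illustration; the paper's algorithm fits global focusing parameters and phase corrections from all 14 quad-trihedrals.

```python
# Hedged sketch of one ingredient of such a calibration scheme:
# estimate the orbit-to-orbit range drift of a known calibration target
# from its range-profile peak, then re-register the profile.
# (Toy version: one target, integer range-bin shifts; the paper fits
# global focusing parameters from 14 quad-trihedrals.)

def peak_index(profile):
    return max(range(len(profile)), key=lambda i: abs(profile[i]))

def estimate_drift(reference_profile, drifting_profile):
    """Range-bin drift of the calibration target between two orbits."""
    return peak_index(drifting_profile) - peak_index(reference_profile)

def register(profile, drift):
    """Circularly shift a range profile to remove the estimated drift."""
    n = len(profile)
    return [profile[(i + drift) % n] for i in range(n)]

ref = [0.0] * 32
ref[10] = 1.0      # calibration target in the reference orbit
cur = [0.0] * 32
cur[13] = 1.0      # same target, drifted 3 bins in a later orbit

d = estimate_drift(ref, cur)
print(d)                             # -> 3
print(peak_index(register(cur, d)))  # -> 10
```

A real implementation would estimate sub-bin shifts (e.g., from the phase slope across frequency) and apply the correction as a linear phase on the complex range profile rather than an integer roll.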
An airborne circular synthetic aperture radar system captured data for a 5 km diameter area over 31 orbits.
For this challenge problem, the phase history for 56 targets was extracted from the larger data set and placed
on a DVD for public release. The targets include 33 civilian vehicles, many of which are repeated models,
facilitating training and classification experiments. The remaining targets include an open area and 22 reflectors
for scattering and calibration research. The circular synthetic aperture radar provides 360 degrees of azimuth
around each target. For increased elevation content, the collection contains two nine-orbit volumetric series,
where the sensor reduces altitude between each orbit. Researchers are challenged to further the art of focusing,
3D imaging, and target discrimination for circular synthetic aperture radar.
We present a fast, scalable method to simultaneously register and classify vehicles in circular synthetic aperture
radar imagery. The method is robust to clutter, occlusions, and partial matches. Images are represented as a
set of attributed scattering centers that are mapped to local sets, which are invariant to rigid transformations.
Similarity between local sets is measured using a method called pyramid match hashing, which applies a pyramid
match kernel to compare sets and a Hamming distance to compare hash codes generated from those sets. By
preprocessing a database into a Hamming space, we are able to quickly find the nearest neighbor of a query
among a large number of records. To demonstrate the algorithm, we simulated X-band scattering from ten
civilian vehicles placed throughout a large scene, varying elevation angles in the 35 to 59 degree range. We
achieved better than 98% classification accuracy. We also classified seven vehicles in a 2006 public
release data collection with 100% success.
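The final retrieval stage described above, nearest-neighbor search in a Hamming space, can be sketched briefly. The 16-bit codes and record names below are made up for illustration; the pyramid match hashing that produces the codes is not reproduced here.

```python
# Sketch of the final retrieval stage described above: once records are
# hashed into a Hamming space, a query reduces to popcounts over XORed
# hash codes. The 16-bit codes and record names are made up; the pyramid
# match hashing that produces them is not reproduced here.

def hamming(a, b):
    """Hamming distance between two integer hash codes."""
    return bin(a ^ b).count("1")

def nearest(query_code, database):
    """Return (record_id, distance) of the closest stored hash code."""
    return min(((rid, hamming(query_code, code))
                for rid, code in database.items()),
               key=lambda t: t[1])

db = {
    "sedan":  0b1010110011110000,
    "pickup": 0b0111000011001111,
    "suv":    0b1010110011010011,
}
print(nearest(0b1010110011110001, db))  # -> ('sedan', 1)
```

Because the XOR-and-popcount comparison is a few machine instructions per record, a linear scan stays fast even for large databases, and sub-linear indexing schemes exist when it does not.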
We present a set of simulated X-band scattering data for civilian vehicles. For ten facet models of civilian
vehicles, a high-frequency electromagnetic simulation produced fully polarized, far-field, monostatic scattering
for 360 degrees of azimuth and elevation angles from 30 to 60 degrees. The 369 GB of phase history data is stored
in a MATLAB file format. This paper describes the CVDomes data set along with example imagery using 2D
backprojection, single pass 3D, and multi-pass 3D.
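A minimal far-field 2-D backprojection, of the kind used for the example imagery, can be sketched in a few lines. The frequencies, angle sampling, image grid, and single unit point scatterer are illustrative assumptions, not CVDomes parameters.

```python
import cmath
import math

# Minimal far-field 2-D backprojection sketch at toy sizes. The
# frequencies, angle sampling, grid, and single unit point scatterer
# are illustrative assumptions, not CVDomes parameters.

C = 3e8
freqs = [9.5e9 + k * 50e6 for k in range(17)]        # ~X-band samples
angles = [math.radians(a) for a in range(0, 360, 4)]  # circular aperture

def phase_history(x0, y0):
    """Far-field return of a unit point scatterer at (x0, y0)."""
    return {(f, th): cmath.exp(-1j * 4 * math.pi * f / C
                               * (x0 * math.cos(th) + y0 * math.sin(th)))
            for f in freqs for th in angles}

def backproject(data, grid):
    """Matched-filter image: correlate data with each pixel's phase."""
    img = {}
    for (x, y) in grid:
        acc = 0.0 + 0.0j
        for (f, th), s in data.items():
            acc += s * cmath.exp(1j * 4 * math.pi * f / C
                                 * (x * math.cos(th) + y * math.sin(th)))
        img[(x, y)] = abs(acc)
    return img

grid = [(x * 0.5, y * 0.5) for x in range(-4, 5) for y in range(-4, 5)]
img = backproject(phase_history(1.0, -0.5), grid)
print(max(img, key=img.get))  # -> (1.0, -0.5)
```

All samples add coherently only at the scatterer's true location, which is exactly the property backprojection exploits; production code vectorizes these loops and interpolates range profiles rather than summing raw frequency samples.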
In this paper we consider classification of civilian vehicles using circular synthetic aperture radar. For wide-field
applications in which the scene radius is a significant fraction of the flight path radius, vehicle signatures
are spatially variant due to layover. For a ten-class identification task using simulated X-band signatures, we
demonstrate 96% correct classification for single-pass 2D imagery with scene radius 0.4 times the flight radius.
Simulated scattering data include multi-path and material effects. Image signatures are represented by sets of
attributed scattering centers. Dissimilarity between attributed point sets is computed via a minimized partial
Hausdorff distance. Using multidimensional scaling, the distances are represented in a low-dimensional Euclidean
space for both visualization and improved classification. The minimized partial Hausdorff distance, while not
a true distance, empirically shows remarkable fidelity to the triangle inequality. Finally, in a limited two-class
study, we show that three-dimensional imaging of layover points using polarization cues provides improved classification.
At high frequencies, synthetic aperture radar (SAR) imagery can be represented as a set of points corresponding
to scattering centers. Using a collection of sequential azimuths with a fixed aperture, we build a cube of points for
each of seven civilian vehicles in the Gotcha public release data set (GPRD). We present a baseline study of the
ability to discriminate between the vehicles using strictly 2D geometric information of the scattering centers. The
comparison algorithm is invariant to pose and translation, using a novel application of the partial Hausdorff distance (PHD) minimized through particle swarm optimization. Using the PHD has the added benefit of
reducing the effects of occlusions and clutter in comparing vehicles from pass to pass. We provide confusion
matrices for a variety of operating parameters, including azimuth extent, amplitude cutoffs, and settings within the PHD. Finally, we discuss extending the approach to near-field imaging and to additional
point attributes, such as 3D location and polarimetric response.
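The directed partial Hausdorff distance at the heart of this comparison can be sketched as follows. For simplicity this toy minimizes over a coarse grid of rotations and translations rather than a particle swarm, and the quantile, pose grid, point sets, and clutter point are all illustrative assumptions.

```python
import math

# Sketch of the directed partial Hausdorff distance (PHD): the K-th
# smallest nearest-neighbor distance from set A to set B, which
# tolerates occlusion and clutter by ignoring the worst matches.
# The papers minimize PHD over pose with particle swarm optimization;
# this toy uses a coarse grid search over rotation/translation instead.

def phd(A, B, frac=0.75):
    """Directed partial Hausdorff distance at the given quantile."""
    dists = sorted(min(math.dist(a, b) for b in B) for a in A)
    return dists[max(0, int(frac * len(dists)) - 1)]

def transform(points, theta, tx, ty):
    """Rotate by theta, then translate by (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def min_phd(A, B, frac=0.75):
    """Minimize PHD over a coarse pose grid (PSO in the papers)."""
    return min(phd(transform(A, math.radians(deg), tx, ty), B, frac)
               for deg in range(0, 360, 15)
               for tx in (-1, 0, 1) for ty in (-1, 0, 1))

car = [(0, 0), (2, 0), (2, 1), (0, 1), (1, 0.5)]       # toy scatterers
observed = transform(car, math.radians(30), 0, 0) + [(5, 5)]  # + clutter
print(round(min_phd(observed, car), 3))  # -> 0.0
```

The 0.75 quantile is what makes the match partial: the clutter point at (5, 5) never enters the reported distance, which is why PHD degrades gracefully under occlusion and pass-to-pass clutter.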