This paper describes a challenge problem whose scope is the 2D/3D imaging of stationary targets from a volumetric data
set of X-band Synthetic Aperture Radar (SAR) data collected in an urban environment. The data for this problem was
collected at a scene consisting of numerous civilian vehicles and calibration targets. The radar operated in circular SAR
mode and completed eight circular flight paths around the scene at varying altitudes. The data set consists of phase history data,
auxiliary data, processing algorithms, processed images, and ground truth data. Interest is focused on mitigating
the large side lobes in the point spread function. Due to the sparse nature of the elevation aperture, traditional imaging
techniques introduce excessive artifacts in the processed images. Further interests include the formation of high-resolution
3D SAR images from single-pass data and feature extraction for 3D SAR automatic target recognition
applications. The purpose of releasing the Gotcha Volumetric SAR Data Set is to provide the community with X-band
SAR data that supports the development of new algorithms for high-resolution 2D/3D imaging.
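The circular-aperture imaging described above is typically done by matched-filter backprojection. The following is a minimal, self-contained sketch (hypothetical geometry and frequencies, not the released Gotcha processing code): it simulates the phase history of a single point scatterer seen from one circular pass, then backprojects onto a ground-plane grid, where the image peak recovers the scatterer location.

```python
import numpy as np

# Assumed parameters for illustration only (not the Gotcha collection values)
c = 3e8
freqs = np.linspace(9.0e9, 9.6e9, 64)          # X-band stepped frequencies
angles = np.radians(np.arange(0.0, 360.0))     # one full circular aperture
radius, alt = 7000.0, 7300.0                   # assumed flight geometry (m)
target = np.array([2.0, 1.0, 0.0])             # point scatterer position (m)

# Simulate phase history: round-trip phase to the scatterer at each pulse,
# motion-compensated to the scene center
apc = np.stack([radius * np.cos(angles), radius * np.sin(angles),
                np.full_like(angles, alt)], axis=1)   # aperture phase centers
r_tgt = np.linalg.norm(apc - target, axis=1)          # pulse-to-target range
r_ref = np.linalg.norm(apc, axis=1)                   # reference range
ph = np.exp(-1j * 4 * np.pi * freqs[None, :] / c * (r_tgt - r_ref)[:, None])

# Backproject: for each pixel, remove the hypothesized phase and sum coherently
x = np.linspace(-5, 5, 41)
img = np.zeros((x.size, x.size), dtype=complex)
for i, xi in enumerate(x):
    for j, yj in enumerate(x):
        r_pix = np.linalg.norm(apc - np.array([xi, yj, 0.0]), axis=1)
        img[i, j] = np.sum(ph * np.exp(1j * 4 * np.pi * freqs[None, :] / c
                                       * (r_pix - r_ref)[:, None]))

peak = np.unravel_index(np.abs(img).argmax(), img.shape)
print(x[peak[0]], x[peak[1]])   # peak lands at the scatterer, near (2.0, 1.0)
```

With only a single pass, the elevation aperture is a thin ring, which is exactly what produces the tall point-spread-function sidelobes in height that the challenge problem asks participants to mitigate.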
It has recently become apparent that dismount tracking from non-EO based sources will have a
large positive impact on urban operations. EO/camera imaging is subject to line-of-sight and
weather conditions, which makes it a non-robust source for dismount tracking. Other sensors
exist (e.g. radar) to track dismount targets; however, little radar dismount data exists. This paper
examines the capability to generate synthetic and measured dismount data sets for radio
frequency (RF) processing. For synthetic data, we used the Poser™ program to generate 500
facet models of a walking human dismount. Then we used these facet models with Xpatch to
generate synthetic wideband radar data. For measured dismount data, we used a multi-mode (X-band
and Ku-band) radar system to collect RF data of volunteer human (dismount) targets.
Proc. SPIE 5808, Algorithms for Synthetic Aperture Radar Imagery XII
KEYWORDS: Radar, Detection and tracking algorithms, Visualization, Sensors, Synthetic aperture radar, 3D modeling, Data archive systems, Data processing, Automatic target recognition, Algorithm development
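A common first look at dismount radar returns like those collected above is the micro-Doppler spectrogram, in which limb motion modulates the bulk Doppler of the walker. The sketch below is illustrative only (assumed carrier, PRF, and gait kinematics, not the collection's actual processing chain): it simulates a torso return plus a swinging limb and forms a short-time Fourier transform spectrogram.

```python
import numpy as np

# Assumed parameters for illustration (not the measured collection's values)
fc, c, prf = 10e9, 3e8, 2000.0            # X-band carrier, pulse rate (Hz)
t = np.arange(0, 2.0, 1 / prf)            # 2 s of slow time
v_torso, v_limb, f_gait = 1.5, 1.0, 2.0   # walking speed (m/s), gait rate (Hz)

# Torso recedes at constant speed; limb velocity oscillates about it
r_torso = v_torso * t
r_limb = v_torso * t + (v_limb / (2 * np.pi * f_gait)) * np.sin(2 * np.pi * f_gait * t)
sig = (np.exp(-1j * 4 * np.pi * fc / c * r_torso)
       + 0.5 * np.exp(-1j * 4 * np.pi * fc / c * r_limb))

# Spectrogram: Hann-windowed, 50%-overlap slow-time frames, FFT per frame
nfft, hop = 256, 128
win = np.hanning(nfft)
frames = [sig[k:k + nfft] * win for k in range(0, sig.size - nfft, hop)]
spec = np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1))
dopp = np.fft.fftshift(np.fft.fftfreq(nfft, 1 / prf))

# Strongest Doppler line sits at the torso return: f_d = -2*v/lambda
peak_dopp = dopp[spec.sum(axis=0).argmax()]
print(peak_dopp)   # ~ -2*1.5/0.03 = -100 Hz for a receding walker
```

The limb term sweeps sinusoidally around the torso line at the gait rate, and that time-frequency signature is what makes spectrograms a natural feature space for RF dismount detection and tracking.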
Having relevant sensor data available during the early phases of ATR algorithm development and evaluation projects is paramount. This data comes primarily from either synthetic generation or measured collections. These collections, in turn, can be either highly controlled or operational-like exercises. This paper presents a broad overview of the types of data housed within the Automatic Target Recognition Division of the Air Force Research Laboratory (AFRL/SNA) that are available to the ATR developer.
ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable, high-confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization, and angle sampling over 2π steradians (the upper hemisphere), in order to produce a "literal" image or representation of the target.
This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for image development of future, more relevant military targets. The seedling team produced a public release data set, which was distributed at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process, and the resulting image.
ViSUAl-D is a 2004 DARPA/IXO seedling effort aimed at developing a capability for reliable, high-confidence ID from standoff ranges. The ability to form optical-quality SAR images (exploiting full polarization, wide angles, etc.) would be key evidence that such a capability is achievable. The seedling team produced a public release data set and associated challenge problems to support community research in this area. This paper describes the full data set and three associated challenge problems that are defined over interesting subsets of the full data set.