Typical ATR performance metrics are based on the results of empirical studies on truthed datasets, where it is difficult to fully sample the space of expected variation; generalizing such empirical results into a rigorous performance assessment can therefore be misleading. This is especially difficult when many sources of variation, typically referred to as operating conditions, can exist in the data. Here, we propose a general method to analytically predict the classification performance of the MPM algorithm when samples are assumed to be realizations of two separate MPM template parametrizations differing as a function of a single, conditionally independent operating condition. This performance-prediction approach is then used to investigate the role the ideal point response (IPR) plays in the classification performance of synthetic aperture radar (SAR) targets. The exact trade-off we study is coherently processing an aperture to yield a single higher-resolution image versus non-coherently processing the aperture to yield multiple lower-resolution looks of a scene. Experiments are conducted using SAR imagery from the Air Force Research Laboratory Civilian Vehicle dataset. An additional performance analysis presents an analytic approach to predict algorithm performance under additive white Gaussian noise for a general number of quantization levels Nq, allowing the performance loss under IPR variations to be mapped to an equivalent loss in signal-to-noise ratio.
Peaky template matching (PTM) is a special case of a general algorithm known as multinomial pattern matching (MPM), originally developed for automatic target recognition of synthetic aperture radar data. The algorithm is a model-based approach that first quantizes pixel values into Nq = 2 discrete values, yielding generative Beta-Bernoulli models as class-conditional templates. Here, we consider the classification of target chips in AWGN and develop approximations to image-to-template classification performance as a function of the noise power. We focus specifically on a "uniform quantization" scheme, in which a fixed number of the largest pixels are quantized high, as opposed to using a fixed threshold. This quantization method reduces sensitivity to the scaling of pixel intensities, and quantization in general reduces sensitivity to nuisance parameters that are difficult to account for a priori. Our performance expressions are verified using forward-looking infrared imagery from the Army Research Laboratory Comanche dataset.
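The "uniform quantization" scheme described above can be sketched in a few lines of numpy; the function name and chip shape here are illustrative assumptions, not part of the original algorithm's interface. The key property is that quantizing the k largest pixels high, rather than thresholding at a fixed intensity, makes the result invariant to any positive rescaling of the pixel values:

```python
import numpy as np

def quantize_topk(chip, k):
    """Binary 'uniform quantization' sketch: the k largest-amplitude pixels
    are quantized high (1) and all others low (0), rather than comparing
    against a fixed intensity threshold."""
    flat = np.abs(chip).ravel()
    q = np.zeros(flat.size, dtype=np.uint8)
    # indices of the k largest amplitudes, without a full sort
    q[np.argpartition(flat, -k)[-k:]] = 1
    return q.reshape(chip.shape)
```

Because only amplitude rank matters, `quantize_topk(c, k)` and `quantize_topk(2.5 * c, k)` produce identical binary chips, which is the scaling insensitivity the abstract refers to.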
Multinomial pattern matching (MPM) is an automatic target recognition algorithm developed specifically for radar data at Sandia National Laboratories. The algorithm belongs to a family of algorithms that first quantize pixel values into Nq bins based on pixel amplitude before training and classification. This quantization step reduces the sensitivity of algorithm performance to absolute intensity variation in the data, typical of radar data, where signatures exhibit high variation for even small changes in aspect angle. Our previous work has focused on performance analysis of peaky template matching, a special case of MPM where binary quantization is used (Nq = 2). Unfortunately, references on these algorithms are generally difficult to locate, so here we revisit the MPM algorithm and illustrate the underlying statistical model and decision rules for two algorithm interpretations: the 1-of-K vector form and the scalar form. MPM can also be used as a detector, and specific attention is given to algorithm tuning, where "peak pixels" are chosen based on their underlying empirical probabilities according to a reward-minimization strategy aimed at reducing false alarms in the detection scenario and false positives in a classification capacity. The algorithms are demonstrated using Monte Carlo simulations on the AFRL civilian vehicle dataset for a variety of choices of Nq.
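The quantize-then-template pipeline described above can be illustrated with a minimal sketch, assuming rank-based quantization into Nq equal-count bins and Dirichlet-smoothed per-pixel bin probabilities as the class-conditional template; the exact bin boundaries, smoothing, and decision rules of the real MPM algorithm may differ:

```python
import numpy as np

def quantize(chips, Nq):
    """Rank-based quantization sketch: within each chip, assign pixels to Nq
    equal-count bins by amplitude order (bin Nq-1 holds the brightest)."""
    flat = np.abs(chips).reshape(len(chips), -1)
    ranks = np.argsort(np.argsort(flat, axis=1), axis=1)  # 0..n-1 per chip
    return ranks * Nq // flat.shape[1]

def fit_template(qchips, Nq, alpha=1.0):
    """Per-pixel multinomial bin probabilities with add-alpha (Dirichlet)
    smoothing, standing in for the generative class-conditional template."""
    counts = np.stack([(qchips == b).sum(axis=0) for b in range(Nq)])
    return (counts + alpha) / (len(qchips) + Nq * alpha)

def log_likelihood(qchip, template):
    """Image-to-template score: sum of per-pixel log bin probabilities."""
    return np.log(template[qchip, np.arange(template.shape[1])]).sum()
```

Classification then amounts to quantizing a test chip and choosing the class whose template yields the largest log-likelihood.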
Synthetic aperture radar (SAR) imaging is a powerful tool that can be utilized where other conventional surveillance
methods fail. It has a variety of applications including reconnaissance and surveillance for defense purposes,
natural resource exploration, and environmental monitoring, among others. SAR systems generally create large
datasets that need to be processed to form a final image. Processing this data can be computationally intensive,
and applications may demand algorithms that can form images quickly. The goal of this research
is to analyze algorithms that permit a large SAR dataset to be efficiently processed into a high-resolution image
of a large scene.
The backprojection algorithm (BPA)1 can serve as a baseline for performance relative to other SAR imaging
algorithms. It produces accurately formed images for a wide variety of imaging scenarios. The tradeoff comes in
its computational complexity, which is O(N³) for an N × N pixel image. The polar format algorithm (PFA)2 is
a long-standing and popular alternative to the BPA. The PFA allows the use of fast Fourier Transforms (FFTs),
leading to a computational complexity of O(N² log N) for an N × N pixel image. However, the PFA relies on
a far-field approximation, wherein the curved wavefront of the transmitted pulses is approximated as a planar
wavefront, thereby introducing spatially variant phase errors and hence distortion and defocus in the PFA formed
image. The defocus and distortion errors can be corrected, but this is a non-trivial process.3
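As context for the complexity claims above, a naive backprojection image former can be sketched in a few lines; the geometry, frequencies, and array shapes here are illustrative assumptions, not the paper's actual system parameters. Every pixel is visited for every pulse, which is the source of the O(N³) cost for an N × N image with O(N) pulses:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def backproject(phase_history, antenna_positions, freqs, grid_x, grid_y):
    """Naive backprojection sketch: coherently sum a matched-filter response
    over every (pulse, frequency) sample at every pixel of a flat scene."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    image = np.zeros(gx.shape, dtype=complex)
    for p, (px, py, pz) in enumerate(antenna_positions):
        # range from this antenna position to every pixel (scene at z = 0)
        R = np.sqrt((px - gx) ** 2 + (py - gy) ** 2 + pz ** 2)
        for k, f in enumerate(freqs):
            # undo the two-way propagation phase for this sample
            image += phase_history[p, k] * np.exp(1j * 4 * np.pi * f * R / C)
    return image
```

Because the range R is computed exactly per pixel, no far-field approximation is made, which is why backprojection serves as the accuracy baseline against which PFA distortion is measured.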
It can be shown that a first-order Taylor expansion of a differential range expression yields the assumed received
signal phase used to generate images from SAR phase history data with the PFA.4 This work focuses on error
terms introduced by the PFA assumption that introduce geometric distortion in the resulting image. This
distortion causes a point scatterer located at a true (x, y) coordinate to appear at a shifted location (x̃, ỹ) in the formed
image, i.e., unwanted translation of point target locations is introduced. Complicating matters, the distortion is
a function of a pixel's coordinates in the scene, thus making the distortion spatially-variant such that each pixel
will be distorted differently. This is often referred to as an image warping.
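To make the warping concrete, the sketch below assumes a hypothetical quadratic distortion model (the coefficients and functional form are illustrative only, not the PFA's actual distortion functions) and inverts it by fixed-point iteration, the kind of post-processing correction referenced later:

```python
def apparent_position(x, y, a=1e-4, b=5e-5):
    """Hypothetical spatially-variant distortion: the translation each
    scatterer suffers depends on its true scene coordinates (x, y)."""
    return x + a * (x ** 2 - y ** 2), y + 2.0 * b * x * y

def correct_position(x_app, y_app, iters=25):
    """Invert the warp by fixed-point iteration: start at the apparent
    location and repeatedly subtract the shift predicted there. Converges
    when the distortion is small relative to the scene coordinates."""
    x, y = x_app, y_app
    for _ in range(iters):
        xd, yd = apparent_position(x, y)
        x -= xd - x_app
        y -= yd - y_app
    return x, y
```

Because the shift varies with position, a single global translation cannot undo it; each pixel must be mapped back individually, which is exactly what "image warping" implies.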
Previously, it has been assumed that the second-order Taylor series of the differential range defines the
dominant error,2, 4, 5 due to the factorial decay of the Taylor series. This assumption is tested here by performing
a Taylor expansion on a differential range error expression. Instead of assuming the second-order differential
range expansion term to be the sole source of error, the true error term is used to approximate the distortion.
The results of this comparison are presented. The differential range error approach will be referred to as the
DRE approach and the dominant polynomial error approach as the DPE approach.
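The expansion under test can be reproduced symbolically; the differential-range expression below (antenna at (x_a, y_a, z_a), scatterer at (x, y, 0), scene center at the origin) is a generic monostatic form assumed for illustration, not necessarily the exact expression used in the paper:

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
xa, ya, za = sp.symbols('x_a y_a z_a', positive=True)

R0 = sp.sqrt(xa**2 + ya**2 + za**2)               # range to scene center
Rp = sp.sqrt((xa - x)**2 + (ya - y)**2 + za**2)   # range to scatterer
dR = Rp - R0                                      # differential range

# Multivariate Taylor expansion about the scene center via a scale factor t
series = sp.series(dR.subs({x: t * x, y: t * y}), t, 0, 3).removeO()
linear = series.coeff(t, 1)     # the plane-wave term the PFA keeps,
                                # which reduces to -(x_a*x + y_a*y)/R0
quadratic = series.coeff(t, 2)  # the assumed dominant (second-order) error
```

Comparing `quadratic` against the full residual `dR - linear` at points away from the scene center is the essence of the DPE-versus-DRE comparison.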
Additionally, with an accurate distortion approximation, it has been shown that the distortion can be removed
in post-processing.3 With this in mind, bounds on scene size are derived limiting the visible distortion to within
an arbitrary number of resolution cells, both before and after the second-order distortion correction. These bounds
are also verified in simulation.
The paper is organized as follows. In Section 2, we first introduce the differential range term and demonstrate
its relationship to the PFA imaging kernel and the source of the phase error terms. Next in Section 3, the
distortion functions will be derived from these error terms using both the DRE and DPE approaches before and
after applying the second-order corrections. Then in Section 4, these results will be bounded such that the worst-case
distortion at a specific pixel in the scene is within an arbitrary number of resolution cells, giving an approximated
distortion-free scene size. Finally in Section 5, the results and comparison of the approaches will be presented.
This paper explores the effect of squint angle on the phase errors introduced by the linear phase assumption in
the polar format algorithm for SAR imaging. The maximum scene radius for an allowable phase error is derived
as a function of squint angle and other parameters. Simulated phase histories for a variety of squint angles are
generated and imaged to demonstrate the bound and the effects encountered when it is exceeded.