Proc. SPIE. 6568, Algorithms for Synthetic Aperture Radar Imagery XIV
KEYWORDS: Automatic target recognition, Performance modeling, Data modeling, Synthetic aperture radar, System on a chip, Detection and tracking algorithms, Databases, Statistical modeling, Algorithm development, Systems modeling
Automatic target recognition (ATR) performance models are needed for online adaptation and for effective use (e.g., in
fusion) of ATR products. We present empirical models focused on synthetic aperture radar (SAR) ATR algorithms.
These models are not ATR algorithms in themselves; rather they are models of ATRs developed with the intention of
capturing the behavior, at least on a statistical basis, of a reference ATR algorithm. The model covariates (or inputs)
might include the ATR operating conditions (sensor, target, and environment), ATR training parameters, etc. The
model might produce performance metrics (Pid, Pd, Pfa, etc.) or individual ATR decisions. "Scores" are an
intermediate product of many ATRs, which then go through a relatively simple decision rule. Our model has a parallel
structure, first modeling the score production and then mapping scores to model outputs. From a regression perspective,
it is impossible to predict individual ATR outcomes for all possible values of this covariate space, since samples are
available only for small subsets of the total space. Given this limitation, and absent a purely theoretical model meaningfully
matched to the true complexity of this problem, our approach is to examine the empirical behavior of scores across
various operating conditions, and identify trends and characteristics of the scores that are apparently predictable. Many
of the scores available for training are in so-called standard operating conditions (SOC), and a far smaller number are in
so-called extended operating conditions (EOCs). The influence of the EOCs on scores and ATR decisions is examined.
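The two-stage structure described above (an empirical score model followed by a simple decision rule) can be sketched as follows. This is a minimal illustration with hypothetical covariates and a linear least-squares score model, not the actual reference ATR or its covariates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: operating-condition covariates (e.g., a
# normalized depression angle and clutter level) and observed ATR scores.
X = rng.uniform(0.0, 1.0, size=(200, 2))
scores = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0.0, 0.1, 200)

# Stage 1: empirical score model (ordinary least squares with an intercept).
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, scores, rcond=None)

def predict_score(x):
    """Predicted ATR score for a covariate vector x."""
    return np.append(x, 1.0) @ coef

# Stage 2: map predicted scores through a simple decision rule (threshold),
# mirroring the score-then-decide structure of many ATRs.
def predict_decision(x, threshold=0.5):
    return predict_score(x) > threshold
```

In this sketch the score regression and the decision rule are modeled separately, so the same fitted score model can feed different downstream decision rules or performance metrics.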
We consider the joint inverse problems of sensor data registration and automatic target recognition. Single-platform, multi-sensor registration is posed as a model-based, data fusion problem using Bayesian and maximum likelihood frameworks. The sensor model parameters typically consist of platform pose parameters, sensor pointing angles, and internal calibration factors, and these are used to define a transformation that maps raw data recorded in the sensor frame to a ground-referenced, world coordinate system.
The fusion estimation problem is a joint inversion, since the
sensor model parameters common to multiple sensors are simultaneously estimated (along with sensor-specific model parameters). For the ATR problem we pose the joint optimization problem over these sensor model parameters (constrained by the global scene) and target model parameters (e.g., for selected target chips). In addition, we pose a cooperative inversion approach that captures uncertainty from the system model estimation process for use in a refined ATR inversion. The latter consists of a search over target model parameters and a constrained system model parameter space with realization samples consistent with the estimated system model covariance. Estimation robustness is achieved through use of a hybrid global/local search method (to avoid final convergence to local minima), robust kernels that down-weight data residual outliers (generated from test and reference image feature correspondences), and the use of multi-sensor data to increase the number and diversity of data constraints. In summary, we have developed a model-based fusion approach which draws on well-developed methods in photogrammetry, computer vision and automatic target recognition for enhanced registration and recognition performance.
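As one illustration of the robust-kernel idea mentioned above, the sketch below estimates a one-dimensional registration offset from feature correspondences using Huber weights to down-weight residual outliers. The 1-D setup and function names are hypothetical simplifications; the paper's formulation operates on full sensor model parameter vectors:

```python
import numpy as np

def huber_weights(residuals, delta=1.0):
    """Per-residual weights from a Huber kernel: unit weight near zero,
    decaying as delta/|r| beyond delta, so large outliers are down-weighted."""
    r = np.abs(residuals)
    return np.where(r <= delta, 1.0, delta / np.maximum(r, delta))

def robust_shift(test_pts, ref_pts, iters=10, delta=1.0):
    """Iteratively reweighted estimate of a 1-D registration offset from
    point correspondences, robust to outlier matches."""
    shift = 0.0
    for _ in range(iters):
        res = ref_pts - (test_pts + shift)
        w = huber_weights(res, delta)
        shift += np.sum(w * res) / np.sum(w)   # weighted mean update
    return shift
```

With one grossly mismatched correspondence, the robust estimate stays near the true offset while an unweighted mean is pulled far away by the outlier.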
Proc. SPIE. 5808, Algorithms for Synthetic Aperture Radar Imagery XII
KEYWORDS: Performance modeling, Data modeling, Sensors, Automatic target recognition, Systems modeling, Mathematical modeling, Monte Carlo methods, Algorithm development, Synthetic aperture radar, Diffractive optical elements
Automatic target recognition (ATR) performance modeling depends on model complexity, training data, and test analysis. In order to compare different ATR algorithms, we develop a fidelity score that characterizes how well different algorithms meet real-world conditions. For instance, a higher-fidelity ATR performance model (PM) is robust over many operating conditions (sensors, targets, environments). An ATR model developed for one terrain might not be applicable to all terrains, so its operating manual should clarify its range of applicability. In this paper, we discuss a fidelity score that captures the applicability of ATR performance models and can be extended to different sensors over many operating conditions. This quantitative model testing can be used as a fidelity score, a validation metric, or guidance for model improvements. The goal is to provide a framework to instantiate a high-fidelity model that captures theoretical, simulated, experimental, and real-world data performance for use in a dynamic sensor manager.
We develop a radar-based automatic target recognition approach for
partially occluded objects. The approach may be variously posed as
an optimization problem in the phase history, scene reflectivity
and feature domains. The latter consists of point scattering
features estimated from the phase histories or corresponding
images. We adopt simple occlusion models in which the physical
scattering responses (isotropic scattering centers, attributed
scatterers, etc.) can be occluded in any combination. The
formulation supports the use of prior occlusion models (e.g., that
occlusion is spatially correlated rather than randomly
distributed). We introduce a physics-based noise covariance model
for use in cost or objective functions. Occlusion model estimation
is a combinatorial problem since the optimal subset of scatterers
must be discovered from a potentially much larger set. Further,
the number of occluded scatterers must be estimated as a part of
the solution. We apply a genetic algorithm to solve the
combinatorial problem, and we provide a simple demonstration
example using synthetic data.
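A genetic-algorithm search over binary occlusion masks of the kind described can be sketched as follows. The scatterer responses, population settings, and squared-residual cost are hypothetical stand-ins for the paper's physics-based models and noise covariance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 8 candidate scatterers; the "data" were generated
# with scatterers 2 and 5 occluded (their responses removed from the sum).
n_scat = 8
true_mask = np.zeros(n_scat, dtype=bool)
true_mask[[2, 5]] = True
responses = rng.normal(size=(n_scat, 16))        # per-scatterer signatures
data = responses[~true_mask].sum(axis=0)         # observed (occluded) sum

def cost(mask):
    """Squared residual between observed data and the model with `mask` occluded."""
    model = responses[~mask].sum(axis=0)
    return np.sum((data - model) ** 2)

# A minimal genetic algorithm over binary occlusion masks: tournament
# selection, uniform crossover, and per-bit mutation. Note that the number
# of occluded scatterers is not fixed; it emerges from the best mask.
pop = rng.random((40, n_scat)) < 0.25
for _ in range(60):
    fitness = np.array([cost(m) for m in pop])
    new_pop = []
    for _ in range(len(pop)):
        i, j = rng.integers(len(pop), size=2)    # tournament for parent a
        a = pop[i] if fitness[i] < fitness[j] else pop[j]
        k, l = rng.integers(len(pop), size=2)    # tournament for parent b
        b = pop[k] if fitness[k] < fitness[l] else pop[l]
        child = np.where(rng.random(n_scat) < 0.5, a, b)  # uniform crossover
        child ^= rng.random(n_scat) < 0.05                # bit-flip mutation
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmin([cost(m) for m in pop])]
```

On this small noiseless example the search recovers the occluded subset; the paper's setting adds noise, physics-based covariances, and spatial-correlation priors on the mask.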
Proc. SPIE. 5095, Algorithms for Synthetic Aperture Radar Imagery X
KEYWORDS: Data modeling, Performance modeling, Automatic target recognition, Detection and tracking algorithms, Synthetic aperture radar, Model-based design, Systems modeling, Monte Carlo methods, Sensors, Feature extraction
Performance of automatic target recognition (ATR) systems depends on numerous factors including the mission description, operating conditions, sensor modality, and ATR algorithm itself. Performance prediction models sensitive to these factors could be applied to ATR algorithm design, mission planning, sensor resource management, and data collection design for algorithm verification. Ideally, such a model would return measures of performance (MOPs) such as probability of detection (Pd), correct classification (Pc), and false alarm (Pfa), all as a function of the relevant predictor variables. Here we discuss the challenges of model-based and data-based approaches to performance prediction, concentrating especially on the synthetic aperture radar (SAR) modality. Our principal conclusion for model-based performance models (predictive models derived from fundamental physics- and statistics-based considerations) is that analytical progress can be made for performance of ATR system components, but that performance prediction for an entire ATR system under realistic conditions will likely require the combined use of Monte Carlo
simulations, analytical development, and careful comparison to MOPs from real experiments. The latter are valuable for their high fidelity, but have a limited range of applicability. Our principal conclusion for data-based performance models (models that fit empirically derived MOPs) is that they offer a potentially important means for extending the utility of empirical results. However, great care must be taken in their construction due to the necessarily sparse sampling of operating conditions, the high dimensionality of the input
space, and the diverse character of the predictor variables. The applicability of such models for extrapolation also remains an open question.
The recently developed physics-based "mean field" formalism for
efficiently computing the time-domain response of compact metallic
targets is applied to the solution of model inverse problems for
remote classification of buried UXO-like targets. The formalism is
first used to compute model forward scattering data, in the form
of time-domain decay curves as measured by EMI or magnetic field,
for a sequence of canonical ellipsoidal target shapes of various
geometries. This data is subsequently used as input to a genetic
algorithm-based inversion routine, in which the target parameter
model space, comprised of target shape, conductivity, location,
orientation, etc., is efficiently searched to find the best fit to
the data. Global search procedures, such as genetic algorithms,
typically require the forward scattering solution for hundreds, or
perhaps thousands, of candidate target models. To be practical,
these forward solutions must be rapidly computable. Our solution
approach has been specifically designed to meet this requirement. Of special interest is the ability of the inversion algorithm to
distinguish robustly between UXO-like targets, modelled here as
cylindrically shaped prolate spheroids, and, say, flat sheet-like
clutter targets, modelled as very thin oblate spheroids.
The performance of tracking systems depends on numerous factors including the scenario, operating conditions, and choice of tracker algorithms. For tracker system design, mission planning, and sensor resource management, the availability of a tracker performance model (TPM) for the standard measures of performance (MOPs) would be of high practical value. Ideally, the TPM has high computational efficiency, and is insensitive to the particular low-level details of highly complex algorithms and unimportant operating conditions. These characteristics would eliminate the need for high-fidelity Monte Carlo simulations that are expensive and time consuming. In this paper, we describe a performance prediction model that generates track life distributions and other MOPs. The model employs a simplified Monte Carlo simulation that accounts for sensor orbits, sensor coverage, and target dynamics. A key feature is an analytical expression that approximates the probability of correct association (PCA) among reports and tracks. The expression for the PCA that we use was developed by Mori et al. for simplified scenarios in which there is a single class of targets, the noise is Gaussian, and the covariance matrices are identical for all targets. Based on heuristic considerations, we extend this result to the case of road-constrained tracking where both on-road and off-road targets are present. We investigate the validity of the proposed expression by means of Monte Carlo simulations, and present preliminary results of a validation study that compares the performance of an actual tracker with the performance predictions of our model.
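The Monte Carlo validation of an association probability can be illustrated in miniature. The sketch below estimates the probability of correct nearest-neighbor association for two closely spaced 1-D targets under Gaussian measurement noise; it is a toy stand-in, not Mori's analytical expression or the paper's road-constrained extension:

```python
import numpy as np

rng = np.random.default_rng(2)

def monte_carlo_pca(separation, sigma, trials=20000):
    """Fraction of trials in which nearest-neighbor assignment pairs each
    report with the track that actually produced it (two 1-D targets)."""
    tracks = np.array([0.0, separation])
    correct = 0
    for _ in range(trials):
        reports = tracks + rng.normal(0.0, sigma, size=2)
        # Nearest-neighbor assignment: report i -> closest track position.
        assign = np.abs(reports[:, None] - tracks[None, :]).argmin(axis=1)
        correct += np.array_equal(assign, [0, 1])
    return correct / trials

# The estimated PCA should approach 1 as the separation grows relative to
# the measurement noise, and degrade as the targets become closely spaced.
pca_far = monte_carlo_pca(separation=5.0, sigma=1.0)
pca_near = monte_carlo_pca(separation=0.5, sigma=1.0)
```

Sweeping `separation / sigma` in this way gives the empirical curve against which an analytical PCA approximation can be checked.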
We investigate the complexity of template-based ATR algorithms using SAR imagery as an example. Performance measures (such as Pid) of such algorithms typically improve with an increasing number of stored reference templates. This presumes, of course, that the training templates contain adequate statistical sampling of the range of observed or test templates. The tradeoff for improved performance is that computational complexity and the expense of training template generation for algorithm development (synthetic and/or experimental) increase as well. Therefore, for practical implementations it is useful to characterize ATR problem complexity and to identify strategies to mitigate it. We adopt for this problem a complexity metric defined simply as the size of the minimal subset of stored templates, drawn from an available training population, that yields a specified Pid. Straightforward enumeration and testing of all possible template sets leads to a combinatorial explosion. Here we consider template selection strategies that are far more practical and apply these to a SAR- and template-based target identification problem. Our database of training templates consists of targets viewed at 3-degree increments in pose (azimuth). The template selection methods we investigate include uniform sampling, sequential forward search (also known as greedy selection), and adaptive floating search. The numerical results demonstrate that the complexity metric increases with intrinsic problem difficulty, and that template sets selected using the greedy method significantly outperform uniformly sampled template sets of the same size. The adaptive method, which is far more computationally expensive, selects template sets that outperform those selected by the greedy technique, but the small reduction in template set size was not significant for the specific examples considered here.
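Sequential forward (greedy) template selection of the sort compared above can be sketched as follows, with synthetic 2-D feature vectors standing in for SAR templates. The pool size, noise level, and nearest-template definition of Pid are illustrative assumptions, not the paper's database or classifier:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for SAR templates: a few target classes viewed at coarse azimuth
# increments, each (class, pose) represented by a 2-D feature vector.
n_classes, n_poses = 3, 24
centers = rng.normal(size=(n_classes, n_poses, 2)) * 3.0
templates = centers.reshape(-1, 2)                       # candidate pool
labels = np.repeat(np.arange(n_classes), n_poses)
test_x = centers.reshape(-1, 2) + rng.normal(0.0, 0.1, (n_classes * n_poses, 2))
test_y = labels.copy()

def pid(subset):
    """Fraction of test samples whose nearest stored template carries the
    correct class label (a simple stand-in for Pid)."""
    d = np.linalg.norm(test_x[:, None] - templates[subset][None], axis=2)
    return np.mean(labels[subset][d.argmin(axis=1)] == test_y)

def greedy_select(target_pid):
    """Sequential forward search: repeatedly add the single template that
    most improves Pid until the target Pid is met (or the pool is exhausted)."""
    chosen, remaining = [], list(range(len(templates)))
    while remaining and (not chosen or pid(chosen) < target_pid):
        gains = [pid(chosen + [j]) for j in remaining]
        best = remaining[int(np.argmax(gains))]
        chosen.append(best)
        remaining.remove(best)
    return chosen

subset = greedy_select(target_pid=0.9)
```

The length of `subset` plays the role of the complexity metric: the smallest template set found that achieves the specified Pid.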
A handheld mine detection system is under development using a 2D array of spin-dependent tunneling (SDT) magnetoresistive sensors, which will measure the x, y, and z scalar components of the electromagnetic (EM) field. SDT sensors directly measure the EM field component along an axis of the sensor over a wide frequency and intensity range, which makes them ideal EM sensors. The sensors are small and relatively inexpensive due to the massive investment in this technology by the computer industry for use in disk storage devices. A system was designed with primary emphasis on the unique capabilities of the sensor elements and on sensor array design for landmine detection and discrimination. Much of the early work concentrated on theoretical models verified against measured laboratory time-domain EM responses of metallic components of typical low-metal landmines. The modeling results have provided the information needed to define performance requirements for the SDT sensor and a design for an array of SDT sensors to measure the x, y, and z spatial components expected from the landmines. A parallel effort to develop the supporting theory for optimal interpretation of the multi-axis sensor array has resulted in significant progress toward an improved methodology for distinguishing the signatures of landmine targets from metallic clutter. We have adopted an integrated approach to sensor design in which the data requirements for effective discrimination have driven the design while meeting practical and engineering requirements as well.