This paper introduces an innovative framework for the development of multi-sensor datasets for target recognition. This framework goes beyond the paradigm of generating synthetic data to augment algorithm training; it employs carefully generated training and test data to characterize algorithm performance over any desired operating conditions, culminating in the ability to generate algorithm performance models for use in fusion, sensor resource management, and mission simulation. The current system instantiates the full path, from operating conditions to synthetic data to results, for synthetic aperture radar. Fully integrated electro-optic and laser radar paths, to be completed in 2019, will form a complete multi-sensor testbed for performance prediction. Future work will add sensor modes as well as automated decision and feature fusion for target identification.
Proc. SPIE 6978, Visual Information Processing XVII
KEYWORDS: Detection and tracking algorithms, Data modeling, Sensors, 3D modeling, Automatic target recognition, Systems engineering, Data centers, Algorithm development, Performance modeling, Systems modeling
The purpose of the Automatic Target Recognition (ATR) Center is to develop an environment conducive to producing
theoretical and practical advances in the field of ATR. This will be accomplished by fostering intellectual growth of
ATR practitioners at all levels. From an initial focus on students and performance modeling, the Center's efforts are
extending to professionals in government, academia, and industry. The ATR Center will advance the state of the art in ATR through collaboration among these researchers.
To monitor how well the Center is achieving its goals, several tangible products have been identified: graduate student
research, publicly available data and associated challenge problems, a wiki to capture the body of knowledge associated
with ATR, development of stronger relationships with the users of ATR technology, development of a curriculum for
ATR system development, and maintenance of documents that describe the state-of-the-art in ATR.
This presentation and accompanying paper develop the motivation for the ATR Center, provide detail on the Center's
products, describe the Center's business model, and highlight several new data sets and challenge problems. The
"persistent and layered sensing" context and other technical themes in which this research is couched are also presented.
Finally, and most importantly, we will discuss how industry, academia, and government can participate in this alliance
and invite comments on the plans for the third phase of the Center.
The Defense Advanced Research Projects Agency (DARPA) Video Verification of Identity (VIVID) program has
as its goal the development of the best video tracker ever. This goal is reached through a philosophy of on-the-fly
target modeling and the use of three distinct modules: a multiple-target tracker, a confirmatory identification
module, and a collateral damage avoidance/moving target detection module. Over the two years of VIVID
Phase I, progress appraisal of the ATR-like confirmatory identification module was provided to DARPA by the
Air Force Research Laboratory Comprehensive Performance Assessment of Sensor Exploitation (COMPASE)
Center through regular evaluations. This document begins with an overview of the VIVID system and its
approach to solving the multiple-target tracking problem. A survey of the data collected under VIVID auspices
and their use in the evaluation are then described, along with the operating conditions relevant to confirmatory
identification. Finally, the evaluation structure is presented in detail, including metrics, experiment design,
experiment construction techniques, and support tools.
Because of its fine wavelength resolution, hyperspectral imaging (HSI) offers the possibility of detecting and identifying objects of interest by their spectral characteristics. The Automatic Target Cueing, Detection and Recognition (ATC/D/R) community is developing new methods to predict and measure the performance of HSI ATC/D/R systems. The variation of spectral signatures due to target characteristics, atmospheric effects, and other environmental factors contributes to the challenge of developing and evaluating robust algorithms for HSI ATC/D/R systems. A rigorous method for test and evaluation is necessary to determine system performance and to define the most efficient and effective sensor/algorithm solutions for a proposed mission. The AFRL Sensors Directorate Comprehensive Performance Assessment of Sensor Exploitation (COMPASE) Center has developed standardized tools and methods that permit performance comparison of candidate ATC algorithms [1, 2]. This paper defines the methodology employed for an independent evaluation of HSI ATC algorithms. The performance metrics, truthing and scoring techniques, and the importance of understanding the operating conditions (OCs) represented by each data set are discussed. The OC definitions for spectral systems differ from the OCs defined for radar systems, and the environmental considerations drive data collection planning and truthing requirements. Knowledge of an algorithm's performance in different OCs is essential when considering the transition of an HSI sensor/algorithm system or the design of future HSI algorithms.
Commercial availability of very high-resolution synthetic aperture radar (SAR) imagery will enable development of automatic target recognition (ATR) algorithms that exploit its rich information content. This availability also permits exploration of both empirical and first-principles approaches for predicting ATR performance. This paper describes a recent collection of high-resolution SAR imagery. It details the operating conditions represented by the data and provides recommended experiments designed to challenge ATR algorithms and performance prediction. This set of information, along with the imagery, is contained in a Problem Set that will be made available to the community. The imagery is from a Deputy Under Secretary of Defense (DUSD) for Science and Technology (S&T) sponsored collection using the Sandia National Laboratory and General Atomics Lynx sensor. The Lynx is now available as a commercial off-the-shelf (COTS) sensor. It was designed for use in medium-altitude UAVs and manned platforms, and it operates at Ku-band in stripmap, spotlight, and ground moving target indicator modes. Imagery in this collection was acquired at 4-inch resolution and then also reprocessed to 1-foot resolution. The collection included several military vehicles with significant variation in target, sensor, and background conditions. Defined experiments in the Problem Set present ATR algorithm development challenges by defining development (training) sets with limited representation of operating conditions and test sets that explore the algorithm's ability to extend to more complex operating conditions. These challenges are critical to military employment of ATR because the real world contains far more variability than an algorithm can explicitly address. For example, neither the storage nor the search through an exhaustive library of templates is achievable for any realistic application.
Thus, advanced developments that allow robust performance in denied conditions will accelerate the transition of ATR to the field. Additional experiments in the Problem Set present challenges in ATR performance prediction. Here, the development imagery provides empirical data to support development of prediction approaches. Test imagery provides an opportunity to validate the prediction technique's ability to, for example, interpolate or extrapolate performance.
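The interpolation idea can be sketched as follows. This is a hedged illustration only: the operating-condition variable (depression angle), the measured Pd values, and the linear model form are hypothetical, not data or methods from the Lynx Problem Set.

```python
# Hedged sketch of empirical performance prediction: fit a simple model of
# detection probability (Pd) versus one operating condition using development
# data, then interpolate to a held-out condition. All numbers are hypothetical.
angles = [10.0, 15.0, 25.0, 30.0]   # development (training) conditions, degrees
pd_meas = [0.62, 0.70, 0.84, 0.90]  # measured Pd at those conditions

# Ordinary least-squares line Pd = a + b * angle, via the closed-form formulas.
n = len(angles)
mean_x = sum(angles) / n
mean_y = sum(pd_meas) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(angles, pd_meas)) / \
    sum((x - mean_x) ** 2 for x in angles)
a = mean_y - b * mean_x

# Interpolate Pd at an unseen 20-degree depression angle.
pd_20 = a + b * 20.0
print(f"predicted Pd at 20 deg: {pd_20:.3f}")
```

Validation against sequestered test imagery at the held-out condition would then measure how well such an empirical fit interpolates, and where it breaks down under extrapolation.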
The AFRL COMPASE Center has developed and applied a disciplined methodology for the evaluation of recognition systems. This paper explores an element of that methodology related to the confusion matrix as a tabulation of experiment outcomes and its corresponding summary performance measures. To this end, the paper introduces terminology and the confusion matrix structure for experiment results. It provides several examples, drawn from current Air Force programs, of summary performance measures and their relationship to the confusion matrix. Finally, it considers the advantages and disadvantages of these summary performance measures and points to effective strategies for selecting such measures.
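To make the relationship between the confusion matrix and its summary measures concrete, here is a minimal sketch; the class labels and counts are hypothetical and are not drawn from any Air Force program or from the COMPASE tooling.

```python
# Hypothetical forced-decision confusion matrix for three target classes:
# rows are truth, columns are the algorithm's declaration.
labels = ["T72", "BMP2", "BTR70"]
cm = [
    [45, 3, 2],   # truth T72
    [4, 40, 6],   # truth BMP2
    [1, 5, 44],   # truth BTR70
]

total = sum(sum(row) for row in cm)

# Overall probability of correct classification: trace divided by total count.
pcc = sum(cm[i][i] for i in range(len(cm))) / total

# Per-class recall (diagonal over row sum) and precision (diagonal over column sum).
recall = {labels[i]: cm[i][i] / sum(cm[i]) for i in range(len(cm))}
precision = {
    labels[j]: cm[j][j] / sum(cm[i][j] for i in range(len(cm)))
    for j in range(len(cm))
}

print(f"Pcc = {pcc:.3f}")
for name in labels:
    print(f"{name}: recall = {recall[name]:.3f}, precision = {precision[name]:.3f}")
```

Each summary measure collapses the matrix differently, which is why no single number characterizes performance: two algorithms with the same overall Pcc can distribute their errors across classes in operationally very different ways.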
Reliance on Automated Target Recognition (ATR) technology is essential to the future success of Intelligence, Surveillance, and Reconnaissance (ISR) missions. Although benefits may be realized through ATR processing of a single data source, fusion of information across multiple images and multiple sensors promises significant performance gains. A major challenge, as ATR fusion technologies mature, will be the establishment of sound methods for evaluating ATR performance in the context of data fusion. This paper explores the issues associated with evaluations of ATR algorithms that exploit data fusion. Three major areas of concern are examined as we develop approaches for addressing the fusion-based evaluation problem. (1) Characterization of the testing problem: the concept of operating conditions, which characterize the test problem, requires some generalization in the fusion setting. For example, conditions such as articulation or model variant, which are of concern for synthetic aperture radar (SAR) data, may be of minor importance for hyperspectral imaging (HSI) methods. Conversely, solar illumination conditions, which have no effect on the SAR signature, are critical for spectral-based target recognition. In addition, the fusion process may introduce new operating conditions, such as registration accuracy. (2) Developing image truth and scoring rules: the introduction of multiple data sources raises questions about what constitutes successful target detection, and ground truth must be associated with multiple data sources to score performance. (3) Performance metrics: new performance metrics that go beyond simple detection, identification, and false alarm rates are needed to characterize performance in the context of image fusion. In particular, algorithm developers would benefit from an understanding of the salient features from each data source and how these features interact to produce the observed system performance.
In November of 2000, the Deputy Under Secretary of Defense for Science and Technology Sensor Systems (DUSD (S&T/SS)) chartered the ATR Working Group (ATRWG) to develop guidelines for sanctioned Problem Sets. Such Problem Sets are intended for the development and testing of ATR algorithms and contain comprehensive documentation of the data in them. A Problem Set provides a consistent basis on which to examine ATR performance and growth. Problem Sets will, in general, serve multiple purposes. First, they enable informed decisions by government agencies sponsoring ATR development and transition: Problem Sets standardize the test and evaluation process, resulting in consistent assessment of ATR performance. Second, they measure and guide ATR development progress within this standardized framework. Finally, they quantify the state of the art for the community. Problem Sets provide clearly defined operating condition coverage, which encourages ATR developers to consider these critical challenges and allows evaluators to assess performance over them. Thus, the widely distributed development and self-test portions, along with a disciplined methodology documented within the Problem Set, permit ATR developers to address critical issues and describe their accomplishments, while the sequestered portion permits government assessment of the state of the art and of transition readiness. This paper discusses the elements of an ATR Problem Set as a package of data and information that presents a standardized ATR challenge relevant to one or more scenarios. The package includes training and test data containing targets and clutter, truth information, required experiments, and a standardized analytical methodology to assess performance.
This paper defines the ATR problem outside the boundaries of the statistical pattern recognition (SPR) problem. It is believed that the state of the art supports successful application of SPR strategies to solve recognition problems, and to the extent that the automatic target recognition (ATR) problem and the SPR problem are the same, the ATR problem is quite solvable. However, ATR remains problematic in its full realization and promise: it has only been solved under a set of constrained conditions, namely those which map into the SPR problem. These are problems where the conditions of the training set are fully representative of the conditions of the test set. The purpose of this paper is to facilitate further progress in ATR development by defining the ATR problem in a more general way that is believed to be more representative of the actual ATR problem facing various ATR users than the more restricted SPR definition.
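A toy illustration of this distinction, using entirely synthetic data (a nearest-class-mean classifier with hypothetical one-dimensional features and an assumed signature shift): when the test conditions are represented in training, the SPR strategy performs well; when the operating conditions shift, performance degrades.

```python
import random

# Toy illustration (not from the paper): an SPR-style classifier succeeds when
# the test operating condition matches training and degrades when it shifts.
random.seed(0)

def draw(mean, n=200, sigma=0.5):
    """Draw n hypothetical 1-D feature values under a given condition."""
    return [random.gauss(mean, sigma) for _ in range(n)]

# Two classes with 1-D features under the training operating condition.
train = {"A": draw(0.0), "B": draw(2.0)}
means = {c: sum(xs) / len(xs) for c, xs in train.items()}

def classify(x):
    # Nearest-class-mean decision rule learned from the training condition.
    return min(means, key=lambda c: abs(x - means[c]))

def accuracy(samples):
    hits = sum(classify(x) == truth for truth, xs in samples.items() for x in xs)
    return hits / sum(len(xs) for xs in samples.values())

# Test condition represented in training: the SPR assumption holds.
matched = {"A": draw(0.0), "B": draw(2.0)}
# Extended condition: both signatures shift by +1.0 (e.g., a configuration
# change), pushing class A toward the learned class-B decision region.
shifted = {"A": draw(1.0), "B": draw(3.0)}

print(f"matched-condition accuracy: {accuracy(matched):.2f}")
print(f"shifted-condition accuracy: {accuracy(shifted):.2f}")
```

The gap between the two accuracies is the point of the paper's distinction: the decision rule itself is unchanged, and only the representativeness of the training conditions differs.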
This paper introduces concepts that, we hope, will help move the discussion of ATR evaluation in a direction that addresses long-standing difficulties in obtaining test results that are meaningful to program managers as they compare performance across technologies, to users as they consider applications, and to developers as they consider alternative approaches to the many ATR challenges. The paper is motivated by the recent need to independently evaluate an ATR system whose design is model-driven, particularly the DARPA/WL Moving and Stationary Target Acquisition and Recognition (MSTAR) program. There are two complementary classes of concepts. One class, which we call performance, includes accuracy, extensibility, robustness, and utility. These performance concepts encourage explicit consideration of the relationship between the test data, the training data, and data from modeled conditions. The other class, which we call cost, includes efficiency, scalability, and synthetic trainability. Cost concepts help bring out some of the unique characteristics of the costs associated with ATR design and operation.