Reliance on Automated Target Recognition (ATR) technology is essential to the future success of Intelligence, Surveillance, and Reconnaissance (ISR) missions. The Deputy Under Secretary of Defense for Science and Technology (DUSD/S&T), as part of its ongoing ATR Program, has sponsored an effort to develop and demonstrate methods for evaluating ATR algorithms that utilize multiple data sources, i.e., fusion-based ATR. The AFRL COMPASE Center has formed a strong ATR evaluation team, and this paper presents results from this program, focusing on the human-in-the-loop, i.e., assisted image exploitation. Often, ATR technology is designed to aid the analyst, but the final decision rests with the human. Traditionally, evaluation of ATR systems has focused mainly on the performance of the algorithm; assessing the benefits of ATR assistance for the user raises interesting methodological challenges. We review the critical issues associated with evaluations of human-in-the-loop ATR systems and present a methodology for conducting these evaluations. Experimental design issues addressed in this discussion include training, learning effects, and human factors. The evaluation process becomes increasingly complex when data fusion is introduced: even in the absence of ATR assistance, the simultaneous exploitation of multiple frames of co-registered imagery is not well understood. We explore how the methodology developed for exploitation of a single source of data can be extended to the fusion setting.
Commercial availability of very high-resolution synthetic aperture radar (SAR) imagery will enable development of automatic target recognition (ATR) algorithms to exploit its rich information content. This availability also permits exploration of both empirical and first-principles approaches for predicting ATR performance. This paper describes a recent collection of high-resolution SAR imagery. It details the operating conditions represented by the data and provides recommended experiments designed to challenge ATR algorithms and performance prediction. This set of information, along with the imagery, is contained in a Problem Set that will be made available to the community. The imagery is from a Deputy Under Secretary of Defense (DUSD) for Science and Technology (S&T) sponsored collection using the Sandia National Laboratories and General Atomics Lynx sensor. The Lynx is now available as a commercial off-the-shelf (COTS) sensor. It was designed for use in medium-altitude UAVs and manned platforms, and it operates at Ku-band in stripmap, spotlight, and ground moving target indicator modes. Imagery in this collection was collected at 4-inch resolution and was also reprocessed to 1-foot resolution. The collection included several military vehicles with significant variation in target, sensor, and background conditions. Defined experiments in the Problem Set present ATR algorithm development challenges by defining development (training) sets with limited representation of operating conditions and test sets that explore the algorithm's ability to extend to more complex operating conditions. These challenges are critical to military employment of ATR because the real world contains much more variability than it will be possible to address explicitly in an algorithm. For example, neither the storage of nor the search through an exhaustive library of templates is achievable for any realistic application.
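The experiment design described above, where training sets deliberately under-represent operating conditions that the test sets then probe, can be illustrated with a minimal sketch. All field names and condition values here are hypothetical and not drawn from the actual Problem Set:

```python
# Hypothetical sketch: partitioning imagery records by an operating condition
# (here, sensor depression angle) so that training covers a narrow set of
# conditions while testing probes unseen ones. Illustrative values only.

def split_by_operating_condition(chips, train_conditions):
    """Return (train, test) lists of image-chip records.

    chips: iterable of dicts with a 'depression_deg' key.
    train_conditions: set of depression angles permitted in training.
    """
    train = [c for c in chips if c["depression_deg"] in train_conditions]
    test = [c for c in chips if c["depression_deg"] not in train_conditions]
    return train, test

chips = [
    {"id": "veh_001", "depression_deg": 15},
    {"id": "veh_002", "depression_deg": 15},
    {"id": "veh_003", "depression_deg": 30},
    {"id": "veh_004", "depression_deg": 45},
]

# Train only on 15-degree imagery; test on the unseen 30/45-degree conditions.
train, test = split_by_operating_condition(chips, {15})
print(len(train), len(test))  # 2 2
```

The point of such a split is precisely the one the abstract makes: performance on the held-out conditions measures the algorithm's ability to extend beyond its training coverage, not merely to memorize it.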
Thus, advanced developments that allow robust performance in denied conditions will accelerate the transition of ATR to the field. Additional experiments in the Problem Set present challenges in ATR performance prediction. Here, the development imagery provides empirical data to support development of prediction approaches, while the test imagery provides an opportunity to validate a prediction technique's ability, for example, to interpolate or extrapolate performance.
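The interpolation/extrapolation validation described above can be sketched in miniature. This is not a sanctioned prediction method; the operating condition, the linear model, and every number below are made up for illustration:

```python
# Illustrative sketch: fit a simple linear model of measured Pd versus one
# operating condition on development (empirical) data, then predict Pd at
# held-out conditions -- interpolating within the sampled range and
# extrapolating beyond it. All values are invented.

def fit_line(xs, ys):
    """Least-squares fit of y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Development points: (squint angle in degrees, measured Pd)
dev = [(0, 0.95), (10, 0.90), (20, 0.85)]
slope, intercept = fit_line([x for x, _ in dev], [y for _, y in dev])

def predict(x):
    return slope * x + intercept

print(round(predict(15), 3))  # interpolated Pd at 15 degrees
print(round(predict(30), 3))  # extrapolated Pd at 30 degrees
```

In the Problem Set framing, the test imagery would supply the measured Pd at 15 and 30 degrees against which these predictions are scored.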
In November of 2000, the Deputy Under Secretary of Defense for Science and Technology Sensor Systems (DUSD (S&T/SS)) chartered the ATR Working Group (ATRWG) to develop guidelines for sanctioned Problem Sets. Such Problem Sets are intended for development and test of ATR algorithms and contain comprehensive documentation of the data in them. A Problem Set provides a consistent basis to examine ATR performance and growth. Problem Sets will, in general, serve multiple purposes. First, they will enable informed decisions by government agencies sponsoring ATR development and transition; Problem Sets standardize the testing and evaluation process, resulting in consistent assessment of ATR performance. Second, they will measure and guide ATR development progress within this standardized framework. Finally, they quantify the state of the art for the community. Problem Sets provide clearly defined operating condition coverage, which encourages ATR developers to consider these critical challenges and allows evaluators to assess over them. Thus the widely distributed development and self-test portions, along with a disciplined methodology documented within the Problem Set, permit ATR developers to address critical issues and describe their accomplishments, while the sequestered portion permits government assessment of the state of the art and of transition readiness. This paper discusses the elements of an ATR Problem Set as a package of data and information that presents a standardized ATR challenge relevant to one or more scenarios. The package includes training and test data containing targets and clutter, truth information, required experiments, and a standardized analytical methodology to assess performance.
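A standardized analytical methodology of the kind described above ultimately reduces scored declarations to a small set of agreed metrics. As a hedged sketch, assuming the familiar probability-of-detection / probability-of-false-alarm pair (the counts below are invented, not from any sanctioned Problem Set):

```python
# Hedged sketch of one standardized ATR scoring step: computing probability
# of detection (Pd) and probability of false alarm (Pfa) from truthed counts.
# All counts are illustrative.

def pd_pfa(true_positives, missed_targets, false_alarms, clutter_rejected):
    """Pd = detections / total targets; Pfa = false alarms / total clutter objects."""
    total_targets = true_positives + missed_targets
    total_clutter = false_alarms + clutter_rejected
    pd = true_positives / total_targets if total_targets else 0.0
    pfa = false_alarms / total_clutter if total_clutter else 0.0
    return pd, pfa

pd, pfa = pd_pfa(true_positives=45, missed_targets=5,
                 false_alarms=8, clutter_rejected=92)
print(f"Pd={pd:.2f}, Pfa={pfa:.2f}")  # Pd=0.90, Pfa=0.08
```

Fixing the scoring arithmetic and the truth-matching rules in the Problem Set itself is what makes results from different developers, and from the sequestered government evaluation, directly comparable.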