In November 2000, the Deputy Under Secretary of Defense for Science and Technology Sensor Systems (DUSD (S&T/SS)) chartered the ATR Working Group (ATRWG) to develop guidelines for sanctioned Problem Sets. Such Problem Sets are intended for the development and testing of ATR algorithms and contain comprehensive documentation of the data in them. A Problem Set provides a consistent basis for examining ATR performance and growth. Problem Sets will, in general, serve multiple purposes. First, they will enable informed decisions by government agencies sponsoring ATR development and transition. Problem Sets standardize the testing and evaluation process, resulting in consistent assessment of ATR performance. Second, they will measure and guide ATR development progress within this standardized framework. Finally, they quantify the state of the art for the community. Problem Sets provide clearly defined operating condition coverage. This encourages ATR developers to consider these critical challenges and allows evaluators to assess performance across them. Thus the widely distributed development and self-test portions, along with a disciplined methodology documented within the Problem Set, permit ATR developers to address critical issues and describe their accomplishments, while the sequestered portion permits government assessment of the state of the art and of transition readiness. This paper discusses the elements of an ATR Problem Set as a package of data and information that presents a standardized ATR challenge relevant to one or more scenarios. The package includes training and test data containing targets and clutter, truth information, required experiments, and a standardized analytical methodology for assessing performance.
The volume of data that must be processed to characterize the performance of target detection algorithms over a complex parameter space requires automated analysis. This paper discusses a methodology for automatically scoring the results from a diversity of detectors producing several different forms of detected regions. The ability to automatically score detector outputs without using full target templates or models has advantages. Using target descriptors (primarily target sizes and locations) reduces the computational cost of matching detected regions against truthed targets in various scenes. It also reduces both the size of the image-truth database and the difficulty of creating it. Theoretical considerations are presented, and the issues associated with using limited truth information, and how they are overcome, are explained. The concepts behind the Auto-Score package and its use are also discussed. The performance of several different laser radar (LADAR) target detectors, applied to imagery containing scenes with targets and both natural and man-made clutter, has been characterized with the aid of Auto-Score. Automatic scoring examples are taken from this domain. However, the scoring process is applicable to detectors operating on other problems and other kinds of data as well. The target-descriptor scoring concept and the Auto-Score implementation were originated to support the development of a configurable automatic target recognition (ATR) system for LADAR data, under the auspices of the Office of Naval Research.
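The core idea of descriptor-based scoring (matching detections to truth using only target locations and sizes, rather than full templates or models) can be illustrated with a minimal sketch. The field names, the size-scaled tolerance radius, and the greedy one-to-one assignment below are illustrative assumptions, not the actual Auto-Score schema or algorithm:

```python
from dataclasses import dataclass

@dataclass
class TruthTarget:
    # Target descriptor from the truth database: center location and size.
    # Field names are hypothetical, chosen for illustration only.
    x: float
    y: float
    length: float
    width: float

@dataclass
class Detection:
    # A detected region reported by some detector, reduced to a center point.
    x: float
    y: float

def score_detections(truths, detections, tolerance_scale=0.5):
    """Greedily match each detection to the nearest unmatched truth target.

    A detection counts as a hit when it falls within a tolerance radius
    derived from the target's size (an assumed criterion; real scoring
    rules may differ). Returns (hits, misses, false_alarms).
    """
    unmatched = list(range(len(truths)))
    hits = 0
    false_alarms = 0
    for d in detections:
        best = None
        best_dist = None
        for i in unmatched:
            t = truths[i]
            radius = tolerance_scale * max(t.length, t.width)
            dist = ((d.x - t.x) ** 2 + (d.y - t.y) ** 2) ** 0.5
            if dist <= radius and (best_dist is None or dist < best_dist):
                best, best_dist = i, dist
        if best is None:
            false_alarms += 1  # detection matched no truth target
        else:
            hits += 1
            unmatched.remove(best)  # each truth target is matched at most once
    misses = len(unmatched)
    return hits, misses, false_alarms
```

Because only centers and sizes are consulted, the matching cost per scene is proportional to the number of detections times the number of truth targets, with no pixel-level template correlation, which is the computational saving the abstract refers to.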