While modern sensors allow vast amounts of data to be collected in seconds, it can take weeks or months to analyze the data and determine sensor performance. Because of this lag between data collection and usable results, issues such as sensor miscalibration and algorithm biases are not detected until well after an experiment or test, when it is too late to correct them.
Recognizing that a rapid performance snapshot would be extremely valuable in many situations, the AFRL COMPASE Center developed tools to produce receiver operating characteristic (ROC) curves in near real time for an advanced technology demonstration (ATD) program. These tools, called real-time ROC (RT-ROC) and identification ROC (ID-ROC), gave the evaluation team timely insight into overall system performance; when substandard results were obtained, diagnostic tests were initiated to determine the underlying causes. The tools have been demonstrated in experiments, allowing the COMPASE team to find and fix a sensor error in a matter of hours.
This paper will concentrate on RT-ROC and will address analysis tool requirements, operation of the tool during experiments, a walkthrough of the tool using simulated data, and future uses for this application.
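To make the underlying computation concrete, the following is a minimal sketch of how an empirical ROC curve can be built from detector confidence scores and ground-truth labels. The paper does not describe RT-ROC's internals, so this function (`roc_points`) and its inputs are illustrative assumptions, not the tool's actual implementation.

```python
def roc_points(scores, labels):
    """Return (false-alarm rate, detection rate) pairs, one per
    detection, swept from the strictest threshold to the loosest.

    scores -- detector confidence values, one per detection
    labels -- 1 for a true target, 0 for a false alarm
    """
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # Sort detections by descending confidence so that lowering the
    # threshold admits one more detection at each step.
    pairs = sorted(zip(scores, labels), reverse=True)
    tp = fp = 0
    points = []
    for _score, is_target in pairs:
        if is_target:
            tp += 1
        else:
            fp += 1
        points.append((fp / n_neg, tp / n_pos))
    return points

# Hypothetical example: eight detections, four of them true targets.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 1, 0, 0]
curve = roc_points(scores, labels)
```

Because each point is computed incrementally from running counts, a tool built this way can redraw the curve as new detections arrive, which is the property that makes a near-real-time ROC display feasible. (Tied scores would ideally be collapsed into a single threshold step; this sketch omits that refinement.)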
The Deputy Under Secretary of Defense for Science and Technology (DUSD/S&T), as part of its ongoing automatic target recognition (ATR) program, has sponsored an effort to develop and demonstrate methods for evaluating ATR algorithms that utilize multiple data sources, i.e., fusion-based ATR. The AFRL COMPASE Center has formed a strong ATR evaluation team, and this paper presents results from this program, focusing on the human-in-the-loop, i.e., assisted image exploitation. Reliance on ATR technology is essential to the future success of Intelligence, Surveillance, and Reconnaissance (ISR) missions. Often, ATR technology is designed to aid the analyst, but the final decision rests with the human. Traditionally, evaluation of ATR systems has focused mainly on the performance of the algorithm; assessing the benefits of ATR assistance for the user raises interesting methodological challenges. We review the critical issues associated with evaluations of human-in-the-loop ATR systems and present a methodology for conducting these evaluations. Experimental design issues addressed in this discussion include training, learning effects, and human factors. The evaluation process becomes increasingly complex when data fusion is introduced: even in the absence of ATR assistance, the simultaneous exploitation of multiple frames of co-registered imagery is not well understood. We explore how the methodology developed for exploitation of a single data source can be extended to the fusion setting.
Early in almost every engineering project, a decision must be made about tools: buy off-the-shelf, or develop your own? Either choice can involve significant cost and risk. Off-the-shelf tools may be readily available, but they can be expensive to purchase and license, and may not be flexible enough to satisfy all project requirements. Developing new tools, on the other hand, permits great flexibility, but it consumes time (and budget), and the end product still may not work as intended. Open source software offers the advantages of both approaches without many of the pitfalls. This paper examines the concept of open source software, including its history, unique culture, and informal yet closely followed conventions. These characteristics influence the quality and quantity of software available, and ultimately its suitability for serious ATR development work. We give an example in which Python, an open source scripting language, and OpenEV, a viewing and analysis tool for geospatial data, have been incorporated into ATR performance evaluation projects. While this case highlights the successful use of open source tools, we also offer important insight into the risks associated with this approach.