The CECOM Center for Night Vision and Electro-Optics (C2NVEO) is pursuing a broad-based effort to develop Automatic Target Recognizers (ATRs) for a variety of tactical Army applications. The effort includes the development of improved thermal imaging sensors that have fewer artifacts and better sensitivity, uniformity, and dynamic range than currently deployed infrared imaging systems. These imagers, along with other sensors, are being used to collect field data of military vehicles and their environment. This digital imagery is being added to an expanding database that also contains hybrid and synthetic sensor data, providing a controlled variability unattainable with real imagery alone. The real imagery provides the validation of this characterized sensor database. Data from non-imaging sensors is being added to encompass multisensor applications. A facility has been established for training and testing ATRs, where this data can be used in conjunction with a physical terrain board and a sensor test station. The facility has demonstrated the capability to rapidly assess the performance of several ATRs. The ATRs currently being investigated are instrumented for rapid, detailed analysis of the algorithms' functions. Full programmability allows investigation of competing algorithms without designing new circuitry. Additional algorithm improvements are being investigated, including techniques using neural nets and optical processing. Assembly of "submicron" components using miniaturized packaging concepts is leading to demonstrations of the feasibility of ATRs within stringent platform constraints.
The Alliant Techsystems Multi-function Target Acquisition Processor (MTAP) was developed under the direction of the U.S. Army's CECOM Center for Night Vision and Electro-Optics. MTAP is a hardware and software system that can efficiently host a wide range of image and signal processing algorithms running in real time. The MTAP processor is based on a configurable and extensible architecture that makes use of commercial hardware, a High Order Language, and a user-friendly interface, resulting in an efficient development system. Algorithm evaluation is aided by the Automated Instrumentation (Auto-I) system, access to intermediate images, and access to decision history information. The Auto-I system is tightly coupled to MTAP, providing experiment control, performance evaluation, and algorithm analysis capabilities. Access to over 40 intermediate image streams and to the full decision history during an experiment allows detailed analysis of the algorithms. The tight integration of the development and evaluation environments allows for rapid development of real-time Automatic Target Recognition (ATR) algorithms on the MTAP system.
This paper reviews and describes methods for combining multimode sensor data. The context for the multimode sensor applications is an autonomous precision guided weapon, air-to-ground scenario. The first part of the paper reviews dual-mode fusion architectures. Theoretical and measured performance results are referenced and extended. We introduce a fusion architecture hierarchy that includes post-decision combiner rules, pre-decision combiner statistics, and feature and raw-data concomitant combiners. The architecture section concludes with a discussion and example of dual-mode synergy performance versus sensor mode inequality. The second part of the paper describes the cost-effectiveness benefits of dual-mode and single-mode sensors. Results from a many-on-many Monte Carlo mission effectiveness simulation are used to help quantify the multimode sensor benefits. In some cases the synergistic multimode performance gain is sufficient justification for adding a second or third sensor mode. In many cases the benefit is extended and more robust operation over large search areas in the presence of countermeasures and adverse weather.
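The post-decision (binary) combiner rules in such a hierarchy can be sketched with a few lines of arithmetic. The sketch below is illustrative only, not taken from the paper: it assumes the two sensor modes make statistically independent detection decisions, and all probability values are hypothetical.

```python
# Post-decision combiner rules for a dual-mode sensor, assuming the two
# detectors make statistically independent decisions.
# pd = probability of detection, pfa = probability of false alarm.

def and_combiner(pd1, pfa1, pd2, pfa2):
    """Declare a target only when both modes agree (drives Pfa down)."""
    return pd1 * pd2, pfa1 * pfa2

def or_combiner(pd1, pfa1, pd2, pfa2):
    """Declare a target when either mode fires (drives Pd up)."""
    return (pd1 + pd2 - pd1 * pd2,
            pfa1 + pfa2 - pfa1 * pfa2)

# Example with unequal sensor modes (illustrative numbers):
pd_and, pfa_and = and_combiner(0.90, 0.05, 0.80, 0.10)
pd_or,  pfa_or  = or_combiner(0.90, 0.05, 0.80, 0.10)
```

The AND rule trades detection probability for a much lower false alarm rate, while the OR rule does the opposite; which trade is the "synergistic" one depends on how unequal the two modes are, which is the question the architecture section examines.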
The principal focus of Automatic Object Recognition (AOR) involves the generation of appropriate algorithms to process the output of multi-spectral sensor arrays. Given the high dimensionality that characterizes the signatures of targets of interest, it is normally impossible to satisfy the need for raw signature data by means of measurement records alone. Individual sensor characteristics in conjunction with aspect-angle dependence, target and background configuration (singly and in synergism), and multi-spectral tradeoffs inexorably lead to a requirement for predictive signature modeling methods. By means of this stratagem, a measured signature database can be leveraged significantly, improving the fidelity of the overall simulation. Irrespective of the specific representation used for a three-dimensional geometry and material database, rarely does a predictive signature application code read that database directly. Rather, a specific interrogation method is used to pass particular geometric and material attributes to the application code. Clearly the nature of the physics employed in the application is both enabled and constrained by the form of the interrogation process used. In this paper, several examples of predictive radar codes are given, illustrating several strikingly different ways of linking geometry to applications. Following those examples the interface methods known to the authors will be described. While many of the techniques have already been implemented, some are currently in development. In addition, the utility of various techniques will be related to particular application codes.
The simulation of target signatures from 0.4 µm to 30 cm wavelengths is a complex mathematical task that traditionally has been approached by developing multiple, independent computer programs. These programs often require specially-trained users and have little, if any, system inter-operability. ERIM has developed a unified approach to multi-sensor, multi-wavelength simulation of target signatures that overcomes many of these limitations. The key feature of the methodology is the use of generic ray tracing tools to automatically produce simulated target signatures over a broad range of wavelengths from a single geometric target model. An overview of the simulation system will be given along with a brief description of each integrated simulation code employed by the program. Examples of multi-sensor target signature simulation will be presented.
Recognition of targets in FLIR imagery has been a goal of military weapon systems since the initial development of FLIR sensors. Reliable systems to automatically recognize targets in FLIR imagery have thus far eluded the combined efforts of the DOD services. Historical approaches have concentrated on adapting pattern recognition techniques from visible-imagery (TV) target recognition. Recent research has suggested that consideration of target characteristics unique to IR imaging, such as self-emission due to thermal mass, may lead to improved recognition performance. To utilize these characteristics effectively, predictive models are needed to establish the combination of viewing conditions and target states for which the target's thermal characteristics manifest themselves. This paper will focus upon the use of signature prediction models as a component of a recognition algorithm in the context of model-based vision (MBV).
This paper considers the importance of sensor models to model-based recognition applications. The impact of the explicit representation of sensor models and of the sensor attributes themselves (e.g., the particular geometric transformation, whether active or passive, specular or diffuse, reflective or emissive) is illustrated using synthetic aperture radar, infrared, and CO2 laser radar target recognition examples. The model-based recognition problem is formalized using probability theory to partition the recognition process into (1) an estimation phase, in which the situation parameters (e.g., sensor, target, background) are estimated, and (2) a hypothesis test, in which the current hypothesis (i.e., the constrained model given the estimated parameter values) is tested against the sensed data. It is shown that strong sensor models suggest problem structure that can be exploited to develop robust indexing and model refinement/parameter estimation algorithms. It is also shown that strong sensor models form a basis for rigorous match evaluation during the hypothesis test phase of the recognition process.
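The two-phase structure (parameter estimation, then hypothesis testing) can be made concrete with a toy one-dimensional example. This is a hypothetical sketch, not the paper's formulation: the "situation parameter" is reduced to a single signal gain, the sensor model to a Gaussian noise assumption, and the match evaluation to a log-likelihood ratio against a background-only hypothesis.

```python
import math

# Stage 1: estimate the situation parameter (here, a gain g) by
# maximizing the likelihood of the sensed data under the model.
# Stage 2: test the constrained hypothesis with a log-likelihood
# ratio against a background-only (pure noise) hypothesis.

def log_likelihood(data, model, sigma):
    """Gaussian log-likelihood of data given a predicted model signature."""
    return sum(-0.5 * ((d - m) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for d, m in zip(data, model))

def recognize(data, template, sigma=1.0):
    # Stage 1: grid-search the gain parameter over [0, 3].
    gains = [g / 10 for g in range(0, 31)]
    best_g = max(gains, key=lambda g: log_likelihood(
        data, [g * t for t in template], sigma))
    # Stage 2: likelihood ratio of target-present vs. background-only.
    llr = (log_likelihood(data, [best_g * t for t in template], sigma)
           - log_likelihood(data, [0.0] * len(data), sigma))
    return best_g, llr

data = [2.1, 3.9, 6.2]       # sensed samples (illustrative)
template = [1.0, 2.0, 3.0]   # predicted target signature shape
g, llr = recognize(data, template)
```

A positive log-likelihood ratio favors the target hypothesis; a strong sensor model enters through the predicted template and the noise model, which is exactly where the paper argues the problem structure comes from.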
One of the most critical problems in Automatic Target Recognition (ATR) systems is multi-scenario adaptation. Today's ATR systems perform unpredictably: they perform well in certain scenarios and poorly in others. Unless ATR systems can be made adaptable, their utility in battlefield missions remains questionable. We have developed (under internal research and development) a novel concept called Knowledge and Model Based Algorithm Adaptation (KMBAA). KMBAA automatically adapts the ATR parameters as the scenario changes so that the ATR can maintain optimum performance. The KMBAA approach has been tested with a non-real-time ATR simulation system and has demonstrated significant improvements in detection, false alarm rate reduction, and segmentation accuracy.
Significant advances in the field of signal and image processing have resulted in the development of a "Tech Base" of Automatic Target Recognition (ATR) technology which has reached a maturity level sufficient for its insertion into Military Weapon Systems. The 1988 Defense Science Board Task Force on Image Recognition reached this conclusion, and ATR technology ranks high on the list of Military Critical Technologies in a recent report submitted to the U.S. Congress. The developed ATR Tech Base can be divided into three significant parts: sensor, algorithm, and processor technology. Sensors are tasked with extracting information from the scene of interest. Algorithms are required to process the extracted information into a useable format. Processors are necessary to implement the algorithms in real-world systems. The advent of Model Based Approaches to algorithm design has been instrumental in maturing the ATR algorithm tech base to a level which today supports the insertion of ATR technology into future Military Weapon Systems. The Military services have identified a number of missions requiring the use of ATR technology. Some missions can be supported with today's ATR Tech Base, but many are exceedingly difficult and will require substantial advances before they can be accomplished. In the ATR Tech Base, the area requiring the most growth is algorithm technology. To support the required growth in algorithm technology, processors must be programmable to enable rapid insertion of algorithm advances as they occur. This paper will overview selected Military missions and the ATR algorithm requirements which must be met to accomplish those missions. The use of Model Based Algorithms to accomplish the Fixed High Value Target mission will be used as an example.
The development and use of confidence intervals for automatic target recognition or cueing (ATR or ATC) evaluation will be considered. First, the concepts of confidence intervals and performance curves will be briefly reviewed and the concept of a confidence interval for a performance curve outlined. Second, the motivation for developing and using them will be described in terms of systems analysis and ATR component evaluation. Third, the role of experimental design and scenarios will be briefly examined. Fourth, the need for optimization of ATR algorithm performance will be examined, along with the resulting implications on the formulation of stochastic process models that provide the basis for performance evaluation and for the confidence intervals. Fifth, the construction of confidence intervals for performance curves will be developed. Finally, applications and extensions of the confidence intervals will be suggested.
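The single-operating-point case of such an interval is easy to illustrate. The sketch below is not the paper's construction for whole performance curves; it is a standard Wilson-score confidence interval for one estimated probability of detection, computed from a hypothetical trial count.

```python
import math

# Wilson-score confidence interval for an estimated probability of
# detection: k detections observed in n independent test images.
# (A performance-curve interval would repeat this at each operating
# point; this sketch covers one point only.)

def wilson_interval(k, n, z=1.96):
    """Approximate 95% (z = 1.96) confidence interval for k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# e.g., 85 detections in 100 test images (illustrative numbers):
lo, hi = wilson_interval(85, 100)
```

Even at 100 trials the interval spans roughly 0.77 to 0.91, which is the kind of uncertainty that motivates careful experimental design before comparing ATR components.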
Image processing to accomplish automatic recognition of military vehicles has promised increased weapon system effectiveness and reduced timelines for a number of Department of Defense missions. Automatic Target Recognizers (ATRs) are often claimed to be able to recognize many different ground vehicles as possible targets in military air-to-surface targeting applications. The targeting scenario conditions include different vehicle poses and histories as well as a variety of imaging geometries, intervening atmospheres, and background environments. Testing these ATR subsystems has in most cases been limited to a handful of the scenario conditions of interest, as represented by imagery collected with the desired imaging sensor. The question naturally arises as to how robust the performance of the ATR is for all scenario conditions of interest, not just for the set of imagery upon which an algorithm was trained.
Current development, training, and testing of automatic target recognizers/cuers relies almost exclusively on image data taken at field sites or from physical terrain boards. Each of these approaches has several advantages as well as disadvantages. For example, field test data are severely limited in the variety of terrain and targets typically available. In addition, the environment is too variable to support parametric testing of processors. On the other hand, the targets and their signatures are real as is atmospheric attenuation, sensor settings, sensor artifacts, etc. In contrast, the physical terrain board is highly controllable and is ideally suited for parametric studies of processors. However, the physical terrain boards are simulations of targets and backgrounds and typically do not include the important contributions of sensor-specific noise or atmospheric attenuation on target signatures. More importantly, physical terrain boards have not yet incorporated a method for multi-sensor testing. This paper will describe in detail the advantages and disadvantages of field and physical terrain board testing and will present the concept of a digital terrain board that addresses many of the limitations of previous approaches while not sacrificing their advantages. Specific approaches will be discussed and preliminary results of testing processors with several gradations of synthetic imagery will be presented.
Adequate tools for the diagnosis and evaluation of Automatic Target Recognition (ATR) systems are critical to their successful development. In this paper we describe a system called Automated Instrumentation and Evaluation (Auto-I). Auto-I provides many of the capabilities needed for rapid testing and evaluation of ATR systems. It also provides a module for automatic adaptation of algorithm parameters using algorithm performance models, optimization, and Artificial Intelligence techniques. The current design of Auto-I is modular, so that it can be interfaced to other ATR systems.
An automatic recognition system that can recognize objects of interest no matter where it is used or in what scenario it is deployed is of immense value in numerous application areas such as robotics, automatic target recognition (ATR), reconnaissance, and remote sensing. The difficulty in building such a system lies in the decade-long observation that candidate automatic object recognition (AOR) systems perform well when used in the domains for which they were initially trained. When the environmental conditions, scene content, or scenario changes, the behavior of such systems becomes erratic and unpredictable. In this paper we describe a technique for the automatic adaptation of multisensor automatic object recognition systems. The adaptation covers both the selection of optimum sensor frequency bands and the AOR's internal algorithmic parameters. The adaptation is done by creating empirical models of the AOR's performance measures as functions of data metrics, internal algorithm parameters, sensor wavelengths, sensor types, and sensor combinations. The optimum values of the internal parameters and the optimum sensor frequencies are then computed by optimizing the performance models as new signal metrics are obtained. These metrics vary with changes in the scene, scenario, or environmental conditions. The technique is applicable to a wide range of automatic recognition systems and provides, for the first time, an integrated approach to simultaneous, automatic algorithm-parameter and sensor-wavelength adaptation.
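The adaptation loop described above can be sketched in miniature. This is a hypothetical illustration, not the authors' implementation: the "empirical performance model" is reduced to a lookup table from offline experiments, the scene metric to a discrete clutter level, and the adapted parameter to a single detection threshold; all values are invented for illustration.

```python
# Hypothetical adaptation-loop sketch: an empirical performance model
# maps (scene metric, algorithm parameter) pairs to a measured
# performance score from offline experiments; at run time, the new
# scene metric is observed and the parameter that maximizes the
# modeled score is selected.

# Offline empirical model (all numbers illustrative):
# keys are (clutter_level, detection_threshold), values are Pd scores.
perf_model = {
    ("low", 0.3): 0.95, ("low", 0.5): 0.90, ("low", 0.7): 0.80,
    ("high", 0.3): 0.60, ("high", 0.5): 0.85, ("high", 0.7): 0.75,
}

def adapt_threshold(clutter_level):
    """Return the threshold with the best modeled performance for the
    currently observed scene metric."""
    candidates = {t: score for (c, t), score in perf_model.items()
                  if c == clutter_level}
    return max(candidates, key=candidates.get)

best = adapt_threshold("high")
```

In the full technique the model is a fitted function of continuous data metrics and multiple parameters (including sensor wavelength), and selection is done by numerical optimization rather than table lookup, but the control flow is the same: observe metrics, evaluate the model, reset the parameters.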
This paper will review recent advances in the applications of artificial neural network technology to problems in automatic object recognition. The application of feedforward networks for segmentation, feature extraction, and classification of objects in Forward Looking Infrared (FLIR) and laser radar range scenes will be presented. Biologically inspired Gabor functions will be shown to be a viable alternative to heuristic image processing techniques for segmentation. The use of local transforms, such as the Gabor transform, fed into a feedforward network is proposed as an architecture for neural based segmentation. Techniques for classification of segmented blobs will be reviewed along with neural network procedures for determining relevant features. A brief review of previous work comparing neural network based classifiers to conventional Bayesian and k nearest-neighbor techniques will be presented. Results from testing several alternative learning algorithms for these neural network classifiers are presented. A technique for fusing information from multiple sensors using neural networks is presented. The theoretical relationship between a multilayer perceptron trained using back propagation for classification and the Bayes optimal discriminant function is explained.
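The Gabor functions mentioned above are straightforward to construct: a sinusoidal carrier modulated by a Gaussian envelope. The sketch below builds a real-valued 2-D Gabor kernel of the kind fed to a segmentation network; the parameter names and values are illustrative, not taken from the paper.

```python
import math

# A 2-D Gabor kernel: a cosine carrier of the given wavelength,
# oriented at angle theta, modulated by an isotropic Gaussian
# envelope of width sigma.

def gabor_kernel(size, wavelength, theta, sigma):
    """Return a size x size list of real Gabor filter taps."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

k = gabor_kernel(size=7, wavelength=4.0, theta=0.0, sigma=2.0)
```

A bank of such kernels at several orientations and wavelengths, convolved with the image, yields the local-transform responses that the proposed architecture passes to a feedforward network in place of hand-tuned segmentation heuristics.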