The Behavior Enhanced Heterogeneous Autonomous Vehicle Environment (BEHAVE) is a distributed system for the command and control of multiple Unmanned Vehicle Systems (UVS) with various sensor payloads (EO, infrared, and radar) and mission roles (combat, reconnaissance, penetrator, relay) working in cooperation to fulfill mission goals in light of encountered threats, vehicle damage, and mission redirects. In its current form, BEHAVE provides UVS dynamic route planning/replanning, autonomous vehicle control, platform self-awareness, autonomous threat response, and multi-vehicle cooperation. This paper focuses on BEHAVE's heterogeneous autonomous UVS team cooperation achieved through the transformation of UVS operational doctrine into UVS team behaviors. This level of tactics provides the initial high-level cooperative control guidance and plans for multiple UVSs operating to achieve specific mission goals. BEHAVE's heterogeneous UVS behaviors include inter-vehicle cueing on coupled missions based on new threats, targets, and anticipated environmental changes; optimization of individual UVS mission roles; enhanced reassignment of mission goals based upon resources consumed and threats encountered; and multi-UVS team threat behavior. Threat behaviors include logic incorporated for team scenarios such as drawing out or confusing threats.
The Force Detection and Identification System (FDIS) provides an operationally capable near-real-time platform appropriate for the evaluation of state-of-the-art imagery exploitation components and architectural design principles on scalable platforms. FDIS's architectural features include: a highly modular component design allowing rapid component interchange, multiple intercomponent datapaths which support both fine- and coarse-grained parallelism, an infrastructure which supports heterogeneous computing on a range of high-performance computing platforms, and conceptual decoupling between image processing and non-image processing components while supporting multi-level evidence fusion. While none of these features is individually unique, combined they represent a state-of-the-art imagery exploitation system. FDIS has demonstrated probability of detection and false alarm rates consistent with other SAR-based exploitation systems. FDIS, however, requires fewer computing resources, supports rapid insertion of new or changed components to accommodate emerging technologies with an ease not found in legacy systems, and is smoothly scalable. In addition to exploiting these novel architectural features, FDIS includes a new multi-dimensional evidence fusion component: Force Estimation (FE). Previous exploitation systems have demonstrated the positive impact on probability of detection and false alarm rates obtained by clustering vehicle detections into groups. FE, however, as a fusion component extends evidence accrual beyond simple spatial characteristics. Based on a fast multipole algorithm, FE accrues probabilistic evidence on models of military unit compositions. Fused evidence includes vehicle classifications, cultural and terrain features, and electronic emission features. FE's algorithmic speed allows operation in near-real-time without requiring excessive computational resources.
FE has demonstrated improved force detection results over a wide range of operational conditions.
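The evidence-accrual idea described above can be sketched in a few lines. This is an illustrative Bayesian accrual over hypothetical unit-composition models; the unit types, feature likelihoods, and observations are invented for illustration, and FE's actual fast multipole algorithm is not reproduced here.

```python
# Hypothetical military unit-composition models: P(feature | unit type).
# These numbers are placeholders, not values from the FE component.
UNIT_MODELS = {
    "tank_battalion": {"tank": 0.70, "apc": 0.20, "truck": 0.10},
    "supply_column":  {"tank": 0.05, "apc": 0.15, "truck": 0.80},
}

def accrue_evidence(prior, observations):
    """Multiply per-observation likelihoods into the prior and renormalize."""
    posterior = dict(prior)
    for obs in observations:
        for unit, p in posterior.items():
            # Small floor probability for features absent from a model.
            posterior[unit] = p * UNIT_MODELS[unit].get(obs, 0.01)
        total = sum(posterior.values())
        posterior = {u: p / total for u, p in posterior.items()}
    return posterior

prior = {"tank_battalion": 0.5, "supply_column": 0.5}
post = accrue_evidence(prior, ["tank", "tank", "apc"])
```

Each classified vehicle detection shifts the posterior toward the unit model that best explains the accumulated evidence; in a full system the same multiplication would also fold in terrain, cultural, and emission features.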
Competitive Intelligence (CI) is a systematic and ethical program for gathering and analyzing information about your competitors' activities and general business trends to further your own company's goals. CI allows companies to gather extensive information on their competitors and to analyze what the competition is doing in order to maintain or gain a competitive edge. In commercial business this potentially translates into millions of dollars in annual savings or losses. The Internet provides an overwhelming portal of information for CI analysis. The problem is how a company can automate the translation of voluminous information into valuable and actionable knowledge. This paper describes Project X, an agent-based data mining system specifically developed for extracting and analyzing competitive information from the Internet. Project X gathers CI information from a variety of sources including online newspapers, corporate websites, industry sector reporting sites, speech archiving sites, video newscasts, stock news sites, weather sites, and rumor sites. It uses individual industry-specific (e.g., pharmaceutical, financial, aerospace) commercial sector ontologies to form the knowledge filtering and discovery structures/content required to filter and identify valuable competitive knowledge. Project X is described in detail, and an example competitive intelligence case is shown demonstrating the system's performance and utility for business intelligence.
The explosion of Information Technology (IT) in the commercial sector in the 1990s has led to billion-dollar corporations overtaking the US Government (e.g., DARPA, NSF) as the leaders in IT research and development. The tenacity of the IT industry in accelerating technology development, in response to commercial demands, has actually provided government organizations with a unique opportunity to incorporate robust commercial IT into their individual applications. This development allows government agencies to focus their limited funds on the application aspects of their problems by leveraging commercial information technology developments. This paradigm applies directly to counterdrug enforcement and support. This paper describes a system that applies the state of the art in information technology to news and information exploitation to produce a Multi-media Agent Monitoring and Assessment (MAMA) system capable of tracking information for field agent use, identifying assets of organizations and individuals for seizure, and disrupting drug shipping routes.
Asymmetric threats differ from the conventional force-on-force military encounters that the Defense Department has historically been trained to engage. Terrorism by its nature is now an operational activity that is neither easily detected nor countered, as its very existence depends on small covert attacks exploiting the element of surprise. But terrorism does have defined forms, motivations, tactics, and organizational structure. Exploiting a terrorism taxonomy provides the opportunity to discover and assess knowledge of terrorist operations. This paper describes the Asymmetric Threat Terrorist Assessment, Countering, and Knowledge (ATTACK) system. ATTACK has been developed to (a) data mine open source intelligence (OSINT) information from web-based newspaper sources, video news web casts, and actual terrorist web sites, (b) evaluate this information against a terrorism taxonomy, (c) exploit country/region-specific social, economic, political, and religious knowledge, and (d) discover and predict potential terrorist activities and association links. Details of the asymmetric threat structure and the ATTACK system architecture are presented, with results of an actual terrorist data mining and knowledge discovery test case shown.
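Step (b), evaluating mined text against a taxonomy, can be sketched as a simple category-scoring pass. The taxonomy categories and keywords below are invented placeholders; ATTACK's actual taxonomy and knowledge structures are far richer and are not reproduced here.

```python
# Hypothetical taxonomy fragment: category -> indicator keywords.
TAXONOMY = {
    "tactics":      {"bombing", "hijacking", "kidnapping"},
    "organization": {"cell", "network", "faction"},
    "motivation":   {"separatist", "sectarian", "ideological"},
}

def score_document(text):
    """Count taxonomy-keyword hits per category in a tokenized document."""
    tokens = set(text.lower().split())
    return {cat: len(tokens & keywords) for cat, keywords in TAXONOMY.items()}

scores = score_document("Reports describe a separatist cell planning a bombing")
```

A document that scores across multiple categories would then be passed on for link discovery and regional-context analysis, rather than judged on keyword counts alone.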
Synthetic aperture radar (SAR), electro-optical (EO) imagery, and moving-target indicators (MTI) are sensors utilized in prime surveillance/reconnaissance systems. Each sensor system can generate substantial volumes of data requiring vast computational resources to support exploitation; the problem is only aggravated in systems which support multiple-sensor evidence fusion components. The dynamic nature of military operational settings often makes it difficult to efficiently apply the computational resources necessary for successful exploitation. Current limited research suggests that dynamic, scalable, and heterogeneous computer systems may be an avenue to develop successful exploitation systems of the future. Existing research and development into systems of this type has not explicitly addressed their use for tactical imagery exploitation systems. The Collaborative Heterogeneous Operations Prototype (CHOP), as part of DARPA's Scalable Tactical Imagery eXploitation (STIX) program, conducted a review of existing state-of-the-art commercial and non-commercial middleware and metasystem technology as applicable to STIX's technical objectives (the architectural characterization of a multiple-source exploitation system appropriate for use in a dynamic military operational setting). That review led to the design and development of a heterogeneous metasystem demonstrating state-of-the-art near-real-time exploitation technology in the solution of multiple-source imagery exploitation problems. The system was evaluated against measures of effectiveness and measures of performance applied to previously built and fielded imagery exploitation systems; its performance was consistent with those systems. CHOP STIX, however, offered automatic dynamic system reconfiguration to maximize system performance in an environment of changing mission requirements, sensor selection, data load, and computational resource availability.
These characteristics made CHOP STIX appropriate for use in a wider range of operational settings than existing systems.
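The dynamic reconfiguration described above amounts to reassigning exploitation components to compute nodes as conditions change. The sketch below shows one simple policy for that idea, greedy least-loaded placement; the node names, component costs, and the policy itself are assumptions for illustration, not the CHOP STIX scheduler.

```python
import heapq

def assign(components, nodes):
    """Greedily place each component (name, cost) on the least-loaded node,
    heaviest components first, so loads stay roughly balanced."""
    heap = [(0.0, n) for n in nodes]          # (current load, node)
    heapq.heapify(heap)
    placement = {}
    for name, cost in sorted(components, key=lambda c: -c[1]):
        load, node = heapq.heappop(heap)      # least-loaded node
        placement[name] = node
        heapq.heappush(heap, (load + cost, node))
    return placement

# Hypothetical exploitation components with relative compute costs.
placement = assign([("sar_detect", 4.0), ("eo_classify", 3.0),
                    ("mti_track", 2.0), ("fusion", 1.0)],
                   ["node_a", "node_b"])
```

Rerunning such an assignment whenever mission requirements, sensor selection, data load, or resource availability change is one way a metasystem can keep throughput high without operator intervention.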
Video image exploitation is an increasingly crucial component of battlefield surveillance systems. In order to address the present difficulties pertaining to video exploitation of tactical sensors, DARPA has developed the Airborne Video Surveillance (AVS) program. AVS will utilize Electro-Optical (EO) and Infrared (IR) video imagery similar to that available from current and future Unmanned Aerial Vehicle (UAV) systems. The AVS program will include the development, integration, and evaluation of technologies pertaining to precision video registration, multiple target surveillance, and automated activity monitoring into a system capable of real-time UAV video exploitation. When combined with existing EO and IR target recognition algorithms, AVS will provide the Warfighter with a comprehensive video battlespace awareness capability.
In today's imaging paradigm, each platform feeds a single exploitation system a single sensor data stream. Currently, there is no ability to integrate the many exploitation capabilities arising from the ever-increasing number of imaging platforms. The solution to this dilemma is the development of a battlespace exploitation visualization environment (BEVE) capable of providing real-time visualization of multi-sensor data streams to image analysts (IAs). The vision of BEVE is a system receiving a variety of imaging data types, integrating the results of a data fusion analysis, and visually fusing this data into a variety of exploitable visualizations. This paper discusses three primary technologies related to BEVE: the processing of the input sensor data, the visualization technologies, and the interpretation and interaction with the IA.
The quantity and quality of data collected by military and commercial electro-optic and radar sensors is rapidly increasing. This increase in imagery data has not been accompanied by an increase in the number of image analysts needed to rapidly screen the imagery to locate and identify military targets or other objects of interest. Automatic target recognition (ATR) technology that automates the target detection, classification, and identification process has been a promising technology for at least two decades, and recent advances make the realization of aided target recognition possible. The future military battlespace will be filled with airborne, spaceborne, and land-based sensors observing moving and stationary targets at various locations, from multiple aspects, and at multiple frequencies and wavelengths. Only through the use of computer-assisted data analysis and ATR can the vast amount of data be analyzed within the timelines required by the military. The Defense Advanced Research Projects Agency (DARPA) has a number of programs developing technology to support the exploitation and control of the future battlespace information.
The objective of the SAIP ACTD is to make imagery a responsive contributing source to a commander's overall battlespace awareness by focusing on theater and tactical sensor exploitation. The goal of the exploitation system is to increase image analyst efficiency in exploiting the large volumes of image data produced by current and future theater and tactical imaging platforms. The system will be evaluated based on its ability to improve the analyst's capability to detect and recognize isolated targets, minimize false alarms, recognize force structure (e.g., maneuver battalions), and provide closed-loop cueing of spot mode from strip imagery.
Unmanned guided vehicles (UGVs) require the ability to visually understand the objects contained within their operating environments in order to locally guide vehicles along a globally determined route. Several large-scale programs have been funded over the past decade that have created multimillion-dollar prototype vehicles incapable of functioning outside of their initial test track environment. This paper describes the Unmanned Guided Vehicle System (UGVS) developed for the US Army Missile Command for operation in natural terrain. The goal of UGVS is to develop a real-time system adaptive to a range of terrain environments (e.g., roads, open fields, wooded clearings, forest areas) and seasonal conditions (e.g., fall, winter, summer, spring). UGVS consists of two primary processing activities. First, the UGVS vision system is tasked with determining the location of gravel roads in video imagery, detecting obstacles in the vehicle's path, identifying distant road spurs, and assigning a classification confidence to each image component. Second, the guidance and navigation system computes the global route the vehicle should pursue, utilizes image classification results to determine obstructions in the local vehicle path, computes navigation commands to drive the vehicle around hazardous obstacles, correlates visual road spur cues with global route digital maps, and provides the navigation commands to move the vehicle forward. Results of UGVS working in a variety of terrain environments are presented to reinforce system concepts.
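The handoff between the two processing activities can be sketched as a single control step: the vision stage labels image regions with confidences, and the navigation stage steers toward the clear road. The per-column labels, confidence threshold, and steering rule below are illustrative assumptions, not the UGVS algorithms.

```python
def steer(columns, min_conf=0.6):
    """columns: list of (label, confidence) per image column, left to right.
    Returns a steering offset in [-1, 1] toward the center of drivable road,
    or None when no column is confidently road (stop and replan)."""
    drivable = [i for i, (label, conf) in enumerate(columns)
                if label == "road" and conf >= min_conf]
    if not drivable:
        return None
    center = sum(drivable) / len(drivable)    # centroid of road columns
    mid = (len(columns) - 1) / 2
    return (center - mid) / mid               # negative = steer left

# One fake classified frame: road ahead-left, an obstacle, uncertain road right.
frame = [("grass", 0.9), ("road", 0.8), ("road", 0.7),
         ("obstacle", 0.9), ("road", 0.5)]
offset = steer(frame)
```

Note how the classification confidences matter: the rightmost "road" column falls below the threshold, so the controller steers toward the confidently classified road on the left rather than splitting the difference.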
Atlanta will be the home of several special events during the next five years, ranging from the 1994 Super Bowl to the 1996 Olympics. When combined with the existing special events (Braves, Falcons, and Hawks games, concerts, festivals, etc.), the need to effectively manage traffic flow from surface streets to interstate highways is apparent. This paper describes a system for traffic event response and management for intelligent navigation utilizing signals (TERMINUS) developed at Georgia Tech for adaptively managing special event traffic flows in the Atlanta, Georgia area. TERMINUS (the original name given Atlanta, Georgia based upon its role as a rail line terminating center) is an intelligent surface street signal control system designed to manage traffic flow in Metro Atlanta. The system consists of three components. The first is a traffic simulation of the downtown Atlanta area around Fulton County Stadium that models the flow of traffic when a stadium event lets out. Parameters for the surrounding area include modeling for events during various times of day (such as rush hour). The second component is a computer graphics interface with the simulation that shows the traffic flows achieved based upon intelligent control system execution. The final component is the intelligent control system that manages surface street light signals based upon feedback from control sensors that dynamically adapts the intelligent controller's decision-making process. The intelligent controller is a neural network model that allows TERMINUS to control the configuration of surface street signals to optimize the flow of traffic away from special events.
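The sensor-feedback control loop described above can be illustrated with a single neural unit that sets the green-time split at one intersection from measured queue lengths. The sigmoid unit, its weight, and the cycle length are invented for illustration; TERMINUS's actual network is not reproduced here.

```python
import math

def green_split(ns_queue, ew_queue, w=0.2, cycle_s=90):
    """Return (north-south, east-west) green times for one signal cycle.
    A sigmoid unit maps relative sensed demand to a green-time fraction."""
    x = w * (ns_queue - ew_queue)         # relative demand from queue sensors
    frac = 1.0 / (1.0 + math.exp(-x))     # sigmoid squashes to (0, 1)
    return frac * cycle_s, (1.0 - frac) * cycle_s

# Stadium event letting out: heavy north-south queues sensed.
ns, ew = green_split(ns_queue=30, ew_queue=10)
```

Each cycle, fresh sensor readings shift the split, which is the feedback-adaptation behavior the abstract describes; a trained network would additionally coordinate splits across neighboring intersections.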
The task of 3-D object recognition can be viewed as consisting of four modules: extraction of structural descriptions, hypothesis generation, pose estimation, and hypothesis verification. The recognition time is determined by the efficiency of each of the four modules, but particularly on the hypothesis generation module which determines how many pose estimates and verifications must be done to recognize the object. In this paper, a set of high-order perspective-invariant relations are defined which can be used with a neural network algorithm to obtain a high-quality set of model-image matches between a model and image of a robot workstation. Using these matches, the number of hypotheses which must be generated to find a correct pose is greatly reduced.
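The pruning idea can be made concrete with the simplest perspective-invariant relation: the cross-ratio of four collinear points, which survives any perspective projection, so candidate model-image correspondences whose cross-ratios disagree can be rejected before any pose estimate is attempted. This shows only the invariant-filtering idea; the paper's high-order relations and neural matching algorithm are more elaborate.

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given as 1-D coordinates."""
    return ((a - c) * (b - d)) / ((b - c) * (a - d))

def consistent(model_pts, image_pts, tol=0.05):
    """Accept a 4-point correspondence only if the cross-ratios agree."""
    return abs(cross_ratio(*model_pts) - cross_ratio(*image_pts)) < tol

# Model edge points, and the same points seen under the 1-D projective map
# x -> (2x + 1) / (x + 4), which preserves the cross-ratio.
model = (0.0, 1.0, 2.0, 3.0)
image = tuple((2 * x + 1) / (x + 4) for x in model)
ok = consistent(model, image)
```

Every candidate match that fails such a cheap invariant test is one pose estimate and one verification pass the recognizer never has to run, which is exactly where the recognition-time savings come from.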
Target recognition systems have undergone a variety of changes over the past 20 years. Initial systems exploited signal processing techniques to detect ground-based targets based on one-dimensional signals. Limitations of these systems eventually led to the development of automated target recognizers (ATRs) that processed two-dimensional digital images to detect, classify, and identify targets. Though their performance exceeded that of signal processing systems, ATRs exhibited several deficiencies to which artificial intelligence (AI) offered numerous potential solutions. This paper reviews the evolution of target recognition systems with primary focus on AI applications. Deficiencies of AI approaches to target recognition are presented and complemented by a discussion of a blackboard-based ATR system currently being developed at Georgia Tech.
A primary concern of target recognition systems is the actual detection of targets in a scene. Whereas identification assigns each detected target to a class, targets that go undetected are not considered. This leads to deteriorating system performance. This paper describes a neural network approach to the target recognition problem that exploits target composition in conjunction with structure to detect targets. An automatic target recognition architecture is presented to identify how the neural network environment may be integrated into existing systems. The neural network system is described in detail and the training process delineated to show the actual implementation and training issues involved. Performance of the system is documented through evaluative analysis of results generated using an infrared image database. The research performed to date is summarized and a discussion of future developments and their complement to the target recognition process is presented.
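The core idea of combining target composition with structure can be sketched as a single neural unit that fuses the two cues into a detection score. The features (a hot-spot intensity cue and an edge/shape cue), weights, and bias below are invented for illustration; the paper's network and its infrared training process are not reproduced here.

```python
import math

def detect(features, weights, bias=-1.0):
    """Single sigmoid unit: weighted feature sum -> detection probability."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights for (composition, structure) cues.
WEIGHTS = (2.5, 1.5)

hot_vehicle = detect((0.9, 0.8), WEIGHTS)    # strong on both cues
cold_clutter = detect((0.1, 0.3), WEIGHTS)   # weak on both cues
```

A region scoring high on only one cue lands near the decision boundary, which is the motivation for using composition and structure jointly rather than thresholding either cue alone; in the actual system the weights would be learned from the infrared image database.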