Over the last five to seven years the use of chat in military contexts has expanded significantly, in some cases
becoming a primary means of communicating time-sensitive data to decision makers and operators. For example, during
humanitarian operations with Joint Task Force-Katrina, chat was used extensively to plan, task, and coordinate predeployment
and ongoing operations. The informal nature of chat allows it to convey far more information than the technical
content of its messages alone. Unlike formal texts such as newspaper articles, chat is often emotive.
"Reading between the lines" to understand the connotative meaning of communication exchanges is now feasible, and
often important. Understanding the connotative meaning of text is necessary to enable more useful automatic
intelligence exploitation. The research project described in this paper was directed at recognizing user connotations of
uncertainty and urgency. The project built a matrix of speech features indicative of these categories of meaning,
developed data mining software to recognize them, and evaluated the results.
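The cue-matrix approach described above can be illustrated with a minimal sketch. The cue words and weights below are illustrative assumptions, not the feature matrix actually built by the project; a real system would draw its features from annotated chat data.

```python
import re

# Hypothetical cue matrix mapping each connotation category to weighted
# lexical cues. These entries are assumptions for illustration only.
CUE_MATRIX = {
    "urgency": {"asap": 2.0, "now": 1.5, "immediately": 2.0, "urgent": 2.0},
    "uncertainty": {"maybe": 1.5, "think": 1.0, "unsure": 2.0, "possibly": 1.5},
}

def connotation_scores(message: str) -> dict:
    """Score a chat message against each connotation category by
    summing the weights of any cue words it contains."""
    tokens = re.findall(r"[a-z]+", message.lower())
    return {
        category: sum(weights.get(tok, 0.0) for tok in tokens)
        for category, weights in CUE_MATRIX.items()
    }
```

A message such as "need support now, maybe 2 vehicles" would score above zero on both categories, while a neutral status report would score zero on both; a fielded recognizer would replace this keyword lookup with the project's learned speech-feature models.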
Future Intelligence, Surveillance and Reconnaissance (ISR) tasking and exploitation will be based on a "system of
systems" that carries out tasking, collection, integration, interpretation, and exploitation. The vision is of a
closed-loop tasking-exploitation-tasking ISR information system that learns from its continuous data accumulation over
multiple observations, accruing and assessing evidence to determine if further tasking is needed to resolve residual
target ambiguities. That closed-loop collection of systems would provide a better ability to direct ISR sensors and
fuse multisource data. Given the enormous amounts of data involved and the requirement for timeliness, such a
system will require automated components that work together efficiently under real-world conditions. This
paper reviews issues relevant to ISR tasking, coordination, and data formatting. It also presents procedural
solutions, developed and implemented during experimental operations, for correlating and fusing full motion video
with ground moving target information to form real-time, actionable coalition intelligence.
Unattended Ground Sensor (UGS) systems typically employ distributed sensor nodes utilizing seismic, magnetic or
passive IR sensing modalities to alarm when activity is present. Adding an imaging component to verify sensor events
helps create actionable intelligence. Integrating the ground-based images with other ISR data requires that the
images contain valid activity and are formatted appropriately, for example as prescribed by Standard NATO Agreement
(STANAG) 4545 or the National Imagery Transmission Format, version 2.1 (NITF 2.1).
Ground activity sensors suffer from false alarms due to meteorological or biological activity. The addition of imaging
allows the analyst to differentiate valid threats from nuisance alarms. Images are prescreened based on target size and
temperature difference relative to the background. The combination of video motion detection based on thermal imaging
with seismic, magnetic or passive IR sensing modalities improves data quality through multi-phenomenon combinatorial
logic. The ground-based images, which have a nominally vertical aspect, are transformed to the horizontal geospatial
domain to support exploitation and correlation of UGS imagery with other ISR data and efficient archive and retrieval.
This paper describes the UGS system used and the solutions developed and implemented during an experiment to
correlate and fuse IR still imagery with ground moving target information, forming real-time, actionable coalition
intelligence.
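The multi-phenomenon combinatorial logic described above can be sketched as a simple conjunction: a non-imaging modality must alarm and the thermal image must pass the size and temperature-difference prescreen. The thresholds and field names here are assumptions for illustration, not parameters of the fielded UGS system.

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    seismic: bool          # seismic node alarmed
    magnetic: bool         # magnetic node alarmed
    passive_ir: bool       # passive IR node alarmed
    target_pixels: int     # detected target size in the thermal image
    delta_temp_c: float    # target-to-background temperature difference

# Assumed prescreen thresholds, chosen only for illustration
MIN_TARGET_PIXELS = 20
MIN_DELTA_TEMP_C = 2.0

def valid_activity(event: SensorEvent) -> bool:
    """Declare valid activity only when a non-imaging modality alarms
    AND the thermal image passes the size/temperature prescreen."""
    modality_alarm = event.seismic or event.magnetic or event.passive_ir
    image_confirms = (event.target_pixels >= MIN_TARGET_PIXELS
                      and event.delta_temp_c >= MIN_DELTA_TEMP_C)
    return modality_alarm and image_confirms
```

Requiring agreement across phenomenologies is what suppresses the meteorological and biological nuisance alarms noted above: a seismic trigger with no confirming thermal signature, or a warm target with no modality alarm, is rejected.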
Future battlespaces will contain large numbers of varied sensors deployed on the ground, in the air, and in space. Military commanders will make more effective decisions if sensor data is fused to provide a cohesive picture of their battlespace environment. The Air Force Research Laboratory Information Directorate (AFRL/IF) has developed a testbed within which to integrate, evaluate, and demonstrate fusion and information technologies to support and facilitate the sharing and exploitation of data from a variety of sensors. The Fusion Testbed is used to support analytical studies, on-site and network distributed simulation exercises, and the processing of real-world, multiple source intelligence (multi-INT) data. Varied scenario simulation tools, platform and sensor models (including JSTARS, U2, and Global Hawk), data simulators for GMTI, ELINT and MASINT along with operational systems (including MTIX and KAST), and highly developed multi-INT data fusion systems are available for application to the problem of ground target identification and tracking against a variety of operational scenarios. Scenario animations display simulation environment activities and unique automated analytical tools quantify established Measures of Performance (MOPs). In total, the Fusion Testbed facilitates a broad range of command, control, intelligence, surveillance, reconnaissance (C2ISR), and fusion technology developments. This paper describes the AFRL Fusion Testbed component capabilities and operationally-focused applications.
The Air Force Research Laboratory Multi-Sensor Exploitation Branch (AFRL/IFEC) has been a Department of Defense leader in research and development (R&D) in speech and audio processing for over 25 years. Its primary thrust in these R&D areas has focused on developing technology to improve the collection, handling, identification, and intelligibility of military communication signals. The National Law Enforcement and Corrections Technology Center for the Northeast (NLECTC-NE) is collocated with the AFRL Rome Research Site at the Griffiss Technology Park in upstate New York. The NLECTC-NE supports sixteen states in the northeast sector of the United States and is funded and supported by the National Institute of Justice (NIJ). Since the inception of the NLECTC-NE in 1995, the AFRL Rome Research Site has expanded the military applications of its expertise to address law enforcement and corrections requirements. AFRL/IFEC's speech and audio processing technology is unique and particularly appropriate for application to law enforcement requirements. It addresses the similar military needs for time-critical decisions and actions, operation within noisy environments, and use by uncooperative speakers in tactical, real-time applications. Audio and speech processing technology for both application domains must also often deal with short utterance communications (less than five seconds of speech) and transmission-to-transmission channel variability.
Rome Laboratory, one of the United States Air Force's four Super Laboratories, has been designated by the National Institute of Justice (NIJ) as its National Law Enforcement and Corrections Technology Center for the Northeast (NLECTC-NE). A Department of Defense leader in research and development (R&D) in speech and audio processing for over 25 years, Rome Laboratory has focused its main thrust in these R&D areas on developing technology to improve the collection, handling, identification, and intelligibility of communication signals. Rome Laboratory speech and audio technology is unique and particularly appropriate for application to law enforcement requirements because it addresses the military need for time-critical decisions and actions, operation within noisy environments, and use by uncooperative speakers in tactical, real-time applications. Speech enhancement and speaker recognition are the primary technologies discussed in this paper. Automatic language and dialect identification, automatic gisting, spoken language translation, co-channel speaker separation, and audio manipulation technologies are briefly discussed.