Current decision-making processes separate intelligence tasks from operations tasks. This creates a system that is reactive rather than proactive, leaving unrealized gains in the timeliness and quality of the response to a situation of interest. In this paper we present a new optimization paradigm that combines the tasking of intelligence, surveillance, and reconnaissance (ISR) assets with the tasks and needs of operational assets. Some collection assets are dedicated to one function or the other, while a third category that can perform both is also considered. We use a scenario to demonstrate the value of the merger by presenting its impact on a number of intelligence and operations measures of performance and effectiveness (MOPs/MOEs). Using this framework, mission readiness and execution assessment for a simulated humanitarian assistance/disaster relief (HADR) mission is monitored for tasks involving intelligence gathering, distribution of supplies, and repair of vital transportation routes during the relief effort. The results demonstrate a significant improvement in measures of performance when intelligence tasking takes operational objectives into consideration.
Machine reasoning and intelligence is usually performed in a vacuum, without consulting the ultimate decision-maker. This late consideration of the human cognitive process causes major problems in the use of automated systems to provide reliable and actionable information that users can trust and depend on to select the best Course of Action (COA). On the other hand, if automated systems are designed exclusively around human cognition, there is a danger of developing systems that do not push the boundaries of technology and exist mainly for the comfort of selected subject matter experts (SMEs). Our approach to combining human and machine processes (CHAMP) is based on the notion of developing optimal strategies for where, when, how, and which human intelligence should be injected into a machine reasoning and intelligence process. This combination is based on the criteria of improving the quality of the automated process's output while maintaining the computational efficiency required for a COA to be actuated in a timely fashion. This research addresses the following problem areas:
Providing consistency within a mission: Injection of human reasoning and intelligence within the reliability and temporal needs of a mission to attain situational awareness, impact assessment, and COA development.
Supporting the incorporation of data that is uncertain, incomplete, imprecise and contradictory (UIIC): Development of mathematical models to suggest the insertion of a cognitive process within a machine reasoning and intelligent system so as to minimize UIIC concerns.
Developing systems that include humans in the loop whose performance can be analyzed and understood to provide feedback to the sensors.
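As an illustration of the injection decision described above (not taken from the paper; all names and thresholds are hypothetical), the core trade-off can be sketched as routing a machine-generated assessment to a human analyst only when machine confidence is low and the mission timeline can absorb the review latency:

```python
# Illustrative sketch only: when to inject human reasoning into an automated
# pipeline, trading output quality against the temporal needs of the mission.
# The threshold and cost model are hypothetical, not CHAMP's actual criteria.

def should_inject_human(confidence, human_review_cost, time_remaining,
                        confidence_threshold=0.8):
    """Route an automated assessment to a human analyst only when the
    machine's confidence is low AND the mission timeline can absorb the
    extra review latency before the COA must be actuated."""
    if confidence >= confidence_threshold:
        return False          # machine output is trusted as-is
    return human_review_cost <= time_remaining

# Low-confidence result with time to spare -> ask the human
assert should_inject_human(0.55, human_review_cost=10, time_remaining=60)
# Low-confidence result but no time budget -> actuate the machine COA
assert not should_inject_human(0.55, human_review_cost=10, time_remaining=5)
```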
Synchronization of Intelligence, Surveillance, and Reconnaissance (ISR) activities to maximize the utilization of limited resources (in both quantity and capability) has become critically important to military forces. In centralized frameworks, a single node is responsible for determining and disseminating decisions (e.g., task assignments) to all nodes in the network; this requires a robust and reliable communication network. In decentralized frameworks, information processing and decision making occur at different nodes in the network, reducing the communication requirements. This research studies the degradation of solution quality (i.e., potential information gain) as a centralized system synchronizing ISR activities moves to a decentralized framework. The mathematical programming model of previous work1 has been extended for multi-perspective optimization, in which each collection asset develops its own decisions to support mission objectives based only on its perspective of the environment. Different communication strategies are considered. Collection assets are part of the same communication network (i.e., a connected component) if: (1) a fully connected network exists between the assets in the connected component, or (2) a path (consisting of one or more communication links) exists between every pair of assets in the connected component. Multiple connected components may exist among the available collection assets supporting a mission. Information is exchanged only when assets are part of the same network. The potential location of assets that are not part of a connected component can be considered (with a suitable decay factor as a function of time) as part of the optimization model.
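The connected-component communication model above can be sketched in a few lines (asset identifiers and links are illustrative, not from the paper): assets exchange information only when a path of communication links places them in the same component.

```python
# Sketch of the communication model: group collection assets into connected
# components via union-find; information flows only within a component.

def connected_components(assets, links):
    """Return the communication networks (connected components) implied
    by a set of pairwise communication links."""
    parent = {a: a for a in assets}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for a, b in links:
        parent[find(a)] = find(b)           # union the two networks

    groups = {}
    for a in assets:
        groups.setdefault(find(a), set()).add(a)
    return list(groups.values())

assets = ["UAV1", "UAV2", "UGV1", "UAV3"]
links = [("UAV1", "UAV2"), ("UAV2", "UGV1")]   # UAV3 has no link
comps = connected_components(assets, links)
# Two networks result: {UAV1, UAV2, UGV1} and the isolated {UAV3};
# UAV3's potential location would be decayed over time in the model.
```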
Roles and capabilities of analysts are changing as the volume of data grows. Open-source content is abundant, and users are becoming increasingly dependent on automated capabilities to sift and correlate information. Entity resolution is one such capability: an algorithm that links entities using an arbitrary number of criteria (e.g., identifiers, attributes) from multiple sources. This paper demonstrates a prototype capability that identifies enriched attributes of individuals stored across multiple sources. Here, the system first completes its processing on a cloud-computing cluster. Then, in a data-explorer role, the analyst evaluates whether the automated results are correct and whether attribute enrichment improves knowledge discovery.
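A minimal sketch of the entity-resolution idea described above (field names and matching rule are hypothetical, not the prototype's actual logic): records from different sources are linked when they share an identifier, and the matched records' attributes are merged into one enriched profile.

```python
# Toy entity resolution: link records that share any identifier and merge
# their attributes into a single enriched profile. Real systems use many
# more criteria (fuzzy names, co-occurrence, etc.); this shows only the
# linking-and-enrichment skeleton.

def resolve(records):
    profiles = []
    for rec in records:
        ids = set(rec.get("identifiers", []))
        # Find an existing profile sharing at least one identifier.
        match = next((p for p in profiles if p["identifiers"] & ids), None)
        if match is None:
            match = {"identifiers": set(), "attributes": {}}
            profiles.append(match)
        match["identifiers"] |= ids
        match["attributes"].update(rec.get("attributes", {}))
    return profiles

records = [
    {"identifiers": ["email:jdoe@x.org"], "attributes": {"name": "J. Doe"}},
    {"identifiers": ["email:jdoe@x.org", "phone:555-0100"],
     "attributes": {"city": "Rome"}},
    {"identifiers": ["phone:555-0199"], "attributes": {"name": "A. Smith"}},
]
profiles = resolve(records)
# The first two records merge via the shared e-mail identifier, yielding an
# enriched profile with both name and city; the third stays separate.
```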
Software tools for Social Network Analysis (SNA) are being developed that support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database, such social networks are amenable to analysis by SNA software. This analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT), which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicate that GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
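The following sketch is not the TruST algorithm itself, only an illustration of the general "truncated search tree" heuristic it names: grow partial pattern-to-network node mappings one pattern node at a time, score each partial mapping by the pattern edges it already covers, and truncate the tree by keeping only the best few candidates at each level.

```python
# Illustrative beam-style subgraph matching (NOT the published TruST
# algorithm): extend partial mappings level by level, keeping only `beam`
# candidates per level so the search tree stays small on large networks.

def truncated_match(pattern_nodes, pattern_edges, net_nodes, net_edges, beam=5):
    partial = [{}]                              # partial mappings, one per branch
    for p in pattern_nodes:
        nxt = []
        for m in partial:
            for n in net_nodes:
                if n in m.values():             # one-to-one mapping
                    continue
                cand = {**m, p: n}
                # Score = pattern edges already realized in the network.
                score = sum((cand[a], cand[b]) in net_edges
                            for a, b in pattern_edges
                            if a in cand and b in cand)
                nxt.append((score, cand))
        nxt.sort(key=lambda t: -t[0])           # stable: best branches first
        partial = [c for _, c in nxt[:beam]]    # truncate the search tree
    return partial

matches = truncated_match(["A", "B"], [("A", "B")],
                          [1, 2, 3], {(1, 2)}, beam=5)
# The top-ranked mapping covers the single pattern edge.
```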
One of the main technical challenges facing intelligence analysts today is effectively determining information gaps from huge amounts of collected data. Moreover, getting the right information to and from the right person (e.g., an analyst, a warfighter on the edge) at the right time in a distributed environment has remained elusive for our military forces. Synchronization of Intelligence, Surveillance, and Reconnaissance (ISR) activities to maximize the efficient utilization of limited resources (in both quantity and capability) has become critically important to increasing the accuracy and timeliness of overall information gain. Given this reality, we are interested in quantifying the degradation of solution quality (i.e., information gain) as a centralized system synchronizing ISR activities (from information gap identification to information collection and dissemination) moves to a more decentralized framework. This evaluation extends the concept of the price of anarchy, a measure of the inefficiency of a system when agents maximize decisions without coordination, by considering different levels of decentralization. Our initial research representing the potential information gain in geospatially and temporally discretized spaces is presented. This potential information gain map can represent a consolidation of Intelligence Preparation of the Battlefield products as input to automated ISR synchronization tools. Using the coordination of unmanned vehicles (UxVs) as an example, we developed a mathematical programming model for multi-perspective optimization in which each UxV develops its own flight plan to support mission objectives based only on its perspective of the environment (i.e., the potential information gain map). Information is exchanged only when UxVs are part of the same communication network.
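As a toy illustration of a geospatially and temporally discretized potential-information-gain map (grid size, values, and the regrowth rule are hypothetical, not the paper's model): each cell holds the gain a collection asset would obtain by observing it; observing a cell resets its gain, which then accrues again over time as the picture goes stale.

```python
# Toy discretized information-gain map: harvest observed cells each time
# step and let unobserved cells grow stale (their potential gain rises).

GROWTH = 0.1   # hypothetical gain recovered per time step in unobserved cells

def step(gain_map, observed_cells):
    """Advance one time step: collect gain from observed cells, age the rest."""
    collected = 0.0
    new_map = {}
    for cell, gain in gain_map.items():
        if cell in observed_cells:
            collected += gain
            new_map[cell] = 0.0                       # freshly observed
        else:
            new_map[cell] = min(1.0, gain + GROWTH)   # staleness accrues
    return new_map, collected

gain_map = {(0, 0): 0.9, (0, 1): 0.4, (1, 0): 0.7}
gain_map, got = step(gain_map, observed_cells={(0, 0)})
# Observing (0, 0) collects its 0.9 of potential gain and zeroes the cell,
# while the unobserved cells rise by GROWTH.
```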
There has been significant progress in recognizing the value of Intelligence, Surveillance, and Reconnaissance (ISR) activities supporting Situational Awareness and Command and Control functions during the past several decades. We consider ISR operations to be proactive (discovering activities or areas of interest), active (activities performed for a particular task that flows down from a hierarchical process), or reactive (critical information gathering due to unexpected events). ISR synchronization includes the analysis and prioritization of information requirements, the identification of intelligence gaps, and the recommendation of available resources to gather information of interest, for all types of ISR operations. It has become critically important to perform synchronized ISR activities to maximize the efficient utilization of limited resources (in both quantity and capability) and, simultaneously, to increase the accuracy and timeliness of the information gain. A study evaluating the existing technologies and processes supporting ISR activities is performed, suggesting a rigorous system optimization approach to the ISR synchronization process; unfortunately, this approach is not used today. The study identifies gaps between the current ISR synchronization process and the proposed system optimization approach in the areas of communication and collaboration tools and advanced decision aids (analytics), and solutions are recommended to help close these gaps.
Understanding the structure and dynamics of networks is of vital importance to winning the global war on terror. To fully comprehend the network environment, analysts must be able to investigate the interconnected relationships of many diverse network types simultaneously as they evolve both spatially and temporally. To remove from the analyst the burden of making mental correlations of observations and conclusions from multiple domains, we introduce the Dynamic Graph Analytic Framework (DYGRAF). DYGRAF provides the infrastructure that facilitates a layered multi-modal network analysis (LMMNA) approach, enabling analysts to assemble previously disconnected, yet related, networks into a common battle space picture. In doing so, DYGRAF provides the analyst with timely situation awareness, understanding and anticipation of threats, and support for effective decision-making in diverse environments.
The Information Fusion Engine for Real-time Decision Making (INFERD) is a tool developed to supplement current graph matching techniques in information fusion models. Based on sensory data and a priori models, INFERD dynamically generates, evolves, and evaluates hypotheses on the current state of the environment. The a priori models developed are hierarchical in nature, lending them to a multi-level information fusion process whose primary output provides situational awareness of the environment of interest in the context of the models running. In this paper we examine INFERD's multi-level fusion approach, provide insight into inherent problems such as fragmentation, and describe the research being undertaken to mitigate those deficiencies. Because of the large variance of data in disparate environments, the awareness of situations in those environments can be drastically different. To accommodate this, the INFERD framework provides support for plug-and-play fusion modules that can be developed specifically for domains of interest. However, because the models running in INFERD are graph based, some default measurements can be provided, and these are discussed in the paper. Among them are a Depth measurement to determine how much danger is presented by the action taking place, a Breadth measurement to gauge the scale of an attack that is currently happening, and a Reliability measurement to tell the user the credibility of a particular hypothesis. All of these results are demonstrated in the cyber domain, which recent research has shown to be well defined and bounded, so that new models and algorithms can be developed and evaluated.
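The three default measurements named above can be sketched as simple normalized scores over a graph-based attack hypothesis. The formulas below are illustrative stand-ins, not INFERD's actual definitions:

```python
# Hedged sketch of Depth, Breadth, and Reliability over an attack
# hypothesis. Each is normalized to [0, 1]; the definitions here are
# hypothetical simplifications of the measurements INFERD provides.

def depth(matched_stages, total_stages):
    """How far along the modeled attack the observed actions have progressed
    (a proxy for how much danger the activity presents)."""
    return matched_stages / total_stages

def breadth(hosts_touched, hosts_total):
    """How much of the environment the attack touches (scale of the attack)."""
    return hosts_touched / hosts_total

def reliability(alert_confidences):
    """Credibility of the hypothesis from its supporting alert confidences."""
    return sum(alert_confidences) / len(alert_confidences)

# A hypothesis matching 3 of 5 modeled stages across 4 of 20 hosts,
# supported by three alerts of varying confidence:
d = depth(3, 5)                     # 0.6 -- over halfway to the modeled goal
b = breadth(4, 20)                  # 0.2 -- still a narrow footprint
r = reliability([0.9, 0.7, 0.8])    # mean supporting-alert confidence
```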
Current practice for combating cyber attacks typically uses Intrusion Detection Sensors (IDSs) to passively detect and block multi-stage attacks. This work leverages Level-2 fusion that correlates IDS alerts belonging to the same attacker, and proposes a threat assessment algorithm to predict potential future attacker actions. The algorithm, TANDI, reduces the problem complexity by separating the models of the attacker's capability and opportunity, and fuses the two to determine the attacker's intent. Unlike traditional Bayesian-based approaches, which require assigning a large number of edge probabilities, the proposed Level-3 fusion procedure uses only four parameters. TANDI has been implemented and tested with randomly created attack sequences. The results demonstrate that TANDI predicts future attack actions accurately as long as the attack is not part of a coordinated attack and contains no insider threats. In the presence of abnormal attack events, TANDI alerts the network analyst for further analysis. This attempt to evaluate a threat assessment algorithm via simulation is the first in the literature, and should open a new avenue in the area of high-level fusion.
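The separation TANDI makes can be illustrated with a toy fusion step (action names and scores are hypothetical; the real algorithm's models and its four parameters are not reproduced here): score the attacker's capability and opportunity for each candidate next action independently, then fuse the two to rank likely intent, rather than assigning edge probabilities across a full Bayesian attack graph.

```python
# Illustrative capability/opportunity fusion for threat assessment.
# Capability: what the attacker has shown they can do.
# Opportunity: what the current network state exposes.
# Intent is estimated by fusing the two per candidate action.

def predict_next(actions, capability, opportunity):
    """Rank candidate next actions by fused capability x opportunity score
    and return the most likely one."""
    scored = [(capability[a] * opportunity[a], a) for a in actions]
    return max(scored)[1]

capability  = {"scan": 0.9, "escalate": 0.6, "exfiltrate": 0.2}
opportunity = {"scan": 0.3, "escalate": 0.8, "exfiltrate": 0.9}
# Fused scores: escalate 0.48 beats scan 0.27 and exfiltrate 0.18.
next_action = predict_next(list(capability), capability, opportunity)
```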
As technology continues to advance, services and capabilities become computerized, and an ever-increasing amount of business is conducted electronically, the threat of cyber attacks is compounded by the complexity of such attacks and the criticality of the information that must be secured. A new age of virtual warfare has dawned in which seconds can make the difference between protecting vital information and services and a malicious attacker attaining their goal. In this paper we present a novel approach to the real-time detection of multistage coordinated cyber attacks, along with the promising initial testing results we have obtained. We introduce INFERD (INformation Fusion Engine for Real-time Decision-making), an adaptable information fusion engine that performs fusion at levels zero, one, and two to provide real-time situational assessment, and describe its application to the cyber domain in the ECCARS (Event Correlation for Cyber Attack Recognition System) system. The advantages of our approach are fourfold: (1) the complexity of the attacks we consider, (2) the level of abstraction at which the analyst interacts with the attack scenarios, (3) the speed at which the information fusion is performed and presented, and (4) our disregard for ad hoc rules and a priori parameters.