Artificial Intelligence (AI) technology is being applied successfully in a number of domains. Advances in low-cost, high-performance computing platforms have made AI approaches sufficiently scalable to be applied in high-volume, commercial applications. The true promise of AI in modeling human intelligence remains elusive, however. Current approaches can simulate only a small subset of the many processes that make up human cognition, yet integrating expert human decision making into AI applications would be of great benefit. In this paper, we present a pragmatic approach that can be used to capture expert human decision making within a limited domain of expertise. We propose an approach that automates the Analytic Hierarchy Process in order to capture a model of expert decision making from observational data. While this is not a general solution, it provides a workable approach for AI applications dealing with well-defined, limited domains of knowledge.
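The Analytic Hierarchy Process mentioned above derives priority weights for decision criteria from a pairwise-comparison matrix. As a minimal illustration (the matrix values and criteria below are hypothetical, not drawn from the paper), the standard AHP computation takes the principal eigenvector of the comparison matrix as the weight vector and checks Saaty's consistency ratio:

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three criteria on
# Saaty's 1-9 scale: A[i, j] = how strongly criterion i is preferred
# over criterion j. Reciprocal entries mirror the upper triangle.
A = np.array([
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 0.5, 1.0],
])

# Priority weights = principal right eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1); CR = CI / RI,
# with random index RI = 0.58 for n = 3 (Saaty). CR < 0.1 is acceptable.
n = A.shape[0]
lam_max = eigvals.real[k]
cr = ((lam_max - n) / (n - 1)) / 0.58
print("weights:", w, "consistency ratio:", cr)
```

In the automated setting the paper describes, such comparison matrices would be inferred from observed expert choices rather than elicited directly.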
Today’s warfighters operate in a highly dynamic and uncertain world, and face many competing demands. Asymmetric warfare and the new focus on small, agile forces have altered the framework by which time-critical information is digested and acted upon by decision makers. Finding and integrating decision-relevant information is increasingly difficult in data-dense environments. In this new information environment, agile data algorithms, machine learning software, and threat alert mechanisms must be developed to automatically create alerts and drive quick response. Yet these advanced technologies must be balanced with awareness of the underlying context to accurately interpret machine-processed indicators, warnings, and recommendations. One promising approach to this challenge brings together information retrieval strategies from text, video, and imagery. In this paper, we describe a technology demonstration that represents two years of tri-service research seeking to meld text and video for enhanced content awareness. The demonstration used multisource data to find an intelligence solution to a problem using a common dataset. Three technology highlights from this effort include 1) incorporation of external sources of context into imagery normalcy modeling and anomaly detection capabilities, 2) automated discovery and monitoring of targeted users from social media text, regardless of language, and 3) the concurrent use of text and imagery to characterize behavior using the concept of kinematic and text motifs to detect novel and anomalous patterns. Our demonstration provided a technology baseline for exploiting heterogeneous data sources to deliver timely and accurate synopses of data that contribute to a dynamic and comprehensive worldview.
Intelligence analysts and military decision makers are faced with an onslaught of information. From the now ubiquitous presence of intelligence, surveillance, and reconnaissance (ISR) platforms providing large volumes of sensor data, to vast amounts of open source data in the form of news reports, blog postings, or social media postings, the amount of information available to a modern decision maker is staggering. Whether tasked with leading a military campaign or providing support for a humanitarian mission, making sense of all the information available is a challenge. Due to the volume and velocity of this data, automated tools are required to help support reasoned, human decisions. In this paper we describe several automated techniques that are targeted at supporting decision making. Our approaches include modeling the kinematics of moving targets as motifs; developing normalcy models and detecting anomalies in kinematic data; automatically classifying the roles of users in social media; and modeling geospatial regions based on the behavior that takes place in them. These techniques cover a wide range of potential decision maker needs.
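One of the techniques listed above, normalcy modeling with anomaly detection over kinematic data, can be sketched in its simplest form: fit a Gaussian model to kinematic features of typical tracks and flag observations whose Mahalanobis distance from that model is large. The features (speed, heading change), the synthetic data, and the threshold below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def fit_normalcy(X):
    """Fit a Gaussian normalcy model: mean and inverse covariance."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return mu, cov_inv

def anomaly_scores(X, mu, cov_inv):
    """Squared Mahalanobis distance of each row from the normalcy model."""
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

# Synthetic "normal" kinematics: (speed in kts, heading change in rad/s).
rng = np.random.default_rng(0)
normal_tracks = rng.normal([10.0, 0.0], [1.0, 0.2], size=(500, 2))
mu, cov_inv = fit_normalcy(normal_tracks)

# Score a new track: the second point moves far outside normal behavior.
track = np.array([[10.5, 0.1], [25.0, 3.0]])
scores = anomaly_scores(track, mu, cov_inv)
```

A production system would of course use richer features and learned, context-dependent thresholds; this only shows the shape of the normalcy-model idea.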
A key enabler for Network Centric Warfare (NCW) is a sensor network that can collect and fuse vast amounts of disparate and complementary information from sensors that are geographically dispersed throughout the battlespace. This information will lead to better situation awareness so that commanders will be able to act faster and more effectively. However, these benefits are possible only if the sensor data can be fused and synthesized for distribution to the right user in the right form at the right time within the constraints of available bandwidth. In this paper we consider the problem of developing Level 1 data fusion algorithms for disparate fusion in NCW. These algorithms must be capable of operating in a fully distributed (or decentralized) manner; must be able to scale to extremely large numbers of entities; and must be able to combine many disparate types of data. To meet these needs we propose a framework that consists of three main components: an attribute-based state representation that treats an entity state as a collection of attributes, new methods or interpretations of uncertainty, and robust algorithms for distributed data fusion. We illustrate the discussion in the context of maritime domain awareness, mobile ad hoc networks, and multispectral image fusion.
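The robust distributed fusion requirement above is commonly met in decentralized settings by covariance intersection, which fuses two estimates without knowing their cross-correlation. The sketch below shows that standard rule as an assumed example of the class of algorithm discussed, not as the paper's specific method; the estimate values are hypothetical:

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, steps=51):
    """Fuse estimates (xa, Pa) and (xb, Pb) with unknown cross-correlation.

    Searches the mixing weight omega on a grid and keeps the fused
    covariance with minimum trace, a common practical criterion.
    """
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    best = None
    for omega in np.linspace(0.0, 1.0, steps):
        P = np.linalg.inv(omega * Pa_inv + (1.0 - omega) * Pb_inv)
        if best is None or np.trace(P) < best[2]:
            x = P @ (omega * Pa_inv @ xa + (1.0 - omega) * Pb_inv @ xb)
            best = (x, P, np.trace(P))
    return best[0], best[1]

# Two hypothetical local estimates of a 2D entity state.
xa, Pa = np.array([1.0, 2.0]), np.array([[4.0, 0.0], [0.0, 1.0]])
xb, Pb = np.array([1.5, 1.8]), np.array([[1.0, 0.0], [0.0, 4.0]])
xf, Pf = covariance_intersection(xa, Pa, xb, Pb)
```

Because each node only exchanges local estimates and never needs global knowledge of correlations, this style of rule scales naturally to the fully distributed, large-entity-count setting the abstract describes.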
This paper will discuss research conducted at the Naval Research Laboratory in the area of automated routing, advanced 3D displays, and novel interface techniques for interacting with those displays. This research has culminated in the development of the strike optimized mission planning module (STOMPM). The STOMPM testbed incorporates new technologies and results in the aforementioned areas to address the deficiencies in current systems and advance the state of the art in military planning systems.
Conference Committee Involvement (3)
Next-Generation Analyst VI
16 April 2018 | Orlando, Florida, United States
Next-Generation Analyst V
10 April 2017 | Anaheim, California, United States
Next-Generation Analyst IV
18 April 2016 | Baltimore, Maryland, United States