This paper describes characteristics of information flow on social channels, as a function of content type and relations
among individual sources, distilled from analysis of Twitter data as well as human subject survey results. The working
hypothesis is that individuals who propagate content on social media act (e.g., decide whether to relay information or
not) in accordance with their understanding of the content, as well as their own beliefs and trust relations. Hence, the
resulting aggregate content propagation pattern encodes the collective content interpretation of the underlying group, as
well as their relations. Analysis algorithms are described to recover such relations from the observed propagation
patterns, as well as to improve our understanding of the content itself in a language-agnostic manner, simply from its
propagation characteristics. An example is to measure the degree of community polarization around contentious topics,
identify the factions involved, and recognize their individual views on issues. The analysis is independent of the
language of discourse itself, making it valuable for multilingual media, where the number of languages used may render
language-specific analysis less scalable.
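To make the polarization analysis concrete, the sketch below assigns users to factions by propagating seed labels over a retweet graph and scores polarization as the fraction of retweet edges that stay inside one faction. The graph construction, seed users, and scoring rule are illustrative assumptions for exposition, not the paper's actual algorithm.

```python
from collections import defaultdict

def factions_by_label_propagation(edges, seeds, rounds=10):
    """Assign each user to a faction by propagating seed labels
    over the (undirected, for simplicity) retweet graph."""
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    label = dict(seeds)  # hypothetical seed users with known faction
    for _ in range(rounds):
        for node in nbrs:
            if node in seeds:
                continue  # seed labels stay fixed
            votes = defaultdict(int)
            for n in nbrs[node]:
                if n in label:
                    votes[label[n]] += 1
            if votes:
                label[node] = max(votes, key=votes.get)
    return label

def polarization(edges, label):
    """Fraction of retweet edges that stay within one faction;
    values near 1.0 suggest an echo-chamber structure."""
    within = sum(1 for u, v in edges if label.get(u) == label.get(v))
    return within / len(edges)
```

On a toy graph of two retweet cliques joined by a single bridge edge, the score is close to 1, flagging two weakly connected factions.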
Over the last 70 years there has been a major shift in the threats to global peace. While the 1950s and 1960s were characterised by the Cold War and the arms race, many security threats are now characterised by group behaviours that are disruptive, subversive or extreme. In many cases such groups are loosely and chaotically organised, but their ideals are sociologically and psychologically embedded in group members to the extent that the group represents a major threat. As a result, insights into how human groups form, emerge and change are critical, but insights into the mutability of human groups remain surprisingly limited. In this paper we argue that important clues to understanding the mutability of groups come from examining the evolutionary origins of human behaviour. In particular, groups have been instrumental in human evolution, used as a basis to derive survival advantage, leaving all humans with a basic disposition to navigate the world through social networking and managing their presence in a group. From this analysis we present five critical features of social groups that govern mutability, relating to social norms, individual standing, status rivalry, ingroup bias and cooperation. We argue that understanding how these five dimensions interact and evolve can provide new insights into group mutation and evolution. Importantly, these features lend themselves to digital modelling. Therefore, computational simulation can support generative exploration of groups and the discovery of latent factors, relevant to both internal group and external group modelling. Finally, we consider the role of online social media in relation to understanding the mutability of groups. Social media can play an active role in supporting collective behaviour, and analysis of social media in the context of the five dimensions of group mutability provides a fresh basis to interpret the forces affecting groups.
Today’s battlefields are shifting to “denied areas”, where the use of U.S. military air and ground assets is
limited. To succeed, U.S. intelligence analysts increasingly rely on available open-source intelligence
(OSINT) which is fraught with inconsistencies, biased reporting and fake news. Analysts need automated
tools for retrieval of information from OSINT sources, and these solutions must identify and resolve
conflicting and deceptive information.
In this paper, we present a misinformation detection model (MDM) which converts text to attributed
knowledge graphs and runs graph-based analytics to identify misinformation. At the core of our solution is
identification of knowledge conflicts in the fused multi-source knowledge graph, and semi-supervised
learning to compute locally consistent reliability and credibility scores for the documents and sources,
respectively. We present validation of the proposed method using an open-source dataset constructed from
online investigations of the MH17 downing in Eastern Ukraine.
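The semi-supervised reliability/credibility computation can be pictured with a classic truth-discovery fixed point: a claim's reliability aggregates the credibility of the sources asserting it, normalized over conflicting alternatives, and a source's credibility is the mean reliability of its claims. The `fact=value` conflict encoding and the update rule below are simplifying assumptions, not the MDM's actual scoring.

```python
def credibility_scores(assertions, iters=20):
    """Iteratively score sources and claims. `assertions` is a list of
    (source, claim) pairs; claims about the same fact share the prefix
    before '=', e.g. 'launcher=A' conflicts with 'launcher=B'
    (the encoding is illustrative)."""
    sources = {s for s, _ in assertions}
    claims = {c for _, c in assertions}
    cred = {s: 0.5 for s in sources}
    rel = {}
    for _ in range(iters):
        # Claim reliability: total credibility of its backers...
        for c in claims:
            rel[c] = sum(cred[s] for s, cc in assertions if cc == c)
        # ...normalized over the conflicting alternatives for each fact.
        by_fact = {}
        for c in claims:
            by_fact.setdefault(c.split("=")[0], []).append(c)
        for alts in by_fact.values():
            total = sum(rel[c] for c in alts)
            for c in alts:
                rel[c] /= total
        # Source credibility: mean reliability of the claims it asserts.
        for s in sources:
            mine = [rel[c] for ss, c in assertions if ss == s]
            cred[s] = sum(mine) / len(mine)
    return cred, rel
```

Sources that repeatedly back the majority account accumulate credibility, and the minority alternative's reliability decays, which is the qualitative behavior the abstract describes.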
The Human-Assisted Machine Information Exploitation (HAMIE) investigation utilizes large-scale online data
collection for developing models of information-based problem solving (IBPS) behavior in a simulated time-critical
operational environment. These types of environments are characteristic of intelligence workflow processes conducted
during human-geo-political unrest situations when the ability to make the best decision at the right time ensures strategic
overmatch. The project takes a systems approach to Human Information Interaction (HII) by harnessing the expertise of
crowds to model the interaction of the information consumer and the information required to solve a problem at different
levels of system restrictiveness and decisional guidance. The design variables derived from Decision Support Systems
(DSS) research represent the experimental conditions in this online single-player against-the-clock game where the
player, acting in the role of an intelligence analyst, is tasked with a Commander’s Critical Information Requirement
(CCIR) in an information overload scenario. The player performs a sequence of three information processing tasks
(annotation, relation identification, and link diagram formation) with the assistance of ‘HAMIE the robot’ who offers
varying levels of information understanding depending on question complexity. We provide preliminary results from a
pilot study conducted with Amazon Mechanical Turk (AMT) participants on the Volunteer Science scientific research platform.
Recent trends in physics-based and human-derived information fusion (PHIF) have amplified the capabilities
of analysts; however, with big data opportunities comes a need for open architecture designs, methods of distributed
team collaboration, and visualizations. In this paper, we explore recent trends in information fusion to support user
interaction and machine analytics. Challenging scenarios requiring PHIF include combining physics-based video data
with human-derived text data for enhanced simultaneous tracking and identification. A driving effort would be to
provide analysts with applications, tools, and interfaces that afford effective and affordable solutions for timely decision
making. Fusion at scale should be developed to allow analysts to access data, call analytics routines, enter solutions,
update models, and store results for distributed decision making.
Wargaming is a process of thinking through and visualizing events that could occur during a possible course of action. Over the past 200 years, wargaming has matured into a set of formalized processes. One area of growing interest is the application of agent-based modeling. Agent-based modeling and its supporting technologies have the potential to introduce a third-generation wargaming capability to the Army, creating a positive overmatch decision-making capability. In its simplest form, agent-based modeling is a computational technique that helps the modeler understand and simulate how the "whole of a system" responds to change over time. It provides a decentralized method of looking at situations where individual agents are instantiated within an environment, interact with each other, and are empowered to make their own decisions. However, this technology is not without its own risks and limitations. This paper explores a technology roadmap, identifying research topics that could realize agent-based modeling within a tactical wargaming context.
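The decentralized decision-making described above can be sketched in a few lines: each agent independently decides, per tick, whether to advance toward an objective. The grid movement rule and the `aggressiveness` attribute are invented for this example and are not drawn from any Army wargaming model.

```python
import random

def step(agents, objective, rng):
    """One simulation tick: each agent independently decides to move
    one cell toward the objective or hold position (a stand-in rule)."""
    for a in agents:
        if rng.random() < a["aggressiveness"]:
            dx = (objective[0] > a["pos"][0]) - (objective[0] < a["pos"][0])
            dy = (objective[1] > a["pos"][1]) - (objective[1] < a["pos"][1])
            a["pos"] = (a["pos"][0] + dx, a["pos"][1] + dy)

def run(agents, objective, ticks=50, seed=0):
    """Advance the whole system for a fixed number of ticks; seeding
    makes a wargame excursion repeatable."""
    rng = random.Random(seed)
    for _ in range(ticks):
        step(agents, objective, rng)
    return agents
```

The point of the sketch is structural: behavior of the "whole system" emerges from per-agent decisions rather than a centralized script, which is what makes excursion analysis over many seeds useful.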
While the term Internet of Things (IoT) has been coined relatively recently, it has deep roots in multiple other areas of
research including cyber-physical systems, pervasive and ubiquitous computing, embedded systems, mobile ad-hoc
networks, wireless sensor networks, cellular networks, wearable computing, cloud computing, big data analytics, and
intelligent agents. Together, as the Internet of Things, these technologies have created a landscape of diverse, heterogeneous
capabilities and protocols that will require adaptive controls to effect linkages and changes that are useful to end users. In
the context of military applications, it will be necessary to integrate disparate IoT devices into a common platform that
necessarily must interoperate with proprietary military protocols, data structures, and systems. In this environment, IoT
devices and data will not be homogeneous and provenance-controlled (i.e. single vendor/source/supplier owned). This
paper presents a discussion of the challenges of integrating varied IoT devices and related software in a military
environment. A review of contemporary commercial IoT protocols is given and as a practical example, a middleware
implementation is proffered that provides transparent interoperability through a proactive message dissemination system.
The implementation is described as a framework through which military applications can integrate and utilize
commercial IoT in conjunction with existing military sensor networks and command and control (C2) systems.
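The middleware pattern the abstract names, proactive message dissemination with transparent interoperability, can be sketched as a minimal topic-based publish/subscribe broker plus a device adapter that translates native payloads into a common schema. The broker API, topic names, and adapter below are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

class Broker:
    """Minimal topic-based publish/subscribe broker: messages are
    pushed to subscribers as they arrive rather than polled."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers[topic]:
            cb(message)

# An adapter translates a device-native payload into the common schema,
# so subscribers never see vendor-specific formats (hypothetical schema).
def celsius_adapter(raw):
    return {"kind": "temperature", "celsius": raw}
```

A C2 consumer subscribes once to a topic and receives every matching reading, regardless of which vendor's device produced it; only the adapter is vendor-specific.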
Intelligence Analysis remains a manual process despite trends toward autonomy in information processing. Analysts need agile decision-support tools that can adapt to the evolving information needs of the mission, allowing the analyst to pose novel analytic questions. Our research enables analysts to provide only a constrained English specification of what the intelligence product should be. Using HTN planning, the autonomy discovers, decides, and generates a workflow of algorithms to create the intelligence product. Therefore, the analyst can quickly and naturally communicate to the autonomy what information product is needed, rather than how to create it.
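The core of HTN planning is recursive task decomposition: compound tasks expand into ordered subtasks until only primitive, directly executable algorithms remain, yielding a workflow. The task names and method table below are hypothetical; a real planner would also handle preconditions and alternative methods.

```python
def htn_plan(task, methods, primitives):
    """Decompose a compound task into an ordered list of primitive
    algorithm invocations. `methods` maps a compound task to its
    ordered subtasks; `primitives` are directly executable."""
    if task in primitives:
        return [task]
    plan = []
    for sub in methods[task]:
        plan.extend(htn_plan(sub, methods, primitives))
    return plan
```

Given a product specification mapped to a top-level task, the planner emits the concrete algorithm sequence, which is exactly the "what, not how" division of labor the abstract describes.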
Virtual and mixed reality technology has advanced tremendously over the past several years. This nascent medium has the potential to transform how people communicate over distance, train for unfamiliar tasks, operate in challenging environments, and how they visualize, interact, and make decisions based on complex data. At the same time, the marketplace has experienced a proliferation of network-connected devices and generalized sensors that are becoming increasingly accessible and ubiquitous. As the "Internet of Things" expands to encompass a predicted 50 billion connected devices by 2020, the volume and complexity of information generated in pervasive and virtualized environments will continue to grow exponentially. The convergence of these trends demands a theoretically grounded research agenda that can address emerging challenges for human-information interaction (HII). Virtual and mixed reality environments can provide controlled settings where HII phenomena can be observed and measured, new theories developed, and novel algorithms and interaction techniques evaluated. In this paper, we describe the intersection of pervasive computing with virtual and mixed reality, identify current research gaps and opportunities to advance the fundamental understanding of HII, and discuss implications for the design and development of cyber-human systems for both military and civilian use.
Many defense problems are time-dominant: attacks progress at speeds that outpace human-centric systems designed for
monitoring and response. Despite this shortcoming, these well-honed and ostensibly reliable systems pervade most
domains, including cyberspace. The argument that often prevails when considering the automation of defense is that
while technological systems are suitable for simple, well-defined tasks, only humans possess sufficiently nuanced
understanding of problems to act appropriately under complicated circumstances. While this perspective is founded in
verifiable truths, it does not account for a middle ground in which human-managed technological capabilities extend
well into the territory of complex reasoning, thereby automating more nuanced sense-making and dramatically
increasing the speed at which it can be applied. Snort and platforms like it enable humans to build, refine, and deploy
sense-making tools for network defense. Shortcomings of these platforms include a reliance on rule-based logic, which
confounds analyst knowledge of how bad actors behave with the means by which bad behaviors can be detected, and a
lack of feedback-informed automation of sensor deployment. We propose an approach in which human-specified
computational models hypothesize bad behaviors independent of indicators and then allocate sensors to estimate and
forecast the state of an intrusion. State estimates and forecasts inform the proactive deployment of additional sensors
and detection logic, thereby closing the sense-making loop. All the while, humans are on the loop, rather than in it,
permitting nuanced management of fast-acting automated measurement, detection, and inference engines. This paper
motivates and conceptualizes analytics to facilitate this human-machine partnership.
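The state estimation and forecasting step can be illustrated with the simplest possible machinery: a belief distribution over intrusion stages propagated forward through a stage-transition model. The three stages and the transition probabilities below are invented for the example; the paper's analytics are more general.

```python
def forecast(belief, transition, steps):
    """Propagate a belief over intrusion stages forward in time.
    belief[i] is P(current stage = i); transition[i][j] is
    P(stage j at the next step | stage i now)."""
    n = len(belief)
    for _ in range(steps):
        belief = [sum(belief[i] * transition[i][j] for i in range(n))
                  for j in range(n)]
    return belief
```

With stages (reconnaissance, foothold, exfiltration), a forecast that piles probability onto later stages is what would trigger the proactive deployment of additional sensors the abstract describes.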
One of the areas where augmented reality will have an impact is in the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving, rotating, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles in collecting aerial imagery and in generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.
Deep Learning has proven to be an effective method for making highly accurate predictions from complex data sources. Convolutional neural networks continue to dominate image classification problems, and recursive neural networks have proven their utility in caption generation and language translation. While these approaches are powerful, they do not offer an explanation for how the output is generated. Without understanding how deep learning arrives at a solution, there is no guarantee that these networks will transition from controlled laboratory environments to fieldable systems. This paper presents an approach for incorporating explainable, rule-based methodology into neural networks by embedding fuzzy inference systems into deep learning networks.
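The fuzzy inference building block can be shown in isolation: Gaussian membership functions give each rule a firing strength, and the crisp output is the strength-weighted mean of rule outputs (zero-order Takagi-Sugeno style). Embedded in a network, the centers, widths, and outputs become trainable parameters; the values below are illustrative only.

```python
import math

def gaussian_membership(x, center, width):
    """Degree to which input x belongs to a fuzzy set."""
    return math.exp(-((x - center) / width) ** 2)

def fuzzy_inference(x, rules):
    """Zero-order Takagi-Sugeno inference: each rule is a tuple
    (center, width, output); the crisp result is the firing-strength-
    weighted mean of rule outputs. Assumes at least one rule fires
    with non-negligible strength."""
    strengths = [gaussian_membership(x, c, w) for c, w, _ in rules]
    total = sum(strengths)
    return sum(s * out for s, (_, _, out) in zip(strengths, rules)) / total
```

Because every stage is a smooth function, the same structure is differentiable, which is what allows a fuzzy rule layer to be trained inside a deep network while each rule remains human-readable.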
As globalization affects most aspects of modern life, challenges of quick and flexible data sharing apply to many
different domains. To protect a nation’s security for example, one has to look well beyond borders and understand
economic, ecological, cultural, and historical influences. Most of the time, information is produced and stored
digitally, and one of the biggest challenges is to retrieve relevant, readable information applicable to a specific problem
from a large data stock at the right time.
These challenges to enable data sharing across national, organizational and systems borders are known to other domains
(e.g., ecology or medicine) as well. Solutions such as domain-specific standards have been developed for these problems. The
question is: what can the different domains learn from each other, and do we have solutions when we need to interlink the
information produced in these domains?
A known problem is to make civil security data available to the military domain and vice versa in collaborative
operations. But what happens if an environmental crisis leads to the need to quickly cooperate with civil or military
security in order to save lives? How can we achieve interoperability in such complex scenarios?
The paper introduces an approach to adapt standards from one domain to another and outlines problems that have to be
overcome and limitations that may apply.
Scientific and Technical (S and T) intelligence analysts consume huge amounts of data to understand how scientific
progress and engineering efforts affect current and future military capabilities. One of the most important types of
information S and T analysts exploit is the quantities discussed in their source material. Frequencies, ranges, size, weight,
power, and numerous other properties and measurements describing the performance characteristics of systems and the
engineering constraints that define them must be culled from source documents before quantified analysis can begin.
Automating the process of finding and extracting the relevant quantities from a wide range of S and T documents is
difficult because information about quantities and their units is often contained in unstructured text with ad hoc
conventions used to convey their meaning. Currently, even a simple task, such as searching for documents discussing RF
frequencies in a band of interest, is a labor-intensive and error-prone process. This research addresses the challenges
facing development of a document processing capability that extracts quantities and units from S and T data, and how
Natural Language Processing algorithms can be used to overcome these challenges.
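A first approximation of the extraction task can be sketched with a pattern that pairs a number with a unit token and a table that normalizes SI prefixes, here shown for frequencies. The unit list, pattern, and scale table are deliberately tiny assumptions; real S and T text needs far broader coverage (ranges, tolerances, unit variants) than this sketch handles.

```python
import re

# Illustrative pattern: a number followed by one of a few unit tokens.
QUANTITY = re.compile(
    r"(?P<value>\d+(?:\.\d+)?)\s*(?P<unit>[kMG]?Hz|kg|dB|W|m)\b")

# SI prefix normalization for frequencies (illustrative subset).
SCALE = {"Hz": 1.0, "kHz": 1e3, "MHz": 1e6, "GHz": 1e9}

def extract_quantities(text):
    """Return (value, unit) pairs found in free text."""
    return [(float(m.group("value")), m.group("unit"))
            for m in QUANTITY.finditer(text)]

def to_hz(value, unit):
    """Normalize a frequency to Hz so band searches compare like units."""
    return value * SCALE[unit]
```

Normalizing to base units is what makes the band-of-interest search in the abstract tractable: "2.4 GHz" and "2400 MHz" become the same number before comparison.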
This paper describes some patterns for information security problems that consistently emerge
among traditional enterprise networks and applications, both with respect to cyber threats and data
sensitivity. We draw upon cases from qualitative studies and interviews of system developers, network
operators, and certifiers of military applications. Specifically, the problems discussed involve sensitivity of
data aggregates, training efficacy, and security decision support in the human machine interface. While
proven techniques can address many enterprise security challenges, we provide additional
recommendations on how to further improve overall security posture, and suggest additional research
thrusts to address areas where known gaps remain.
Creating entity network graphs is a manual, time consuming process for an intelligence analyst. Beyond the traditional
big data problems of information overload, individuals are often referred to by multiple names and shifting titles as they
advance in their organizations over time which quickly makes simple string or phonetic alignment methods for entities
insufficient. Conversely, automated methods for relationship extraction and entity disambiguation typically produce
questionable results with no way for users to vet results, correct mistakes or influence the algorithm’s future results.
We present an entity disambiguation tool, DRADIS, which aims to bridge the gap between human-centric and
machine-centric methods. DRADIS automatically extracts entities from multi-source datasets and models them as a complex set
of attributes and relationships. Entities are disambiguated across the corpus using a hierarchical model executed in
Spark, allowing it to scale to operational-sized data. Resolution results are presented to the analyst, complete with
sourcing information for each mention and relationship allowing analysts to quickly vet the correctness of results as well
as correct mistakes. Corrected results are used by the system to refine the underlying model allowing analysts to
optimize the general model to better deal with their operational data. Providing analysts with the ability to validate and
correct the model to produce a system they can trust enables them to better focus their time on producing higher-quality analysis.
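One way to picture the attribute-and-relationship entity model is a weighted set-similarity score between candidate entity records: each attribute contributes a Jaccard overlap, weighted by how discriminative it is. The attribute names and weights below are illustrative assumptions, not DRADIS's actual model.

```python
def entity_similarity(e1, e2, weights=None):
    """Weighted Jaccard similarity over attribute sets. Entities are
    dicts of sets; weights are hypothetical per-attribute importances."""
    weights = weights or {"names": 0.5, "titles": 0.2, "links": 0.3}
    def jaccard(a, b):
        union = a | b
        return len(a & b) / len(union) if union else 0.0
    return sum(w * jaccard(e1[k], e2[k]) for k, w in weights.items())
```

Modeling names and titles as sets is precisely what tolerates aliases and shifting titles: two records sharing one name variant and one organizational link still score well even when titles diverge, and analyst corrections could be folded back in by adjusting the weights.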
The Service Oriented Architecture (SOA) model is fast gaining dominance in how software applications are built. SOA systems allow organizations to capitalize on existing services and share data amongst distributed applications. The automatic evaluation of SOA systems poses a challenging problem due to three factors: technological complexity, organizational incompatibility, and integration into existing development pipelines. In this paper we describe our experience in developing and deploying an automated evaluation capability for the Marine Corps’ Tactical Service Oriented Architecture (TSOA). We outline the technological, policy, and operational challenges we face and how we are addressing them.
Real-time Analytics Platform for Interactive Data-mining (RAPID), a collaboration of University of Melbourne and
Australia’s Defence Science and Technology Group (DSTG), consumes data streams, performs analytics computations,
and produces high-quality knowledge for analysts. RAPID takes topic seed words and autonomously identifies emerging
keywords in the data. Users direct the system, setting time-windowing parameters, thresholds, update intervals and
sample rates. Apache Storm and Apache Kafka permit real-time streaming while logging options support off-line
processing. Decision-support scenarios feature Commander’s Critical Information Requirements involving comparisons
over time and time-sequencing of events, capabilities particularly well-served by RAPID technology, to be demonstrated
in the presentation.
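The emerging-keyword step can be approximated by a sliding-window burst detector: a term is flagged when its rate in the current window exceeds a multiple of its smoothed baseline rate. The whitespace tokenization, add-one smoothing, and threshold below are simplifying assumptions, not RAPID's algorithm.

```python
from collections import Counter

def emerging_keywords(window_docs, baseline_docs, threshold=3.0):
    """Flag terms whose frequency in the current time window exceeds
    `threshold` times their (add-one smoothed) baseline frequency."""
    cur = Counter(w for doc in window_docs for w in doc.split())
    base = Counter(w for doc in baseline_docs for w in doc.split())
    cur_total = sum(cur.values()) or 1
    base_total = sum(base.values()) or 1
    flagged = []
    for term, n in cur.items():
        ratio = (n / cur_total) / ((base.get(term, 0) + 1) / base_total)
        if ratio >= threshold:
            flagged.append(term)
    return flagged
```

Common function words self-suppress because their baseline rate is already high, while genuinely new terms in the window stand out, matching the seed-plus-discovery behavior the abstract describes.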
Intelligence, Surveillance, and Reconnaissance (ISR) operations center on providing relevant situational understanding
to military commanders and analysts to facilitate decision-making for execution of mission tasks. However, limitations
exist in tactical-edge environments on the ability to disseminate digital materials to analysts and decision makers. This
work investigates novel methods to calculate the Value of Information (VoI) tied to digital materials (termed information
objects) for consumer use, based on interpretation of mission specifications. Following a short survey of related VoI
calculation efforts, discussion is provided on mission-centric VoI calculation for digital materials via adoption of the pre-existing
Missions and Means Framework model.
Modern military intelligence operations involve a deluge of information from a large number of sources. A data ranking
algorithm that enables the most valuable information to be reviewed first may improve timely and effective analysis.
This ranking is termed the value of information (VoI) and its calculation is a current area of research within the US
Army Research Laboratory (ARL). ARL has conducted an experiment to correlate the perceptions of subject matter
experts with the ARL VoI model and additionally to construct a cognitive model of the ranking process and the
amalgamation of supporting and conflicting information.
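A generic VoI ranking can be sketched as a weighted blend of topical relevance to the mission, source reliability, and timeliness. The feature set, weights, and linear form below are illustrative assumptions for exposition, not the ARL VoI model.

```python
def value_of_information(item, mission):
    """Score an information object against mission needs: a weighted
    blend of topical relevance, source reliability, and timeliness.
    All fields and weights are hypothetical."""
    relevance = (len(set(item["topics"]) & set(mission["topics"]))
                 / max(len(mission["topics"]), 1))
    timeliness = max(0.0, 1.0 - item["age_hours"] / mission["horizon_hours"])
    w = mission["weights"]
    return (w["relevance"] * relevance
            + w["reliability"] * item["reliability"]
            + w["timeliness"] * timeliness)

def rank(items, mission):
    """Order information objects so the most valuable are reviewed first."""
    return sorted(items, key=lambda it: value_of_information(it, mission),
                  reverse=True)
```

The experiment the abstract describes is, in these terms, a check of whether subject matter experts' orderings agree with the model's `rank` output, and where they disagree.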
In today’s battlefield environments, analysts are inundated with real-time data received from the tactical
edge that must be evaluated and used for managing and modifying current missions as well as planning
for future missions. This paper describes a framework that facilitates a Value of Information (VoI) based
data analytics tool for information object (IO) analysis in a tactical and command and control (C2)
environment, which reduces analyst workload by providing automated or analyst-assisted applications. It
allows the analyst to adjust parameters for data matching of the IOs that will be received and provides
agents for further filtering or fusing of the incoming data. It allows analysts to enhance, mark up, or attach
comments to the incoming IOs, which can then be re-disseminated
utilizing the VoI based dissemination service. The analyst may also adjust the underlying parameters
before re-dissemination of an IO, which will subsequently adjust the value of the IO based on this
new/additional information that has been added, possibly increasing the value from the original. The
framework is flexible and extendable, providing an easy to use, dynamically changing Command and
Control decision aid that focuses and enhances the analyst workflow.
Today’s warfighters operate in a highly dynamic and uncertain world, and face many competing demands. Asymmetric warfare and the new focus on small, agile forces have altered the framework by which time-critical information is digested and acted upon by decision makers. Finding and integrating decision-relevant information is increasingly difficult in data-dense environments. In this new information environment, agile data algorithms, machine learning software, and threat alert mechanisms must be developed to automatically create alerts and drive quick response. Yet these advanced technologies must be balanced with awareness of the underlying context to accurately interpret machine-processed indicators and warnings and recommendations. One promising approach to this challenge brings together information retrieval strategies from text, video, and imagery. In this paper, we describe a technology demonstration that represents two years of tri-service research seeking to meld text and video for enhanced content awareness. The demonstration used multisource data to find an intelligence solution to a problem using a common dataset. Three technology highlights from this effort include 1) Incorporation of external sources of context into imagery normalcy modeling and anomaly detection capabilities, 2) Automated discovery and monitoring of targeted users from social media text, regardless of language, and 3) The concurrent use of text and imagery to characterize behavior using the concept of kinematic and text motifs to detect novel and anomalous patterns. Our demonstration provided a technology baseline for exploiting heterogeneous data sources to deliver timely and accurate synopses of data that contribute to a dynamic and comprehensive worldview.