We have seen significant change in the study and practice of human reasoning in recent years, from both a theoretical and a methodological perspective. Ubiquitous communication, coupled with advances in computing and a plethora of analytic support tools, has created a push for instantaneous reporting and analysis. This notion is particularly prevalent in law enforcement, emergency services and the intelligence community (IC), where commanders (and their civilian leadership) expect not only a bird's-eye view of operations as they occur, but a play-by-play analysis of operational effectiveness. This paper explores the use of Red Cell Analytics (RCA) as pedagogy to train the next generation of analysts. A group of students in the College of Information Sciences and Technology at the University Park campus of The Pennsylvania State University has been practicing Red Team Analysis since 2008. RCA draws heavily from the military application of the same concept, except that student RCA problems are typically non-military in nature. RCA students utilize a suite of analytic tools and methods to explore and develop red-cell tactics, techniques and procedures (TTPs), and apply their tradecraft across a broad threat spectrum, from student-life issues to threats to national security. The strength of RCA is not always realized in the solution but in the exploration of the analytic pathway. This paper describes the concept and use of red cell analytics to teach and promote the use of structured analytic techniques, analytic writing and critical thinking in the areas of security, risk and intelligence training.
We have previously argued that a combination of trends in information technology (IT) and the changing
habits of people using IT provide opportunities for the emergence of a new generation of analysts who can
perform effective intelligence, surveillance and reconnaissance (ISR) using a "do it yourself" (DIY) or
"armchair" approach (see D. L. Hall and J. Llinas (2014)). Key technology advances include: i) new
sensing capabilities including the use of micro-scale sensors and ad hoc deployment platforms such as
commercial drones, ii) advanced computing capabilities in mobile devices that allow advanced signal and
image processing and modeling, iii) intelligent interconnections due to advances in “web N” capabilities,
and iv) global interconnectivity and increasing bandwidth. In addition, the changing habits of the digital
natives reflect new ways of collecting and reporting information, sharing information, and collaborating
in dynamic teams. This paper provides a survey and assessment of tools and resources to support this
emerging analysis approach. The tools range from large-scale commercial tools such as IBM i2 Analyst's
Notebook, Palantir, and GeoSuite to emerging open source tools such as GeoViz and DECIDE from
university research centers. The tools include geospatial visualization tools, social network analysis
tools and decision aids. A summary of tools is provided along with links to web sites for tool access.
Utilization of human participants as "soft sensors" is becoming increasingly important for gathering information related
to a wide range of phenomena including natural and man-made disasters, environmental changes over time, crime
prevention, and other roles of the "citizen scientist." The ubiquity of advanced mobile devices is facilitating the role of
humans as "hybrid sensor platforms", allowing them to gather data (e.g. video, still images, GPS coordinates), annotate
it based on their intuitive human understanding, and upload it using existing infrastructure and social networks.
However, this new paradigm presents many challenges related to source characterization, effective tasking, and
utilization of massive quantities of physical sensor, human-based, and hybrid hard/soft data in a manner that facilitates
decision making instead of simply amplifying information overload.
In the Joint Directors of Laboratories (JDL) data fusion process model, "level 4" fusion is a meta-process that attempts
to improve performance of the entire fusion system through effective source utilization. While there are well-defined
approaches for tasking and categorizing physical sensors, these methods fall short when attempting to effectively utilize
a hybrid group of physical sensors and human observers. While physical sensor characterization can rely on statistical
models of performance (e.g. accuracy, reliability, specificity, etc.) under given conditions, "soft" sensors add the
additional challenges of characterizing human performance, tasking without inducing bias, and effectively balancing
strengths and weaknesses of both human and physical sensors. This paper addresses the challenges of the evolving
human-centric fusion paradigm and presents cognitive, perceptual, and other human factors that help to understand,
categorize, and augment the roles and capabilities of humans as observers in hybrid systems.
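The contrast drawn above between statistically characterized physical sensors and reliability-characterized human observers can be illustrated with a small sketch. The following is a minimal, hypothetical example (the reliability-to-variance mapping and all numbers are illustrative assumptions, not a method from this paper): a "soft" report is assigned an effective variance from an assumed observer-reliability score, and is then combined with a "hard" sensor estimate by inverse-variance weighting.

```python
# Minimal sketch (all names and models hypothetical): fusing a "hard" sensor
# estimate with a "soft" human observation via inverse-variance weighting.

def reliability_to_variance(reliability, base_var=100.0):
    """Map an observer reliability score in (0, 1] to an effective variance.
    Higher reliability -> lower variance (illustrative model only)."""
    return base_var * (1.0 - reliability) + 1.0

def fuse(estimates):
    """Inverse-variance weighted fusion of (value, variance) pairs."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# Hard sensor: range estimate of 105 m with variance 25 m^2.
hard = (105.0, 25.0)
# Soft observer: "about 120 m", reliability 0.7 -> effective variance 31 m^2.
soft = (120.0, reliability_to_variance(0.7))

fused_value, fused_var = fuse([hard, soft])
```

The fused estimate lies between the two inputs, pulled toward the more certain (hard) source, and its variance is smaller than either input's, which is the intended benefit of combining the modalities.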
Information fusion is becoming increasingly human-centric. While past systems typically relegated humans to the role of
analyzing a finished fusion product, current systems are exploring the role of humans as integral elements in a modular
and extensible distributed framework where many tasks can be accomplished by either human or machine performers.
For example, "participatory sensing" campaigns give humans the role of "soft sensors" by uploading their direct
observations or as "soft sensor platforms" by using mobile devices to record human-annotated, GPS-encoded high
quality photographs, video, or audio. Additionally, in the "human-in-the-loop" role, individuals or teams
using advanced human computer interface (HCI) tools such as stereoscopic 3D visualization, haptic
interfaces, or aural "sonification" interfaces can effectively engage the innate human capability to perform
pattern matching, anomaly identification, and semantic-based contextual reasoning to interpret an evolving situation.
The Pennsylvania State University is participating in a Multi-disciplinary University Research Initiative (MURI)
program funded by the U.S. Army Research Office to investigate fusion of hard and soft data in counterinsurgency
(COIN) situations. In addition to the importance of this research for Intelligence Preparation of the Battlefield (IPB),
many of the same challenges and techniques apply to health and medical informatics, crisis management, crowd-sourced
"citizen science", and monitoring environmental concerns. One of the key challenges that we have encountered is the
development of data formats, protocols, and methodologies to establish an information architecture and framework for
the effective capture, representation, transmission, and storage of the vastly heterogeneous data and accompanying
metadata -- including capabilities and characteristics of human observers, uncertainty of human observations, "soft"
contextual data, and information pedigree. This paper describes our findings and offers insights into the role of data
representation in hard/soft fusion.
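To make the data-representation challenge concrete, the sketch below shows one possible structure for a "soft" observation message carrying observer characteristics, stated confidence, and pedigree. All field names and values are hypothetical illustrations, not the actual schema developed under the MURI program:

```python
# Illustrative sketch only: one possible representation of a "soft"
# observation message with observer metadata, uncertainty, and pedigree.
# Field names are hypothetical, not the MURI program's actual schema.
import json
from dataclasses import dataclass, field, asdict
from typing import List, Optional, Tuple

@dataclass
class ObserverProfile:
    observer_id: str
    reliability: float                 # assumed system-assessed score, 0..1
    expertise: List[str] = field(default_factory=list)

@dataclass
class SoftObservation:
    message_id: str
    text: str                          # free-text report
    timestamp: str                     # ISO 8601
    location: Optional[Tuple[float, float]]  # (lat, lon) if known
    confidence: float                  # observer's stated confidence, 0..1
    observer: ObserverProfile
    pedigree: List[str] = field(default_factory=list)  # prior message_ids

obs = SoftObservation(
    message_id="msg-0042",
    text="Two vehicles stopped near the market entrance.",
    timestamp="2011-04-05T14:32:00Z",
    location=(34.52, 69.17),
    confidence=0.8,
    observer=ObserverProfile("obs-7", reliability=0.65, expertise=["local"]),
    pedigree=["msg-0017"],
)
encoded = json.dumps(asdict(obs))  # serializable for transmission and storage
```

Keeping pedigree as a chain of prior message identifiers, rather than embedding copies of upstream content, is one way to keep messages compact while still allowing a fusion system to trace how an observation was derived.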
There is an emerging demand for the development of data fusion techniques and algorithms that are capable of
combining conventional "hard" sensor inputs such as video, radar, and multispectral sensor data with "soft" data
including textual situation reports, open-source web information, and "hard/soft" data such as image or video data that
includes human-generated annotations. New techniques that assist in sense-making over a wide range of vastly
heterogeneous sources are critical to improving tactical situational awareness in counterinsurgency (COIN) and other
asymmetric warfare situations. A major challenge in this area is the lack of realistic datasets available for test and
evaluation of such algorithms. While "soft" message sets exist, they tend to be of limited use for data fusion
applications due to the lack of critical message pedigree and other metadata. They also lack corresponding hard sensor
data that presents reasonable "fusion opportunities" to evaluate the ability to make connections and inferences that span
the soft and hard data sets.
This paper outlines the design methodologies, content, and some potential use cases of a COIN-based synthetic soft and
hard dataset created under a United States Multi-disciplinary University Research Initiative (MURI) program funded by
the U.S. Army Research Office (ARO). The dataset includes realistic synthetic reports from a variety of sources,
corresponding synthetic hard data, and an extensive supporting database that maintains "ground truth" through logical
grouping of related data into "vignettes." The supporting database also maintains the pedigree of messages and other data elements.
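The vignette-based grouping described above can be sketched as a simple mapping from vignette identifiers to their related soft messages and hard-sensor files, with a reverse lookup for evaluation. The structure and all identifiers below are hypothetical illustrations, not the actual MURI database design:

```python
# Hypothetical sketch of maintaining "ground truth" by grouping related
# synthetic soft messages and hard-sensor data into named vignettes.
# Identifiers and structure are illustrative only.

vignettes = {
    "vignette-03": {
        "description": "Suspicious activity near a checkpoint",
        "soft_messages": ["msg-0017", "msg-0042"],
        "hard_data": ["lidar-scan-118.bin", "cam2-frame-5531.jpg"],
    },
}

def vignette_for(item_id, vignettes):
    """Reverse lookup: which vignette does a message or sensor file belong to?
    Lets an evaluator score a fusion algorithm's inferred associations
    against ground truth."""
    for name, v in vignettes.items():
        if item_id in v["soft_messages"] or item_id in v["hard_data"]:
            return name
    return None
```

With such a grouping, an evaluator can check whether an algorithm's inferred connections across the soft and hard data actually link items from the same vignette.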
A current trend in information fusion involves distributed methods of combining both conventional "hard" sensor data
and human-based "soft" information in a manner that exploits the most useful and accurate capabilities of each
modality. In addition, new and evolving technologies such as Flash LIDAR have greatly enhanced the ability of a single
device to rapidly sense attributes of a scene in ways that were not previously possible.
At the Pennsylvania State University we are participating in a multi-disciplinary university research initiative (MURI)
program funded by the U.S. Army Research Office to investigate issues related to fusing hard and soft data in
counterinsurgency (COIN) situations. We are developing level 0 and level 1 methods (using the Joint Directors of
Laboratories (JDL) data fusion process model) for fusion of physical ("hard") sensor data. Techniques include methods
for data alignment, tracking, recognition, and identification for a sensor suite that includes LIDAR, multi-camera
systems, and acoustic sensors. The goal is to develop methods that dovetail with on-going research in soft sensor
processing. This paper describes various hard sensor processing algorithms and their evolving roles and
implementations within a distributed hard and soft information fusion system.
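The data-alignment step mentioned above can be illustrated with a minimal sketch: before detections from LIDAR, cameras, and acoustic sensors can be associated and tracked, each must be transformed from its sensor's local frame into a common reference frame. The poses and values below are illustrative assumptions, not the project's actual algorithms:

```python
# Minimal sketch of spatial data alignment: transforming a detection from a
# sensor's local coordinate frame into a common reference frame so reports
# from different sensors can be associated. All values are illustrative.
import math

def to_common_frame(x, y, sensor_pose):
    """sensor_pose = (px, py, heading_rad): sensor position and heading in
    the common frame. Rotates the local detection by the sensor heading,
    then translates by the sensor position."""
    px, py, heading = sensor_pose
    cx = px + x * math.cos(heading) - y * math.sin(heading)
    cy = py + x * math.sin(heading) + y * math.cos(heading)
    return cx, cy

# A detection 10 m directly ahead of a sensor at (100, 50) with heading 0 rad,
# and another 10 m ahead of a second sensor at (115, 45) heading pi/2 rad:
a = to_common_frame(10.0, 0.0, (100.0, 50.0, 0.0))
b = to_common_frame(10.0, 0.0, (115.0, 45.0, math.pi / 2))
```

Once both detections are expressed in the common frame, their separation can be computed directly, which is the precondition for the association, tracking, and identification steps that follow.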