We describe a system architecture aimed at supporting Intelligence, Surveillance, and Reconnaissance (ISR) activities in a Company Intelligence Support Team (CoIST) using natural language-based knowledge representation and reasoning, and semantic matching of mission tasks to ISR assets. We illustrate an application of the architecture using a High Value Target (HVT) surveillance scenario, which demonstrates semi-automated matching and assignment of appropriate ISR assets based on information coming in from existing sensors and from human patrols operating in an area of interest and encountering a potential HVT vehicle. We highlight a number of key components of the system but focus mainly on the human/machine conversational interaction, in which soldiers in the field provide spoken natural language input via a mobile device; this input is then converted into machine-processable Controlled Natural Language (CNL) and confirmed with the soldier. The system also supports CoIST analysts in obtaining real-time situation awareness of unfolding events through fused CNL information, via tools available at the Command and Control (C2) center. The system demonstrates various modes of operation, including automatic task assignment following inference of new high-importance information, as well as semi-automatic processing that provides the CoIST analyst with situation awareness information relevant to the area of operation.
Recent developments in sensing technologies, mobile devices and context-aware user interfaces have made it possible to represent information fusion and situational awareness for Intelligence, Surveillance and Reconnaissance (ISR) activities as a conversational process among actors at or near the tactical edges of a network. Motivated by use cases in the domain of Company Intelligence Support Team (CoIST) tasks, this paper presents an approach to information collection, fusion and sense-making based on the use of natural language (NL) and controlled natural language (CNL) to support richer forms of human-machine interaction. The approach uses a conversational protocol to facilitate a flow of collaborative messages from NL to CNL and back again in support of interactions such as: turning eyewitness reports from human observers into actionable information (from both soldier and civilian sources); fusing information from humans and physical sensors (with associated quality metadata); and assisting human analysts to make the best use of available sensing assets in an area of interest (governed by management and security policies). CNL is used as a common formal knowledge representation for both machine and human agents to support reasoning, semantic information fusion and generation of rationale for inferences, in ways that remain transparent to human users. Examples are provided of various alternative styles for user feedback, including NL, CNL and graphical feedback. A pilot experiment with human subjects shows that a prototype conversational agent is able to gather usable CNL information from untrained human subjects.
There is considerable interest in natural language conversational interfaces. These allow for complex user interactions with systems, such as fulfilling information requirements in dynamic environments, without requiring extensive training or a technical background (e.g. in formal query languages or schemas). To leverage the advantages of conversational interaction we propose CE-SAM (Controlled English Sensor Assignment to Missions), a system that guides users through refining and satisfying their information needs in the context of Intelligence, Surveillance, and Reconnaissance (ISR) operations. The rapidly-increasing availability of sensing assets and other information sources poses substantial challenges to effective ISR resource management. In a coalition context, the problem is even more complex, because assets may be "owned" by different partners. We show how CE-SAM allows a user to refine and relate their ISR information needs to pre-existing concepts in an ISR knowledge base, via conversational interaction implemented on a tablet device. The knowledge base is represented using Controlled English (CE), a form of controlled natural language that is both human-readable and machine processable (i.e. can be used to implement automated reasoning). Users interact with the CE-SAM conversational interface using natural language, which the system converts to CE and feeds back to the user for confirmation (e.g. to reduce misunderstanding). We show that this process not only allows users to access the assets that can support their mission needs, but also assists them in extending the CE knowledge base with new concepts.
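The NL-to-CE conversion-and-confirmation loop can be illustrated with a toy sketch. This is not the actual CE-SAM implementation: the pattern, the CE sentence template, and the entity naming scheme below are all invented for illustration.

```python
# Toy sketch of an NL -> Controlled English (CE) conversion step with
# read-back for user confirmation. All patterns and CE templates here
# are invented, not taken from CE-SAM.
import re

def nl_to_ce(utterance):
    """Map a simple sighting report to a CE-style sentence, if a pattern matches."""
    m = re.match(r"i (?:can )?see a (\w+) (?:at|near) (\w+)", utterance.lower())
    if m:
        thing, place = m.groups()
        # The agent would read this CE sentence back to the soldier for
        # confirmation before asserting it into the knowledge base.
        return f"there is a {thing} named {thing}-01 that is located at the place '{place}'."
    return None  # no pattern matched; a real system would ask a follow-up question

print(nl_to_ce("I see a truck near checkpoint7"))
```

A real conversational agent would, of course, use far richer language understanding; the point is only that the output is a structured CE sentence a human can verify by reading.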
In this paper, we propose a system architecture for decision-making support on ISR (i.e., Intelligence, Surveillance, Reconnaissance) missions via optimized resource allocation. We model a mission as a graph of tasks, each of which often requires exclusive access to some resources. Our system guides users through refinement of their needs through an interactive interface. To maximize the chances of executing new missions, the system searches for pre-existing information collected in the field that best fits the needs. If this search fails, a set of new requests representing the users' requirements is considered, so as to maximize the overall benefit subject to limited resources. If an ISR request cannot be satisfied, feedback is generated to help the commander further refine or adjust their information requests in order to still provide support to the mission. In our work, we model both demands for resources and the importance of the information retrieved realistically, in that they are not fully known at the time a mission is submitted and may change over time during execution. The amount of resources consumed by a mission may not be deterministic; e.g., a mission may last slightly longer or shorter than expected, or more of a resource may be required to complete a task. Furthermore, the benefits received from the mission, which we call profits, may also be non-deterministic; e.g., successfully localizing a vehicle might be more important than expected for accomplishing the entire operation. Therefore, when satisfying ISR requirements we take into account both constraints on the underlying resources and uncertainty of demands and profits.
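One simple way to handle uncertain demands is to rank tasks by profit per unit of expected demand while reserving capacity for the pessimistic case. The sketch below uses that heuristic; the task names, demand ranges, and the ranking rule are all assumptions for illustration, not the paper's algorithm.

```python
# Minimal sketch of allocation under uncertain demand: each task's demand
# is a (low, high) range; tasks are ranked by profit per unit of expected
# (midpoint) demand, but capacity is reserved for the worst (high) case.
# All names and numbers are invented for illustration.

def allocate(tasks, capacity):
    """tasks: list of (name, profit, (low_demand, high_demand)) tuples."""
    ranked = sorted(tasks,
                    key=lambda t: t[1] / ((t[2][0] + t[2][1]) / 2),
                    reverse=True)
    chosen, used = [], 0
    for name, profit, (lo, hi) in ranked:
        if used + hi <= capacity:   # reserve for the worst case
            chosen.append(name)
            used += hi
    return chosen

tasks = [("localize-vehicle", 10, (2, 4)),
         ("monitor-road", 6, (1, 2)),
         ("wide-area-scan", 9, (5, 8))]
print(allocate(tasks, 6))  # -> ['monitor-road', 'localize-vehicle']
```

A system like the one described would go further, replanning as demands and profits reveal themselves during execution; this sketch only shows the initial, uncertainty-aware selection step.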
In domains such as emergency response and military operations, the sharing of Intelligence, Surveillance and Reconnaissance (ISR) assets among different coalition partners is regulated through policies. Traditionally, policies are created at the center of a coalition network by high-level decision makers and expressed in low-level policy languages (e.g. Common Information Model SPL) by technical personnel, which makes them difficult for non-technical users at the edge of the network to understand. Moreover, policies must often be modified by negotiation among coalition partners, typically in rapid response to the changing operational situation. Commonly, the users who must cope first with situational changes are those on the edge, so it would be very effective if they were able to create and negotiate policies themselves. We investigate the use of Controlled English (CE) as a means to define a policy representation that is both human-friendly and machine processable. We show how a CE model can capture a variety of policy types, including those based on a traditional asset ownership model, and those defining team-based asset sharing across a coalition. The use of CE is intended to benefit coalition networks by bridging the gap between technical and non-technical users in terms of policy creation and negotiation, while at the same time being directly processable by a policy-checking system without transformation to any other technical representation.
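The two policy types mentioned, ownership-based and team-based sharing, can be sketched as a small policy check. The data structures, partner names, and deny-by-default rule below are invented for illustration and are not the paper's CE model.

```python
# Hedged illustration of a coalition asset-sharing policy check:
# ownership-based policies grant use only to the owner, team-share
# policies grant use to any listed team member. All names invented.

POLICIES = [
    {"type": "ownership",  "asset": "uav-1",          "owner": "partner-A"},
    {"type": "team-share", "asset": "acoustic-array", "team": ["partner-A", "partner-B"]},
]

def may_use(requester, asset):
    for p in POLICIES:
        if p["asset"] != asset:
            continue
        if p["type"] == "ownership":
            return requester == p["owner"]
        if p["type"] == "team-share":
            return requester in p["team"]
    return False  # no policy covers the asset: deny by default (an assumption)

print(may_use("partner-B", "uav-1"))           # False: not the owner
print(may_use("partner-B", "acoustic-array"))  # True: team-shared
```

In the CE-based approach described above, such rules would instead be written as human-readable CE sentences and evaluated directly by a policy-checking system, without an intermediate representation like this one.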
We introduce an approach to representing intelligence, surveillance, and reconnaissance (ISR) tasks at a relatively high level in controlled natural language. We demonstrate that this facilitates both human interpretation and machine processing of tasks. More specifically, it allows the automatic assignment of sensing assets to tasks, and the informed sharing of tasks between collaborating users in a coalition environment. To enable automatic matching of sensor types to tasks, we created a machine-processable knowledge representation based on the Military Missions and Means Framework (MMF), and implemented a semantic reasoner to match task types to sensor types. We combined this mechanism with a sensor-task assignment procedure based on a well-known distributed protocol for resource allocation. In this paper, we re-formulate the MMF ontology in Controlled English (CE), a type of controlled natural language designed to be readable by a native English speaker whilst representing information in a structured, unambiguous form to facilitate machine processing. We show how CE can be used to describe both ISR tasks (for example, detection, localization, or identification of particular kinds of object) and sensing assets (for example, acoustic, visual, or seismic sensors, mounted on motes or unmanned vehicles). We show how these representations enable an automatic sensor-task assignment process. Where a group of users are cooperating in a coalition, we show how CE task summaries give users in the field a high-level picture of ISR coverage of an area of interest. This allows them to make efficient use of sensing resources.
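The core of such task-to-sensor matching can be sketched as capability subsumption: a sensor type matches a task type when its capabilities cover the task's requirements. The type and capability names below are assumptions for illustration, not the actual MMF/CE ontology.

```python
# Illustrative semantic matcher: a task type requires a set of
# capabilities, and a sensor type matches when its capability set
# covers the requirement set. All names invented for illustration.

SENSOR_CAPS = {
    "acoustic-mote": {"detect-sound"},
    "camera-uav":    {"detect-visual", "localize"},
    "seismic-mote":  {"detect-vibration"},
}

TASK_REQS = {
    "detect-vehicle":   {"detect-sound"},
    "localize-vehicle": {"detect-visual", "localize"},
}

def matching_sensors(task):
    need = TASK_REQS[task]
    # set <= set is Python's subset test: the sensor must offer
    # every capability the task requires.
    return sorted(s for s, caps in SENSOR_CAPS.items() if need <= caps)

print(matching_sensors("localize-vehicle"))  # -> ['camera-uav']
```

A semantic reasoner over a CE ontology would additionally exploit class hierarchies (e.g. a specific camera model inheriting the capabilities of visual sensors in general), which this flat-dictionary sketch omits.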
In a military scenario, commanders need to determine what kinds of information will help them execute missions. The amount of information available to support each mission is constrained by the availability of information assets. For example, there may be limits on the number of sensors that can be deployed to cover a certain area, and limits on the bandwidth available to collect data from those sensors for processing. Therefore, options for satisfying information requirements should take into consideration constraints on the underlying information assets, which in certain cases could simultaneously support multiple missions. In this paper, we propose a system architecture for modeling missions and allocating information assets among them. We model a mission as a graph of tasks with temporal and probabilistic relations. Each task requires some information provided by the information assets. Our system suggests which information assets should be allocated among missions. Missions are compatible with each other if their needs do not exceed the limits of the information assets; otherwise, feedback is sent to the commander indicating that information requirements need to be adjusted. The decision loop eventually converges, maximizing the utilization of the resources.
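The compatibility check at the heart of that feedback loop can be sketched directly. The asset names, limits, and per-mission demand format below are invented for illustration, not the paper's model.

```python
# Minimal sketch of the compatibility check: missions declare per-asset
# demands, and a set of missions is compatible when summed demands stay
# within the asset limits. Over-limit assets are reported back as
# feedback for the commander. All names and numbers invented.

ASSET_LIMITS = {"sensors": 10, "bandwidth": 100}

def compatible(missions):
    """missions: list of dicts mapping asset name -> demanded amount.
    Returns the list of over-subscribed assets (empty means compatible)."""
    over = []
    for asset, limit in ASSET_LIMITS.items():
        total = sum(m.get(asset, 0) for m in missions)
        if total > limit:
            over.append(asset)
    return over

m1 = {"sensors": 4, "bandwidth": 60}
m2 = {"sensors": 3, "bandwidth": 50}
print(compatible([m1]))      # [] : fits within limits
print(compatible([m1, m2]))  # ['bandwidth'] : requirements need adjusting
```

In the full architecture, the commander would revise the flagged requirements and resubmit, iterating until the check returns no over-subscribed assets.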
Broadcast scheduling has been extensively studied in wireless environments, where a base station broadcasts data to multiple users. Because the single wireless channel has limited bandwidth, only a subset of the requests may be satisfiable, so maximizing total (weighted) throughput is a popular objective. In many realistic applications, however, data are dependent or correlated in the sense that the joint utility of a set of items is not simply the sum of their individual utilities. On the one hand, substitute data may provide overlapping information, so one data item may have lower value if a second data item has already been delivered; on the other hand, complementary data are more valuable than the sum of their parts if, for example, one data item is only useful in the presence of a second data item.
In this paper, we define a data bundle to be a set of data items with possibly nonadditive joint utility, and we study a resulting broadcast scheduling optimization problem whose objective is to maximize the utility provided by the delivered data.
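The substitute/complement distinction can be made concrete with a toy utility function in which bundles carry an additive correction on top of individual item values. The items, values, and bonus scheme below are invented for illustration; they are one simple way to realize nonadditive joint utility, not the paper's formal model.

```python
# Toy nonadditive utility: each item has an individual value, and each
# bundle (a set of items) carries a correction applied when the whole
# bundle is delivered. Negative corrections model substitutes (overlap),
# positive corrections model complements. All numbers invented.

ITEM_VALUE = {"a": 3, "b": 3, "c": 2}
BUNDLE_BONUS = {
    frozenset({"a", "b"}): -2,  # substitutes: overlapping information
    frozenset({"b", "c"}): +4,  # complements: only useful together
}

def utility(delivered):
    total = sum(ITEM_VALUE[i] for i in delivered)
    for bundle, bonus in BUNDLE_BONUS.items():
        if bundle <= set(delivered):  # the whole bundle was delivered
            total += bonus
    return total

print(utility({"a", "b"}))  # 3 + 3 - 2 = 4  (substitutes)
print(utility({"b", "c"}))  # 3 + 2 + 4 = 9  (complements)
```

Under such a utility, a scheduler maximizing delivered value must reason about sets rather than items, which is what makes the bundle variant of broadcast scheduling harder than the classical throughput objective.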
A sensor network in the field is usually required to support multiple sensing tasks or missions that must be accomplished simultaneously. Since missions might compete for exclusive usage of the same sensing resource, we need to assign individual sensors to missions. Missions are usually characterized by an uncertain demand for sensing resource capabilities. In this paper we model this assignment problem by introducing the Sensor Utility Maximization (SUM) model, where each sensor-mission pair is associated with a utility offer. Moreover, each mission is associated with a priority and with an uncertain utility demand. We also define the benefit, or profit, that a sensor can bring to a mission as the fraction of the mission's demand that the sensor is able to satisfy, scaled by the priority of the mission. The goal is to find a sensor assignment that maximizes the total profit, while ensuring that the total utility accumulated by each mission does not exceed its uncertain demand. SUM is NP-complete and is a special case of the well-known Generalized Assignment Problem (GAP), which generalizes many knapsack-style problems. We compare four algorithms: two previous algorithms for problems related to SUM, an improved implementation of an existing state-of-the-art approximation algorithm for GAP, and a new greedy algorithm. Simulation results show that our greedy algorithm appears to offer the best trade-off between solution quality and computation cost.
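The flavor of a greedy heuristic for this setting can be sketched as follows: assign each sensor to the mission where it yields the highest marginal profit, capping each mission's accumulated utility at its demand. This is a toy version for illustration, with invented data; it is not the evaluated algorithm from the paper.

```python
# Toy greedy heuristic for the Sensor Utility Maximization (SUM) setting.
# Profit of assigning a sensor = satisfied fraction of the mission's
# demand, scaled by the mission's priority. Each sensor is assigned to
# the mission with the highest marginal profit. All data invented.

def greedy_sum(offers, demands, priorities):
    """offers: dict mapping (sensor, mission) -> offered utility."""
    got = {m: 0.0 for m in demands}   # utility accumulated per mission
    assign = {}
    for sensor in sorted({s for s, _ in offers}):
        best, best_gain = None, 0.0
        for (s, m), u in offers.items():
            if s != sensor:
                continue
            # Marginal profit: usable utility (capped at remaining demand)
            # as a fraction of total demand, scaled by priority.
            gain = min(u, demands[m] - got[m]) / demands[m] * priorities[m]
            if gain > best_gain:
                best, best_gain = m, gain
        if best is not None:
            assign[sensor] = best
            got[best] += min(offers[(sensor, best)], demands[best] - got[best])
    return assign

offers = {("s1", "m1"): 5, ("s1", "m2"): 3, ("s2", "m2"): 4}
demands = {"m1": 5, "m2": 4}
priorities = {"m1": 1.0, "m2": 2.0}
print(greedy_sum(offers, demands, priorities))  # -> {'s1': 'm2', 's2': 'm2'}
```

Here both sensors go to the high-priority mission m2 despite s1 fully covering m1's demand; a smarter assignment (like the GAP approximation algorithms the paper compares against) would weigh such trade-offs globally rather than sensor by sensor.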