We describe a system architecture aimed at supporting Intelligence, Surveillance, and Reconnaissance (ISR) activities in a Company Intelligence Support Team (CoIST) using natural-language-based knowledge representation and reasoning, and semantic matching of mission tasks to ISR assets. We illustrate an application of the architecture using a High Value Target (HVT) surveillance scenario, which demonstrates semi-automated matching and assignment of appropriate ISR assets based on information coming in from existing sensors and from human patrols operating in an area of interest and encountering a potential HVT vehicle. We highlight a number of key components of the system but focus mainly on the human/machine conversational interaction in which soldiers in the field provide input in natural language via spoken voice to a mobile device; this input is then converted into machine-processable Controlled Natural Language (CNL) and confirmed with the soldier. The system also supports CoIST analysts in obtaining real-time situation awareness of unfolding events through fused CNL information, via tools available at the Command and Control (C2) level. The system demonstrates various modes of operation, including automatic task assignment following inference of new high-importance information, as well as semi-automatic processing that provides the CoIST analyst with situation awareness information relevant to the area of operation.
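The "automatic task assignment following inference of new high-importance information" mode can be sketched as a simple rule firing over asserted facts. This is a minimal illustration only: the fact format, the rule, and the task wording are assumptions, not the system's actual CNL or rule syntax.

```python
# Minimal sketch (assumed fact/task formats): asserting a new fact may
# trigger an ISR tasking action when an inference rule recognises it as
# high-importance information.

FACTS = set()  # the growing store of reported facts


def assert_fact(fact: str) -> list:
    """Add a fact to the store and return any tasks triggered by rules."""
    FACTS.add(fact)
    tasks = []
    # Illustrative rule: a vehicle inferred to be a potential HVT
    # triggers assignment of a surveillance asset.
    if fact == "vehicle v1 is a potential HVT vehicle":
        tasks.append("assign surveillance asset to track vehicle v1")
    return tasks


# A routine observation triggers nothing; an HVT inference triggers tasking.
assert_fact("vehicle v1 is present in the area of interest")
new_tasks = assert_fact("vehicle v1 is a potential HVT vehicle")
```

A real system would draw such rules from the CNL knowledge base rather than hard-coding them, so that analysts can inspect the rationale for each assignment.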
Recent developments in sensing technologies, mobile devices and context-aware user interfaces have made it possible to represent information fusion and situational awareness for Intelligence, Surveillance and Reconnaissance (ISR) activities as a conversational process among actors at or near the tactical edges of a network. Motivated by use cases in the domain of Company Intelligence Support Team (CoIST) tasks, this paper presents an approach to information collection, fusion and sense-making based on the use of natural language (NL) and controlled natural language (CNL) to support richer forms of human-machine interaction. The approach uses a conversational protocol to facilitate a flow of collaborative messages from NL to CNL and back again in support of interactions such as: turning eyewitness reports from human observers into actionable information (from both soldier and civilian sources); fusing information from humans and physical sensors (with associated quality metadata); and assisting human analysts to make the best use of available sensing assets in an area of interest (governed by management and security policies). CNL is used as a common formal knowledge representation for both machine and human agents to support reasoning, semantic information fusion and generation of rationale for inferences, in ways that remain transparent to human users. Examples are provided of various alternative styles for user feedback, including NL, CNL and graphical feedback. A pilot experiment with human subjects shows that a prototype conversational agent is able to gather usable CNL information from untrained human subjects.
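The NL-to-CNL round trip at the heart of this conversational protocol can be sketched as a two-step exchange: convert a free-text report into candidate CNL statements, then feed them back to the reporter for confirmation before they enter the knowledge base. The keyword-spotting conversion and the CNL sentence templates below are invented for illustration; they are not the actual grammar used by the system.

```python
# Hedged sketch of the NL -> CNL confirmation loop. The sentence
# templates ("there is a vehicle named v1", ...) only approximate the
# style of a controlled natural language.

def nl_to_cnl(report: str) -> str:
    """Very rough NL -> CNL conversion via keyword spotting (illustrative)."""
    low = report.lower()
    statements = []
    if "vehicle" in low:
        statements.append("there is a vehicle named v1")
    if "black" in low:
        statements.append("the vehicle v1 has 'black' as colour")
    return " and\n  ".join(statements) + "."


def confirm_with_user(cnl: str, user_accepts) -> bool:
    """Feed the CNL back to the reporter; store it only if confirmed."""
    return bool(user_accepts(cnl))


report = "I can see a black vehicle near the checkpoint"
cnl = nl_to_cnl(report)
accepted = confirm_with_user(cnl, lambda s: "vehicle" in s)
```

The confirmation step is what keeps the interaction transparent: the human sees and approves the exact formal statement the machine will reason over.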
There is considerable interest in natural language conversational interfaces. These allow for complex user interactions with systems, such as fulfilling information requirements in dynamic environments, without requiring extensive training or a technical background (e.g. in formal query languages or schemas). To leverage the advantages of conversational interaction we propose CE-SAM (Controlled English Sensor Assignment to Missions), a system that guides users through refining and satisfying their information needs in the context of Intelligence, Surveillance, and Reconnaissance (ISR) operations. The rapidly increasing availability of sensing assets and other information sources poses substantial challenges to effective ISR resource management. In a coalition context, the problem is even more complex because assets may be "owned" by different partners. We show how CE-SAM allows a user to refine and relate their ISR information needs to pre-existing concepts in an ISR knowledge base, via conversational interaction implemented on a tablet device. The knowledge base is represented using Controlled English (CE) - a form of controlled natural language that is both human-readable and machine-processable (i.e. it can be used to implement automated reasoning). Users interact with the CE-SAM conversational interface in natural language, which the system converts to CE and feeds back to the user for confirmation (e.g. to reduce misunderstanding). We show that this process not only allows users to access the assets that can support their mission needs, but also assists them in extending the CE knowledge base with new concepts.
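Relating a refined information need to the assets that can satisfy it reduces, at its simplest, to matching required capabilities against the capabilities recorded for each asset in the knowledge base. The sketch below illustrates that idea only; the asset names, capability vocabulary, and flat dictionary store are assumptions standing in for the CE knowledge base.

```python
# Hedged sketch of capability-based asset matching, in the spirit of
# CE-SAM's assignment of sensing assets to information needs. All names
# here are invented for illustration.

ASSETS = {
    "uav-1":      {"capabilities": {"wide-area-imagery", "video"}},
    "acoustic-3": {"capabilities": {"gunshot-detection"}},
    "patrol-7":   {"capabilities": {"human-observation", "video"}},
}


def match_assets(required: set) -> list:
    """Return the assets whose capabilities cover the information need."""
    return sorted(
        name
        for name, asset in ASSETS.items()
        if required <= asset["capabilities"]  # subset test: need is covered
    )


# An analyst asking for video coverage is offered every capable asset:
candidates = match_assets({"video"})  # ['patrol-7', 'uav-1']
```

In the full system this matching would be performed by CE-based reasoning over the knowledge base, with policy constraints filtering the candidate list further.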
In domains such as emergency response and military operations, the sharing of Intelligence, Surveillance and Reconnaissance (ISR) assets among different coalition partners is regulated through policies. Traditionally, policies are created at the center of a coalition's network by high-level decision makers and expressed in low-level policy languages (e.g. Common Information Model SPL) by technical personnel, which makes them difficult for non-technical users at the edge of the network to understand. Moreover, policies must often be modified by negotiation among coalition partners, typically in rapid response to the changing operational situation. Commonly, the users who must cope first with situational changes are those at the edge, so it would be very effective if they were able to create and negotiate policies themselves. We investigate the use of Controlled English (CE) as a means to define a policy representation that is both human-friendly and machine-processable. We show how a CE model can capture a variety of policy types, including those based on a traditional asset-ownership model and those defining team-based asset sharing across a coalition. The use of CE is intended to benefit coalition networks by bridging the gap between technical and non-technical users in terms of policy creation and negotiation, while at the same time being directly processable by a policy-checking system without transformation to any other technical representation.
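The two policy types mentioned above - traditional asset ownership and team-based sharing - can be illustrated with a small policy-checking sketch. The policy structure, partner names, and default-deny behaviour below are assumptions for illustration, not the paper's actual CE policy model.

```python
# Hedged sketch of checking ownership-based and team-based sharing
# policies over coalition ISR assets. Names and structure are invented.

POLICIES = [
    # Ownership model: only the owning partner may task the asset.
    {"type": "ownership", "asset": "uav-1", "owner": "US"},
    # Team-based sharing: any member of the named team may task it.
    {"type": "team", "asset": "sensor-9", "team": {"US", "UK"}},
]


def may_task(partner: str, asset: str) -> bool:
    """Decide whether a coalition partner is permitted to task an asset."""
    for policy in POLICIES:
        if policy["asset"] != asset:
            continue
        if policy["type"] == "ownership":
            return partner == policy["owner"]
        if policy["type"] == "team":
            return partner in policy["team"]
    return False  # default deny when no policy covers the asset
```

Because CE is directly machine-processable, a policy checker like this would evaluate the CE statements themselves rather than a hand-translated structure such as the one above.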