To make sensible decisions during a multi-domain battle, autonomous systems, just like humans, need to understand the current military context. They need to ‘know’ important mission context information, such as the commander’s intent and the location and state of friendly and adversary actors. They also need an understanding of the operating environment; the state of the physical systems ‘hosting’ the AI; and, just as importantly, the state of the communication networks that allow each AI ‘node’ to receive and share critical information. The problem is that capturing, representing, and reasoning over this contextual information is especially challenging in distributed, dynamic, congested, and contested multi-domain battlespaces. This is not only due to rapidly changing contexts and noisy, incomplete, and potentially erroneous data, but also because computing, storage, and battery resources are limited at the tactical edge. The US Army Research Laboratory, Australia’s Defence Science and Technology Group, and associated university partners are collaborating to develop an autonomous system called SMARTNet that can transform, prioritize, and control the flow of information across distributed, intermittent, and limited tactical networks. To do this, however, SMARTNet requires a good understanding of the current military context. This paper describes how we are developing this contextual understanding using new AI and ML approaches. It then describes how we are integrating these approaches into an exemplar tactical network application that improves the distribution of information in complex operating environments. It concludes by summarizing our results to date and by setting a way forward for future research.
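As a rough, hypothetical sketch of the kind of context-aware prioritization described above (the topics, weights, field names, and scoring function are illustrative assumptions, not the actual SMARTNet design), a minimal example might look like this:

from dataclasses import dataclass

@dataclass
class Message:
    """A tactical information item awaiting transmission."""
    topic: str          # e.g. "adversary_track", "logistics_status"
    age_seconds: float  # time since the observation was made
    size_bytes: int     # payload size on the wire

# Hypothetical mission context: relevance weights reflecting commander's intent.
MISSION_RELEVANCE = {
    "adversary_track": 1.0,
    "friendly_position": 0.8,
    "logistics_status": 0.3,
}

def priority(msg: Message, link_capacity_bps: float) -> float:
    """Score a message: mission relevance, discounted by staleness and by
    how much of the (limited) link it would consume."""
    relevance = MISSION_RELEVANCE.get(msg.topic, 0.1)
    freshness = 1.0 / (1.0 + msg.age_seconds / 60.0)         # decays over minutes
    cost = msg.size_bytes * 8 / max(link_capacity_bps, 1.0)  # seconds of airtime
    return relevance * freshness / (1.0 + cost)

def schedule(queue: list[Message], link_capacity_bps: float) -> list[Message]:
    """Order the outbound queue so the most valuable information goes first."""
    return sorted(queue, key=lambda m: priority(m, link_capacity_bps), reverse=True)

The sketch only illustrates the general idea of weighing mission relevance against freshness and link cost; the real system would draw these inputs from its learned contextual understanding rather than a static table.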
The U.S. Army Research Laboratory (ARL) has built a “Network Science Research Lab” to support research that aims to improve its ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine whether automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and the NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks, where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address each of the infrastructure support requirements necessary to conduct their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment based upon resource utilization such as CPU load, available RAM, and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC system can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy, and manage multiple experimentation clusters to support their experimentation goals.
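The resource-utilization-driven placement mentioned above can be pictured with a small sketch; the data structures and the placement policy below are assumptions for illustration and do not reproduce the DAVC implementation:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Host:
    """Resource snapshot for one physical server in the pool."""
    name: str
    cpu_load: float      # load average per core (0.0 = idle)
    free_ram_mb: int
    free_disk_gb: int

@dataclass
class VmRequest:
    """Resources requested for one virtual cluster node."""
    ram_mb: int
    disk_gb: int

def place_vm(req: VmRequest, hosts: list[Host]) -> Optional[Host]:
    """Pick the least-loaded host that can satisfy the RAM and disk request;
    return None if the pool is exhausted."""
    candidates = [h for h in hosts
                  if h.free_ram_mb >= req.ram_mb and h.free_disk_gb >= req.disk_gb]
    if not candidates:
        return None
    return min(candidates, key=lambda h: h.cpu_load)

A real cluster manager would also track reservations already granted and rebalance as experiments start and stop; the sketch shows only the single-request decision.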
The U.S. Army Research Laboratory (ARL) has built a “Wireless Emulation Lab” to support research in wireless mobile networks. In our current experimentation environment, our researchers need the capability to run clusters of heterogeneous nodes to model emulated wireless tactical networks, where each node could contain a different operating system, application set, and physical hardware. To complicate matters, most experiments require the researcher to have root privileges. Our previous solution of using a single shared cluster of statically deployed virtual machines did not sufficiently separate each user’s experiment due to undesirable network crosstalk; as a result, only one experiment could be run at a time. In addition, the cluster did not make efficient use of our servers and physical networks. To address these concerns, we created the Dynamically Allocated Virtual Clustering Management System (DAVC). This system leverages existing open-source software to create private clusters of nodes that are either virtual or physical machines. These clusters can be utilized for software development, experimentation, and integration with existing hardware and software. The system uses the Grid Engine job scheduler to efficiently allocate virtual machines to idle systems and networks. The system deploys stateless nodes via network booting. The system uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex, private networks, eliminating the need to map each virtual machine to a specific switch port. The system monitors the health of the clusters and the underlying physical servers, and it maintains cluster usage statistics for historical trends. Users can start private clusters of heterogeneous nodes with root privileges for the duration of the experiment. Users also control when to shut down their clusters.
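To make the 802.1Q isolation concrete, the following sketch shows how an experiment-specific tagged sub-interface is typically created on a Linux host; the interface name and VLAN ID are illustrative, and this is standard Linux tooling rather than DAVC code:

import subprocess

def create_experiment_vlan(trunk_iface: str, vlan_id: int) -> str:
    """Create an 802.1Q tagged sub-interface so one experiment's traffic
    stays on its own VLAN and cannot cross-talk with other clusters.
    Example: create_experiment_vlan("eth0", 42) -> "eth0.42"
    """
    vlan_iface = f"{trunk_iface}.{vlan_id}"
    subprocess.run(
        ["ip", "link", "add", "link", trunk_iface, "name", vlan_iface,
         "type", "vlan", "id", str(vlan_id)],
        check=True,
    )
    subprocess.run(["ip", "link", "set", vlan_iface, "up"], check=True)
    return vlan_iface

Because each experiment's virtual machines attach to their own tagged interface, traffic separation is enforced by the switch fabric rather than by mapping each VM to a dedicated physical port.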
To improve the effectiveness of network-centric decision making, we present a distributed network application and framework that provides users with actionable intelligence reports to support counterinsurgency operations. ARL’s Quality of Information (QoI) Intelligence Report Application uses QoI metrics such as timeliness, accuracy, and precision, combined with associated network performance data, such as throughput and latency, and mission-specific information requirements to deliver high-quality data to users; that is, data delivered in a manner that best supports the ability to make more informed decisions as they relate to the current mission. This application serves as a testing platform for integrated experimentation and validation of QoI processing techniques and methodologies. In this paper, we present the software-system framework and architecture, and show an example scenario that highlights how the framework aids in network integration and enables better data-to-decision outcomes.
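As a rough, hypothetical illustration of how QoI metrics and network measurements might be combined into a single ranking (the weights, field names, and penalty rule below are assumptions, not the application’s actual scoring model):

from dataclasses import dataclass

@dataclass
class Report:
    """An intelligence report candidate with its QoI and delivery attributes."""
    source: str
    timeliness: float     # 0..1, 1 = fresh
    accuracy: float       # 0..1, estimated correctness
    precision: float      # 0..1, level of detail
    est_latency_s: float  # predicted network delivery latency

def qoi_score(r: Report, latency_budget_s: float, weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted QoI score, penalized when predicted delivery latency
    exceeds the mission's latency budget."""
    wt, wa, wp = weights
    base = wt * r.timeliness + wa * r.accuracy + wp * r.precision
    penalty = min(1.0, latency_budget_s / max(r.est_latency_s, 1e-6))
    return base * penalty

def rank_reports(reports: list[Report], latency_budget_s: float) -> list[Report]:
    """Return reports ordered so the most decision-relevant arrive first."""
    return sorted(reports, key=lambda r: qoi_score(r, latency_budget_s), reverse=True)

The point of the sketch is only that QoI attributes and network performance data feed a common ranking; the deployed application derives both from live measurements and mission-specific information requirements.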
The diverse sensor types and networking technologies commonly used in fielded sensor networks provide a unique set of challenges [1] in the areas of sensor identification, interoperability, and sensor data consumability. The ITA Sensor Fabric is a middleware infrastructure, developed as part of the International Technology Alliance (ITA) [2] in Network and Information Science, that addresses these challenges by providing unified access to, and management of, sensor networks. The Fabric spans the network from command and control, through forward operating bases, and out to mobile forces and fielded sensors, maximizing the availability and utility of intelligence information to users.
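A generic sketch of the unified-access idea follows; the class and method names are invented for illustration and do not reflect the actual ITA Sensor Fabric API:

from abc import ABC, abstractmethod
from typing import Callable

class SensorAdapter(ABC):
    """Hypothetical per-sensor adapter: wraps one sensor type or network
    technology behind a common interface."""
    @abstractmethod
    def sensor_id(self) -> str: ...
    @abstractmethod
    def subscribe(self, on_reading: Callable[[dict], None]) -> None: ...

class Fabric:
    """Minimal registry giving consumers one place to discover sensors and
    receive readings, regardless of the underlying transport."""
    def __init__(self) -> None:
        self._adapters: dict[str, SensorAdapter] = {}

    def register(self, adapter: SensorAdapter) -> None:
        self._adapters[adapter.sensor_id()] = adapter

    def discover(self) -> list[str]:
        return sorted(self._adapters)

    def subscribe(self, sensor_id: str, consumer: Callable[[dict], None]) -> None:
        self._adapters[sensor_id].subscribe(consumer)

The value of such a layer is that consumers at command and control see one discovery and subscription interface, while each adapter hides the identification and interoperability details of its particular sensor or network technology.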