Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141301 (2020) https://doi.org/10.1117/12.2572617
This PDF file contains the front matter associated with SPIE Proceedings Volume 11413, including the Title Page, Copyright Information, and Table of Contents.
Next Generation Sensor Systems and Applications Track Plenary Session
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141302 (2020) https://doi.org/10.1117/12.2564157
The future operational environment will be contested in all domains on an increasingly lethal and expanded battlefield, conducted in complex environments against challenged deterrence. In order to prevail in the Multi-Domain Operations (MDO) phases of dis-integration, exploitation, and return to competition, the Army will need to employ teams of highly dispersed warfighters and agents (robotic and software), to include Robotic Combat Vehicles (RCVs). To operate as a high-functioning team, Soldiers will need to be able to coordinate with RCVs as if they were teammates (i.e., fellow Soldiers) rather than tools (i.e., tele-operated robots capable of performing limited tasks). To enable this human-agent teamwork, the Artificial Intelligence for Maneuver and Mobility (AIMM) Essential Research Project (ERP) aims to revolutionize AI-enabled systems for autonomous maneuver that can rapidly learn, adapt, reason, and act in MDO. The program is divided into two main Lines of Effort (LoEs): Mobility and Context-Aware Decision Making (CADM). The Mobility LoE is focused on developing resilient autonomous off-road navigation for combat vehicles at operational speed so that they can autonomously move to a position of advantage. The CADM LoE is focused on enabling autonomous systems to reason about the environment for scene understanding, with the ability to incorporate multiple sources of information and quantify uncertainty. Ultimately, the Mobility and CADM LoEs will culminate in autonomous maneuver: the ability of unmanned vehicles to autonomously maneuver on the ground against a near-peer adversary within the MDO battlespace. This capability will enable autonomous vehicles to team with Soldiers more seamlessly (reducing Soldier cognitive burden), conduct reconnaissance to develop the enemy situation at standoff (creating options for the commander), and enable the next generation of combat vehicles to fight and win against a near-peer adversary.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141303 (2020) https://doi.org/10.1117/12.2572445
Proceedings 11413 on AI & ML for MDO Applications II: Introduction
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141304 (2020) https://doi.org/10.1117/12.2557967
In a defense landscape driven by increasing automation, larger operation scale, higher op-tempo, and tighter integrations across multiple domains, how do emerging advances in computing technology empower future defense concepts and operations? The paper overviews the notion of a multi-domain operations (MDO) effect loop as an organizing principle for military operations and information-driven decision processes. It then highlights recent advances in artificial intelligence, information theory, distributed sensing, and network optimization that significantly enhance the capabilities of different loop components, as illustrated by notional defense scenarios.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141305 (2020) https://doi.org/10.1117/12.2559126
Artificial Intelligence’s (AI) potential to augment or automate human decision making has caught the attention of many within the Department of Defense (DoD). Effective AI implementations, originating from well-scoped use cases, require the alignment of people, processes, and technologies. While many focus on the opportunities that these systems provide for leap-ahead improvements to the future state of DoD business processes and warfighting capabilities, the challenges that must be overcome should not be underestimated, and expectations should be managed appropriately with the understanding that achieving the desired end state will be a long-term endeavor. This paper provides a brief overview of some of the challenges that must be overcome in order to realize the opportunities presented by AI systems.
Brent J. Lance, Gabriella B. Larkin, Jonathan O. Touryan, Joe T. Rexwinkle, Steven M. Gutstein, Stephen M. Gordon, Osben Toulson, John Choi, Ali Mahdi, et al.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141306 (2020) https://doi.org/10.1117/12.2564515
The application of Artificial Intelligence and Machine Learning (AI/ML) technologies to Aided Target Recognition (AiTR) systems will significantly improve target acquisition and engagement effectiveness. However, the effectiveness of these AI/ML technologies depends on the quantity and quality of labeled training data, and there is very limited labeled operational data available. Creating this data is both time-consuming and expensive, and AI/ML technologies can be brittle and unable to adapt to changing environmental conditions or adversary tactics that are not represented in the training data. As a result, continuous operational data collection and labeling are required to adapt and refine these algorithms, but collecting and labeling operational data carries potentially catastrophic risks if it requires Soldier interaction that degrades critical task performance. Addressing this problem to achieve robust, effective AI/ML for AiTR requires a multi-faceted approach integrating a variety of techniques, such as generating synthetic data and using algorithms that learn from sparse and incomplete data. In particular, we argue that it is critical to leverage opportunistic sensing: obtaining the operational data required to train and validate AI/ML algorithms from tasks the operator is already doing, without negatively affecting performance on those tasks or requiring any additional tasks to be performed. By leveraging the Soldier’s substantial skills, capabilities, and adaptability, it will be possible to develop effective and adaptive AI/ML technologies for AiTR on the future Multi-Domain Operations (MDO) battlefield.
Alun D. Preece, Federico Cerutti, Dave Braines, Supriyo Chakraborty, Mani Srivastava, Tien Pham
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141307 (2020) https://doi.org/10.1117/12.2558571
Coalition situational understanding (CSU) is fundamental to support decision-making and autonomy in multi-domain operations involving multiple allied partners. Our work aims to advance the algorithms and techniques to develop CSU, addressing key scientific challenges in how different levels of representation, reasoning, and machine learning (ML) interact with one another to facilitate the flow of information and the management of uncertainty between coalition agents and services. The very existence of a coalition is contingent on the premise that the whole is greater than the sum of the parts, i.e., the shared model of the environment, acquired using the information learned, combined, and inferred from all the agents, is not only more complete but also more robust than local models. Specifically, two aspects of achieving this CSU vision are considered in this paper: (1) integrating learning and reasoning techniques for CSU, addressing the technical challenge of dealing with uncertainty in the
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141308 (2020) https://doi.org/10.1117/12.2559031
The paper highlights the need for methods of analytical science for multi-domain autonomy evaluation. Multidomain autonomous systems need to collect large amounts of data to verify, validate, test, and evaluate system operations. For multi-domain and uncertain scenarios, data sampling may not be adequate to fully explore and represent the entire trade space for verification and validation (V&V). However, leveraging methods from test and evaluation (T&E), a hierarchy of analytics can be developed so as to narrow the trade space. V&V and T&E efforts currently rely on statistics, but could benefit from first-principles physics-based theoretical analytics, data augmentation, and scenario design. The use of modeling is not new; however, because artificial intelligence and machine learning (AI/ML) analytics are designed to exploit data, there are opportunities to allow one domain (e.g., air) to support data analytics in another domain (e.g., cyber).
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141309 (2020) https://doi.org/10.1117/12.2559990
Multidomain sensor data processing and fusion provide reliable means of achieving situational awareness in multidomain operations and have received great attention in both industry and academia. However, such processing and fusion are complicated to implement due to the varied modalities and high complexity of sensor data. In the study of network dynamics, graph theory is used to represent complex data and extract information, and graph evolution is applied to analyze how networks change over time. In this paper, combining the technologies of these two different domains, we propose using network dynamics to process and fuse multidomain sensor data for multidomain operations. First, we propose a graph-theory-based framework for multidomain sensor data processing and fusion. Then we apply this general framework to multidomain sensor data processing. Using one-dimensional radio frequency (RF) signal processing, two-dimensional image processing, and three-dimensional light detection and ranging (LIDAR) data analytics as examples, we demonstrate that with the proposed method, the same architecture can be used to extract critical features for these three types of sensor data. Furthermore, experiments also show that the proposed method achieves higher performance than traditional methods.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130A (2020) https://doi.org/10.1117/12.2558727
In multi-domain operations, different domains receive different modalities of input signals and, as a result, end up training different models for the same decision-making task. The input modalities may overlap with each other, which means that models created in one domain may be partially reusable for tasks being conducted in other domains. In order to share the knowledge embedded in different models trained independently in each individual domain, we propose the concept of hybrid policy-based ensembles, in which the heterogeneous models from different domains are combined into an ensemble whose operations are controlled by policies specifying which subset of the models ought to be used for an operation. We show how these policies can be expressed based on properties of training datasets, and discuss the performance of these hybrid policy-based ensembles on a dataset used for training network intrusion detection models.
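A minimal sketch of the policy-controlled ensemble idea, assuming a hypothetical set of per-domain intrusion-detection models and a policy keyed on properties of each model's training data (here, which feature groups it saw); the names, policy rule, and stub models are illustrative, not the paper's implementation.

```python
# Illustrative sketch (not the authors' implementation): a hybrid
# policy-based ensemble that selects which per-domain models to
# consult based on properties of the incoming sample.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DomainModel:
    name: str
    trained_features: set           # feature groups seen during training
    predict: Callable[[dict], int]  # returns 0 (benign) or 1 (intrusion)


def policy(sample: dict, models: List[DomainModel]) -> List[DomainModel]:
    """Policy: use only models whose training data covered the
    feature groups present in this sample."""
    present = set(sample.keys())
    chosen = [m for m in models if m.trained_features <= present]
    return chosen or models  # fall back to the full ensemble


def ensemble_predict(sample: dict, models: List[DomainModel]) -> int:
    votes = [m.predict(sample) for m in policy(sample, models)]
    return int(sum(votes) >= len(votes) / 2)  # majority vote


# Hypothetical usage with stub models for two domains.
net_model = DomainModel("network", {"flow", "ports"}, lambda s: int(s["flow"] > 0.8))
host_model = DomainModel("host", {"syscalls"}, lambda s: int(s["syscalls"] > 100))
print(ensemble_predict({"flow": 0.9, "ports": 3}, [net_model, host_model]))
```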
Learning and Reasoning with Small Data Samples, Dirty Data, High Clutter, and Deception
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130B (2020) https://doi.org/10.1117/12.2560357
Successful performance of machine learning approaches for object classification requires training with data sets that are good representations of actual field data. Most open source image databases, while large in size, are not representative of the types of scenes encountered in Army ground missions. The CCDC Army Research Laboratory hosts datasets, some collected recently and some a few years ago, that focus on Army scenarios and are thus an appropriate source of training data for defense applications. This paper presents examples of several of these datasets along with the conditions of their availability to external research collaborators.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130C (2020) https://doi.org/10.1117/12.2557514
It is envisioned that significant improvements in medical capabilities may be required to meet the formidable conditions expected in future military conflicts and global events such as the COVID-19 pandemic. Similar challenges may exist for large-scale humanitarian assistance missions and civilian mass casualty events that do not conform to prior assumptions for care delivery, including evacuation within the golden hour and the availability of large medical footprints in non-traditional and field settings. The importance of standardization and foundational infrastructure for medical devices, sensors, and data management is presented in order to achieve safe and effective medical systems that deliver dramatic advances in functionality made possible by Artificial Intelligence and Machine Learning (AI/ML). The concept of autonomous, artificial-intelligence-based learning systems for medical support in military Multi-Domain Operations (MDO) to meet evolving demands is presented. Drivers toward greater use of Artificial Intelligence (AI) and medical autonomy to solve anticipated gaps in forward resuscitative and stabilization care, as well as the associated relevance and implications for the management of civilian disasters, are introduced. Finally, the central role of application architecture and robust technology frameworks necessary to advance the state of the science is discussed.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130D (2020) https://doi.org/10.1117/12.2557543
Building upon the possibilities of technologies like big data analytics, representational models, machine learning, semantic reasoning, and augmented intelligence, the work presented in this paper, performed within the collaborative research project MAGNETO (Technologies for prevention, investigation, and mitigation in the context of the fight against crime and terrorism), co-funded by the European Commission within the Horizon 2020 programme, supports Law Enforcement Agencies (LEAs) in their critical need to exploit all available resources and to handle the large amount of diversified media modalities to effectively carry out criminal investigations. The paper at hand focuses on the application of machine learning solutions and reasoning tools, even with only small data samples. Because the MAGNETO tools have to operate on highly sensitive data from criminal investigations, the data samples provided to the tool developers were small, scarce, and often not correlated, and the project team had to overcome these drawbacks. The developed reasoning tools are based on the MAGNETO ontology and knowledge base and enable LEA officers to uncover derived facts that are not expressed explicitly in the knowledge base, as well as discover new knowledge of relations between different objects and items of data. Two reasoning tools have been implemented: a probabilistic reasoning tool based on Markov Logic Networks and a logical reasoning tool. The design of the tools and their interfaces is presented, as well as the results provided by the tools when applied to operational use cases.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130E (2020) https://doi.org/10.1117/12.2552796
Recovering data from high amounts of loss and corruption would be useful for a wide variety of civilian and military applications. Highly corrupted data (e.g., speech and images) have been less studied than lightly corrupted data, but recovering them would be advantageous for applications such as low-light imagery and weak signal reception in acoustic sensing and radio communication. Unlike milder signal corruptions, resolving strong noise interference may require a more robust approach than simply removing predictable noise, namely actively looking for the expected signal, a type of problem well suited to machine learning. In this work, we evaluate a variant of the U-net autoencoder neural network topology for the difficult task of denoising highly corrupted images and English speech when noise floors are 2-10x stronger than the clean signal. We test our methods on corruptions including additive white Gaussian noise and channel dropout.
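A minimal PyTorch sketch of the kind of encoder-decoder-with-skip-connection ("U-net style") denoiser the paper evaluates, trained on signals corrupted with additive white Gaussian noise several times stronger than the clean signal; the tiny layer sizes, random stand-in data, and noise scale are placeholders, not the authors' configuration.

```python
# Sketch only: a tiny U-net-style denoising autoencoder for 1-channel
# inputs, trained on heavily corrupted data (noise ~2-10x signal scale).
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU())
        self.out = nn.Conv2d(32, 1, 3, padding=1)  # 32 = 16 decoded + 16 skip

    def forward(self, x):
        e1 = self.enc1(x)                 # full-resolution features
        e2 = self.enc2(e1)                # downsampled features
        d1 = self.dec1(e2)                # upsample back
        return self.out(torch.cat([d1, e1], dim=1))  # skip connection

model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(8, 1, 64, 64)                  # stand-in clean images
noisy = clean + 3.0 * torch.randn_like(clean)     # noise ~3x signal scale
loss = nn.functional.mse_loss(model(noisy), clean)  # learn noisy -> clean
loss.backward()
opt.step()
```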
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130F (2020) https://doi.org/10.1117/12.2556819
In Defence and Security, we are often interested in rare events and occurrences for which we only have a few examples. This presents a problem for traditional machine learning approaches, which typically require thousands of examples per class to learn an effective classifier. Few-shot learning techniques use tasks comprising a small set of labelled images and a novel, unlabelled example to classify the novel example. These are typically considered as N-way k-shot problems, where we have N distinct classes with only k labelled examples per class. At the Defence Science and Technology Laboratory (Dstl) in the UK, we are looking to understand the application of few-shot learning techniques to Defence and Security problems, particularly on imagery datasets. In this paper we discuss the application of few-shot learning approaches from the literature to Defence and Security problems, and discuss meta-learning, one of the key types of approach to few-shot learning. We also present experimentation on meta-learning models, baselined against a transfer-learned ResNet, across a range of few-shot tasks with differing data proportions on the miniImageNet and Caltech-UCSD Birds datasets. This experimentation aims to improve our understanding of the behaviour of these few-shot models on a range of data-limited problems. We identify a number of challenges that require further research before few-shot approaches can be effectively applied to Defence and Security problems, as well as further research that could increase the range of Defence and Security problems to which few-shot approaches could be applied.
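To make the N-way k-shot setup concrete, the sketch below samples an episode and classifies each query by its nearest class prototype (a prototypical-network-style baseline); the random stand-in embeddings and class structure are hypothetical, and this is an illustration of the problem framing rather than Dstl's experimental code.

```python
# Illustrative N-way k-shot episode with a nearest-prototype classifier.
# Embeddings are random stand-ins for features from a trained backbone.
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(features_by_class, n_way=5, k_shot=1):
    classes = rng.choice(list(features_by_class), size=n_way, replace=False)
    support, query = {}, {}
    for c in classes:
        idx = rng.permutation(len(features_by_class[c]))
        support[c] = features_by_class[c][idx[:k_shot]]  # k labelled shots
        query[c] = features_by_class[c][idx[k_shot]]     # one held-out query
    return support, query

def classify(support, query_vec):
    # class prototype = mean of its k support embeddings
    protos = {c: v.mean(axis=0) for c, v in support.items()}
    return min(protos, key=lambda c: np.linalg.norm(query_vec - protos[c]))

# Hypothetical data: 20 classes, 10 examples each, 64-d embeddings.
data = {c: rng.normal(size=(10, 64)) + c for c in range(20)}
support, query = sample_episode(data, n_way=5, k_shot=1)
correct = sum(classify(support, q) == c for c, q in query.items())
print(f"episode accuracy: {correct}/{len(query)}")
```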
AI-enabled Situation Awareness and Context Aware Decision Making
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130G (2020) https://doi.org/10.1117/12.2558804
One of the key factors affecting any multi-domain operation concerns the influence of unorganized militias, which may often counter a more advanced adversary by means of terrorist incidents. In order to ensure the achievement of strategic objectives, the actions and influence of such violent activities need to be taken into account. However, in many cases, full information about the incidents that may have affected civilians and non-government organizations is hard to determine. In situations of asymmetric warfare, or when planning a multi-domain operation, the identity of the perpetrators themselves may often not be known. In order to support a coalition commander's mandate, one could use AI/ML techniques to provide the missing details about incidents in the field which may only be partially understood or analyzed. In this paper, we examine the goal of predicting the identity of the perpetrator of a terrorist incident using AI/ML techniques on historical data, and discuss how well the AI/ML models can work to help clean the data available to the commander for data analysis.
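As a hedged illustration of the kind of model the paper studies, the sketch below trains an off-the-shelf classifier to predict a perpetrator group from coded incident attributes; the feature names, synthetic records, and label rule are placeholders, not the historical dataset or model used in the paper.

```python
# Sketch: predicting a perpetrator group from coded incident features.
# Features and data are synthetic placeholders, not real incident records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.integers(0, 10, n),   # region code
    rng.integers(0, 8, n),    # attack type
    rng.integers(0, 6, n),    # target type
    rng.integers(0, 4, n),    # weapon type
])
# Synthetic labels loosely tied to region so the model has signal to learn.
y = (X[:, 0] // 3 + rng.integers(0, 2, n)) % 4

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```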
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130H (2020) https://doi.org/10.1117/12.2560494
During the Course of Action (COA) Analysis stage of the Military Decision Making Process (MDMP), staff members wargame the options of both friendly and enemy forces in an action-reaction-counteraction cycle to expose and address potential issues. This is currently a manual, subjective process, so many assumptions often go untested and only a very small number of alternative COAs may be considered. The final COA that is produced might miss opportunities or overlook risks. This challenge will only be exacerbated during Multi-Domain Operations (MDO), in which larger numbers of entities are expected to coordinate across domains to achieve converged effects within compressed timelines. This paper describes a prototype wargaming software support tool that leverages Artificial Intelligence (AI) to recommend COA improvements to commanders and staff. The tool’s design accounts for operational realities including a lack of available AI training data, limited tactical computing resources, and a need for end user interaction throughout the COA Analysis process. Given initial COAs for friendly and enemy forces, the tool searches for improvements by repeatedly proposing changes to the friendly COA and running the Data Analysis and Visualization INfrastructure for C4ISR (DAVINCI) combat simulation to evaluate them. Runtime is managed by carefully restricting the search space of the AI to only consider doctrinally relevant changes to the COA. The system architecture is designed to separate the AI, the simulation, and the user interface, simplifying continued experimentation and enhancements. The design of the AI-enabled wargaming tool is presented along with initial results.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130I (2020) https://doi.org/10.1117/12.2558889
Different approaches have been proposed in the literature for anomaly detection in images and video. Traditional methods such as trajectory-based or spatio-temporal techniques rely on hand-crafted features; occlusion problems and high complexity in crowded scenes are the vital drawbacks of these methodologies. Deep learning structures have recently proved useful for defining effective solutions for anomaly detection, where high-level features are learnt and selected automatically. However, block-wise methods such as CNNs are computationally slow and are fully supervised learning methodologies, while video-based anomaly detection is an unsupervised problem. Autoencoders (convolutional AE, variational AE, etc.) can be considered as an alternative option. This paper presents a state-of-the-art deep learning algorithm to be applied to such an unsupervised problem. Using the basic concepts behind the autoencoder as a well-known unsupervised learning algorithm, we propose a novel methodology to detect and localize anomalies in a video scene. The presented network is trained on the normal patterns during the training phase. The proposed structure enables the system to capture the 2D structure in image sequences during the learning process. The working hypothesis is that a deep network is able to learn normal events in videos, and, therefore, the difference between normal and anomalous frames can be used to devise an anomaly score. Simulation results on well-known data sets such as UCSD confirm that the proposed methodology achieves high performance in terms of accuracy and total processing time compared with counterpart approaches.
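A minimal sketch of the reconstruction-error idea behind autoencoder-based video anomaly detection: train on normal frames only, then score new frames by how poorly they are reconstructed. The tiny architecture, random stand-in frames, and scoring function below are placeholders, not the paper's network.

```python
# Sketch: convolutional autoencoder trained on "normal" frames;
# anomaly score = per-frame reconstruction error.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

model = ConvAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal_frames = torch.rand(16, 1, 64, 64)          # stand-in normal data
for _ in range(5):                                  # brief training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal_frames), normal_frames)
    loss.backward()
    opt.step()

def anomaly_score(frame):                           # higher = more anomalous
    with torch.no_grad():
        recon = model(frame)
    return nn.functional.mse_loss(recon, frame).item()

print(anomaly_score(torch.rand(1, 1, 64, 64)))
```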
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130J (2020) https://doi.org/10.1117/12.2556746
Situational awareness (SA) is defined as the perception of environmental elements within a situation, the comprehension of their meaning, and the projection of their future status. Here, perception is the representation of sensory information that includes multi-typed interacting objects forming knowledge graphs. A typical problem in artificial intelligence (AI) research is to learn representations of objects that preserve structural information in knowledge graphs (KGs). Existing methods assume an AI agent has a complete knowledge graph and that any kind of prediction can be made accurately by a single AI. However, the real world needs multiple AI agents (e.g., warfighters, citizens) to collectively make a prediction. Each AI has a different, incomplete, and noisy view of the knowledge graph. In this work, I present a novel approach to improve representation learning, and thus SA, with collective AI over KGs. I present the approach in four parts. First, I introduce knowledge graphs and their heterogeneous nature with multiple real-world examples. Second, I discuss four ideas for making predictions with collective AI: prediction ensembles, data aggregation, representation aggregation, and joint representation learning. Third, I describe two state-of-the-art models for learning object representations from heterogeneous graphs: one is path-based embedding and the other is a graph neural network (GNN). Lastly, I present a new GNN framework that jointly learns object representations from multiple agents. Experimental results demonstrate that collective AI performs significantly better than individual AI. As future work, I discuss federated learning, which may improve the security and privacy of the framework; this is quite necessary when any type of sharing (e.g., raw data, object representations, or the learning process) is sensitive.
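A small sketch of the "representation aggregation" idea: each agent runs a graph-convolution layer on its own noisy, incomplete view of a shared graph, and the per-node embeddings are then averaged across agents. The single-layer GCN, random graphs, and edge-dropping noise model are illustrative assumptions, not the paper's framework.

```python
# Sketch: each agent embeds nodes from its partial view of a shared
# graph with one GCN layer; embeddings are averaged across agents.
import torch

def gcn_layer(adj, feats, weight):
    a_hat = adj + torch.eye(adj.size(0))            # add self-loops
    d_inv_sqrt = torch.diag(a_hat.sum(1).pow(-0.5)) # symmetric normalization
    return torch.relu(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight)

n_nodes, n_feats, n_agents = 12, 8, 3
torch.manual_seed(0)
true_adj = (torch.rand(n_nodes, n_nodes) > 0.7).float()
true_adj = ((true_adj + true_adj.T) > 0).float()    # symmetrize
feats = torch.rand(n_nodes, n_feats)
weight = torch.rand(n_feats, 4)

agent_embeddings = []
for _ in range(n_agents):
    # each agent observes the graph with some edges randomly dropped
    mask = (torch.rand(n_nodes, n_nodes) > 0.2).float()
    local_adj = true_adj * mask * mask.T
    agent_embeddings.append(gcn_layer(local_adj, feats, weight))

collective = torch.stack(agent_embeddings).mean(0)  # aggregated representation
print(collective.shape)  # torch.Size([12, 4])
```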
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130K (2020) https://doi.org/10.1117/12.2557518
The need for immediate situational awareness updates in a military environment can be partially met by employing machine learning (ML) at the edge of the network, where the warfighter operates. Technical challenges for edge computing, like limited power and data, require unique hardware and software implementations for viable solutions. Low-power neuromorphic processors running radial basis function artificial neural networks (RBFNNs) make ML at the edge more practical but can limit data throughput. This power and data limitation can be moderated by preprocessing the input space to magnify the most pertinent data features. This paper presents a framework for evaluating different input space paradigms in a systematic manner. Using a representative small dataset for a pyroshock event, common in the military environment, several input preprocessing paradigms are evaluated. The correlation across the dataset between the number of neurons and the inference accuracy of the RBFNN has a p-value of 1 x 10^-7.
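To illustrate the RBFNN classifier type the paper targets (the kind implemented on some low-power neuromorphic processors), the sketch below scores a preprocessed feature vector by Gaussian distance to stored prototype neurons; the prototypes, widths, labels, and input features are all hypothetical placeholders.

```python
# Sketch: radial basis function network classification of a
# preprocessed feature vector against stored prototype neurons.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical neurons: prototype vectors, widths, and class labels.
prototypes = rng.normal(size=(6, 16))
widths = np.full(6, 2.0)
labels = np.array([0, 0, 1, 1, 2, 2])

def rbf_classify(x):
    # Gaussian activation of each neuron, then the class with largest total.
    acts = np.exp(-np.sum((prototypes - x) ** 2, axis=1) / (2 * widths ** 2))
    classes = np.unique(labels)
    scores = np.array([acts[labels == c].sum() for c in classes])
    return classes[np.argmax(scores)], acts

x = rng.normal(size=16)               # stand-in preprocessed input features
pred, activations = rbf_classify(x)
print("predicted class:", pred)
```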
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130L (2020) https://doi.org/10.1117/12.2558553
Recently, we introduced a state-of-the-art object detection approach referred to as Multi-Expert R-CNN (ME R-CNN) that featured multiple expert classifiers, each responsible for recognizing objects with distinctive geometrical features. The ME R-CNN architecture consists of multiple components: a shared convolutional network, Multi-Expert classifiers (ME), and an Expert Assignment Network (EAN). Both ME and EAN take as a common input the output of the convolutional network and also use each other's output during training. Thus, it is quite challenging to properly train all the components simultaneously to globally optimize the network parameters. The main innovation of the proposed work is to optimize the entire architecture by using a novel training strategy in which a manually associated 'RoI-to-expert' mapping is used instead of the direct output of ME for training EAN. Our experiments show that the proposed training strategy speeds up training by at least 4.2x while maintaining comparable object detection accuracy.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130N (2020) https://doi.org/10.1117/12.2559799
In order to scale for speed, technology often builds upon the earliest proven systems and architectures. As the context changes from a civilian application domain to a military application domain, the priority of functional requirements can and often does change. The hardware, software, and language development environment set the foundation for the constraints and potential of a system. This, along with the fact that the information technology revolution since the early 2000s has been driven primarily by the commercial sector, requires engineers to consider whether nontraditional, less well-known architectures may have a role in the Multi-Domain Operations (MDO) application space. This paper highlights features inherent to traditional architectures, the challenges associated with these architectural features, and how the Erlang VM represents an opportunity to develop an architectural foundation suitable to the MDO application domain. Finally, this paper highlights a future technology concept integrating demonstrated neural interface technology with an Erlang VM supported architecture. This foundation will help enable human-machine teaming by empowering a human agent to interact with sensors and AI-enabled autonomous systems through a dynamic user interface, allowing the human agent to accomplish MDO applications. The concept's great potential depends on a fault-tolerant, distributed system, made possible by the Erlang VM, that can flexibly integrate the capabilities required to address the diverse challenges of a complex operating environment.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130O (2020) https://doi.org/10.1117/12.2558275
Vehicle detection in aerial imagery is still an open research challenge, although there have been some breakthroughs in the computer vision research community. Most existing state-of-the-art vehicle detection algorithms have failed to consider some major factors that can greatly influence the detection task. The low-resolution characteristic of aerial images is one of those major factors. Super-resolution techniques, which learn a mapping between low-resolution (LR) images and their corresponding high-resolution (HR) counterparts, can mitigate this problem; however, the problem remains when detection needs to take place at night or in a dark environment. RGB-based detection is therefore another vital problem, specifically for detection in dark environments. For such environments, infrared (IR) imaging becomes necessary, yet IR data may not be available for training an IR detector. To address these challenges, we propose a joint cross-modal and super-resolution framework based on the Generative Adversarial Network (GAN) for vehicle detection in aerial images. Our proposed joint network consists of two deep sub-networks. The first sub-network utilizes the GAN architecture to generate super-resolved (SR) images across two different domains (cross-domain translation). The second sub-network performs detection on these cross-domain translated and super-resolved images using one of the state-of-the-art object detectors, You Only Look Once version 3 (YOLOv3). To evaluate the efficacy of our proposed model, we conduct several experiments on the publicly available Vehicle Detection in Aerial Imagery (VEDAI) dataset. We further compare our proposed network with state-of-the-art image generation methods to show the adequacy of our model.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130P (2020) https://doi.org/10.1117/12.2558255
In machine learning, backdoor or trojan attacks during model training can cause the targeted model to deceptively learn to misclassify in the presence of specific triggers. This mechanism of deception enables the attacker to exercise full control over when the model behavior becomes malicious through use of a trigger. In this paper, we introduce Epistemic Classifiers as a new category of defense mechanism and show their effectiveness in detecting backdoor attacks; they can be used to trigger default mechanisms, or to solicit human intervention, on occasions where an untrustworthy model prediction could adversely impact the system within which it operates. We show experimental results on multiple public datasets and use visualizations to explain why the proposed approach is effective. This empowers the warfighter to trust AI at the tactical edge to be reliable, and to become sensitive to scenarios with deception and noise where reliability cannot be provided.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130Q (2020) https://doi.org/10.1117/12.2558747
Adversarial machine learning is concerned with the study of the vulnerabilities of machine learning techniques to adversarial attacks and potential defenses against such attacks. Intrinsic vulnerabilities and incongruous, often suboptimal defenses are both rooted in the standard assumption upon which machine learning methods have been developed: the assumption that data are independent and identically distributed (i.i.d.) samples, which implies that training data are representative of the general population. Thus, learning models that fit the training data accurately would perform well on test data from the rest of the population. Violations of the i.i.d. assumption characterize the challenges of detecting and defending against adversarial attacks. For an informed adversary, the most effective attack strategy is to transform malicious data so that they appear indistinguishable from legitimate data to the target model. Current developments in adversarial machine learning suggest that the adversary can easily gain the upper hand in this arms race, since the adversary only needs to make a local breakthrough against a stationary target while the target model struggles to extend its predictive power to the general population, including the corrupted data. This fundamental cause of stagnation in effective defense against adversarial attacks suggests developing a moving target defense for machine learning models to gain greater robustness. We investigate the feasibility and effectiveness of employing randomization to create a moving target defense for deep neural network learning models. Randomness is introduced by randomizing the input and adding small random noise to the learned parameters. An extensive empirical study is performed, covering different attack strategies and defense/detection techniques against adversarial attacks.
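A minimal sketch of the randomization idea: at each query, apply a random input transformation and perturb the learned parameters with small noise before predicting, so the attacker never faces exactly the same model twice. The jitter transform, noise scale, and untrained toy network are illustrative choices, not the paper's configuration.

```python
# Sketch: randomized ("moving target") inference for a neural classifier.
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

def randomized_predict(x, weight_noise=0.01, shift_scale=0.05):
    # 1) random input transformation (small additive jitter here)
    x = x + shift_scale * torch.randn_like(x)
    # 2) small random noise added to a copy of the learned parameters
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(weight_noise * torch.randn_like(p))
        return noisy(x).argmax(dim=1)

x = torch.randn(4, 32)
print(randomized_predict(x))   # predictions vary slightly from run to run
```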
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130R (2020) https://doi.org/10.1117/12.2559319
Cyber resilience usually refers to the ability of an entity to detect, respond to, and recover from cybersecurity attacks to the extent that the entity can continuously deliver the intended outcome despite their presence. This paper presents a method and system for providing cyber resilience by integrating autonomous adversary and defender agents, deep reinforcement learning, and graph thinking. Specifically, the proposed cyber resilience system first predicts current and future adversary activities and then provides automated critical asset protection and recovery by enabling agents to take appropriate reactive and proactive actions to prevent and mitigate adversary activities. In particular, the automated cyber resilience system’s adversary agent makes it possible for cybersecurity adversary activities, patterns, and intentions to be identified and tracked more accurately and dynamically, based on preprocessed cybersecurity measurements and observations. The automated system’s defender agent is designed to determine and execute cost-effective defensive actions against the adversary activities and intentions predicted by the adversary agent. The adversary and defender agents employ deep reinforcement learning to play a zero-sum, observation-aware stochastic game. The experiment results show that the agents perform their tasks efficiently, as the adversary agent is dynamically provided with input data of infected asset predictions.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130S (2020) https://doi.org/10.1117/12.2548798
To systematically understand the effects of vulnerabilities introduced by AI/ML-enabled Army Multi-Domain Operations, we provide an overview of the characterization of ML attacks, with an emphasis on black-box vs. white-box attacks. We then study a system and attack model for Army MDO applications and services, and introduce the roles of stakeholders in this system. We show, in various attack scenarios and under different levels of knowledge of the deployed system, how peer adversaries can employ deceptive techniques to defeat algorithms, and how the system should be designed to minimize the attacks. We demonstrate the feasibility of our approach in a cyber threat intelligence use case. We conclude with a path forward for design and policy recommendations for robust and secure deployment of AI/ML applications in Army MDO environments.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130T (2020) https://doi.org/10.1117/12.2557313
Contributions of network analyses and neuroscience to the design of a system of heterogeneous deployable sensors for multi-domain operations are explored. The work addresses the configuration of lines of communication to more effectively transfer information from deployed remote sensor systems back to human decision-makers. These are our initial attempts to craft a framework to guide the creation of robotic swarm networks deployed to gather sensor data for intelligence preparation of the battlefield. The work proposes that if the sensing swarm’s main function is to gather sensor information to relay back to analysts and decision makers, the best analogy is that of a biological nervous system. The swarm acts as a perceptual system, with drones as the “eyes” of the system and the analysts as the “brain.” Network science also offers vocabulary and concepts for understanding parameters that can be thought to reflect characteristics and performance of swarms of sensors. Using the program ORA (Carnegie Mellon University), a series of models with 44, 60, 200, and 250 entity agents was randomly generated in common network configurations (e.g., small world, core-periphery). In addition, deliberately designed networks were created to reflect system redundancies and data fusion. These possible swarm communication configurations were compared on operationally relevant characteristics and predicted performance (e.g., bandwidth required, resilience). Substantial differences were observed in characteristics and predicted performance among the candidate configurations. These types of parameters could then be used to guide the development of requirements and the testing and evaluation of entities making up sensing drone swarms.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130U (2020) https://doi.org/10.1117/12.2559474
With advances in machine learning, autonomous agents are increasingly able to navigate uncertain operational environments, as is the case within the multi-domain operations (MDO) paradigm. When teaming with humans, autonomous agents may flexibly switch between passive bystander and active executor depending on the task requirements and the actions being taken by partners (whether human or agent). In many tasks, it is possible that a well-trained agent's performance will exceed that of a human, in part because the agent's performance is less likely to degrade over time (e.g., due to fatigue). This potential difference in performance might lead to complacency, which is a state defined by over-trust in automated systems. This paper investigates the effects of complacency in human-agent teams, where agents and humans have the same capabilities in a simulated version of the predator-prey pursuit task. We compare subjective measures of the human's predisposition to complacency and trust using various scales, and we validate their beliefs by quantifying complacency through various metrics associated with the actions taken during the task with trained agents of varying reliability levels. By evaluating the effect of complacency on performance, we can attribute a degree of variation in human performance in this task to complacency. We can then account for an individual human's complacency measure to customize their agent teammates and human-in-the-loop requirements (either to minimize or compensate for the human's complacency) to optimize team performance.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130V (2020) https://doi.org/10.1117/12.2557573
This study explores two hypotheses about human-agent teaming: 1. Real-time coordination among a large set of autonomous robots can be achieved using predefined "plays" which define how to execute a task, and "audibles" which modify the play on the fly. 2. A spokesperson agent can serve as a representative for a group of robots, relaying information between the robots and human teammates. These hypotheses are tested in a simulated game environment: a human participant leads a search-and-rescue operation to evacuate a town threatened by an approaching wildfire, with the object of saving as many lives as possible. The participant communicates verbally with a virtual agent controlling a team of ten aerial robots and one ground vehicle, while observing a live map display with real-time location of the fire and identified survivors. Since full automation is not currently possible, two human controllers control the agent's speech and actions, and input parameters to the robots, which then operate autonomously until the parameters are changed. Designated plays include monitoring the spread of fire, searching for survivors, broadcasting warnings, guiding residents to safety, and sending the rescue vehicle. A successful evacuation of all the residents requires personal intervention in some cases (e.g., stubborn residents) while delegating other responsibilities to the spokesperson agent and robots, all in a rapidly changing scene. The study records the participants' verbal and nonverbal behavior in order to identify strategies people use when communicating with robotic swarms, and to collect data for eventual automation.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130W (2020) https://doi.org/10.1117/12.2558667
Real-time exchange of information amongst a team of soldiers is critical to the success of their mission in a battlefield environment. The soldiers may not have a direct Line-of-Sight (LoS) between them in places with geographical separations and obstacles. A team of robots can readily be used in these scenarios to act as relays and facilitate real-time exchange of information between the soldiers. If there is no direct LoS between a pair of soldiers, the robots can be strategically placed and moved to act as a communication gateway between them. This article addresses the problem of placing a minimum number of robots in the environment such that any pair of soldiers can exchange information either through direct LoS or through a series of relay robots. This problem is challenging, even in known environments, as the shapes of the obstacles can be non-convex. We first show that this optimization problem is closely related to several NP-hard problems. We then present fast heuristics that can find good feasible solutions to the problem. The heuristics sample the environment for potential robot locations and solve a connected set-cover problem to find a subset of locations that can provide the desired connectivity. The performance of the proposed heuristics is tested on a large number of problem instances generated by varying the number and placement of soldiers and the shapes of the obstacles.
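A simplified sketch of the sampling-plus-greedy-set-cover idea: candidate relay locations are sampled, each candidate "covers" the soldier pairs it can connect directly (both soldiers in line of sight of it), and relays are added greedily until no further pairs can be covered. Real instances also require multi-hop relay-to-relay connectivity and an obstacle-aware LoS model, which the toy range-only visibility predicate below omits; all coordinates and ranges are hypothetical.

```python
# Simplified sketch: greedy set cover over sampled relay locations.
# has_los() is a toy stand-in for a real obstacle-aware LoS test.
import itertools
import math
import random

random.seed(0)
soldiers = [(0, 0), (9, 0), (0, 9), (9, 9)]
candidates = [(random.uniform(0, 9), random.uniform(0, 9)) for _ in range(200)]

def has_los(a, b, max_range=6.0):
    # toy visibility: within radio/visual range (no obstacle model here)
    return math.dist(a, b) <= max_range

# soldier pairs that lack direct LoS and therefore need relaying
pairs = [p for p in itertools.combinations(range(len(soldiers)), 2)
         if not has_los(soldiers[p[0]], soldiers[p[1]])]

def covered_pairs(relay):
    return {p for p in pairs
            if has_los(relay, soldiers[p[0]]) and has_los(relay, soldiers[p[1]])}

uncovered, chosen = set(pairs), []
while uncovered:
    best = max(candidates, key=lambda r: len(covered_pairs(r) & uncovered))
    gain = covered_pairs(best) & uncovered
    if not gain:
        break                      # remaining pairs would need multi-hop relays
    chosen.append(best)
    uncovered -= gain
print(f"{len(chosen)} relays placed, {len(uncovered)} pairs still uncovered")
```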
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130X (2020) https://doi.org/10.1117/12.2559296
This paper describes a set of shared-control technical tools for intelligent teaming of a human operator and a robotic manipulator, and provides an experimental evaluation of those tools. These tools provide the human operator with an intuitive operational interface to command a multiple degree-of-freedom robotic manipulator via a general-purpose game controller, and they provide intelligent assistance to the operator through human intent prediction and shared control. Two cameras mounted on the robot end-effector capture visual information and identify the location of the target object, from which a candidate automatic control input for adjusting the robot motion is generated. A method to predict the operator’s intent by comparing the human input and the automatic control input is introduced; it generates a probability with which the two inputs are dynamically combined to command the robot through human-machine shared control. An experimental platform consisting of a six degree-of-freedom industrial robot with a gripper and cameras is employed; results are provided for a human operator commanding the robot manipulator to execute an object inspection and handling task with the assistance of the proposed intelligent teaming algorithm.
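A minimal sketch of the shared-control blend: an agreement measure between the human's joystick command and the vision-derived automatic command is mapped to a blending weight, and the commanded velocity is their weighted combination. The cosine-agreement mapping, sharpness parameter, and example commands are illustrative assumptions, not the paper's intent-prediction method.

```python
# Sketch: blending human and automatic end-effector velocity commands
# with a weight derived from how well the two inputs agree.
import numpy as np

def blend(u_human, u_auto, sharpness=2.0):
    # agreement in [0, 1]: 1 when the commands point the same way
    denom = np.linalg.norm(u_human) * np.linalg.norm(u_auto)
    agreement = 0.5 * (1 + np.dot(u_human, u_auto) / denom) if denom > 0 else 0.0
    alpha = agreement ** sharpness        # weight given to the automatic input
    return (1 - alpha) * u_human + alpha * u_auto, alpha

u_human = np.array([0.10, 0.02, 0.00])    # operator's joystick command (m/s)
u_auto = np.array([0.08, 0.05, 0.01])     # vision-based command toward target
u_cmd, alpha = blend(u_human, u_auto)
print("blend weight:", round(float(alpha), 3), "commanded velocity:", u_cmd)
```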
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130Y (2020) https://doi.org/10.1117/12.2557823
This paper describes our current multi-agent reinforcement learning concepts to complement or replace classic operational planning techniques. A neural planner is used to generate many possible paths. Training the neural planner is a one-time task that uses a physics-based model to create the training data. The outputs of the neural planner are achievable paths. The path intersections are represented as decision waypoint nodes in a graph, and the graph is interpreted as a Markov Decision Process (MDP). Training multi-agent reinforcement learning algorithms on the resulting MDP is much faster than training over non-discretized spaces because only high-level decision waypoints are considered. The technique is applicable to multiple domains, including the air, space, land, sea, and cyber-physical domains.
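Because the decision-waypoint graph is a small discrete MDP, standard dynamic-programming tools apply directly. The Python sketch below runs value iteration on an invented four-node waypoint graph with edge traversal costs; the graph, rewards, and discount factor are purely illustrative.

# adjacency: waypoint -> list of (next_waypoint, traversal_cost)
graph = {0: [(1, 2.0), (2, 5.0)], 1: [(3, 2.0)], 2: [(3, 1.0)], 3: []}
goal, gamma = 3, 0.95
V = {n: 0.0 for n in graph}

for _ in range(100):                                  # value iteration
    for n, edges in graph.items():
        if n == goal or not edges:
            continue
        V[n] = max(-cost + gamma * V[m] for m, cost in edges)

# greedy policy: at each waypoint, move to the neighbor with the best backed-up value
policy = {n: max(edges, key=lambda e: -e[1] + gamma * V[e[0]])[0]
          for n, edges in graph.items() if edges and n != goal}
print(V, policy)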
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130Z (2020) https://doi.org/10.1117/12.2557978
In wireless sensor networks, sensors handle the aggregation of data from neighboring nodes toward the base station in addition to their primary sensing task. Networks can minimize energy usage by batching multiple outbound packets at certain nodes over a data aggregation tree. Constructing optimal data aggregation trees is an NP-hard problem, thus requiring approximation methods for larger instances. In this paper, we propose a new Multifactorial Evolutionary Algorithm to solve multiple instances of the Data Aggregation Tree Problem with Minimum Energy Cost simultaneously. Our method utilizes a novel operator scheme for the Edge-Set Tree Representation that unifies the search spaces between instances, which helps us obtain better results than contemporary approaches.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141310 (2020) https://doi.org/10.1117/12.2559220
Acoustic, seismic, radio-frequency, optical, and other types of signals in complex real-world environments are randomized by processes such as multipath reflections from buildings and hills, surface scattering from rough terrain, and volume scattering by turbulence and vegetation. Bayesian classifier methods have the ability to incorporate physically realistic distributions for the random signal variations caused by these processes, and thus enable quantitative assessments of the uncertainty in the target classifications. This paper formulates a Bayesian classifier for problems involving strongly scattered signals with partially correlated features, as would be appropriate for situations involving observations of multiple signal features (e.g., spectral bands) at multiple sensor locations. In this case, the appropriate formulation of the likelihood function is a complex Wishart distribution. We simulate the classifier performance for two- and three-target problems involving multiple spectral signal features, for cases involving moderate and strong correlations between the signal features. The results illustrate the challenges of performing reliable classification based on a small number of samples of a strongly scattered signal, particularly when the target features are similar in strength. When there exist strong correlations between the feature data, full Bayesian classifiers decisively outperform naïve classifiers.
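For context, one common parameterization of the complex Wishart likelihood and the resulting log-domain decision rule is sketched below in LaTeX. The symbols are chosen here for illustration (A is the p x p sample matrix accumulated from N snapshots, Sigma_c is the class-c feature covariance, and P(c) is the prior); the paper's exact normalization and notation may differ.

% complex Wishart likelihood of the sample matrix A under class c
\[
  f(\mathbf{A}\mid \boldsymbol{\Sigma}_c)
  = \frac{|\mathbf{A}|^{N-p}\,
          \exp\!\left(-\operatorname{tr}\!\left(\boldsymbol{\Sigma}_c^{-1}\mathbf{A}\right)\right)}
         {\tilde{\Gamma}_p(N)\,|\boldsymbol{\Sigma}_c|^{N}},
  \qquad
  \tilde{\Gamma}_p(N) = \pi^{p(p-1)/2}\prod_{k=1}^{p}\Gamma(N-k+1).
\]
% Bayesian decision rule in the log domain
\[
  \hat{c} = \arg\max_{c}\Bigl[\,
      -N\ln|\boldsymbol{\Sigma}_c|
      - \operatorname{tr}\!\left(\boldsymbol{\Sigma}_c^{-1}\mathbf{A}\right)
      + \ln P(c)\,\Bigr].
\]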
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141311 (2020) https://doi.org/10.1117/12.2557422
A study is performed to gauge the effectiveness of training a Machine Learning (ML) system for Automatic Modulation Classification (AMC) to accurately identify several diverse digital communication transmission types occurring across the High Frequency (HF) Radio Frequency (RF) spectrum. This study uniquely uses Software Defined Radio (SDR) Power Spectral Density (PSD) waterfall signatures to help classify nine common types of amateur radio digital communication modes. Such an approach provides an alternative to more traditional In-phase/Quadrature (IQ) methods, which can require large training sets. LeNet and ResNet Convolutional Neural Network (CNN) models are examined. Sensitivity to the choice of training/validation sets is examined through Monte Carlo methods. Additionally, performance is examined in terms of confusion matrices as a function of Signal-to-Noise Ratio (SNR).
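To give a sense of the scale of model involved, a LeNet-style CNN for single-channel waterfall images can be written in a few lines of PyTorch. The 64 x 64 input size, channel counts, and nine-class output below are assumptions for illustration rather than the study's exact configuration.

import torch
import torch.nn as nn

class WaterfallLeNet(nn.Module):
    # LeNet-style classifier for 1 x 64 x 64 PSD waterfall "images".
    def __init__(self, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 13 * 13, 120), nn.ReLU(),
            nn.Linear(120, n_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

logits = WaterfallLeNet()(torch.randn(8, 1, 64, 64))   # a batch of 8 waterfalls
print(logits.shape)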
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141312 (2020) https://doi.org/10.1117/12.2558650
Warfighters and commanders face tremendous challenges as they work to understand and act within a competitive environment characterized by accelerating complexity. Artificial Intelligence (AI) and Machine Learning (ML) offer tremendous potential for decision makers operating across the five domains of the Multi-Domain Operations (MDO) future operating concept, especially as falling costs and increasing access to disruptive technologies empower our adversaries. Legitimate concerns surrounding the military application of AI/ML to automation also demand careful consideration. How do we get to a place where warfighters have the right tools to overcome complexity and rapidly adapt within tight decision cycles against a technologically savvy adversary? Meaningful progress requires imagining where exactly we wish to go and which traps and pitfalls might lead us astray. In this paper we introduce a framework to facilitate discussion and understanding of what a desirable machine-enhanced future Army looks like, emphasizing the critical importance of vision in achieving it. Given an array of MDO challenges, we highlight how AI and ML capabilities can improve human control to minimize mistakes and tragedy. Lastly, in pursuit of avoiding potentially tragic missteps, we highlight critical pitfalls of applying AI/ML in military operations given the inherent uncertainty of operational environments and complex systems.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141313 (2020) https://doi.org/10.1117/12.2558686
As the U.S. Army prepares for future conflicts and multi-domain operations, the need for methods to rapidly and continuously characterize the land-sea interface during littoral entry is paramount to ensure maneuverability across these domains. In the maritime domain, nearshore bathymetry and surf-zone sandbars define water depth and wave behavior, which in turn drive landing tactics and the feasibility and configuration of littoral operations. In the land domain, beach and dune topography define slopes and transit paths, which drive staging area locations and affect the maneuverability of both troops and equipment. Accurately predicting surf-zone state and littoral morphology evolution requires synthesizing a range of complex non-linear physics that drive these changes. Using imagery of the littorals from unmanned aerial systems and physics-based models, the U.S. Army Engineer Research and Development Center has developed novel data assimilation approaches to estimate water depth, littoral conditions, and beach sub-aerial topography from wave kinematics and photogrammetric algorithms and to quantify their corresponding uncertainties. To improve the usefulness (speed of the calculations) and accuracy (accounting for known errors related to optical transfer functions and nonlinear wave dynamics) of this technology during littoral operations, we are investigating approaches to develop machine-learning-based computational tools that can directly translate short sequences of littoral imagery into surf-zone characterization in real time by substituting or augmenting computationally complex models. To accomplish this, a photo-realistic, non-linear wave model, Celeris, is used to generate synthetic imagery of a range of surf-zone environments. This synthetic imagery is crucial to developing the data sets necessary to train deep neural networks to solve the non-linear depth inversion problem from observations of wave kinematics.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141314 (2020) https://doi.org/10.1117/12.2561636
The Protection Systems Branch of the Combat Capabilities Development Command Armaments Center develops and sustains Emergency Management systems focused on Homeland Defense technologies and interoperability. Artificial Intelligence algorithms and methods, together with the intelligent fusion of multiple correlated Emergency Management data sources, including social media and extremist forums as well as criminal, government, and medical databases, can be used as a decision aid in the identification of, prevention of, response to, and recovery from subversive incidents. This research and its applications can ultimately provide law enforcement and Emergency Management personnel with predictive trends that feed decision-making during all phases of an emergency.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141315 (2020) https://doi.org/10.1117/12.2558657
This study concerns a multi-UAV path planning problem carried out in concert with a known ground vehicle path. The environment is given as a grid map, where each cell is a target, an obstacle, or empty (neutral); correspondingly, each cell has a reward that is positive, negative, or zero. The team of UAVs has to visit as many targets as possible under a given time or distance constraint in order to maximize the collected reward. More formally, this problem is a generalization of the Orienteering Problem (OP) and is NP-Hard. In addition, the inclusion of obstacle avoidance and area coverage introduces complications that the current literature has not readily addressed. We propose a greedy heuristic based on the A* algorithm, which involves three stages (selection, insertion, and post-processing), to solve this problem. A large-scale problem instance is generated and the results are presented for different variations of our proposed algorithm. For large problems with thousands of nodes, our algorithm was able to provide a feasible solution within a few minutes of computation time on a standard laptop.
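A bare-bones flavour of reward-per-added-distance insertion, without the obstacle-aware A* distances, coverage handling, or post-processing stage of the proposed method, is sketched below in Python; the coordinates, rewards, and budget are invented.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour):
    return sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

def greedy_orienteering(start, end, targets, rewards, budget):
    # Insert the target with the best reward-per-added-length ratio that still
    # fits within the travel budget, until no insertion is feasible.
    tour, remaining = [start, end], set(targets)
    while remaining:
        best = None
        for t in remaining:
            for i in range(1, len(tour)):
                extra = (dist(tour[i - 1], t) + dist(t, tour[i])
                         - dist(tour[i - 1], tour[i]))
                if tour_length(tour) + extra <= budget:
                    score = rewards[t] / (extra + 1e-9)
                    if best is None or score > best[0]:
                        best = (score, t, i)
        if best is None:
            break
        _, t, i = best
        tour.insert(i, t)
        remaining.discard(t)
    return tour

route = greedy_orienteering((0, 0), (10, 0), [(3, 2), (6, -1), (5, 5)],
                            {(3, 2): 4, (6, -1): 2, (5, 5): 6}, budget=18.0)
print(route)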
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141316 (2020) https://doi.org/10.1117/12.2559995
Robots are ideal surrogates for performing tasks that are dull, dirty, and dangerous. To fully achieve this ideal, a robotic teammate should be able to autonomously perform human-level tasks in unstructured environments where we do not want humans to go. In this paper, we take a step toward realizing that vision by integrating state-of-the-art advancements in intelligence, perception, and manipulation on the RoMan (Robotic Manipulation) platform. RoMan is composed of two 7 degree-of-freedom (DoF) limbs connected to a 1 DoF torso and mounted on a tracked base. Multiple lidars are used for navigation, and a stereo depth camera provides point clouds for grasping. Each limb has a 6 DoF force-torque sensor at the wrist, with a dexterous 3-finger gripper on one limb and a stronger 4-finger claw-like hand on the other. Tasks begin with an operator specifying a mission type, a desired final destination for the robot, and a general region where the robot should look for grasps. All other portions of the task are completed autonomously. This includes navigation; object identification and pose estimation (if the object is known) via deep learning or perception through search; fine maneuvering; grasp planning via a grasp library; arm motion planning; and manipulation planning (e.g., dragging if the object is deemed too heavy to lift freely). Finally, we present initial test results on two notional tasks: clearing a road of debris, such as a heavy tree or a pile of unknown light debris, and opening a hinged container to retrieve a bag inside it.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141317 (2020) https://doi.org/10.1117/12.2557539
In the context of cognitive vehicles, it is essential to have full awareness of the vehicle location in order to gain better insight into the operational environment and enhance the driver's perception. Although the global navigation satellite system (GNSS) is usually a practical solution for vehicle localization, it may suffer quality deterioration, accuracy decay, and even track loss due to signal blockage and reflection off buildings and large structures. These limitations usually manifest in large cities with dense traffic and active roads, which means that losing the sense of location, even for a short time, might impair the cognitive system's decision-making process and jeopardize the safe driving of the vehicle. Consequently, cognitive vehicles should not rely only on the GNSS solution for vehicle localization. In this work we establish that cognitive-vehicle location awareness can be achieved through the process of interacting with the surrounding environment and observing its static reference elements. This approach is inspired by the way the human brain can assess its position in a known environment by recognizing landmarks and referential objects. Our proposed solution allows the cognitive vehicle to ascertain its location by interacting with its surroundings: we train a deep neural network to detect objects of reference, create prior knowledge of the vehicle's environment, and estimate the vehicle location by recognizing the pattern of object detections. Finally, the proposed solution is supported by promising results from a real-world scenario, and further work is proposed to improve the solution.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141318 (2020) https://doi.org/10.1117/12.2560699
Visual perception has become core technology in autonomous robotics to identify and localize objects of interest to ensure successful and safe task execution. As part of the recently concluded Robotics Collaborative Technology Alliance (RCTA) program, a collaborative research effort among government, academic, and industry partners, a vision acquisition and processing pipeline was developed and demonstrated to support manned-unmanned teaming for Army-relevant applications. The perception pipeline provided accurate and cohesive situational awareness to support autonomous robot capabilities for maneuver in dynamic and unstructured environments, collaborative human-robot mission planning and execution, and mobile manipulation. Development of the pipeline involved a) collecting domain-specific data, b) curating ground truth annotations, e.g., bounding boxes and keypoints, c) retraining deep networks to obtain updated object detection and pose estimation models, and d) deploying and testing the trained models on ground robots. We discuss the process of delivering this perception pipeline under limited time and resource constraints due to the lack of a priori knowledge of the operational environment. We focus on experiments conducted to optimize the models despite using data that was noisy and exhibited sparse examples for some object classes. Additionally, we discuss the augmentation techniques used to enhance the data set given skewed class distributions. These efforts highlight initial work that directly relates to learning and updating visual perception systems quickly in the field under sudden environment or mission changes.
Christopher Montez, Swaroop Darbha, Christopher Valicka, Andrea Staid
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141319 (2020) https://doi.org/10.1117/12.2558748
Routing problems for unmanned vehicles are frequently encountered in civilian and military applications and have been studied extensively as a result. A routing problem of interest consists of constructing a tour that maximizes the total information gained over the course of the tour. Herein, we consider a version where information gain is represented by classification confidence at points of interest visited in the tour. The information gained at each point of interest is modeled using the Kullback-Leibler divergence (also referred to as mutual information), where the probability of correctly classifying the point of interest is taken to be time-dependent. A mixed-integer program (MIP) is formulated to model this problem, and two standard heuristics (a modified two-step greedy algorithm and a standard 2-OPT algorithm) are combined in an attempt to produce high-quality solutions. We run simulations with various conditions for the nature of the information gain and the positions of the points of interest. We show that combining these two heuristics produces near-optimal solutions in nearly all of the trials for up to 10 points of interest.
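For reference, a standard 2-OPT improvement pass over a fixed tour looks like the following Python sketch; the Euclidean coordinates are invented, and neither the paper's implementation details nor its greedy construction step are reproduced.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_cost(tour):
    return sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

def two_opt(tour):
    # Repeatedly reverse tour segments while doing so shortens the tour.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_cost(candidate) < tour_cost(tour) - 1e-12:
                    tour, improved = candidate, True
    return tour

print(two_opt([(0, 0), (5, 5), (1, 0), (4, 5), (6, 0), (0, 0)]))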
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131B (2020) https://doi.org/10.1117/12.2557549
Continued advances in IoT technology have motivated new research on its applicability for military operations under the emerging Internet of Battlefield Things (IoBT). Research on IoBT has sought to assess viability of both military and commercial-off-the-shelf (COTS) IoT technology to augment and complement existing military sensing assets, as well as support next-generation Artificial Intelligence and Machine Learning (AI/ML) systems. Such IoBT systems are also expected to operate in uniquely challenging conditions, featuring potential presence of adversaries as well as degraded/compromised support infrastructure. Under these conditions, transparency for IoBT-based systems becomes a key design consideration in establishing their fitness-for-mission-usage. Towards supporting increased transparency into IoBT-based applications, novel methods to support explanation of system activities become necessary. This work centers on two objectives: First, identifying a generalized set of components for supporting IoBT systems, to be accounted for in future IoBT explanation models and interfaces; Second, identifying follow-on research challenges faced in developing explanation functionality for these IoBT components. Here, focus is given to three identified generalizable IoBT components: (1) Mission Planning Systems, which aid in defining mission objectives, providing support for identification of mission requirements, and supporting allocation of assets for mission use; (2) Network Dissemination Systems, which support information delivery across potentially degraded or bandwidth-constrained networks; (3) IoT Asset Review Systems, which support review of IoT asset capabilities under known environmental and mission conditions to determine their fitness for operational use.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131C (2020) https://doi.org/10.1117/12.2558083
Deep neural networks (DNNs) have become the gold standard for solving challenging classification problems, especially given complex sensor inputs (e.g., images and video). While DNNs are powerful, they are also brittle, and their inner workings are not fully understood by humans, leading to their use as "black-box" models. DNNs often generalize poorly when provided new data sampled from slightly shifted distributions; DNNs are easily manipulated by adversarial examples; and the decision-making process of DNNs can be difficult for humans to interpret. To address these challenges, we propose integrating DNNs with external sources of semantic knowledge. Large quantities of meaningful, formalized knowledge are available in knowledge graphs and other databases, many of which are publicly obtainable. But at present, these sources are inaccessible to deep neural methods, which can only exploit patterns in the signals they are given to classify. In this work, we conduct experiments on the ADE20K dataset, using scene classification as an example task where combining DNNs with external knowledge graphs can result in more robust and explainable models. We align the atomic concepts present in ADE20K (i.e., objects) to WordNet, a hierarchically-organized lexical database. Using this knowledge graph, we expand the concept categories which can be identified in ADE20K and relate these concepts in a hierarchical manner. The neural architecture we present performs scene classification using these concepts, illuminating a path toward DNNs which can efficiently exploit high-level knowledge in place of excessive quantities of direct sensory input. We hypothesize and experimentally validate that incorporating background knowledge via an external knowledge graph into a deep learning-based model should improve the explainability and robustness of the model.
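As a small illustration of aligning object labels to WordNet and walking up the hypernym hierarchy, the sketch below uses the NLTK WordNet interface (it assumes nltk is installed and the wordnet corpus has been fetched once via nltk.download('wordnet')). Taking the first noun sense is a simplification; the ADE20K alignment in the paper is more involved.

from nltk.corpus import wordnet as wn

def hypernym_chain(label):
    # Map a detected object label to its WordNet hypernym chain (first noun sense).
    synsets = wn.synsets(label, pos=wn.NOUN)
    if not synsets:
        return []
    chain, node = [], synsets[0]
    while node.hypernyms():
        node = node.hypernyms()[0]
        chain.append(node.name())
    return chain

print(hypernym_chain("chair"))   # e.g. seat.n.03 -> furniture.n.01 -> ... -> entity.n.01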
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131D https://doi.org/10.1117/12.2560056
With the proliferation of edge operations, especially across multiple domains, having the ability to configure and run mission-critical edge analytics is mandatory for the success of coalition missions. With the rise of cloud computing, especially hybrid clouds, small-scale data centers are fast becoming a viable option for military data analysis problems. However, having fixed resource requirements hinders task satisfiability in dynamic situations; this paper provides a flexible resource allocation mechanism in which the provider chooses the amount of resources allocated for a job. Novel auction mechanisms are deployed to increase the efficiency of the allocation.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131F https://doi.org/10.1117/12.2556222
In this paper, a novel decentralized intelligent adaptive optimal strategy is developed to solve the pursuit-evasion game for massive Multi-Agent Systems (MAS) under uncertain environments. Existing strategies for pursuit-evasion games are neither efficient nor practical for large-population multi-agent systems due to the notorious "curse of dimensionality" and communication limits when the agent population is large. To overcome these challenges, the emerging mean field game theory is adopted and integrated with reinforcement learning to develop a novel decentralized intelligent adaptive strategy with a new type of adaptive dynamic programming architecture named the Actor-Critic-Mass (ACM). By approximating the solution of the coupled mean field equations online, the developed strategy can obtain the optimal pursuit-evasion policy even for massive MAS under uncertain environments.
Joint Session with Conferences 11413 and 11425: AI/ML and Unmanned Systems
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131H (2020) https://doi.org/10.1117/12.2558212
In this paper, we investigate the obstacle avoidance and navigation problem in the area of robotic control. To solve this problem, we propose revised Deep Deterministic Policy Gradient (DDPG) and Proximal Policy Optimization (PPO) algorithms with an improved reward-shaping technique. We compare the performance of the original DDPG and PPO with that of the revised versions in simulations with a real mobile robot and demonstrate that the proposed algorithms achieve better results.
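The paper's specific shaping term is not reproduced here, but potential-based reward shaping is one common way to implement such improvements for goal-directed navigation; the distance-based potential in this Python sketch is a generic stand-in.

import math

def potential(state, goal):
    # Negative distance to goal: states closer to the goal get higher potential.
    return -math.hypot(state[0] - goal[0], state[1] - goal[1])

def shaped_reward(r_env, state, next_state, goal, gamma=0.99):
    # F(s, s') = gamma * phi(s') - phi(s) preserves the optimal policy.
    return r_env + gamma * potential(next_state, goal) - potential(state, goal)

print(shaped_reward(-0.01, (0.0, 0.0), (0.5, 0.0), goal=(5.0, 0.0)))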
Joint Session with Conferences 11413 and 11426: AI/ML and XR
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131I (2020) https://doi.org/10.1117/12.2558105
Given that many readily available datasets consist of large amounts of unlabeled data,1 unsupervised learning methods are an important component of many data-driven applications. In many instances, ground-truth labels may be unavailable or obtainable only at considerable expense. As a result, there is an acute need for the ability to understand and interpret unlabeled datasets as thoroughly as possible. In this article, we examine the effectiveness of learned deep embeddings via internal clustering metrics on a dataset of unlabeled StarCraft 2 game replays. The results of this work indicate that the use of deep embeddings provides a promising basis for clustering and interpreting player behavior in complex game domains.
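Internal clustering metrics need no labels, so the evaluation loop is short. The Python sketch below scores k-means clusterings of placeholder embeddings with two common internal metrics (silhouette and Davies-Bouldin); the random array stands in for learned replay embeddings, and the paper's choice of metrics and clustering method may differ.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 32))   # placeholder for learned replay embeddings

for k in (2, 4, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
    print(k, silhouette_score(embeddings, labels),
          davies_bouldin_score(embeddings, labels))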
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131J (2020) https://doi.org/10.1117/12.2556812
One aspect of the well-being of a military unit is its ability to reliably detect threats and properly prepare for them. While a given sensor mounted on a ground vehicle can adequately capture threats in some scenarios, its viewpoint can be quite limiting. A potential solution to these limitations is mounting the sensor onto an unmanned aerial vehicle (UAV) to provide a more holistic view of the scene. However, this new perspective creates its own unique challenges. Herein, we investigate the performance of an RGB sensor mounted onto a UAV for object detection and classification to enable advanced situational awareness for a manned or unmanned ground vehicle trailing the UAV. To do this, we perform transfer learning with state-of-the-art deep learning models, e.g., ResNet50 and Inception-v3. While object detection with machine learning has been actively researched, even on remotely sensed imagery, most of that work has been framed as scene classification. Therefore, it is worthwhile to explore the implications of this new camera perspective on the performance of object detection. Performance is assessed via route-based cross-validation on imagery collected by the U.S. Army ERDC at a test site spanning multiple days.
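Transfer learning of this kind usually amounts to freezing a pretrained backbone and retraining the classification head. The PyTorch/torchvision sketch below assumes a recent torchvision (the weights-enum API) and an illustrative five-class output; it is not the exact training setup used in the study.

import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and freeze its weights.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False

# Replace the classification head (five aerial object classes is an assumption).
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head is trainable; fine-tune it as usual, e.g.
# optimizer = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)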
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131K (2020) https://doi.org/10.1117/12.2557609
Object detection and localization is an important problem in computer vision and remote sensing. While there have been several techniques presented and used in recent years, the You Only Look Once (YOLO) and derivative architectures have gained popularity due to their ability to perform real-time object localization as well as achieve remarkable detection scores in ground-based applications. Here, we present methods and results for performing maneuverability hazard detection and localization in low-altitude unmanned aerial systems (UAS) imagery. Imagery is captured over a variety of flight routes and altitudes, and then analyzed with modern deep learning techniques to discover objects such as civilian and military vehicles, barriers, and related hindrances to navigating cluttered semi-urban environments. We present our findings for the deep learning architectures under a variety of training and validation parameters that include pre-trained weights from benchmark public datasets, as well as training with a custom, mission-relevant dataset provided by U.S. Army ERDC.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131L (2020) https://doi.org/10.1117/12.2557610
Semantic segmentation, the task of assigning a class label to each pixel within a given image, has applications in a wide variety of domains, ranging from medicine to self-driving vehicles. One successful deep neural network model that has been developed for semantic segmentation tasks is the U-Net architecture, a "U"-shaped neural network initially applied to segmentation of cell membranes in biomedical images. Additional variants of the U-Net have been developed within the research literature that incorporate new features such as residual layers and attention mechanisms. In this research, we evaluate various U-Net-based architectures on the task of segmenting the road and non-road in low-altitude UAS visible spectrum imagery. We show that these models can successfully extract the roads, detail a variety of performance metrics of the respective networks' segmentations, and show examples of successes and pending challenges using U.S. Army ERDC imagery collected from a variety of flight routes and altitudes in a complex environment.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131M (2020) https://doi.org/10.1117/12.2558737
This work deals with the problem of lateral control for Emergency Lane Change (ELC) maneuvers by a convoy of Autonomous and Connected Vehicles (ACVs). Typically, an ELC maneuver is triggered by emergency cues from the front or the end of the convoy. From a safety viewpoint, connectivity of vehicles is essential for obtaining preview information about preceding vehicles; every following vehicle could additionally obtain the position information of its preceding vehicle or vehicles for controller synthesis. In this work, we propose a method to synthesize a lateral controller that uses preview GPS data from the lead and immediately preceding vehicles to construct a target trajectory for the ego vehicle to track. We then compute the feedforward control and feedback error signals based on the target trajectory. Numerical and experimental results corroborate the effectiveness of this scheme in suppressing lateral string instabilities, thereby preventing the amplification of crosstrack errors as the vehicles in a convoy execute an ELC maneuver.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131N (2020) https://doi.org/10.1117/12.2559482
When considering collaboration among agents in multi-agent systems, individual and team measures of performance are used to describe the collaboration. Typically, the definition of collaboration is limited in that it is only indicative of the coordination required for a small class of tasks in which this coordination is necessary for task completion (e.g., two or more agents needed to lift a heavy object). In this work, we present a method that may be used to classify individual and group behaviors, enabling the measurement of collaboration among agents. We demonstrate the capability to use performance and behavioral data from computational learning agents in a predator-prey pursuit task to produce ergodic spatial distributions. Ergodicity is shown quantitatively and used to benchmark performance. The ergodic distributions shown reflect the learned policies developed through multi-agent reinforcement learning (MARL). We also demonstrate that independently trained models produce distinctly different behavior, as revealed through ergodic spatial distributions. The ergodicity of the agents' behavior provides a potential path for classifying group behavior and predicting the performance of group behavior with novel partners, as well as a quantifiable measure of collaboration built from explicitly aligned goals (i.e., cooperation) arising from behavioral interdependencies.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131O (2020) https://doi.org/10.1117/12.2558475
Analyzing sequential user behavior plays an important role in building an effective recommender system and has received a great deal of attention from researchers. Previous work exploits two types of sequential user behavior: item sequences (each user interacts with items in order) and sequential interactions on an item (e.g., clicking an item, then adding it to the cart, and finally purchasing it). While a vast number of studies focus on modeling item sequences, only a few recent works exploit sequential interactions on an item, and no work focuses on both. In this work, we propose a novel model that directly models both types to capture user behavior completely. Our model can combine multiple types of behavior as a sequence of actions; moreover, it can model users' preferences over time from the sequence of items with which they have interacted in the past. Extensive experimental results show that our model significantly outperforms strong baselines designed to learn from either item sequences or sequential user interactions.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131Q (2020) https://doi.org/10.1117/12.2557526
Video analysis of pyrotechnics, or any event, from a high-speed camera to obtain velocity data can be a tedious task, even with the help of most traditional software. The video has to be calibrated, the object of interest has to be identified, and the event of interest has to be monitored and analyzed. Even with an experienced user, one data point can take several minutes to obtain and may vary each time the same sample is analyzed. With the help of machine learning, a trained model can accomplish the same task, with identical results, in just seconds. The TensorFlow Object Detection API is an open-source framework built on top of TensorFlow that makes it easy to construct, train, and deploy object detection models.1 With some additional libraries, such as OpenCV, pandas, NumPy, and Matplotlib, a powerful tool for object tracking, velocity calculation, and path visualization can be developed to break the monotony of software-assisted, semi-automatic analysis and stride toward full autonomy, freeing up valuable engineering time and providing instant key performance attributes. For the first iteration of this application, an object detection model was trained on a very small, annotated data set of pyrotechnics, additional scripts were written to extract velocity data and project a flight path, and the results were compared against the current processing technique. There was a significant decrease in processing time and a minuscule percent difference in the variation of data points.
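Once detections are available per frame, the velocity step is essentially a calibrated finite difference. The Python sketch below assumes centroid pixel coordinates from the detector plus a frame rate and pixel-to-meter scale from calibration; all numbers are invented.

import numpy as np

def velocities(centroids_px, fps, meters_per_pixel):
    # centroids_px: (N, 2) object centroid pixel coordinates, one row per frame.
    pts = np.asarray(centroids_px, float) * meters_per_pixel
    disp = np.diff(pts, axis=0)                 # meters moved between frames
    return np.linalg.norm(disp, axis=1) * fps   # meters per second

# e.g. a fragment moving ~3 px/frame at 10,000 fps with 0.5 mm/px calibration
print(velocities([(100, 40), (103, 41), (106, 41)], fps=10_000, meters_per_pixel=5e-4))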
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131R (2020) https://doi.org/10.1117/12.2558600
Sensors used in intelligence, surveillance and reconnaissance (ISR) operations and activities have the ability to generate vast amounts of data. High-volume analytical capabilities are needed to process data from multi-modal sensors to develop and test complex computational and deep learning models in support of U.S. Army Multi-Domain Operations (MDO). The Army Research Laboratory designs, develops and tests Artificial Intelligence and Machine Learning (AI/ML) algorithms employing large repositories of in-house data. To efficiently process the data as well as design, build, train and deploy models, parallel and distributed algorithms are needed. Deep learning frameworks provide language-specific, container-based building blocks associated with deep learning neural networks applied to specific target applications. This paper discusses applications of AI/ML deep learning frameworks and Software Development Kits (SDKs) and demonstrates and compares specific multi-core processor and NVIDIA Graphics Processing Unit (GPU) implementations for desktop and Cloud environments. Frameworks selected for this research include PyTorch and MATLAB. Amazon Web Services (AWS) SageMaker was used to launch Machine Learning instances ranging from general-purpose computing to GPU instances. Detailed processes, example code, performance enhancements, best practices and lessons learned are included for publicly available acoustic and image datasets. Research results indicate that parallel implementations of data preprocessing steps saved significant time, but more expensive GPUs did not provide any processing time advantages for the machine learning algorithms tested.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131S (2020) https://doi.org/10.1117/12.2558807
Network structure represents a vital component in wide-ranging aspects of Multi-Domain Operations (MDO). One specific type of network that holds promise for understanding the behavior of complex environments such as MDO is one in which nodes are connected by both positive ties and negative ties. Positive ties are edges that drive nodes to become similar to each other, or homophilous, while negative ties are edges that drive nodes to become dissimilar to each other. Such a model of influence among the nodes can be used to explain various phenomena within a society, to model peer influence or the spread of memes, or to model incidents of violence. In this paper, we propose a Positive-Negative tie network model to analyze terrorism incidents in India, and we investigate the role of this network in general network classification and situation-understanding contexts.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131T (2020) https://doi.org/10.1117/12.2557891
Text mining for the identification of emerging technology is becoming increasingly important as the number of scientific and technical documents grows. However, algorithms for developing text mining models require a large amount of training data, which carries heavy costs associated with data annotation and model development. The need to avoid these costs has in part motivated recent work in text mining, which indicates the value of leveraging language representation models (LRMs) on domain-specific text corpora for domain-specific tasks. However, these results are demonstrated predominantly on large text corpora, which does not address concerns about the ability of LRMs to transfer to domains where training data may be scarce. We therefore benchmark the performance of LRMs on identifying quantities and units of measure from text when the number of training samples is small.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131U (2020) https://doi.org/10.1117/12.2557850
Moving target defense (MTD) is an emerging defense principle that aims to dynamically change the attack surface to confuse attackers. Through dynamic reconfiguration, MTD intends to invalidate the attacker's intelligence or information collection during reconnaissance, resulting in wasted resources and high attack cost and complexity for the attacker. One of the key merits of MTD is its capability to offer 'affordable defense' by working with legacy defense mechanisms, such as intrusion detection systems (IDS) or cryptographic mechanisms. On the other hand, a well-known drawback of MTD is the additional overhead, such as reconfiguration cost and/or potential interruptions of service availability to normal users. In this work, we aim to develop a highly secure, resilient, and affordable MTD-based proactive defense mechanism that achieves the multiple objectives of minimizing system security vulnerabilities and defense cost while maximizing service availability. To this end, we propose a multi-agent Deep Reinforcement Learning (mDRL)-based network slicing technique that can help determine two key resource management decisions: (1) link bandwidth allocation to meet Quality-of-Service requirements, and (2) the frequency of triggering IP shuffling as an MTD operation so as not to hinder service availability while maintaining normal system operations. Specifically, we apply this strategy in an in-vehicle network that uses software-defined networking (SDN) technology to deploy the IP shuffling-based MTD, which dynamically changes IP addresses assigned to electronic control unit (ECU) nodes to introduce uncertainty or confusion for attackers.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131V (2020) https://doi.org/10.1117/12.2561855
This paper depicts a generic representation of a multi-segment war game leveraging machine intelligence with two opposing asymmetric players. We show an innovative Event-Verb-Event (EVE) structure that is used to represent small pieces of knowledge, actions, and tactics. We describe the war game paradigm and related machine intelligence techniques, including data mining, machine learning, and reasoning AI, which have a natural linkage to causal learning and can be applied to this game. We also show a specific rule-based reinforcement learning algorithm, Soar-RL, which can modify, link, and combine a large collection of EVE rules, representing existing and new knowledge, to optimize the likelihood of winning or losing the game in the end.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131W (2020) https://doi.org/10.1117/12.2563843
Many environments currently employ machine learning models for data processing and analytics that were built using a limited number of training data points. Once deployed, the models are exposed to significant amounts of previously-unseen data, not all of which is representative of the original, limited training data. However, updating these deployed models can be difficult due to logistical, bandwidth, time, hardware, and/or data sensitivity constraints. We propose a framework, Self-Updating Models with Error Remediation (SUMER), in which a deployed model updates itself as new data becomes available. SUMER uses techniques from semi-supervised learning and noise remediation to iteratively retrain a deployed model using intelligently-chosen predictions from the model as the labels for new training iterations. A key component of SUMER is the notion of error remediation as self-labeled data can be susceptible to the propagation of errors. We investigate the use of SUMER across various data sets and iterations. We find that self-updating models (SUMs) generally perform better than models that do not attempt to self-update when presented with additional previously-unseen data. This performance gap is accentuated in cases where there is only limited amounts of initial training data. We also find that the performance of SUMER is generally better than the performance of SUMs, demonstrating a benefit in applying error remediation. Consequently, SUMER can autonomously enhance the operational capabilities of existing data processing systems by intelligently updating models in dynamic environments.
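A generic self-training loop (without SUMER's error-remediation step) conveys the basic mechanism: pseudo-label new data with high-confidence predictions and refit. In the Python sketch below, LogisticRegression stands in for any classifier exposing predict_proba; the confidence threshold and round count are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

def self_update(model, X_train, y_train, X_stream, threshold=0.95, rounds=3):
    # Fit on the original labels, then repeatedly add high-confidence
    # pseudo-labels from the unlabeled stream and refit.
    model.fit(X_train, y_train)
    X_lab, y_lab = X_train.copy(), np.asarray(y_train).copy()
    for _ in range(rounds):
        if len(X_stream) == 0:
            break
        proba = model.predict_proba(X_stream)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        X_lab = np.vstack([X_lab, X_stream[confident]])
        y_lab = np.concatenate([y_lab, model.classes_[proba[confident].argmax(axis=1)]])
        model.fit(X_lab, y_lab)
        X_stream = X_stream[~confident]
    return model

# e.g. model = self_update(LogisticRegression(max_iter=1000), X_train, y_train, X_new)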
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131Y (2020) https://doi.org/10.1117/12.2556789
A genetic algorithm (GA) is an iterative procedure that performs several processes on the population individuals (chromosomes) to produce a new population, as in biological evolution. To avoid premature convergence, this paper proposes a self-adaptive algorithm that adjusts parameters at the chromosome level as well as at the population level to solve a gender-based GA. Because the FPGA implementation of a self-adaptive GA requires more complicated logic units than a conventional GA implementation, we propose to optimize the implementation by using a soft or hard processor embedded in the FPGA chip. Thus, part of the tasks are solved by hardware blocks and part are solved by the processor.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141320 (2020) https://doi.org/10.1117/12.2558640
The U.S. Army envisions fighting and winning future wars in congested and contested environments and multi-domain battles, where revolutionary capabilities for network-centric warfare (NCW) are essentially needed. NCW is characterized by the ability of geographically dispersed forces to attain a high level of shared battle-space awareness that can be exploited to achieve strategic, operational, and tactical objectives by autonomously linking people, platforms, weapons, sensors, and decision aids into a single network. Future battlefield networks will generate a massive volume of data that can exceed manageable quantities. In a multi-domain battle, novel technologies are specifically required for real-time decision-making based on large amounts of heterogeneous as well as sparse, noisy, and ill-defined data under extremely uncertain environments. Additionally, humans have sometimes become completely comfortable with the information brought in by our sensing technologies. As a result, the command architecture, built on a massive web of information sources, becomes more susceptible to potentially catastrophic machine-human decision-making conflicts as well as vulnerable to incoming cyber threats, including adversaries' deception, interruption, and obscuration, which can eventually introduce its own sources of decision-making failure. In this paper, the researchers present validation results of a conceptualized artificial intelligence-based visual analytics framework. The researchers' ultimate goal is to integrate the mature technology into the situation awareness technology for local commands and global logistics centers to enable effective logistic command and control of aviation platforms and autonomous systems operated in an expeditionary multi-domain environment.
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141321 (2020) https://doi.org/10.1117/12.2556422
This paper compares the effectiveness of two different skeletal pose models for a near real-time, multi-stage classifier. A cascaded neural-network (NN) classifier was previously developed to identify the level of threat posed by an armed person based on detected weapons and body posture. On an updated database of images containing armed individuals and groups, AlphaPose was used to calculate both MPII and COCO skeletons, while OpenPose was used to calculate the COCO skeleton only. For comparison, we evaluated the importance of individual skeletal joints by systematically removing specific joints from the feature vector and retraining a reduced-order network. On the image database, the AlphaPose-COCO network was best able to correctly classify the threat presented by individuals, at 83.7% on average, while AlphaPose-MPII registered 82.2% and OpenPose-COCO 77.6%. As expected, the most important single joint in both skeleton models is the location of the pistol. As a guide for others deciding which skeleton to use in further studies, we conclude that neither skeleton significantly outperforms the other.
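As an illustration of the joint-ablation procedure, the Python sketch below removes one named keypoint at a time from a flattened feature vector and retrains a small classifier, reporting the accuracy drop as a rough importance score. The feature layout (17 COCO keypoints plus an appended pistol location), the synthetic data, and the scikit-learn MLP stand-in for the cascaded NN are illustrative assumptions, not the authors' pipeline.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Illustrative joint-ablation sketch (assumed layout, not the authors' pipeline):
    # features are the flattened (x, y) coordinates of the 17 COCO keypoints plus an
    # assumed pistol location appended at the end.
    COCO_JOINTS = ["nose", "l_eye", "r_eye", "l_ear", "r_ear",
                   "l_shoulder", "r_shoulder", "l_elbow", "r_elbow",
                   "l_wrist", "r_wrist", "l_hip", "r_hip",
                   "l_knee", "r_knee", "l_ankle", "r_ankle"]
    FEATURES = COCO_JOINTS + ["pistol"]          # hypothetical ordering

    def drop_feature(X, name):
        # Remove the (x, y) columns belonging to one named joint/feature.
        i = FEATURES.index(name)
        return np.delete(X, [2 * i, 2 * i + 1], axis=1)

    def train_eval(X, y):
        # Retrain a small "reduced-order" network and report held-out accuracy.
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
        clf.fit(Xtr, ytr)
        return clf.score(Xte, yte)

    # Synthetic stand-in data; a real study would use the labelled threat database.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 2 * len(FEATURES)))
    y = rng.integers(0, 3, size=300)             # e.g., three threat levels

    baseline = train_eval(X, y)
    for name in FEATURES:
        drop = baseline - train_eval(drop_feature(X, name), y)
        print(f"{name:12s} accuracy drop when removed: {drop:+.3f}")

On real data, a large accuracy drop for a removed joint marks it as important, which is how the pistol location would surface as the dominant feature.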
Proceedings Volume Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 1141322 (2020) https://doi.org/10.1117/12.2558400
Fiber-Optic Distributed Acoustic Sensing (DAS) intrusion detection systems provide effective solutions for border, critical infrastructure, facility, and pipeline security applications. DAS systems detect and classify acoustic vibrations using standard telecommunication fibers buried underground or deployed over a fence. Activities of interest captured by a DAS system may not pose the same level of threat, depending on the time and location of the activity. For instance, a ground-digging activity during the daytime in a rural area close to a pipeline is more likely an agricultural event than suspicious illegal tapping on the pipeline. Notifying the operator with the same audio-visual alarms in both cases can be misleading and may cause operator frustration. Assigning threat levels to activities is therefore an essential feature for DAS systems to increase their credibility. In this paper, we propose a threat level assessment method that learns the activity density of the area in an unsupervised manner. Activities are scored using a threat metric and assigned levels using a novel dynamic thresholding approach.
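The abstract does not specify the density model or the thresholding rule, so the Python sketch below shows one plausible reading: a kernel density estimate over (fiber position, hour of day) learned from historical activity stands in for the unsupervised activity-density model, the negative log-density serves as the threat metric, and running quantiles stand in for the dynamic thresholds. All data, names, and parameters are illustrative assumptions, not the paper's method.

    import numpy as np
    from sklearn.neighbors import KernelDensity

    # Illustrative sketch (assumed approach, not the paper's exact method):
    # learn the typical activity density over (fiber position, hour of day) in an
    # unsupervised way, score new events by how unusual they are, and map scores
    # to threat levels with quantile-based thresholds over recent scores.

    rng = np.random.default_rng(0)
    # Historical benign activity: clustered around a farm at 12 km, mostly daytime.
    history = np.column_stack([rng.normal(12.0, 1.0, 2000),        # position along fiber (km)
                               rng.normal(13.0, 3.0, 2000) % 24])  # hour of day

    kde = KernelDensity(bandwidth=0.5).fit(history)   # unsupervised density model

    def threat_score(position_km, hour):
        # Higher score = less consistent with the learned everyday activity.
        return -kde.score_samples([[position_km, hour]])[0]

    def assign_level(score, recent_scores):
        # Quantile-based stand-in for the paper's dynamic thresholding.
        low, high = np.quantile(recent_scores, [0.8, 0.95])
        return "HIGH" if score >= high else "MEDIUM" if score >= low else "LOW"

    recent = [threat_score(p, h) for p, h in history[-500:]]
    print(assign_level(threat_score(12.2, 11.0), recent))  # daytime digging near the farm
    print(assign_level(threat_score(47.0, 3.0), recent))   # 3 a.m. event far from usual activity

Under this reading, a daytime event near the learned activity cluster maps to a low level, while a late-night event far from any historical activity maps to a high level, matching the agricultural-versus-tapping example above.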