This PDF file contains the front matter associated with SPIE Proceedings Volume 10635, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Internet of Battlefield Things (IOBT) Applications
The paper discusses challenges in exploiting geotagged social media posts (such as Instagram images) for purposes of target (event) tracking. The case for social media exploitation rests on the observation that physical events, such as protests, acts of terror, or natural disasters, elicit a response on social media in the neighborhood of the event. However, the density of social media posts is proportional to the local population density. Hence, event locations inferred from the ensuing distribution of posts are skewed by disparities in population density around the true event location. The paper describes an unsupervised approach to neutralize the effect of uneven population density. Evaluation using Instagram footprints of recent events shows that the approach leads to a much more accurate estimation of real event trajectories.
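The core idea of density correction can be sketched in a few lines: weight each event-period post inversely by the baseline (everyday) posting density of its grid cell, so a handful of posts from a sparsely populated area counts as much as a flood from a dense city. This is an illustrative sketch of the general principle, not the paper's actual algorithm; the grid-cell scheme and add-one smoothing are assumptions.

```python
import math

def estimate_event_location(event_posts, baseline_posts, cell=0.01):
    """Estimate an event's (lat, lon) from geotagged posts, correcting for
    uneven population density: each event-period post is weighted inversely
    by the baseline posting count in its grid cell (add-one smoothed)."""
    def cell_of(p):
        return (math.floor(p[0] / cell), math.floor(p[1] / cell))

    # Baseline density: count of ordinary (non-event) posts per grid cell.
    baseline = {}
    for p in baseline_posts:
        baseline[cell_of(p)] = baseline.get(cell_of(p), 0) + 1

    # Weighted centroid of event-period posts.
    wx, wy, total = 0.0, 0.0, 0.0
    for p in event_posts:
        w = 1.0 / (1 + baseline.get(cell_of(p), 0))
        wx += w * p[0]
        wy += w * p[1]
        total += w
    return (wx / total, wy / total)
```

With a dense "city" cell dominating the raw counts, the weighted centroid moves toward the true event site while a naive mean stays skewed toward the city.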
The ability to visualize sensor data, from either ground or airborne sensors, with respect to the associated 3-D terrain is powerful. Fusion3D is a software application for stereoscopic visualization of 3-D terrain data, developed in the Image Processing Branch at the U.S. Army Research Laboratory. It uses a 3-D display, 3-D glasses, and a 3-D mouse to quickly view province-sized 3-D maps in stereo. It is capable of ingesting large 3-D datasets from a variety of sources and includes many useful features to aid a user in the exploitation of 3-D terrain data, such as route planning, mensuration, and line-of-sight analysis. Additionally, in a recent effort to further improve situational awareness, Fusion3D was modified to support overlaying real-time data, from both ground and airborne sensors, onto a 3-D terrain map. We call the result a 3-D Sensor Common Operating Picture (3-D Sensor COP). Discovery of sensor locations and data across coalition assets allows for greater diversity of sensor use and improved data and sensor interoperability. In this presentation, we show ground and airborne data, collected at a recent exercise, overlaid on a 3-D terrain map of an urban environment. Using this data collection, we describe how an analyst would use the sensor and terrain data to improve their understanding of the environment.
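The line-of-sight analysis mentioned above reduces, at its core, to testing whether terrain rises above the sight line between two points. The following is a minimal 1-D elevation-profile sketch of that test (Fusion3D's actual implementation operates on full 3-D terrain and is certainly more sophisticated):

```python
def line_of_sight(profile, i, j, eye=2.0):
    """Return True if an observer at cell i (terrain height + eye offset)
    can see a target at cell j over a 1-D elevation profile: no intermediate
    terrain sample may rise above the straight sight line between them."""
    ha = profile[i] + eye
    hb = profile[j] + eye
    for k in range(min(i, j) + 1, max(i, j)):
        t = (k - i) / (j - i)                 # interpolation parameter in (0, 1)
        if profile[k] >= ha + t * (hb - ha):  # terrain blocks the sight line
            return False
    return True
```

Extending this to a 2-D heightmap means sampling cells along the horizontal segment between observer and target and applying the same comparison.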
Deep Neural Networks (DNNs) have achieved near-human, and in some cases superhuman, accuracy in tasks such as machine translation, image classification, and speech processing. However, despite their enormous success, these models are often used as black boxes with very little visibility into their inner workings. This opacity often hinders the adoption of these models in mission-critical and human-machine hybrid networks.
In this paper, we explore the role of influence functions in opening up these black-box models and providing interpretability of their output. Influence functions are used to characterize the impact of training data on the model parameters. We use these functions to analytically understand how the parameters are adjusted during the model training phase to embed the information contained in the training dataset. In other words, influence functions allow us to capture the change in the model parameters due to the training data. We then use these parameters to provide interpretability of the model output for test data points.
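The mechanics of an influence function can be seen in the smallest possible setting: a one-parameter least-squares model, where the influence of a training example on the fitted parameter is its loss gradient scaled by the inverse Hessian. This is a toy sketch of the general technique, not the paper's DNN formulation:

```python
def fit(xs, ys):
    # Closed-form least-squares slope for y ~ theta * x (no intercept).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def loo_estimate(xs, ys, i):
    """Influence-function estimate of the slope refit without example i:
    theta_loo ~ theta + H^-1 * grad_i, where H = sum(x^2) is the Hessian of
    the squared loss and grad_i is example i's loss gradient at theta."""
    theta = fit(xs, ys)
    hessian = sum(x * x for x in xs)
    grad_i = (theta * xs[i] - ys[i]) * xs[i]
    return theta + grad_i / hessian
```

The estimate closely tracks an exact leave-one-out refit, which is the property that makes influence functions useful for attributing a model's behavior to individual training points without retraining.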
Based on current trends in artificial intelligence (AI) and machine learning (ML), we provide an overview of novel algorithms intended to address Army-specific needs for increased operational tempo and autonomy for ground robots in unexplored, dynamic, cluttered, contested, and sparse-data environments. This paper discusses some of the motivating factors behind US Army research in AI and ML and provides a survey of a subset of the US Army Research Laboratory's (ARL) Computational and Information Sciences Directorate's (CISD) recent research in online, nonparametric learning that quickly adapts to variable underlying distributions in sparse exemplar environments, as well as a technique for unsupervised semantic scene labeling that continuously learns and adapts semantic models discovered within a data stream. We also look at a newly developed algorithm that leverages human input to help intelligent agents learn more rapidly, and a novel research study working to discover the foundational knowledge required for humans and robots to communicate via natural language. Finally, we discuss a method for finding chains of reasoning with incomplete information using semantic vectors. The specific research exemplars provide approaches for overcoming the shortcomings and brittleness of commercial AI and ML methods, such that these methods can be enhanced and adapted to Army-relevant scenarios.
Jacob P. Caswell, Kelsey L. Cairns, Christina L. Ting, Mark W. Hansberger, Matthew A. Stoebner, Thomas R. Brounstein, Christopher R. Cueller, Elizabeth R. Jurrus
Proceedings Volume Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR IX, 1063508 (2018) https://doi.org/10.1117/12.2307770
Thousands of facilities worldwide are engaged in biological research activities. One of DTRA's missions is to fully understand the types of facilities involved in collecting, investigating, and storing biological materials. This characterization enables DTRA to increase situational awareness and identify potential partners focused on biodefense and biosecurity. As a result of this mission, DTRA created a database to identify biological facilities from publicly available, open-source information. This paper describes an ongoing effort to automate data collection and entry of facilities into this database. To frame our analysis more concretely, we consider the following motivating question: How would a decision maker respond to a pathogen outbreak during the 2018 Winter Olympics in South Korea? To address this question, we aim to further characterize the existing South Korean facilities in DTRA's database, and to identify new candidate facilities for entry, so that decision makers can identify local facilities properly equipped to assist and respond to an event. We employ text and social analytics on bibliometric data from South Korean facilities and a list of select pathogen agents to identify patterns and relationships within scientific publication graphs.
Recent advances in machine learning have been built on massive amounts of labeled data. However, data labeling is an expensive exercise, and oftentimes, for rare objects of interest, there is not enough data. We present novel techniques to reduce the amount of data that needs to be labeled, as well as to prioritize the data labeling task. We show results of applying our techniques to computer vision. Another critical need is the ability to adapt these computer vision techniques for on-board processing. We present results for on-board distributed detection and tracking using a team of UAVs that collaborate to track objects of interest on the ground.
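One standard way to prioritize a labeling budget, which may or may not match the paper's own technique, is uncertainty sampling: rank unlabeled items by the entropy of the current model's predictive distribution and label the most uncertain first. A minimal sketch:

```python
import math

def prioritize_for_labeling(probs, budget):
    """Given per-item class-probability vectors from the current model,
    return the indices of the `budget` most uncertain items (highest
    predictive entropy) -- those are labeled first."""
    def entropy(p):
        return -sum(q * math.log(q) for q in p if q > 0)

    order = sorted(range(len(probs)),
                   key=lambda i: entropy(probs[i]),
                   reverse=True)
    return order[:budget]
```

Items the model is already confident about contribute little new information per label, so deferring them stretches a fixed annotation budget.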
In this paper, we discuss the design considerations and challenges of using applied machine learning in complex systems, a necessity for operationalizing machine learning techniques. Although many applications of machine learning intend to discern key insights from large collections of data, in realizable systems the insights may be so numerous that they remain as data and encumber the system and its users. New system design principles are emerging as a result of the dynamism of the machine learning community.
This paper provides an abstract technology model called the AI Stack for the development and deployment of Artificial Intelligence, and the strategic investment in research, technology, and organizational resources required to achieve asymmetric capability. Over the past five years, there has been a drastic acceleration in the development of artificial intelligence fueled by exponential increases in computational power and machine learning. This has resulted in corporations, institutions, and nation-states vastly accelerating their investment in AI to (a) perceive and synthesize massive amounts of data, (b) understand the contextual importance of the data and potential tactical/strategic impacts, (c) accelerate and optimize decision-making, and (d) enable human augmentation and deploy autonomous systems. From a national security and defense perspective, AI is a crucial technology to enhance situational awareness and accelerate the realization of timely and actionable intelligence that can save lives. For many current defense applications, this often requires the processing of visual data, images, or full motion video from legacy platforms and sensors designed decades before recent advances in machine learning, computer vision, and AI. The AI Stack, and the fusion of the interdependent technology layers contained within it, provides a streamlined approach to visualize, plan, and prioritize strategic investments in commercial technologies and transformational research to leverage and continuously advance AI across operational domains, and achieve asymmetric capability through human augmentation and autonomous systems. One application of AI for the Department of Defense is to provide automation and human augmentation for analyzing full motion video, drastically improving the safety of deployed soldiers by enhancing their situational awareness and enabling them to make faster decisions on more timely information.
The US increasingly relies on surveillance video to determine when activities of interest occur in a surveilled location. The growth in video volume places a difficult burden on the analyst workforce charged with evaluating streaming video or performing forensic analysis on archived video. This paper presents a video summarization pipeline that attempts to reduce the volume of video analysts must watch by summarizing the video into shorter, presumably important clips. The pipeline incorporates object recognition and tracking to generate clips composed of bounding boxes for objects across time, segments these clips into unique trajectories, trains a stacked sparse autoencoder, then generates a summary based on reconstruction error within the autoencoder, where high error indicates a unique (relative to previous) object trajectory. The paper then compares performance of the summarization pipeline applied to research datasets to performance on more realistic DoD surveillance datasets.
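The pipeline's scoring principle, flagging trajectories whose autoencoder reconstruction error is high, can be shown with a deliberately tiny stand-in: a one-unit linear autoencoder with tied weights, trained by gradient descent, in place of the paper's stacked sparse autoencoder. The principle is the same: data resembling the training distribution reconstructs well, while a novel trajectory reconstructs poorly and surfaces in the summary.

```python
def train_autoencoder(data, lr=0.005, epochs=300):
    """One-unit linear autoencoder with tied weight vector w:
    encode h = w.x, decode xhat = h*w; gradient descent on the
    squared reconstruction error ||xhat - x||^2."""
    w = [0.6, 0.4]
    for _ in range(epochs):
        for x in data:
            h = w[0] * x[0] + w[1] * x[1]
            r = [h * w[0] - x[0], h * w[1] - x[1]]   # residual xhat - x
            rw = r[0] * w[0] + r[1] * w[1]
            # d/dw_i of ||r||^2 = 2 * ((r . w) * x_i + r_i * h)
            g = [2 * (rw * x[0] + r[0] * h), 2 * (rw * x[1] + r[1] * h)]
            w = [w[0] - lr * g[0], w[1] - lr * g[1]]
    return w

def recon_error(w, x):
    """Novelty score: squared reconstruction error under the trained model."""
    h = w[0] * x[0] + w[1] * x[1]
    return (h * w[0] - x[0]) ** 2 + (h * w[1] - x[1]) ** 2
```

Trained on feature vectors along one direction, the model assigns a visibly higher error to a vector off that direction, which is exactly the signal used to select summary clips.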
Awareness of mission environment and events is impeded by data heterogeneity and lack of integration among data sources across diverse domains. In this paper, we present C3O, an RDF/OWL-based cyber ontology which provides a representation of cyber assets and events, to which existing XML-based cyber models (STIX, CybOX) can be mapped. C3O is unique in that it is designed as an extension of Basic Formal Ontology (BFO) and the Common Core Ontologies (CCO), which renders it automatically interoperable with a host of existing BFO- and CCO-based domain ontologies for land, sea, air, planning, operations, and sensor data.
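The mapping step described above, lifting an XML-based cyber record into ontology-aligned RDF triples, can be sketched as follows. The class and predicate names (`c3o:CyberEvent`, `c3o:occursAt`, `c3o:involvesAsset`) are illustrative placeholders, not the actual C3O IRIs:

```python
def stix_to_triples(record):
    """Map a simplified STIX-style observation dict into RDF-style triples
    under an assumed C3O-like vocabulary. All c3o: terms here are
    hypothetical stand-ins for the real ontology's classes and relations."""
    s = "c3o:" + record["id"]
    triples = [
        (s, "rdf:type", "c3o:CyberEvent"),         # hypothetical class
        (s, "c3o:occursAt", record["timestamp"]),  # hypothetical predicate
    ]
    for ip in record.get("src_ips", []):
        triples.append((s, "c3o:involvesAsset", "c3o:ip-" + ip))
    return triples
```

Because the target vocabulary extends BFO/CCO, triples produced this way can be queried jointly with land, sea, air, planning, operations, and sensor data expressed in sibling ontologies.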
Designing, operating and maintaining any system in today's complex Information Technology (IT) environments requires understanding and mitigating various levels of risk to the organization or to individuals associated with the operation of a system. How this risk is managed is critical to the organization's success. Multiple assets, technologies and skilled personnel are needed to select an appropriate set of security controls needed to protect individuals, operations and assets of the organization, and build or refine a security architecture based on best practices and risk management frameworks to adapt to ever-changing threats. Developing secure applications and collaborating within a cloud-based environment adds both challenges and mitigations. Cloud environments are scalable and with the proper design and use can be secure and cost-effective. Special attention, processes and techniques associated with authentication and authorization require the formation and maintenance of users, groups, roles and policies. The U.S. Army Research Laboratory (ARL) provides America's Soldiers the technological edge through scientific research, technology development, and analysis. ARL provides scientific and technological innovation in a variety of technical disciplines, through direct in-house laboratory efforts and joint programs with government, industry, and academia. ARL's Open Campus is a collaborative endeavor, with the goal of building a science and technology ecosystem that will encourage groundbreaking advances in basic and applied research areas of relevance to the Army. Through the Open Campus framework, ARL scientists and engineers work collaboratively and side-by-side with visiting scientists in ARL's facilities, and as visiting researchers at collaborators' institutions. Through the relationships formed and the availability of secure, dynamic and scalable environments, rapid development, sharing and transition of technologies is possible.
This technical paper proposes a cloud-based security architecture to support multiple Open Campus Initiatives at the Army Research Laboratory including the Sensor Information Testbed COllaborative Research Environment (SITCORE) and the Automated Online Data Repository (AODR). These initiatives create a highly-collaborative research laboratory and testbed environment focused on sensor data and information fusion. Coupling the existing Open Campus Initiatives with an additional cloud-based architecture allowing encrypted communication, authentication, and on-demand access provides a scalable and secure environment supporting the data, algorithm, and collaborative needs of scientists, researchers and entrepreneurs.
Integrating data about plans and artifact specifications with data about the actual instances of the entities they prescribe provides numerous benefits for tasks such as mission planning, sensor assignment, and asset tasking. However, doing so raises several issues for data ingest, storage, and analytics if a consistent semantics is to be maintained to enable extensible and unanticipated querying. In this paper, we examine strategies for overcoming these challenges and describe a method for using the Common Core Ontologies and Modal Relation Ontology to map and integrate data about planned and existing entities. We demonstrate a solution for ensuring reliable, dynamic, and extensible data queries suitable for highly heterogeneous data sources that is agnostic to implementation requirements. We focus on examples relevant to sensor capabilities, selection, and tasking.
Tactical communications at the dismount in 2017 frequently have bandwidths measured in kilobits per second (kbps), are intermittent in nature, and have "real-time" latencies measured in seconds. These tactical performance characteristics to and from the dismount will be challenged in the near future by several emerging technologies that increase the Velocity of Information (VOI) delivery, and the global access to it via broadband SATCOM-based smartphones, to drive improved VOIeu (Value of Information to the end user). This paper briefly examines the current state of the art in global SATCOM-based digital video ISR/RSTA latency. It then examines two emerging technologies that are viewed as disruptive to that current delivery model. The first is the advent of 5G LEO SATCOM networks in 2020, offering far less latency in data transmission while enabling globally accessible gigabyte-per-second (GBps) broadband connections from handhelds (SpaceX, Boeing, OneWeb, et al.). The second is an emerging digital video compression algorithm (KT-Tech) that offers far less latency than what is currently thought of as "real time" in 2017. Together, they have the potential to enable bi-directional broadband ISR/RSTA to or from a dismount's handheld receiver, while providing the performance required for latency-intolerant teleoperation needs such as: CONUS-based control of high-speed, low-altitude drones based overseas; LOS (Line of Sight) and BLOS (Beyond Line of Sight) control of Remote Weapons Systems; tactical videoconferences (VTCs); human-in-the-loop robotic MUMT teleoperation; and telesurgery. This capability also potentially offers miniaturization that would enable Group IV Predator-like "remote split operation" teleoperation and ISR collection conducted over SATCOM to be extended to Group I-II hand-launched drones.
As part of the presentation, a theoretical use case is discussed in which Predator-like SATCOM-based digital HD video transmission performance for ISR/RSTA is instead achieved on a Group I-II hand-launched UAS.
With the proliferation of smart devices, it is increasingly important to exploit their computing, networking, and storage resources for executing various computing tasks at scale at mobile network edges, bringing many benefits such as better response time, network bandwidth savings, and improved data privacy and security. A key component in enabling such distributed edge computing is a mechanism that can flexibly and dynamically manage edge resources for running various military and commercial applications in a manner adaptive to the fluctuating demands and resource availability. We present methods and an architecture for the edge resource management based on machine learning techniques. A collaborative filtering approach combined with deep learning is proposed as a means to build the predictive model for applications’ performance on resources from previous observations, and an online resource allocation architecture utilizing the predictive model is presented. We also identify relevant research topics for further investigation.
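The predictive model described above can be illustrated with its collaborative-filtering half alone: learn latent factors for applications and for edge resources from sparse (application, resource, performance) observations via SGD matrix factorization, then predict performance for unobserved pairings. This sketch omits the deep-learning component the paper combines it with, and all hyperparameters here are assumptions:

```python
import random

def factorize(observations, n_apps, n_res, k=2, lr=0.02, epochs=2000, seed=0):
    """SGD matrix factorization: learn k latent factors per application (A)
    and per edge resource (R) from sparse (i, j, performance) triples."""
    rng = random.Random(seed)
    A = [[rng.uniform(0.1, 0.9) for _ in range(k)] for _ in range(n_apps)]
    R = [[rng.uniform(0.1, 0.9) for _ in range(k)] for _ in range(n_res)]
    for _ in range(epochs):
        for i, j, v in observations:
            err = v - sum(A[i][f] * R[j][f] for f in range(k))
            for f in range(k):
                # Simultaneous update from the pre-update values.
                A[i][f], R[j][f] = (A[i][f] + lr * err * R[j][f],
                                    R[j][f] + lr * err * A[i][f])
    return A, R

def predict(A, R, i, j):
    """Predicted performance of application i on resource j."""
    return sum(a * r for a, r in zip(A[i], R[j]))
```

An online allocator would refresh these factors as new performance measurements stream in and use the predictions to place tasks on the resources expected to serve them best.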
With the emergence of Internet of Things (IoT) and edge computing applications, data is often generated by sensors and end users at the network edge, and decisions are made using these collected data. Edge devices often require cloud services in order to perform intensive inference tasks. Consequently, the inference of a deep neural network (DNN) model is often partitioned between the edge and the cloud. In this case, the edge device performs inference up to an intermediate layer of the DNN and offloads the output features to the cloud for the inference of the remainder of the network. Partitioning a DNN can help to improve energy efficiency but also raises some privacy concerns. The cloud platform can recover part of the raw data using intermediate results of the inference task. Recently, studies have also quantified an information-theoretic trade-off between compression and prediction in DNNs. In this paper, we conduct a simple experiment to understand to what extent it is possible to reconstruct the raw data given the output of an intermediate layer; in other words, to what extent we leak private information when sending the output of an intermediate layer to the cloud. We also present an overview of mutual-information-based studies of DNNs, to help understand information leakage and some potential ways to make distributed inference more secure.
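The partitioning scheme itself is simple to state in code: the device runs the first k layers and ships only the intermediate features, so the cloud (and any eavesdropper on the uplink) sees features rather than raw input. A minimal fully-connected sketch, with made-up weights:

```python
def forward(layers, x):
    """Run x through a stack of fully-connected ReLU layers, each given as a
    (weight_matrix, bias_vector) pair."""
    for W, b in layers:
        x = [max(0.0, sum(w * v for w, v in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x

def split_inference(layers, k, x):
    """Partitioned inference: the edge device computes layers[:k] locally and
    offloads only the intermediate features; the cloud finishes layers[k:].
    Those features are exactly what the privacy question above is about --
    how much of x can be reconstructed from them."""
    features = forward(layers[:k], x)               # runs on the edge device
    return features, forward(layers[k:], features)  # runs in the cloud
```

Composing the two halves is mathematically identical to running the whole network in one place; what changes is which side holds the raw data.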
Machine learning approaches like deep neural networks have proven to be very successful in many domains. However, they require training on huge volumes of data. While these approaches work very well in a few selected domains where a large corpus of training data exists, they shift the bottleneck in the development of machine learning applications to the data acquisition phase and are difficult to use in domains where training data is hard to acquire. For sensor fusion applications in coalition operations, good training data suitable for real-life applications is hard to get, and the training data sets that are available are limited in size. For these domains, we need to explore machine learning approaches that can work with small amounts of data. In this paper, we look at the current and emerging approaches that allow us to build machine learning models when access to training data is limited, including statistical machine learning, transfer learning, synthetic data generation, semi-supervised learning, and one-shot learning.
Distributed software-defined networking (SDN), which consists of multiple interconnected network domains, each managed by one SDN controller, is an emerging networking architecture that offers balanced centralized/distributed control. Under such a networking paradigm, resource management among the various domains (e.g., optimal resource allocation) can be extremely challenging, because many tasks posted to the network require resources (e.g., CPU, memory, bandwidth) from different domains, and cross-domain resources are correlated, e.g., their feasibility depends on the existence of a reliable communication channel connecting them. To address this issue, we employ the reinforcement learning framework, aiming to automate this resource management and allocation process through proactive learning and interaction. Specifically, we model this issue as a Markov Decision Process (MDP) with different types of reward functions, where our objective is to minimize the average task completion time. We investigate the scenario where the resource status among controllers is fully synchronized; under this scenario, the SDN controller has complete knowledge of the resource status of all domains, i.e., resource changes under any policy are directly observable by controllers, for which a Q-learning-based strategy is proposed to approach the optimal solution.
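Standard tabular Q-learning, the kind of strategy the paper proposes, can be sketched on a toy allocation MDP: two sequential tasks, each assignable to a lightly loaded domain (completion time 1) or a congested one (completion time 4), with reward equal to negative completion time. The environment and hyperparameters here are illustrative assumptions, not the paper's setup:

```python
import random

def q_learning(n_states, n_actions, step, episodes=2000,
               alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration.
    `step(s, a)` must return (next_state, reward, done)."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    rng = random.Random(seed)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:                       # explore
                a = rng.randrange(n_actions)
            else:                                        # exploit
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r, done = step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])        # TD update
            s = s2
    return Q

# Toy cross-domain allocation: action 0 = fast domain (time 1),
# action 1 = congested domain (time 4); reward = -completion time.
def step(s, a):
    return s + 1, (-1.0 if a == 0 else -4.0), s + 1 == 2
```

After training, the greedy policy routes both tasks to the fast domain, which is the minimum-average-completion-time allocation for this toy instance.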
We discuss uncertainty quantification in multisensor data integration and analysis, including estimation methods and the role of uncertainty in decision making and trust in automated analytics. The challenges associated with automatically aggregating information across multiple images, identifying subtle contextual cues, and detecting small changes in noisy activity patterns are well-established in the intelligence, surveillance, and reconnaissance (ISR) community. In practice, such questions cannot be adequately addressed with discrete counting, hard classifications, or yes/no answers. For a variety of reasons ranging from data quality to modeling assumptions to inadequate definitions of what constitutes “interesting” activity, variability is inherent in the output of automated analytics, yet it is rarely reported. Consideration of these uncertainties can provide nuance to automated analyses and engender trust in their results. In this work, we assert the importance of uncertainty quantification for automated data analytics and outline a research agenda. We begin by defining uncertainty in the context of machine learning and statistical data analysis, identify its sources, and motivate the importance and impact of its quantification. We then illustrate these issues and discuss methods for data-driven uncertainty quantification in the context of a multi-source image analysis example. We conclude by identifying several specific research issues and by discussing the potential long-term implications of uncertainty quantification for data analytics, including sensor tasking and analyst trust in automated analytics.
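One simple, model-agnostic way to attach uncertainty to an analytic's output, rather than reporting a bare point estimate, is the nonparametric percentile bootstrap; it is offered here as a generic illustration of data-driven uncertainty quantification, not as the paper's specific method:

```python
import random

def bootstrap_ci(samples, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) confidence interval for any
    statistic of the data: resample with replacement, recompute the
    statistic, and read off the empirical percentiles."""
    rng = random.Random(seed)
    stats = sorted(stat([rng.choice(samples) for _ in samples])
                   for _ in range(n_boot))
    return (stats[int(n_boot * alpha / 2)],
            stats[int(n_boot * (1 - alpha / 2)) - 1])

def mean(xs):
    return sum(xs) / len(xs)
```

Reporting "detection rate 0.7, 95% CI [lo, hi]" instead of just "0.7" gives an analyst a direct handle on how much to trust the number, which is exactly the nuance argued for above.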
In this paper, we discuss the problem of distributed learning for coalition operations. We consider a scenario where different coalition forces run learning systems independently but want to share knowledge by merging their individual models into a single combined model. We consider the challenges involved in such fusion of models and propose an algorithm that can find the right fused model in an efficient manner.
A wide array of military and commercial applications rely on the collection and processing of audio data. One approach to perform analytics and machine learning on such data is to upload and process them at a central server (e.g., cloud) which offers abundant processing resources and the ability to run sophisticated machine learning models and analytics on the audio data. This approach can be inefficient due to the low bandwidth and energy limitations of mobile devices as well as intermittent connectivity to a central collection point such as the cloud. It is also problematic as audio data are often highly sensitive and subject to privacy constraints. An alternative approach is to perform audio analytics at the edge of the network where data is generated. The challenge in this approach is the requirement to perform analytics subject to resource constraints which limit performance and accuracy of predictive analytics. In this paper, we present a system for performing predictive analytics on audio data, where the training is executed on the cloud and the classification can be executed at the edge. We present the design principles and architecture of the system, and quantify the performance tradeoff of executing analytics at contemporary edge devices versus the cloud.
Advanced Concepts: Joint Session with conferences 10653 and 10635
An alternative perspective on the paradigm that should be pursued
Advanced Analytics: Joint Session with conferences 10635 and 10653
The variety of data structures used by NoSQL databases (e.g., key-value, document, triple store, graph) is evidence of the variety of ways in which data is used within an enterprise. Data in triple stores that are aligned to semantically rich ontologies are useful for discovery and all-source analysis, while data in key-value stores are useful for massive-scale, high-speed computing. Within an enterprise that utilizes multiple types of NoSQL databases, the exchange of data from one to another, if accomplished, usually comes at the cost of losing some amount of content. A methodology for lossless data exchange that utilizes semantic metadata libraries is described.
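As a toy sketch of the lossless-exchange idea (not the paper's actual methodology), the following round-trips a key-value record through a triple representation, with a stand-in dictionary playing the role of the semantic metadata library; all predicate and field names are hypothetical:

```python
# Round-trip a key-value record through (subject, predicate, object) triples
# without content loss, using a toy field-to-predicate mapping.

FIELD_TO_PREDICATE = {          # stand-in for a semantic metadata library
    "name": "ex:hasName",
    "lat":  "geo:latitude",
    "lon":  "geo:longitude",
}
PREDICATE_TO_FIELD = {v: k for k, v in FIELD_TO_PREDICATE.items()}

def record_to_triples(subject, record):
    return [(subject, FIELD_TO_PREDICATE[k], v) for k, v in record.items()]

def triples_to_record(triples):
    return {PREDICATE_TO_FIELD[p]: o for (_, p, o) in triples}

record = {"name": "sensor-7", "lat": 38.99, "lon": -76.84}
triples = record_to_triples("ex:sensor7", record)
assert triples_to_record(triples) == record   # lossless round trip
print(triples)
```

A real metadata library would of course also carry datatypes, units, and ontology alignments; the point here is only that a shared mapping in both directions is what makes the exchange lossless.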
The Technical Cooperation Program (TTCP) is a science and technology organization established to meet the collective needs of the English-speaking nations of the free world. These nations’ intelligence communities are commonly referred to as the ‘Five Eyes’ (FVEY) and include Australia (AU), Canada (CA), New Zealand (NZ), United Kingdom (UK), and United States (US). TTCP was created in 1957 between the UK and US, with CA joining shortly after and AU and NZ joining in 1965 and 1969, respectively. Eleven Groups oversee subordinate Technical Panels (TPs) to perform collaborative research and development for emerging Defense challenges. The material presented in this paper is undertaken by TP41, All-Source Analytics, operating under the Command, Control, Communications, Cyber, and Information (C4I) Group. The C4I Group is chartered to advance current and future warfighting capabilities through collaborative innovative research in Command and Control (C2), communications, cyber and information systems. Similarly, the mission of TP41 is to develop and integrate technologies for multi-INT and open source data analytics that are responsive to operational queries arising from hybrid and traditional warfighting challenges. This will be achieved by processing all-source data, exploiting informatics, and disseminating knowledge. This paper details study assignments and projects designed to achieve those goals.
The Technical Cooperation Program (TTCP) Contested Urban Environment (CUE) 2017 Experiment was conducted to explore and evaluate technologies that can enhance close combat capabilities in contested urban environments through the exploitation of airborne intelligence, surveillance, and reconnaissance (ISR) capabilities and ground sensors. This paper focuses on case studies and an evaluation of the interoperability standard between all coalition systems chosen for this event, OSUS. The Open Standard for Unattended Sensors (OSUS) is an interoperability architecture for unattended ground sensor (UGS) controllers. The U.S. Army Research Laboratory continues to develop and improve the OSUS standard, and as part of research on interoperability, participates in a variety of experiments, demonstrations and exercises. The United States provided ground sensors and a Command and Control (C2) station, Australia provided airborne sensors and a C2 station, and Canada provided C2 workstations along with a suite of ground sensors. Partner nations attended an OSUS workshop early in 2017 at the ARL campus in Adelphi, MD, USA. This provided a chance for hands-on instruction in OSUS fundamentals and the programming of OSUS controllers and interfaces. The difficulty of adding an OSUS interface into a sensor or C2 system, the challenges and benefits of using OSUS during a coalition event, and the overall effectiveness of the implementation for this specific experiment were examined. The average amount of time to implement an OSUS interface for a sensor or a C2 station was two weeks. The integration phase was fast and seamless: after a single day of integration and testing, five of six tested systems were fully operational and the sixth was missing only one function. Several shortcomings of the data model were uncovered, which was to be expected as the data model was developed for Unattended Ground Sensors (UGS) and not airborne platforms.
Overall, OSUS provided a robust and reliable means of communication between each of the systems. TTCP/CUE is an ongoing study and a similar event is planned for Montreal, Canada in 2018.
AI (Artificial Intelligence)-based algorithms have great potential for inter-operation of coalition ISR (intelligence, surveillance, and reconnaissance) systems, but rely on realistic data for training and validation. Getting such data for coalition scenarios is hampered by military regulations and is a significant hurdle in conducting basic research. We discuss an approach whereby training data can be obtained by means of scenario-driven simulations, which result in traces for network devices, ISR sensors and other infrastructure components. This generated data can be used for both training and comparison of different AI based algorithms. Coupling the synthetic data generator with a data curation system further increases its applicability.
Situational understanding requires an ability to assess the current situation and anticipate future situations, requiring both pattern recognition and inference. A coalition involves multiple agencies sharing information and analytics. This paper considers how to harness distributed information sources, including multimodal sensors, together with machine learning and reasoning services, to perform situational understanding in a coalition context. To exemplify the approach we focus on a technology integration experiment in which multimodal data — including video and still imagery, geospatial and weather data — is processed and fused in a service-oriented architecture by heterogeneous pattern recognition and inference components. We show how the architecture: (i) provides awareness of the current situation and prediction of future states, (ii) is robust to individual service failure, (iii) supports the generation of ‘why’ explanations for human analysts (including from components based on ‘black box’ deep neural networks which pose particular challenges to explanation generation), and (iv) allows for the imposition of information sharing constraints in a coalition context where there are varying levels of trust between partner agencies.
Coalition operations of the future will see an increased use of autonomous vehicles, mules and UAVs in different kinds of contexts. Because of the scalability and dynamicity of operations at the tactical edge, such vehicles along with the supporting infrastructure at base-camps and other forward operating bases would need to support an increased degree of autonomy. In this paper, we look at one specific scenario where a surveillance mission needs to be performed sharing resources borrowed from multiple coalition partners. In such an environment, experts who can define security and other types of policies for devices are hard to find. One way to address this problem is to use generative policies – an approach where the devices generate policies for their operations themselves without requiring human involvement as the configuration of the system changes. We show how access control policies can be created automatically by the different devices involved in the mission, with only high-level guidance provided by humans. The generative policy architecture can enable rapid reconfiguration of security policies needed to address dynamic changes from features such as auto-scaling. It can also support improved security in coalition contexts by enabling the solutions to use approaches like moving target defense. In this paper, we discuss a general architecture which allows the generative policy approach to be used in many different situations, a simulation implementation of the architecture and lessons learnt from the implementation of the simulation.
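A highly simplified sketch of the generative-policy idea, with entirely hypothetical device names, guidance fields, and rule formats: humans supply only high-level guidance, and each device instantiates concrete access-control rules itself as the coalition configuration changes.

```python
# Toy generative-policy sketch: a device derives its own allow-list from
# high-level human guidance. All names and fields are hypothetical.

GUIDANCE = {
    "allowed_roles": {"surveillance"},   # mission roles a partner may serve
    "max_clearance": 2,                  # partners need clearance <= this
}

def generate_rules(device_id, partners, guidance):
    """Each device regenerates its access rules as partners join or leave."""
    return [
        {"device": device_id, "grantee": p["name"], "action": "read"}
        for p in partners
        if p["role"] in guidance["allowed_roles"]
        and p["clearance"] <= guidance["max_clearance"]
    ]

partners = [
    {"name": "UK-UAV-1",  "role": "surveillance", "clearance": 1},
    {"name": "US-MULE-3", "role": "logistics",    "clearance": 1},
]
print(generate_rules("sensor-42", partners, GUIDANCE))
# only the surveillance partner is granted access
```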
Development and implementation of an effective imaging ISR system involves multiple design tradeoffs and proper specification of many interdependent system parameters. This paper discusses some of the trades and design considerations that are integral to the development of an optimized imaging ISR system. System parameters such as Atmospheric Effects, Waveband, Field of View, Resolution, proper Eye-to-Display Matching, and Size, Weight, Power-Cost (SWAP-C) will be discussed.
Autonomous unmanned aerial vehicles (UAVs) present an increasingly viable threat vector to the Defense community. Existing response systems are vulnerable to saturation attacks of large swarms of low-cost autonomous vehicles. One method of reducing this threat is the use of an intelligent counter swarm with tactics, navigation and planning capabilities for engaging the adversarial swarm. Though previous studies exist that have produced libraries of basic fighter tactics employable by unmanned fixed-wing aircraft, we are aware of little prior work that explores close-in tactical engagements at a large scale (teams of at least size 10). We examine existing technologies that can be applied in fixed-wing swarm-versus-swarm engagement, including classic pursuit-evasion strategies and the application of Lanchester's laws for attrition calculations. Our recent studies center on leveraging existing manned fighter combat doctrine, and on the benefits of collaboration. We consider experiments in close-air combat against adversaries capable of destroying aerial targets. The following work employs both a Monte Carlo analysis in a simulation environment to measure the effectiveness of several autonomous tactics, as well as an analysis of live flight experiments in swarm competitions with up to 10 vs. 10 scenarios.
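The abstract cites Lanchester's laws for attrition calculations. The aimed-fire (square) law is the coupled system dA/dt = -βB, dB/dt = -αA; a minimal Euler integration with illustrative effectiveness coefficients shows the characteristic outcome that, for equal effectiveness, the larger force prevails with roughly sqrt(A₀² − B₀²) survivors:

```python
# Euler integration of Lanchester's square (aimed-fire) law.
# Coefficients and force sizes are illustrative, not from the paper.

def lanchester_square(a0, b0, alpha, beta, dt=0.01, t_max=100.0):
    """Integrate until one side is attrited; return surviving strengths."""
    a, b, t = float(a0), float(b0), 0.0
    while a > 0 and b > 0 and t < t_max:
        a, b = a - beta * b * dt, b - alpha * a * dt  # simultaneous update
        t += dt
    return max(a, 0.0), max(b, 0.0)

# A 10-vs-7 engagement with equal per-unit effectiveness: the square law
# predicts sqrt(100 - 49) ≈ 7.1 survivors on the larger side.
a_left, b_left = lanchester_square(10, 7, alpha=0.05, beta=0.05)
print(f"survivors: A={a_left:.1f}, B={b_left:.1f}")
```

The quadratic dependence on initial strength is what makes saturation attacks by large cheap swarms so effective, and motivates the counter-swarm tactics the paper studies.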
As the regulations pertaining to beyond line of sight and higher flight altitudes relax, medium-size fixed-wing UAVs such as the ScanEagle® will see a growing market for large area coverage. The information required by users of these platforms will include high-resolution 2D imagery, hyperspectral imagery, and 3D imagery. In many cases, timeliness of data will be critical. Such scenarios could include disaster response, fire monitoring, mission planning, rail line and pipeline monitoring, and general ISR. Ball Aerospace has recently repackaged their TotalSight Flash LIDAR system for operation on the Insitu ScanEagle. This system provides near real-time color 3D georegistered information to the flight operator. In addition, all data is stored on board and is accessible by single or multiple ground users via the ScanEagle Wave Relay data link, allowing the user to see all areas collected and to pull specific 3D data as needed - much like how one uses Google Earth. This paper describes the LIDAR Assembly for ScanEagle (LASE), its operation, and the collected 3D LIDAR imagery.
Optimization of Information Sources: The Magic Rabbits
This paper will present an overview of the technical perspectives and innovative approaches in implementing a mission and means framework (MMF) and a new ontological approach for information exploitation that is utilized in developing an operation tool that will help the battlefield decision makers get the best available mission-relevant information to inform their decisions.
The proliferation of new sensors and use of mobile devices result in the production of an overwhelming amount of sensed data. At the same time, the limited quantity and capabilities of intelligence, surveillance and reconnaissance (ISR) resources relative to the number of requests for information collection requires maximizing their utilization in order to increase the accuracy of information gain and timely delivery of information. This paper presents ongoing research to optimize the exploitation of a diversity of information sources in support of information collection. The aim is to develop decision aids exploiting semantic models of the sources and the type of information produced in order to provide information consumers with answers that best meet their information requirements. Enhanced ISR asset visibility and optimized information collection result in more relevant collected data and improve subsequent analysis.
Recent events in the AFRICOM Area of Operations have once again demonstrated the critical importance of satisfying decision maker information requirements in the most timely and accurate fashion possible. Our team is supporting an Army effort to develop a tool that will utilize a modified version of the Missions and Means Framework (MMF) to automatically map information requirements to available information sources capable of delivering the required information. This paper focuses on the process used to decompose high-level information requirements (e.g., Commander’s Critical Information Requirements (CCIRs)) into Specific Information Requirements (SIRs) that can be compared to the information content produced by specific information sources.
The ability to collect an ever-increasing amount of information is outpacing analysts’ ability to interpret and communicate that information in a timely manner. The Army Research Laboratory’s (ARL’s) Signal and Image Processing (SIP) Division is presently engaged in research to develop a system-of-systems methodology designed around a Mission & Means Framework (MMF), a robust, rapid-reaction, autonomous information-generating tool that can provide the mission-relevant information/intelligence that Commanders need to make winning decisions on the battlefield. The MMF provides a structure that enables the optimal allocation of available information sources to capture and exploit Mission-Informed Needed Information based on Discoverable, Available Sensing Sources (MINI-DASS). In this paper, we describe an MMF operator that matches information needs to information means using ontologies that describe both the information requirements and the information sources. We then describe two different multi-objective optimization techniques to effectively explore the large, complex search space of possible matches to discover suitable solutions that match available information-source means to satisfy mission needs.
Our recent experience working as part of a large team has enabled us to identify the role that terminology development plays in team coordination, idea generation, and software development. This paper provides a summary of our findings.
While demonstrating the use of an Unmanned Air Vehicle (UAV) to provide a communications relay for surfaced unmanned underwater vehicles (UUVs) at Unmanned Warrior 2016, it became apparent that a closer look was needed to determine the optimal protocol to transfer automatic target recognition image files. Initial efforts with User Datagram Protocol (UDP) were highly unreliable with a very low success rate, despite extensive radio configuration improvements. Afterwards, several protocols were evaluated and UDP-based Data Transfer Protocol (UDT) seemed to provide a good hybrid of the standard transport layer protocols UDP and Transmission Control Protocol (TCP). UDT is an application layer protocol built on UDP with some TCP-like reliability characteristics. This paper documents a careful comparison study of UDP, TCP, and UDT that was performed to determine the optimal protocol for this form of data transfer. Each protocol was used to transfer image files, first in a baseline configuration, then with network emulation packet loss, and finally with real data radios in ideal and then more realistic conditions. Despite the fact that UDT was originally designed to transfer large data sets over high-bandwidth networks, this study demonstrated that it is also ideal for transporting relatively small data sets in high packet loss environments.
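A back-of-the-envelope illustration (not the paper's experiment) of why a retransmitting, UDT-like scheme completes where fire-and-forget UDP fails: with per-packet loss probability p, a one-shot transfer of n packets succeeds only with probability (1 − p)^n, while acknowledgment and retransmission merely cost extra sends.

```python
# Compare one-shot UDP delivery odds against a simulated ack-and-retransmit
# transfer on a lossy link. Parameters are illustrative.
import random

def udp_success_prob(n_packets, loss):
    """Probability that every packet of a one-shot UDP transfer arrives."""
    return (1 - loss) ** n_packets

def retransmit_transfer(n_packets, loss, seed=0):
    """Total transmissions needed when lost packets are resent (UDT-like)."""
    rng = random.Random(seed)
    sends = 0
    for _ in range(n_packets):
        while True:
            sends += 1
            if rng.random() > loss:   # packet got through this attempt
                break
    return sends

# A 100-packet image file over a 20%-loss link:
print(f"plain UDP success probability: {udp_success_prob(100, 0.2):.2e}")
print(f"transmissions with retransmission: {retransmit_transfer(100, 0.2)}")
```

Even at 20% loss the retransmitting transfer always finishes, at roughly n/(1 − p) total sends, while the one-shot transfer almost never does; this is consistent with the study's finding that UDT suits small files in high-loss environments.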
Many companies rely on user experience metrics, such as Net Promoter scores, to monitor changes in customer attitudes toward their products. This paper suggests that similar metrics can be used to assess the user experience of the pilots and sensor operators who are tasked with using our radar, EO/IR, and other remote sensing technologies. As we have previously discussed, the problem of making our national security remote sensing systems useful, usable and adoptable is a human-system integration problem that does not get the sustained attention it deserves, particularly given the high-throughput, information-dense task environments common to military operations. In previous papers, we have demonstrated how engineering teams can adopt well-established human-computer interaction principles to fix significant usability problems in radar operational interfaces. In this paper, we describe how we are using a combination of Situation Awareness design methods, along with techniques from the consumer sector, to identify opportunities for improving human-system interactions. We explain why we believe that all stakeholders in remote sensing – including program managers, engineers, and operational users – can benefit from systematically incorporating some of these measures into the evaluation of our national security sensor systems. We will also provide examples of our own experience adapting consumer user experience metrics in operator-focused evaluation of currently deployed radar interfaces.
The use of high-resolution, high-frame-rate cameras in multi-sensor imaging systems has increased the demand for computationally intensive video signal processing. One strict requirement is to minimize payload weight on pan-tilt (PT) platforms, which prevents the installation of high-power computers on the rotating part of the PT. Instead, the raw video signal from multiple cameras should be transmitted via the PT slip ring to a processing board installed as stationary equipment. The required capacity of this communication link can easily reach multiple Gbps, which would require very expensive slip rings with strictly impedance-controlled copper contacts or even fiber optics. Additionally, the lifetime of such a slip ring is much shorter than that of low-capacity slip rings. In this paper we propose a solution that uses the slip ring's central opening as a circular waveguide for radio transmission of the video signal. This concept originates from the rotary-joint circuitry of radars, which must likewise transmit a wideband signal. The main focus of this paper is on a simple and cost-effective implementation based on FPGA serializer and deserializer signal-processing components. The coupling between the FPGA and the circular waveguide is provided by passive circuits and amplifiers. We have chosen coding and modulation suitable for this implementation that enable efficient digital video transmission over a circular-waveguide-based slip ring with bandpass characteristics. We present measurement results for a 3 Gbps transmission system that uses a waveguide designed for a cutoff frequency of 10.7 GHz. Remarks about scaling this solution to different central frequencies and bandwidths are given.
A computational framework is described for modeling acoustic and radio-frequency (RF) signal propagation in complex environments, such as urban, mountainous, and forested terrain. In such environments, the influences of three-dimensional atmospheric fields and terrain variations must be addressed. The approach described here involves creation of a full environmental data representation (abstraction layer), which can be initialized with many different environmental data resources, including weather forecasts, digital terrain elevations, landcover types, and soil properties. The environmental representation is then converted into the parameters needed for particular signal modalities and classes of propagation algorithms. In this manner, execution of the signal propagation calculations is isolated from the sources of environmental data, so that all models will function with all types of environmental data. The formulation of the acoustic (infrasound and audible) and RF (VHF/UHF/SHF) feature spaces is also described. Example calculations involving infrasound propagation with 3D weather fields and RF propagation in mountainous terrain are provided.
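The abstraction-layer idea can be sketched as a thin interface: propagation models query a uniform environment representation rather than the raw data sources. The class and field names below are hypothetical, and the temperature-to-sound-speed conversion uses the standard ideal-gas approximation c ≈ 20.05√T:

```python
# Toy environment abstraction layer: data sources register samplers, and
# modality-specific code converts generic fields into model parameters.
# All interfaces here are illustrative, not the paper's actual design.

class Environment:
    """Uniform environmental representation filled from any data source."""
    def __init__(self):
        self.fields = {}                 # e.g. "temperature", "terrain_height"

    def set_field(self, name, sampler):
        self.fields[name] = sampler      # callable: (x, y, z) -> value

    def sample(self, name, x, y, z):
        return self.fields[name](x, y, z)

def acoustic_params(env, x, y, z):
    """Derive what an acoustic solver needs from generic fields: here, the
    speed of sound from absolute temperature via c ≈ 20.05 * sqrt(T)."""
    t = env.sample("temperature", x, y, z)        # kelvin
    return {"sound_speed": 20.05 * t ** 0.5}      # m/s

env = Environment()
# Fill temperature from a simple lapse-rate model standing in for a forecast.
env.set_field("temperature", lambda x, y, z: 288.15 - 0.0065 * z)
print(acoustic_params(env, 0.0, 0.0, 1000.0))
```

An RF parameter function would consume the same `Environment`, which is the isolation property the abstract describes: every model works with every data source that can populate the shared representation.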
With the increased focus on making cities “smarter”, we see an upsurge in investment in sensing technologies embedded in the urban infrastructure. The deployment of GPS sensors aboard taxis and buses, smartcards replacing paper tickets, and other similar initiatives have led to an abundance of data on human mobility, generated at scale and available in real time. Further still, users of social media platforms such as Twitter and LBSNs continue to voluntarily share multimedia content revealing in-situ information on their respective localities. The availability of such longitudinal multimodal data allows not only for the characterization of the dynamics of the city, but also for the detection of anomalies resulting from events (e.g., concerts) that transiently disrupt such dynamics. In this work, we investigate the capabilities of such urban sensor modalities, both physical and social, in detecting local events of varying intensities using statistical outlier detection techniques. We look at loading levels of buses arriving at stops, telecommunication records and taxi trips, accrued via the public APIs made available by the local transport authorities of Singapore and New York City, together with Twitter/Foursquare check-ins collected during the same period, and evaluate against a set of events assimilated from multiple event websites. In particular, we report early findings on (1) the spatial impact evident via each modality (i.e., how far from the event venue the anomaly is still present), and (2) the utility of combining decisions from the collection of sensors using rudimentary fusion techniques.
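The paper's detection pipeline is not detailed in the abstract; as a minimal stand-in, a z-score outlier test over an hourly count series (all data below are hypothetical) conveys the basic idea of flagging event-induced anomalies in a single sensor modality:

```python
# Flag hours whose count deviates sharply from the series mean, as a toy
# version of statistical outlier detection on urban sensor streams.
import statistics

def zscore_outliers(counts, threshold=3.0):
    """Return indices deviating more than `threshold` std devs from the mean."""
    mu = statistics.mean(counts)
    sigma = statistics.pstdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hypothetical hourly taxi drop-offs near a venue; hour 5 coincides with a
# concert and stands out against the background level.
hourly = [20, 23, 19, 22, 21, 95, 24, 20]
print(zscore_outliers(hourly, threshold=2.0))  # → [5]
```

Running such a detector per modality and then fusing the per-sensor decisions is the rudimentary fusion setting the authors evaluate.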