This PDF file contains the front matter associated with SPIE Proceedings Volume 10190 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
The Internet of Things (IoT) integrates a variety of devices that collectively provide more information than can currently be handled with ease. While the IoT brings much good, it also presents many problems, and not all of the potential solutions can be used in the unique environment of the military. The tactical edge is an even harsher environment, with constrained communications and resources, yet it still carries requirements to process data in real time for improved command decisions.
In August 2016, the Army Research Laboratory (ARL) participated in Enterprise Challenge 2016 at Ft. Huachuca. Incorporating an Expeditionary Processing, Exploitation and Dissemination (Ex-PED) model, ARL demonstrated the utility of tactical wide-area and persistent sensing in a bandwidth-constrained environment, the effectiveness of a Sensor 3-D Common Operating Picture (COP) to enable appropriate sensor management, and the ability to seamlessly incorporate coalition sensors into the ARL exercise enterprise as well as the broader event enterprise.
The Open Standard for Unattended Sensors (OSUS) was developed by DIA and ARL to provide a plug-n-play platform for sensor interoperability. Our objective is to use the standardized data produced by OSUS in performing data analytics on information obtained from various sensors. Data analytics can be integrated in one of three ways: within an asset itself; as an independent plug-in designed for one type of asset (e.g., a camera or a seismic sensor); or as an independent plug-in designed to incorporate data from multiple assets. As a proof-of-concept, we develop a model that can be used in the second of these types – an independent component for camera images. The dataset used was collected as part of a demonstration and test of OSUS capabilities. The image data includes images of empty outdoor scenes and scenes with human or vehicle activity. We design, train, and test a convolutional neural network (CNN) to analyze these images and assess the presence of activity in the image. The resulting classifier labels input images as "empty" or "activity" with 86.93% accuracy, demonstrating the promising opportunities for deep learning, machine learning, and predictive analytics as an extension of OSUS's already robust suite of capabilities.
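As a rough sketch of the kind of pipeline the abstract describes (this is not the authors' trained network: the filter, weights, and image below are random placeholders, and the real classifier would be learned from the OSUS dataset), a single convolution, ReLU, pooling, and logistic-output pass looks like:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (single channel), the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling to shrink the feature map."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

def classify(image, kernel, weights, bias):
    """Conv -> ReLU -> pool -> flatten -> logistic 'activity' probability."""
    features = np.maximum(conv2d(image, kernel), 0.0)
    pooled = max_pool(features)
    score = pooled.ravel() @ weights + bias
    prob = 1.0 / (1.0 + np.exp(-score))
    return "activity" if prob > 0.5 else "empty"

rng = np.random.default_rng(0)
image = rng.random((16, 16))           # stand-in for a camera frame
kernel = rng.standard_normal((3, 3))   # one untrained 3x3 filter
weights = rng.standard_normal(7 * 7)   # pooled 14x14 map -> 7x7 = 49 inputs
bias = 0.0
print(classify(image, kernel, weights, bias))
```

A production network would stack many such filter layers and train them by backpropagation; the sketch only shows the forward data flow from image to binary label.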
It is envisioned that the success of future military operations depends on better integration, organizationally and operationally, among allies, coalition members, inter-agency partners, and so forth. However, this leads to a challenging and complex environment where the heterogeneity and dynamism of the operating environment intertwine with the evolving situational factors that affect the decision-making life cycle of the warfighter. Therefore, the users in such environments need secure, accessible, and resilient information infrastructures where policy-based mechanisms adapt the behaviours of the systems to meet end-user goals. By specifying and enforcing a policy-based model and framework for operations and security which accommodates heterogeneous coalitions, high levels of agility can be enabled to allow rapid assembly and restructuring of system and information resources. However, current prevalent policy models (e.g., the rule-based event-condition-action (ECA) model and its variants) are not sufficient to deal with the highly dynamic and plausibly non-deterministic nature of these environments. Therefore, to address the above challenges, in this paper we present a new approach for policies which enables managed systems to take more autonomic decisions regarding their operations.
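For readers unfamiliar with the rule-based ECA model whose limitations the paper argues against, a minimal sketch is below. The coalition-network rule, node names, and thresholds are hypothetical illustrations, not from the paper:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EcaRule:
    """Classical event-condition-action rule: on `event`, if `condition`
    holds over the current context, fire `action`."""
    event: str
    condition: Callable[[Dict], bool]
    action: Callable[[Dict], str]

def dispatch(event: str, context: Dict, rules: List[EcaRule]) -> List[str]:
    """Fire every rule whose event matches and whose condition holds."""
    return [r.action(context) for r in rules
            if r.event == event and r.condition(context)]

# Hypothetical rule: throttle a partner node's sensor traffic when its
# reported link bandwidth drops below a threshold.
rules = [EcaRule(
    event="link_report",
    condition=lambda ctx: ctx["bandwidth_kbps"] < 64,
    action=lambda ctx: f"throttle {ctx['node']} to summary-only reports",
)]

print(dispatch("link_report", {"node": "UK-07", "bandwidth_kbps": 48}, rules))
```

Each rule fires deterministically per event; this is precisely the rigidity the paper identifies, since such rules cannot express end-user goals under non-deterministic conditions.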
The growing arsenal of network-centric sensor platforms shows great potential to enhance situational awareness capabilities. Non-traditional sensors collect a diverse range of data that can provide a more accurate and comprehensive common operational picture when combined with conventional intelligence, surveillance, and reconnaissance (ISR) products. One of the integration challenges is mediating differences in terminology that different data providers use to describe the data they have extracted. A data consumer should be able to reference information using the vocabulary that they are familiar with and rely on the framework to handle the mediation; for example, it should be up to the framework to identify that two different terms are synonyms for the same concept. In this paper we present an approach for automatically performing this alignment using authoritative sources such as Wikipedia (a stand-in for the Intellipedia wiki), and present experimental results that demonstrate that this approach is able to align a large number of concepts between different terminologies.
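The mediation idea can be sketched as follows. The redirect table below is a hard-coded stand-in for the Wikipedia/Intellipedia redirect data the paper mines; a real system would query the authoritative source rather than a dictionary:

```python
# Illustrative only: a tiny redirect table standing in for the redirect
# graph of an authoritative source such as Wikipedia.
REDIRECTS = {
    "UAV": "Unmanned aerial vehicle",
    "Drone (aircraft)": "Unmanned aerial vehicle",
    "RPAS": "Unmanned aerial vehicle",
    "IED": "Improvised explosive device",
}

def canonical(term: str) -> str:
    """Follow redirects (cycle-safe) to the canonical concept name."""
    seen = set()
    while term in REDIRECTS and term not in seen:
        seen.add(term)
        term = REDIRECTS[term]
    return term

def aligned(term_a: str, term_b: str) -> bool:
    """Two providers' terms denote the same concept if their canonical
    forms coincide -- the synonym mediation the framework performs."""
    return canonical(term_a) == canonical(term_b)

print(aligned("UAV", "Drone (aircraft)"))  # True
```

The consumer keeps using its own vocabulary ("UAV"); the framework resolves it to the shared concept behind a provider's "Drone (aircraft)".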
The success of future military coalition operations—be they combat or humanitarian—will increasingly depend on a system’s ability to share data and processing services (e.g. aggregation, summarization, fusion), and automatically compose services in support of complex tasks at the network edge. We call such an infrastructure instinctive—i.e., an infrastructure that reacts instinctively to address the analytics task at hand. However, developing such an infrastructure is made complex for the coalition environment due to its dynamism both in terms of user requirements and service availability.
In order to address the above challenge, in this paper we highlight our research vision and sketch some initial solutions in the problem domain. Specifically, we propose means to (1) automatically infer formal task requirements from mission specifications; (2) discover data, services, and their features automatically to satisfy the identified requirements; (3) create and augment shared domain models automatically; (4) efficiently offload services to the network edge and across coalition boundaries, adhering to their computational properties and costs; and (5) optimally allocate and adjust services while respecting the constraints of the operating environment and service fit. We envision that the research will result in a framework which provides self-description, discovery, and assembly capabilities for both data and services in support of coalition mission goals.
In the wake of rapid maturing of Internet of Things (IoT) approaches and technologies in the commercial sector, the IoT is increasingly seen as a key 'disruptive' technology in military environments. Future operational environments are expected to be characterized by a lower proportion of human participants and a higher proportion of autonomous and semi-autonomous devices. This view is reflected in both US 'third offset' and UK 'information age' thinking and is likely to have a profound effect on how multinational coalition operations are conducted in the future. Much of the initial consideration of IoT adoption in the military domain has rightly focused on security concerns, reflecting similar cautions in the early era of electronic commerce. As IoT approaches mature, this initial technical focus is likely to shift to considerations of interactivity and policy. In this paper, rather than considering the broader range of IoT applications in the military context, we focus on roles for IoT concepts and devices in future intelligence, surveillance and reconnaissance (ISR) tasks, drawing on experience in sensor-mission resourcing and human-computer collaboration (HCC) for ISR. We highlight the importance of low training overheads in the adoption of IoT approaches, and the need to balance proactivity and interactivity (push vs. pull modes). As with sensing systems over the last decade, we emphasize that, to be valuable in ISR tasks, IoT devices will need a degree of mission-awareness in addition to an ability to self-manage their limited resources (power, memory, bandwidth, computation, etc.). In coalition operations, the management and potential sharing of IoT devices and systems among partners (e.g., in cross-coalition tactical-edge ISR teams) becomes a key issue due to heterogeneous factors such as language, policy, procedure and doctrine.
Finally, we briefly outline a platform that we have developed in order to experiment with human-IoT teaming on ISR tasks, in both physical and virtual settings.
The Army Research Lab's Open Standard for Unattended Sensors (OSUS) is an extensible vendor-neutral sensor controller. OSUS provides a simple user interface for connecting to sensors and a rudimentary data processing capability. Integrating open-source Internet of Things (IoT) technology such as Node-RED can greatly extend the capabilities of OSUS and improve the user experience.
Owing to advances in technology, connected devices (henceforth referred to as the IoT) that automate the functionality of many domains, be it intelligent manufacturing or smart homes, have become a reality. However, with the proliferation of such connected and interconnected devices, manually managing networks efficiently and effectively becomes an impractical, if not impossible, task. This is because devices have their own obligations and prohibitions in context, and humans are not equipped to maintain a bird's-eye view of the system state. Traditionally, policies are used to address this issue, but the IoT arena requires a policy framework whose language provides sufficient expressiveness along with efficient reasoning procedures to automate management. In this work we present our initial work toward a scalable knowledge-based policy framework for the IoT and demonstrate its applicability through a smart home application.
Detecting anomalies is important for continuous monitoring of sensor systems. One significant challenge is to use sensor data and autonomously detect changes that cause different conditions to occur. Using deep learning methods, we are able to monitor and detect changes as a result of some disturbance in the system. We utilize deep neural networks for sequence analysis of time series. We use a multi-step method for anomaly detection. We train the network to learn spectral and temporal features from the acoustic time series. We test our method using fiber-optic acoustic data from a pipeline.
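The multi-step idea of extracting spectral features per frame and then scoring deviation from nominal conditions can be illustrated with a much simpler baseline than the paper's deep network. Everything below is synthetic: the signal, sample rate, frame sizes, and injected disturbance are placeholders, and the z-score detector stands in for the learned model:

```python
import numpy as np

def frame_spectra(signal, frame_len=256, hop=128):
    """Short-time magnitude spectra: the spectral features a network
    would consume, one row per time frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    window = np.hanning(frame_len)
    return np.abs(np.fft.rfft(np.array(frames) * window, axis=1))

def anomaly_scores(spectra, baseline):
    """Worst-bin deviation of each frame from the nominal baseline,
    normalised by the baseline's per-bin spread."""
    mu, sigma = baseline.mean(axis=0), baseline.std(axis=0) + 1e-9
    return np.abs((spectra - mu) / sigma).max(axis=1)

rng = np.random.default_rng(1)
t = np.arange(8192) / 8000.0
nominal = np.sin(2 * np.pi * 60 * t) + 0.1 * rng.standard_normal(t.size)
disturbed = nominal.copy()
disturbed[4000:4512] += np.sin(2 * np.pi * 900 * t[4000:4512])  # injected event

baseline = frame_spectra(nominal)
scores = anomaly_scores(frame_spectra(disturbed), baseline)
print(scores.argmax())  # frame index nearest the injected disturbance
```

The deep-learning approach replaces the fixed z-score with temporal and spectral features learned from the fiber-optic acoustic data, but the monitor-then-score structure is the same.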
A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance, and Reconnaissance (ISR), since they allow an estimation of the regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.
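The qualitative claim can be made concrete with a toy illustration. The exponential functional form, base rate, and decay constant below are assumptions chosen only to exhibit the inverse correlation; they are not the paper's model:

```python
import math

def incident_likelihood(base_rate: float, mean_memberships: float,
                        k: float = 0.5) -> float:
    """Hypothetical illustration: incident likelihood falls as the mean
    number of social-group memberships (diversity) rises. The exponential
    form and decay constant k are assumed, not taken from the paper."""
    return base_rate * math.exp(-k * mean_memberships)

for groups in (1, 2, 4):
    print(groups, round(incident_likelihood(0.08, groups), 4))
```

Any monotone-decreasing mapping from diversity to likelihood would exhibit the same qualitative behaviour; the paper derives its relationship from the generative model itself.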
Singapore’s “smart city” agenda is driving the government to provide public access to a broader variety of urban informatics sources, such as images from traffic cameras and information about buses servicing different bus stops. Such informatics data serves as probes of evolving conditions at different spatiotemporal scales. This paper explores how such multi-modal informatics data can be used to establish the normal operating conditions at different city locations, and then apply appropriate outlier-based analysis techniques to identify anomalous events at these selected locations. We will introduce the overall architecture of sociophysical analytics, where such infrastructural data sources can be combined with social media analytics to not only detect such anomalous events, but also localize and explain them. Using the annual Formula-1 race as our candidate event, we demonstrate a key difference between the discriminative capabilities of different sensing modes: while social media streams provide discriminative signals during or prior to the occurrence of such an event, urban informatics data can often reveal patterns that have higher persistence, including before and after the event. In particular, we shall demonstrate how combining data from (i) publicly available Tweets, (ii) crowd levels aboard buses, and (iii) traffic cameras can help identify the Formula-1 driven anomalies, across different spatiotemporal boundaries.
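One simple way to operationalize "normal operating conditions plus outlier-based analysis" across modalities is to flag a time slot only when several streams deviate jointly. The counts below are fabricated placeholders for hourly tweet volume, bus crowding, and camera-derived road occupancy around a hypothetical event slot:

```python
import numpy as np

def zscores(series):
    """Deviation of each observation from the series' normal level."""
    s = np.asarray(series, dtype=float)
    return (s - s.mean()) / (s.std() + 1e-9)

def joint_anomalies(modalities, threshold=2.0, min_modes=2):
    """Flag time slots where at least `min_modes` modalities deviate
    beyond `threshold` standard deviations simultaneously."""
    z = np.abs(np.vstack([zscores(m) for m in modalities]))
    return np.flatnonzero((z > threshold).sum(axis=0) >= min_modes)

# Hypothetical hourly observations; slot 5 is the 'event'.
tweets   = [30, 28, 33, 31, 29, 160, 34, 30]
bus_load = [55, 60, 58, 57, 54, 95, 59, 56]
traffic  = [40, 42, 41, 39, 43, 71, 44, 40]
print(joint_anomalies([tweets, bus_load, traffic]))  # → [5]
```

Requiring agreement across modes is what gives the fused detector the persistence the paper describes: a spike in tweets alone would not trip it, whereas an event visible in infrastructure data and social media together would.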
This work extends previous work carried out at A.U.G. Signals Ltd. Herein, the problem of vessel identification/verification in a persistent surveillance scenario is approached using deep learning neural networks. Using images with vessels in the scene, deep learning neural networks were set up to detect vessels from still imagery (visible wavelength). Different neural network designs were implemented for vessel detection and compared on learning performance (speed and required training sets) and estimation accuracy. Unique features from these designs were combined to create an optimized solution. This paper presents a comparison of the deep learning approaches implemented and their relative capabilities in vessel verification.
Behavioral Analytics (BA) relies on digital breadcrumbs to build user profiles and create clusters of entities that exhibit a large degree of similarity. The prevailing assumption is that an entity will assimilate the group behavior of the cluster it belongs to. Our understanding of BA and its application in different domains continues to evolve and is a direct result of the growing interest in Machine Learning research. When trying to detect security threats, we use BA techniques to identify anomalies, defined in this paper as deviation from the group behavior. Early research papers in this field reveal a high number of false positives, where a security alert is triggered based on deviation from the cluster's learned behavior that is still within the norm of what the system defines as acceptable behavior. Further, domain-specific security policies tend to be narrow and inadequately represent what an entity can do. Hence, they: a) limit the amount of useful data during the learning phase; and b) lead to violations of policy during the execution phase. In this paper, we propose a framework for future research on the role of policies and behavior security in a coalition setting, with emphasis on anomaly detection and an individual's deviation from group activities.
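The deviation-from-cluster notion of anomaly can be sketched with centroid distances. The feature vectors, single-cluster setup, and mean-plus-two-sigma threshold below are illustrative choices, not the framework the paper proposes:

```python
import numpy as np

def fit_centroids(profiles, labels):
    """Learn per-cluster behaviour: a centroid plus a distance threshold
    set at mean + 2*std of member distances (the 'learned norm')."""
    model = {}
    for c in set(labels):
        members = profiles[labels == c]
        centroid = members.mean(axis=0)
        d = np.linalg.norm(members - centroid, axis=1)
        model[c] = (centroid, d.mean() + 2 * d.std())
    return model

def is_anomalous(profile, cluster, model):
    """An entity is anomalous if it drifts beyond its cluster's threshold."""
    centroid, threshold = model[cluster]
    return bool(np.linalg.norm(profile - centroid) > threshold)

rng = np.random.default_rng(2)
profiles = rng.normal(0.0, 1.0, size=(100, 4))   # e.g. login/transfer features
labels = np.zeros(100, dtype=int)                # one behavioural cluster
model = fit_centroids(profiles, labels)

print(is_anomalous(profiles[0], 0, model))       # member drawn from the cluster
print(is_anomalous(profiles[0] + 8.0, 0, model)) # shifted far outside it
```

The false-positive problem the paper highlights lives exactly in the threshold choice: a tight threshold flags in-norm entities, while policy-narrowed training data makes the learned centroid itself unrepresentative.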
Human-machine Interface and Machine Learning Approaches I
In this paper, we address the needed components to create usable engineering and operational user interfaces (UIs) for airborne Synthetic Aperture Radar (SAR) systems. As airborne SAR technology gains wider acceptance in the remote sensing and Intelligence, Surveillance, and Reconnaissance (ISR) communities, the need for effective and appropriate UIs to command and control these sensors has also increased. However, despite the growing demand for SAR in operational environments, the technology still faces an adoption roadblock, in large part due to the lack of effective UIs. It is common to find operational interfaces that have barely grown beyond the disparate tools engineers and technologists developed to demonstrate an initial concept or system. While sensor usability and utility are common requirements to engineers and operators, their objectives for interacting with the sensor are different. As such, the amount and type of information presented ought to be tailored to the specific application.
At the current time, interfaces between humans and machines use only a limited subset of the senses that humans are capable of. The interaction between humans and computers can become much more intuitive and effective if we are able to use more senses and create other modes of communication between them. New machine learning technologies can make this type of interaction become a reality. In this paper, we present a framework for holistic communication between humans and machines that uses all of the senses, and discuss how a subset of this capability can allow machines to talk to humans to indicate their health for various tasks such as predictive maintenance.
Human-machine Interface and Machine Learning Approaches II
Knowledge bases for decision support systems are growing increasingly complex, through continued advances in data ingest and management approaches. However, humans do not possess the cognitive capabilities to retain a bird's-eye view of such knowledge bases, and may end up issuing unsatisfiable queries to such systems. This work focuses on the implementation of a query reformulation approach for graph-based knowledge bases, specifically designed to support the Resource Description Framework (RDF). The reformulation approach presented is instance- and schema-aware. Thus, in contrast to relaxation techniques found in the state of the art, the presented approach produces in-context query reformulations.
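The flavour of schema-aware reformulation can be sketched over a toy in-memory triple store. The vocabulary, instances, and the relax-to-superclass strategy below are hypothetical stand-ins; the paper targets real RDF graphs and richer schema knowledge:

```python
# Minimal stand-in for an RDF graph: (subject, predicate, object) triples.
TRIPLES = {
    ("sensor_12", "type", "AcousticSensor"),
    ("sensor_12", "locatedIn", "sector_4"),
    ("AcousticSensor", "subClassOf", "Sensor"),
    ("RadarSensor", "subClassOf", "Sensor"),
}

def match(type_wanted, location):
    """Instances of `type_wanted` (or any direct subclass) at `location`."""
    subclasses = {s for s, p, o in TRIPLES
                  if p == "subClassOf" and o == type_wanted} | {type_wanted}
    return {s for s, p, o in TRIPLES
            if p == "type" and o in subclasses
            and (s, "locatedIn", location) in TRIPLES}

def reformulate(type_wanted, location):
    """If the query is unsatisfiable as asked, relax the class constraint
    to its schema superclass -- an in-context reformulation."""
    hits = match(type_wanted, location)
    if hits:
        return type_wanted, hits
    for sup in [o for s, p, o in TRIPLES
                if s == type_wanted and p == "subClassOf"]:
        hits = match(sup, location)
        if hits:
            return sup, hits
    return type_wanted, set()

print(reformulate("RadarSensor", "sector_4"))  # relaxed to 'Sensor'
```

Because the relaxation consults both the schema (the subClassOf hierarchy) and the instances actually present, the reformulated query is guaranteed to be satisfiable in context rather than merely syntactically broader.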
Processing remote sensing data is the epitome of computation, yet real-time collection systems remain human-labor intensive. Operator labor is consumed by both overhead tasks (cost) and value-added production (benefit). In effect, labor is taxed and then lost. When an operator comes on-shift, they typically duplicate setup work that their teammates have already performed many times. "Pass down" of state information can be difficult if security restrictions require total logouts and blank screens: hours or even days of valuable history and context are lost. As work proceeds, duplicative effort is common because it is typically easier for operators to "do it over" rather than share what others have already done.
As we begin a major new system version, we are refactoring the user interface to reduce time and motion losses. Working with users, we are developing "click budgets" to streamline interface use. One basic function is shared clipboards, which reduce the use of sticky notes and verbal communication of data strings. We illustrate two additional designs to share work: window copying and window sharing. Copying (technically, shallow or deep object cloning) allows any system user to duplicate a window and its configuration for themselves or for another to use. Sharing allows a window to have multiple users: shareholders with read-write access and viewers with read-only access. These solutions would allow windows to persist across multiple shifts, with a rotating cast of shareholders and viewers. Windows thus become durable objects of shared effort and persistent state.
While these are low-tech functions, the cumulative labor savings in a 24/7 crew position (525,000 minutes/year spread over multiple individuals) would be significant. New design and implementation is never free, and these investments typically do not appeal to government acquisition officers with short-term acquisition-cost concerns rather than a long-term O&M (operations and maintenance) perspective.
We share some successes in educating some officers, in collaboration with system users, about the human capital involved in operating the systems they are acquiring.
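The shallow-versus-deep cloning distinction behind the window-copying design is easy to demonstrate. The window structure below is a hypothetical stand-in for a real window configuration object:

```python
import copy

# Hypothetical window object: top-level settings plus a nested filter list.
window = {
    "title": "Collection Deck A",
    "filters": ["sensor=EO", "region=west"],
}

shallow = copy.copy(window)      # new top-level dict, shared nested list
deep = copy.deepcopy(window)     # fully independent clone

window["filters"].append("alt<10k")

print("alt<10k" in shallow["filters"])  # True: shallow copy shares the list
print("alt<10k" in deep["filters"])     # False: deep copy is independent
```

A shallow copy thus behaves more like the "sharing" design (edits propagate through the shared nested state), while a deep copy gives the recipient a private duplicate to diverge from.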
Even as remote sensing technology has advanced in leaps and bounds over the past decade, the remote sensing community lacks interfaces and interaction models that facilitate effective human operation of our sensor platforms. Interfaces that make great sense to electrical engineers and flight test crews can be anxiety-inducing to operational users who lack professional experience in the design and testing of sophisticated remote sensing platforms. In this paper, we reflect on an 18-month collaboration in which our Sandia National Laboratories research team partnered with an industry software team to identify and fix critical issues in a widely used sensor interface. Drawing on basic principles from cognitive and perceptual psychology and interaction design, we provide simple, easily learned guidance for minimizing common barriers to system learnability, memorability, and user engagement.
Detection, Tracking and Localization for Persistent Surveillance
This paper presents results of the Canadian-German research project PASSAGES (Protection and Advanced Surveillance System for the Arctic: Green, Efficient, Secure) on an advanced surveillance system for safety and security of maritime operations in Arctic areas. The motivation for a surveillance system of the Northwest Passage is the projected growth of maritime traffic along Arctic sea routes and the need for securing Canada's sovereignty by controlling its Arctic waters, as well as for protecting the safety of international shipping and the intactness of the Arctic marine environment. To ensure border security and to detect and prevent illegal activities, it is necessary to develop a system for surveillance and reconnaissance that brings together all related means, assets, organizations, processes and structures to build one homogeneous and integrated system. The harsh Arctic conditions require a new surveillance concept that fuses heterogeneous sensor data, contextual information, and available pre-processed surveillance data, and combines all components to efficiently extract and provide the maximum available amount of information. The fusion of all these heterogeneous data and information will provide improved and comprehensive situation awareness for risk assessment and decision support of different stakeholder groups such as governmental authorities, commercial users and Northern communities.
We have been interested in the analytical and experimental study of real-life bird song sources for several years. Bird sources are characterized by either a single bird or multiple birds vocalizing independently of each other or in response to others. The sources may be physically stationary or exhibit movements, and the signals are wide-band in frequency and often intermittent, with pauses and possible restarts repeating previously used songs or introducing new ones. Thus, the detection, classification, and 2D or 3D localization of these birds pose challenging signal processing and array processing problems. Because some birds can mimic other birds, time-domain waveform characterization may not be sufficient for determining the number of birds. Similarly, due to the intermittent nature of the vocalizations, data collected over a long period cannot be used naively. Thus, it is necessary to use the short-time Fourier transform (STFT), displayed as a spectrogram, to fully exploit the intricate time and frequency properties of these sources. Various dominant spectral data over the relevant frames are used to form sample covariance matrices. Eigenvectors from the decompositions of these matrices at these spectral indices can provide 2D/3D DOA estimations of the intermittent sources over different frames. Proper clustering of these data can be used to perform enhanced detection, classification, and localization of multiple bird sources. Two sets of collected bird data are used to demonstrate these claims.
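The covariance-and-eigendecomposition step can be sketched with synthetic data. The array geometry (uniform linear, half-wavelength spacing), arrival angle, and snapshot model below are assumptions for illustration, not the collected bird data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_mics, n_frames = 4, 200

# Synthetic stand-in for STFT snapshots at one dominant spectral bin:
# a single far-field source with steering vector `a`, plus sensor noise.
theta = np.deg2rad(40)                       # assumed arrival angle
a = np.exp(-1j * np.pi * np.arange(n_mics) * np.sin(theta))
s = rng.standard_normal(n_frames) + 1j * rng.standard_normal(n_frames)
noise = 0.1 * (rng.standard_normal((n_mics, n_frames))
               + 1j * rng.standard_normal((n_mics, n_frames)))
X = np.outer(a, s) + noise                   # mics x frames snapshot matrix

# Sample covariance over the frames, then its eigendecomposition:
# the dominant eigenvector spans the source's signal subspace.
R = X @ X.conj().T / n_frames
eigvals, eigvecs = np.linalg.eigh(R)
signal_vec = eigvecs[:, -1]                  # eigenvector of largest eigenvalue

# Alignment with the true steering vector shows the DOA information
# the subspace carries; a DOA estimator would scan angles for this peak.
alignment = abs(signal_vec.conj() @ a) / np.linalg.norm(a)
print(round(float(alignment), 3))
```

For intermittent sources, the abstract's point is that these covariances are formed only over frames where a dominant spectral bin is active, and the resulting per-frame estimates are then clustered across time.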
In this paper, we present our study on track classification that takes into account environmental information and target state estimates. The tracker uses several motion models adapted to different target dynamics (pedestrian, ground vehicle, and SUAV, i.e. small unmanned aerial vehicle) and works in a centralized architecture. The main idea is to exploit both the classification given by heterogeneous sensors and the classification obtained with our fusion module. The fusion module, presented in this paper, assigns a class to each track according to track location, velocity, and associated uncertainty. To model the likelihood of each class, a fuzzy approach is used that considers constraints on the target's capability to move in the environment. The evidential reasoning approach based on Dempster-Shafer Theory (DST) is then used to integrate this classifier output over time. The fusion rules are tested and compared on real data obtained with our wireless sensor network. To handle realistic ground target tracking scenarios, we use an autonomous smart computer deployed in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of this system is evaluated in a real exercise for an intelligence operation (a “hunter hunt” scenario).
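The time integration of classifier outputs under Dempster-Shafer theory can be illustrated with Dempster's rule of combination. The frame of discernment and the mass values below are invented for illustration; the paper's actual fusion rules may differ.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions keyed by frozenset
    subsets of the frame of discernment."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

P, V, U = frozenset({"ped"}), frozenset({"veh"}), frozenset({"uav"})
ALL = P | V | U                           # full ignorance
# Hypothetical fuzzy-classifier outputs at two successive time steps
m_t1 = {P: 0.6, V: 0.1, ALL: 0.3}
m_t2 = {P: 0.5, U: 0.2, ALL: 0.3}
m = dempster_combine(m_t1, m_t2)
print(round(m[P], 3))   # 0.778 -- belief in "pedestrian" grows over time
```

Repeating the combination at each time step gives the temporal integration the abstract describes.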
Modeling and Simulation and Enabling Technologies I
To support a dismounted infantry platoon during deployment, we team it with several unmanned aerial and ground vehicles (UAVs and UGVs). The unmanned systems integrate seamlessly into the infantry platoon, providing automated reconnaissance during movement while keeping formation, as well as close-range reconnaissance during halts. The sensor data each unmanned system provides are continuously analyzed in real time by specialized algorithms that detect humans in live video from UAV-mounted infrared cameras and detect and locate gunshots with acoustic sensors. All recognized threats are fused into a consistent situational picture in real time, available to platoon and squad leaders as well as higher-level command and control (C2) systems. This gives friendly forces local information superiority and increased situational awareness without the need to constantly monitor the unmanned systems and their sensor data.
To achieve modularity and reconfigurability for sensor information fusion services in modern battle-spaces, dynamic service composition and dynamic topology determination are needed. In the current state of the art, such information fusion services are composed manually and programmatically. In this paper, we consider an approach toward more automation by assuming that the topology of a solution is provided, and automatically choosing the types and kinds of algorithms to use at each step. This includes the use of contextual information and techniques such as multi-armed bandits for investigating the exploration-exploitation tradeoff.
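A multi-armed bandit for choosing among candidate algorithms at one step of a fusion topology can be sketched with a simple epsilon-greedy selector. The algorithm names and reward model below are hypothetical; a real deployment would use some observed measure of fusion quality as the reward.

```python
import random

class EpsilonGreedySelector:
    """Pick one of several candidate fusion algorithms per step, balancing
    exploration of untried algorithms against exploitation of the best so far."""
    def __init__(self, algorithms, epsilon=0.1, seed=0):
        self.algorithms = list(algorithms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.algorithms}
        self.values = {a: 0.0 for a in self.algorithms}   # running mean reward
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.algorithms)                 # explore
        return max(self.algorithms, key=lambda a: self.values[a])   # exploit

    def update(self, algorithm, reward):
        self.counts[algorithm] += 1
        n = self.counts[algorithm]
        self.values[algorithm] += (reward - self.values[algorithm]) / n

# Simulated per-step fusion quality for three hypothetical algorithm choices
true_quality = {"kalman": 0.6, "particle": 0.8, "grid": 0.4}
sel = EpsilonGreedySelector(true_quality, epsilon=0.2, seed=1)
rng = random.Random(2)
for _ in range(2000):
    a = sel.select()
    sel.update(a, true_quality[a] + rng.gauss(0, 0.1))
best = max(sel.values, key=sel.values.get)
print(best)   # "particle"
```

Contextual information would turn this into a contextual bandit, conditioning the choice on features of the current situation.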
To address the unique requirements of sensor information fusion in a tactical coalition environment, we propose a new architecture based on the concept of invocations. An invocation is a combination of a piece of software code and a piece of data, both managed using techniques from Information-Centric Networking. This paper discusses the limitations of current approaches, presents the invocation-oriented architecture, illustrates how it works with an example scenario, and gives reasons for its suitability in a coalition environment.
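The invocation concept — code paired with data, both named and retrieved by content rather than by host — can be sketched minimally as follows. The class and function names are hypothetical, and `pickle` plus SHA-256 stand in for whatever serialization and content-naming scheme a real ICN-based implementation would use.

```python
import hashlib
import pickle

def content_name(payload: bytes) -> str:
    """ICN-style content name: the payload is addressed by its own hash."""
    return hashlib.sha256(payload).hexdigest()

class InvocationStore:
    """Both halves of an invocation -- a piece of code and a piece of data --
    are stored and retrieved by content name rather than by host address."""
    def __init__(self):
        self.store = {}

    def put(self, obj) -> str:
        blob = pickle.dumps(obj)
        name = content_name(blob)
        self.store[name] = blob
        return name

    def invoke(self, code_name: str, data_name: str):
        fn = pickle.loads(self.store[code_name])
        data = pickle.loads(self.store[data_name])
        return fn(data)

def average(xs):          # example "software code" half of an invocation
    return sum(xs) / len(xs)

store = InvocationStore()
code = store.put(average)
data = store.put([4.0, 6.0])
print(store.invoke(code, data))   # 5.0
```

Because names are derived from content, any coalition node holding the named blobs can satisfy the invocation, which is the property that makes the approach attractive at the tactical edge.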
The inherent nature of unattended sensors makes these devices highly vulnerable to detection, exploitation, and denial in contested environments. Physical access is often cited as the easiest way to compromise any device or network. A new mechanism for mitigating these types of attacks, developed under the Assistant Secretary of Defense for Research and Engineering, ASD(R&E), project “Smoke Screen in Cyberspace,” was demonstrated in a live, over-the-air experiment. Smoke Screen encrypts, slices up, and disperses redundant fragments of files throughout the network. Recovery is only possible after retrieving all fragments, and attacking or denying one or more nodes does not affect the availability of other fragment copies in the network. This experiment proved the feasibility of redundant file fragmentation and is the foundation for developing more sophisticated methods to blacklist compromised nodes, move data fragments away from risks of compromise, and forward stored data fragments closer to the anticipated retrieval point. This paper outlines initial results on scalability in the number of node members, fragment size, file size, and performance in a heterogeneous network consisting of the Wireless Network after Next (WNaN) radio and the Common Sensor Radio (CSR).
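The encrypt-slice-disperse idea can be sketched as follows. The XOR keystream is a toy stand-in for a real cipher, and the round-robin placement and function names are illustrative, not the actual Smoke Screen protocol.

```python
import hashlib
import os

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream (SHA-256 in counter mode) standing in for a real cipher."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def fragment(data: bytes, n_fragments: int):
    """Slice ciphertext into n_fragments roughly equal pieces."""
    size = -(-len(data) // n_fragments)   # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def disperse(fragments, nodes, copies=2):
    """Place `copies` replicas of each fragment on distinct nodes (round-robin)."""
    placement = {node: {} for node in nodes}
    for i, frag in enumerate(fragments):
        for c in range(copies):
            placement[nodes[(i + c) % len(nodes)]][i] = frag
    return placement

def recover(placement, n_fragments, key, lost=()):
    """Reassemble from surviving nodes; fails only if every copy of some
    fragment sat on a lost node."""
    found = {}
    for node, frags in placement.items():
        if node not in lost:
            found.update(frags)
    if len(found) < n_fragments:
        raise ValueError("missing fragments; file unrecoverable")
    cipher = b"".join(found[i] for i in range(n_fragments))
    return encrypt(cipher, key)   # XOR cipher is its own inverse

key = os.urandom(16)
secret = b"sensor log: 12 detections at grid 31UDQ"
placement = disperse(fragment(encrypt(secret, key), 4), ["n1", "n2", "n3"], copies=2)
print(recover(placement, 4, key, lost={"n2"}) == secret)   # True
```

With two copies per fragment, denying any single node leaves the file recoverable, which is the availability property the experiment set out to prove.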
To support dynamic communities of interests in coalition operations, new architectures for efficient sharing of ISR assets are needed. The use of blockchain technology in wired business environments, such as digital currency systems, offers an interesting solution by creating a way to maintain a distributed shared ledger without requiring a single trusted authority. In this paper, we discuss how a blockchain-based system can be modified to provide a solution for dynamic asset sharing amongst coalition members, enabling the creation of a logically centralized asset management system by a seamless policy-compliant federation of different coalition systems. We discuss the use of blockchain for three different types of assets in a coalition context, showing how blockchain can offer a suitable solution for sharing assets in those environments. We also discuss the limitations in the current implementations of blockchain which need to be overcome for the technology to become more effective in a decentralized tactical edge environment.
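The core property a blockchain brings — an append-only ledger in which each entry commits to its predecessor — can be sketched without any consensus machinery. The record fields below are invented for illustration; a coalition deployment would add distribution, signatures, and a consensus protocol on top of this.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class AssetLedger:
    """Append-only hash-chained ledger of asset-sharing records. Each entry
    commits to its predecessor, so tampering with history breaks verification."""
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "record": "genesis"}]

    def append(self, record):
        self.chain.append({"index": len(self.chain),
                           "prev": block_hash(self.chain[-1]),
                           "record": record})

    def verify(self) -> bool:
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = AssetLedger()
ledger.append({"asset": "UAV-7", "granted_to": "coalition-B", "hours": 4})
ledger.append({"asset": "sensor-3", "granted_to": "coalition-A", "hours": 2})
print(ledger.verify())                    # True
ledger.chain[1]["record"]["hours"] = 99   # tamper with an earlier grant
print(ledger.verify())                    # False
```

No single partner has to be trusted with the ledger: any member can recompute the hash chain and detect a partner who rewrites past asset grants.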
Current decision-making processes separate intelligence tasks from operations tasks. This creates a system that is reactive rather than proactive, leaving unrealized gains in the timeliness and quality of the response to a situation of interest. In this paper we present a new optimization paradigm that combines the tasking of intelligence, surveillance, and reconnaissance (ISR) assets with the tasks and needs of operational assets. Some collection assets are dedicated to one function or the other, while a third category that can perform both is also considered. We use a scenario to demonstrate the value of the merger by presenting the impact on a number of intelligence and operations measures of performance and effectiveness (MOPs/MOEs). Using this framework, mission readiness and execution assessment for a simulated humanitarian assistance/disaster relief (HADR) mission is monitored for tasks on intelligence gathering, distribution of supplies, and repair of vital transportation lanes during the relief effort. The results demonstrate a significant improvement in measures of performance when intelligence tasking takes operational objectives into consideration.
Today’s battlefield space is extremely complex, dealing with an enemy that is neither well-defined nor well-understood. Adversaries are widely distributed, loosely networked groups engaging in nefarious activities. Decision makers need situational understanding, and understanding of adversarial capabilities and intent is essential. The information needed at any time depends on the mission or task at hand. The information sources that can potentially provide mission-relevant information are disparate and numerous; they include sensors, social networks, fusion engines, the internet, etc. Management of these multi-dimensional information sources is critical. This paper presents a new approach to enhancing battlefield understanding by optimally matching available information sources (means) to required missions/tasks and by determining the “goodness” of the information acquired in meeting the capabilities needed. Requirements are usually expressed in terms of a presumed technology solution (e.g., imagery). The metaphor of the “magic rabbits” was conceived to remove presumed technology solutions from requirements by declaring the “required” technology obsolete; instead, intelligent “magic rabbits” are used to provide the needed information. The question then becomes: “What information do you need the rabbits to provide you?” This paper describes a new approach called Mission-Informed Needed Information - Discoverable, Available Sensing Sources (MINI-DASS), which designs a process that builds information acquisition missions and determines what the “magic rabbits” need to provide in a machine-understandable manner. Also described are the Missions and Means Framework (MMF) model used, the process flow, the approach to developing an ontology of information-source means, and the approach for determining the value of the information acquired.
Modeling and Simulation and Enabling Technologies II
This paper describes a comprehensive modeling approach for infrasonic (sub-audible acoustic) signals, which starts with an accurate representation of the source spectrum and directivity, propagates the signals through the environment, and senses and processes the signals at the receiver. The calculations are implemented within EASEE (Environmental Awareness for Sensor and Emitter Employment), a general software framework for modeling the impacts of terrain and weather on target signatures and on the performance of a diverse range of battlefield sensing systems, including acoustic, seismic, RF, visible, and infrared. At each stage in the modeling process, the signals are described by realistic statistical distributions. Sensor performance is quantified using statistical metrics such as probability of detection and target location error. To extend EASEE for infrasonic calculations, new feature sets were created, including standard octaves and one-third octaves. A library of gunfire and blast noise spectra and directivity functions was added from ERDC’s BNOISE (Blast Noise) and SARNAM (Small Arms Range Noise Assessment Model) software. Infrasonic propagation modeling is supported by extending several existing propagation algorithms, including a basic ground impedance model and the Green’s function parabolic equation (GFPE), which provides accurate numerical solutions for wave propagation in a refractive atmosphere. The BNOISE propagation algorithm, which is based on tables generated by a fast-field program (FFP), was also added. Finally, an extensive library of transfer functions for microphones operating in the infrasonic range was added, which interfaces to EASEE’s sensor performance algorithms. Example calculations illustrate terrain and atmospheric impacts on infrasonic signal propagation and the directivity characteristics of blast noise.
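The one-third-octave feature bands mentioned above can be computed from the standard band definition. This is a generic sketch: the base-2 convention and the 1 kHz reference frequency are assumptions on my part, and EASEE's actual band tables may instead follow the ANSI base-10 definition.

```python
import math

def third_octave_centers(f_low, f_high, f_ref=1000.0):
    """Nominal one-third-octave band centers under the base-2 convention
    f_c = f_ref * 2**(n/3), covering the interval [f_low, f_high]."""
    n_lo = math.ceil(3 * math.log2(f_low / f_ref))
    n_hi = math.floor(3 * math.log2(f_high / f_ref))
    return [f_ref * 2 ** (n / 3) for n in range(n_lo, n_hi + 1)]

def band_edges(f_c):
    """Lower and upper edge frequencies of the band centered at f_c."""
    return f_c * 2 ** (-1 / 6), f_c * 2 ** (1 / 6)

centers = third_octave_centers(1.0, 20.0)     # an infrasonic range of interest
print(len(centers))                           # 13 bands between 1 and 20 Hz
print(round(centers[0], 2), round(centers[-1], 2))
```

Summing signal energy within each band's edges yields the per-band features that feed the statistical sensor-performance metrics.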
We report progress toward the development of a compression schema suitable for use in the Army’s Common Operating Environment (COE) tactical network. The COE facilitates the dissemination of information across all Warfighter echelons through the establishment of data standards and networking methods that coordinate the readout and control of a multitude of sensors in a common operating environment. When integrated with a robust geospatial mapping functionality, the COE enables force tracking, remote surveillance, and heightened situational awareness for Soldiers at the tactical level. Our work establishes a point cloud compression algorithm, based on image-based deconstruction and photogrammetric reconstruction of three-dimensional (3D) data, that is suitable for dissemination within the COE. An open-source visualization toolkit was used to deconstruct 3D point cloud models from ground mobile light detection and ranging (LiDAR) into a series of images and associated metadata that can be easily transmitted on a tactical network. Stereo photogrammetric reconstruction is then conducted on the received image stream to recover the transmitted 3D model. The reported method achieves nominal compression ratios typically on the order of 250 while retaining tactical information and accurate georegistration. Our work advances the scope of persistent intelligence, surveillance, and reconnaissance through the development of 3D visualization and data compression techniques relevant to the tactical operations environment.
The Sensor Information Testbed COllaborative Research Environment (SITCORE) and the Automated Online Data Repository (AODR) are significant enablers of the U.S. Army Research Laboratory (ARL)’s Open Campus Initiative and together create a highly collaborative research laboratory and testbed environment focused on sensor data and information fusion. SITCORE creates a virtual research development environment allowing collaboration from other locations, including DoD, industry, academia, and coalition facilities. SITCORE combined with AODR provides end-to-end algorithm development, experimentation, demonstration, and validation. The AODR enterprise allows ARL, as well as other government organizations, industry, and academia, to store and disseminate multiple-intelligence (Multi-INT) datasets collected at field exercises and demonstrations, and to facilitate research and development (R&D) and the advancement of analytical tools and algorithms supporting the Intelligence, Surveillance, and Reconnaissance (ISR) community. The AODR provides a potential central repository for standards-compliant datasets to serve as the “go-to” location for lessons learned and reference products. Many of the AODR datasets have associated ground truth and other metadata, providing a rich and robust data suite for researchers to develop, test, and refine their algorithms. Researchers download the test data to their own environments using a sophisticated web interface. The AODR allows researchers to request copies of stored datasets and the government to process the requests and approvals in an automated fashion. Access to the AODR requires two-factor authentication in the form of a Common Access Card (CAC) or External Certificate Authority (ECA) certificate.
Teams of small autonomous UAVs can be used to map and explore unknown environments which are inaccessible to teams of human operators in humanitarian assistance and disaster relief efforts (HA/DR). In addition to HA/DR applications, teams of small autonomous UAVs can enhance Warfighter capabilities and provide operational stand-off for military operations such as cordon and search, counter-WMD, and other intelligence, surveillance, and reconnaissance (ISR) operations. This paper will present a hardware platform and software architecture to enable distributed teams of heterogeneous UAVs to navigate, explore, and coordinate their activities to accomplish a search task in a previously unknown environment.
Recent progress in the development of unmanned aerial vehicles (UAVs) has led to more and more situations in which drones such as quadrocopters or octocopters pose a potentially serious threat or could be used as a powerful tool for illegal activities. Counter-UAV systems are therefore required in many applications to detect approaching drones as early as possible. In this paper, an efficient and robust algorithm is presented for UAV detection using static VIS and SWIR cameras. Whereas high-resolution VIS cameras enable the detection of UAVs at greater distances in the daytime, surveillance at night can be performed with a SWIR camera. First, a background estimation and structurally adaptive change detection process detects movements and other changes in the observed scene. The local density of changes is then computed and used both for background density learning and to build up the foreground model; the two models are compared to produce the final UAV alarm result. On the one hand, the density model filters out noise effects; on the other hand, moving scene parts such as leaves in the wind or cars driving on a street can easily be learned, so that such areas are masked out and false alarms there are suppressed. This scene learning is done automatically, simply by processing without UAVs present, in order to capture the normal situation. The given results document the performance of the presented approach in VIS and SWIR in different situations.
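The background-estimation and change-density stages can be sketched on synthetic frames. The exponential running average, thresholds, and window size below are illustrative choices, not the paper's actual structurally adaptive algorithm.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average as a simple background estimate."""
    return (1 - alpha) * bg + alpha * frame

def change_mask(bg, frame, thresh=25.0):
    """Pixels that differ strongly from the learned background."""
    return np.abs(frame - bg) > thresh

def change_density(mask, win=5):
    """Fraction of changed pixels in a win x win neighborhood: isolated noise
    pixels yield low density, a compact moving object a high one."""
    h, w = mask.shape
    m = mask.astype(float)
    dens = np.zeros_like(m)
    r = win // 2
    for i in range(h):
        for j in range(w):
            dens[i, j] = m[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].mean()
    return dens

rng = np.random.default_rng(0)
bg = 100.0 + rng.normal(0, 2, (64, 64))
for _ in range(20):                       # learn the drone-free scene
    bg = update_background(bg, 100.0 + rng.normal(0, 2, (64, 64)))
frame = 100.0 + rng.normal(0, 2, (64, 64))
frame[30:34, 40:44] += 80.0               # small bright intruder in the frame
alarm = change_density(change_mask(bg, frame)) > 0.3
print(bool(alarm[31, 41]), bool(alarm[5, 5]))   # True False
```

The same density map, accumulated over drone-free footage, would give the learned foreground-suppression mask for perpetually moving scene parts.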
In this study, we propose an automatic approach to tree detection and classification in registered 3-band aerial images and associated digital surface models (DSMs). The tree detection results can be used in 3D city modelling and urban planning. The problem is harder when trees are in close proximity to each other or to other objects in the scene, such as rooftops. This study presents a method for locating individual trees and estimating crown size based on local maxima in the DSM, accompanied by color and texture information. For this purpose, a segment-level classifier is trained for 10 classes, and the classification results are improved by analyzing the class probabilities of neighbouring segments. Tree segments under a certain height are then eliminated using the digital terrain model (DTM). For the tree classes, local maxima points are obtained, and the tree radius is estimated from the vertical and horizontal height profiles passing through these points. The final tree list, containing the centers and radii of the trees, is obtained by selecting from the list of tree candidates according to overlap and selection parameters. Although only a limited number of training samples is used in this study, the tree classification and localization results are competitive.
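The local-maxima step can be illustrated on a synthetic canopy height model (DSM minus DTM). The window size, height threshold, and half-height radius rule below are illustrative parameters, not the study's calibrated values.

```python
import numpy as np

def tree_tops(chm, win=2, min_height=3.0):
    """Cells that are the unique maximum of their neighborhood and rise above
    min_height over the terrain: candidate tree tops."""
    h, w = chm.shape
    tops = []
    for i in range(win, h - win):
        for j in range(win, w - win):
            patch = chm[i - win:i + win + 1, j - win:j + win + 1]
            if (chm[i, j] >= min_height and chm[i, j] == patch.max()
                    and (patch == chm[i, j]).sum() == 1):
                tops.append((i, j))
    return tops

def crown_radius(chm, top, frac=0.5):
    """Walk the horizontal height profile from the top until it drops below
    frac * peak height; return that distance in cells."""
    i, j = top
    peak, r = chm[i, j], 0
    while j + r + 1 < chm.shape[1] and chm[i, j + r + 1] >= frac * peak:
        r += 1
    return r

# Synthetic CHM: two cone-shaped crowns on flat terrain
chm = np.zeros((40, 40))
for ci, cj, height, rad in [(10, 10, 12.0, 5), (25, 28, 8.0, 4)]:
    for i in range(40):
        for j in range(40):
            d = np.hypot(i - ci, j - cj)
            if d < rad:
                chm[i, j] = max(chm[i, j], height * (1 - d / rad))

print(tree_tops(chm))                  # [(10, 10), (25, 28)]
print(crown_radius(chm, (10, 10)))     # 2 -- cone drops below half height there
```

In the study, these candidates are further filtered by the segment classification and the overlap/selection parameters before the final tree list is produced.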
Hyperspectral images have been used in many areas, including city planning, mining, and military decision support systems. Hyperspectral image analysis techniques have great potential for vegetation detection and classification: by identifying spectral differences across the electromagnetic spectrum, they can provide information about the chemical composition of materials. This study introduces a vegetation detection method employing an Artificial Neural Network (ANN) on hyperspectral imagery. A backpropagation multilayer perceptron (MLP) is used for training. The performance of the ANN is improved by joint use with the Spectral Angle Mapper (SAM): the algorithm first obtains a certainty measure from the ANN, then computes each pixel's angular distance with SAM, and divides the certainty measure by the angular distance. Results from the ANN, SAM, and Support Vector Machine (SVM) algorithms are compared and evaluated against the result of the proposed algorithm. Only a limited number of training samples is used. The results demonstrate that the joint use of ANN and SAM significantly improves classification accuracy for small training sets.
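The SAM distance and the certainty-divided-by-angle combination can be sketched directly. The four-band spectra below are invented toy values; real hyperspectral pixels have tens to hundreds of bands.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum;
    insensitive to illumination scaling, sensitive to spectral shape."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def combined_score(ann_certainty, angle, eps=1e-6):
    """The fusion idea from the abstract: divide the network's certainty by
    the SAM angle, boosting spectra close to the class reference."""
    return ann_certainty / (angle + eps)

veg_ref = np.array([0.05, 0.08, 0.45, 0.50])   # toy 4-band vegetation-like spectrum
pixel_veg = 2.0 * veg_ref                       # same shape, brighter illumination
pixel_soil = np.array([0.20, 0.25, 0.30, 0.33])

a_veg = spectral_angle(pixel_veg, veg_ref)
a_soil = spectral_angle(pixel_soil, veg_ref)
print(round(a_veg, 4))    # 0.0 -- pure scaling does not change the angle
print(a_soil > a_veg)     # True
print(combined_score(0.7, a_veg) > combined_score(0.7, a_soil))   # True
```

The illumination invariance shown in the first print is exactly why SAM complements a network trained on few samples: brightness variation alone cannot flip the decision.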
Piezoelectric motors and actuators are characterized by direct drive, fast response, high positioning resolution, and high mechanical power density. These properties are beneficial for optical devices such as gimbals, optical image stabilizers, and mirror angular positioners; applications include sensor pointing systems, image stabilization, laser steering, and more. This paper reports on the construction, properties, and operation of three types of piezo-based building blocks for optical steering applications: a small gimbal and a two-axis OIS (Optical Image Stabilization) mechanism, both based on piezoelectric motors, and a flexure-assisted piezoelectric actuator for mirror angular positioning. The gimbal weighs less than 190 grams, has a wide angular span (solid angle > 2π), and allows for 80 micro-radian stabilization at a stabilization frequency of up to 25 Hz. The OIS is a closed-loop X-Y platform with a lateral positioning resolution better than 1 μm, a stabilization frequency of up to 25 Hz, and a travel of +/-2 mm. It is used for laser steering or for positioning the image sensor, based on signals from a MEMS gyro sensor. The mirror positioner is based on three piezoelectric actuation axes for tip/tilt (each providing a 50 μm motion range), has a positioning resolution of 10 nm, and is capable of a 1000 Hz response. A combination of the gimbal with the mirror positioner or the OIS stage is explored by simulation, indicating a <10 micro-radian stabilization capability under substantial perturbation. Simulation and experimental results are presented for a combined device offering both a wide steering-angle range and high bandwidth.
In recent years, professionally used workstations have become increasingly complex, and multi-monitor systems are more and more common. Novel interaction techniques such as gesture recognition have been developed but are used mostly for entertainment and gaming purposes. These human-computer interfaces are not yet widely used in professional environments, where they could greatly improve the user experience. To approach this problem, we combined existing tools in our image-interpretation workstation of the future, a multi-monitor workplace comprising four screens. Each screen is dedicated to a specific task in the image interpretation process: a geo-information system to geo-reference the images and provide a spatial reference for the user, an interactive recognition support tool, an annotation tool, and a reporting tool. To further support the complex task of image interpretation, self-developed interaction systems for head-pose estimation and hand tracking were used in addition to more common technologies such as touchscreens, face identification, and speech recognition. A set of experiments was conducted to evaluate the usability of the different interaction systems. Two typical, extensive image interpretation tasks were devised and approved by military personnel. They were then tested with a current workstation setup using only keyboard and mouse against our image-interpretation workstation of the future. To take a more detailed look at the usefulness of the interaction techniques in a multi-monitor setup, the hand tracking, head-pose estimation, and face recognition were further evaluated using tests inspired by everyday tasks. The results of the evaluation and the discussion are presented in this paper.