KEYWORDS: Analytics, Network architectures, Space reconnaissance, Sensors, Information technology, Failure analysis, Databases, Data storage, Data processing, Computer architecture
By extending Software Defined Networking (SDN), the Distributed Analytics and Information Sciences International Technology Alliance (DAIS ITA) https://dais-ita.org/pub has introduced a new architecture called Software Defined Coalitions (SDC) to share communication, computation, storage, database, sensor, and other resources among coalition forces. Reinforcement learning (RL) has been shown to be effective for managing SDC. Due to link failures or operational requirements, an SDC may become fragmented and later reconnected over time. This paper shows how data and knowledge acquired in the disconnected SDC domains during fragmentation can be used via transfer learning (TL) to significantly enhance the RL after fragmentation ends. The combined RL-TL technique thus enables efficient management and control of SDC despite fragmentation, and it also enhances the robustness of the SDC architecture for supporting distributed analytics services.
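The core transfer-learning idea can be sketched in a few lines, assuming tabular Q-learning in each fragment: when fragmentation ends, the merged domain's Q-table is warm-started from the tables learned independently during the split, rather than from scratch. The merging rule (a plain average) and all state/action names below are illustrative assumptions, not the paper's actual scheme.

```python
# Sketch: warm-start post-fragmentation RL from per-fragment Q-tables.
# A simple average is used where fragments overlap; this is an assumed
# merging rule for illustration only.

def transfer_q_tables(fragment_q_tables):
    """Average per-fragment Q-values to initialize the merged Q-table."""
    merged = {}
    for q in fragment_q_tables:
        for (state, action), value in q.items():
            merged.setdefault((state, action), []).append(value)
    return {sa: sum(vs) / len(vs) for sa, vs in merged.items()}

# Two fragments learned partially overlapping experience during the split.
q_a = {("s0", "route1"): 0.8, ("s0", "route2"): 0.2}
q_b = {("s0", "route1"): 0.6, ("s1", "route1"): 0.5}
q_init = transfer_q_tables([q_a, q_b])
```

Entries seen by both fragments are averaged; entries seen by only one are carried over as-is, so no fragment's experience is discarded.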
Distributed software-defined networking (SDN), which consists of multiple interconnected network domains, each managed by one SDN controller, is an emerging networking architecture that offers balanced centralized/distributed control. Under such a networking paradigm, resource management across domains (e.g., optimal resource allocation) can be extremely challenging. This is because many tasks posted to the network require resources (e.g., CPU, memory, bandwidth) from different domains, and cross-domain resources are correlated; for example, their feasibility depends on the existence of a reliable communication channel connecting them. To address this issue, we employ the reinforcement learning framework, aiming to automate the resource management and allocation process through proactive learning and interaction. Specifically, we model the issue as a Markov Decision Process (MDP) with different types of reward functions, where our objective is to minimize the average task completion time. We investigate the scenario where the resource status among controllers is fully synchronized. Under this scenario, the SDN controller has complete knowledge of the resource status of all domains, i.e., resource changes under any policy are directly observable by the controllers, and a Q-learning-based strategy is proposed to approach the optimal solution.
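The Q-learning strategy described above can be sketched on a toy problem: a controller assigns each arriving task to one of two domains, the reward is the negative completion time, and maximizing reward therefore minimizes average completion time. For brevity the sketch uses the stateless (bandit) special case of the tabular update; the cost model, domain names, and parameters are all assumptions, not the paper's actual MDP.

```python
import random

# Toy sketch: epsilon-greedy value learning over task-placement actions.
# Reward = -completion_time, so the learner prefers the faster domain.
random.seed(0)

def completion_time(domain):
    # Hypothetical cost model: domain_b is congested, hence slower.
    base = {"domain_a": 1.0, "domain_b": 3.0}[domain]
    return base + random.uniform(0.0, 0.5)

q = {"domain_a": 0.0, "domain_b": 0.0}
alpha, epsilon = 0.1, 0.2

for _ in range(500):
    if random.random() < epsilon:
        action = random.choice(list(q))        # explore
    else:
        action = max(q, key=q.get)             # exploit
    reward = -completion_time(action)
    q[action] += alpha * (reward - q[action])  # tabular value update

best = max(q, key=q.get)
```

After a few hundred placements the learned values separate cleanly and the policy routes tasks to the faster domain.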
With the emergence of Internet of Things (IoT) and edge computing applications, data is often generated by sensors and end users at the network edge, and decisions are made using these collected data. Edge devices often require cloud services in order to perform intensive inference tasks. Consequently, the inference of a deep neural network (DNN) model is often partitioned between the edge and the cloud: the edge device performs inference up to an intermediate layer of the DNN and offloads the output features to the cloud for the inference of the remainder of the network. Partitioning a DNN can help improve energy efficiency, but it also raises privacy concerns, since the cloud platform can recover part of the raw data from the intermediate results of the inference task. Recently, studies have also quantified an information-theoretic trade-off between compression and prediction in DNNs. In this paper, we conduct a simple experiment to understand to what extent it is possible to reconstruct the raw data given the output of an intermediate layer; in other words, to what extent we leak private information when sending the output of an intermediate layer to the cloud. We also present an overview of mutual-information-based studies of DNNs, to help understand information leakage and some potential ways to make distributed inference more secure.
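The partitioned-inference setup can be illustrated with a toy feed-forward model expressed as a list of layer functions: the edge runs the layers up to the split point and ships only the intermediate features, and the cloud finishes the forward pass. The model, weights, and split point below are illustrative assumptions; the point is that the split computation reproduces the full forward pass while the cloud only ever sees the intermediate features.

```python
# Minimal sketch of edge/cloud split inference on a toy two-layer MLP.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(weights, bias):
    def layer(v):
        return [sum(w * x for w, x in zip(row, v)) + b
                for row, b in zip(weights, bias)]
    return layer

layers = [
    linear([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]),
    relu,
    linear([[2.0, 0.0]], [0.0]),
]

def run(layer_seq, v):
    for f in layer_seq:
        v = f(v)
    return v

x = [3.0, 1.0]                           # raw input, stays on the edge
split = 2                                # layers 0..1 run on the edge
features = run(layers[:split], x)        # only this crosses the network
y_cloud = run(layers[split:], features)  # cloud completes the inference
y_full = run(layers, x)                  # reference: unpartitioned pass
```

The privacy question studied in the paper is precisely how much of `x` an adversarial cloud could reconstruct from `features` alone.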
With the proliferation of smart devices, it is increasingly important to exploit their computing, networking, and storage resources for executing various computing tasks at scale at mobile network edges, bringing many benefits such as better response time, network bandwidth savings, and improved data privacy and security. A key component in enabling such distributed edge computing is a mechanism that can flexibly and dynamically manage edge resources for running various military and commercial applications in a manner adaptive to fluctuating demands and resource availability. We present methods and an architecture for edge resource management based on machine learning techniques. A collaborative filtering approach combined with deep learning is proposed as a means to build a predictive model of applications' performance on resources from previous observations, and an online resource allocation architecture utilizing the predictive model is presented. We also identify relevant research topics for further investigation.
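The collaborative-filtering idea can be sketched as follows: observed (application, resource) performance scores are factored into latent vectors, and unobserved pairs are predicted from the learned factors. Plain SGD matrix factorization stands in here for the paper's deep-learning-augmented model, and all scores are synthetic.

```python
import random

# Sketch: predict application performance on edge resources from sparse
# observations via latent-factor (collaborative filtering) learning.
random.seed(1)
apps, resources, k = 3, 3, 2
# Observed performance scores: (app, resource) -> score; some pairs unseen.
observed = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0,
            (1, 1): 2.0, (2, 1): 4.0, (2, 2): 5.0}

A = [[random.uniform(0, 1) for _ in range(k)] for _ in range(apps)]
R = [[random.uniform(0, 1) for _ in range(k)] for _ in range(resources)]

def predict(i, j):
    return sum(A[i][f] * R[j][f] for f in range(k))

lr = 0.05
for _ in range(2000):
    for (i, j), score in observed.items():
        err = score - predict(i, j)
        for f in range(k):
            ai, rj = A[i][f], R[j][f]
            A[i][f] += lr * err * rj   # gradient step on app factors
            R[j][f] += lr * err * ai   # gradient step on resource factors

train_error = max(abs(predict(i, j) - s) for (i, j), s in observed.items())
```

Once trained, `predict` also scores the never-observed pairs, which is what an online allocator would consult before placing an application.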
In order to address the unique requirements of sensor information fusion in a tactical coalition environment, we propose a new architecture based on the concept of invocations. An invocation is a combination of software code and a piece of data, both managed using techniques from Information Centric Networking. This paper discusses the limitations of current approaches, presents the invocation-oriented architecture, illustrates how it works with an example scenario, and gives reasons for its suitability in a coalition environment.
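One way to picture an invocation is as a code/data bundle named by its content, in the spirit of Information Centric Networking, so that any coalition node holding the same bundle can serve or cache it under the same name. The structure and the content-hash naming scheme below are assumptions for illustration, not the paper's specification.

```python
import hashlib

# Illustrative sketch: an "invocation" bundles code with data and derives
# its network name from the content, ICN-style (assumed naming scheme).

class Invocation:
    def __init__(self, code: bytes, data: bytes):
        self.code = code
        self.data = data
        # Content-derived name: identical code+data pairs get the same
        # name regardless of which coalition node produced them.
        self.name = hashlib.sha256(code + b"\x00" + data).hexdigest()

inv1 = Invocation(b"def fuse(readings): return sum(readings)/len(readings)",
                  b"[21.5, 22.0, 21.8]")
inv2 = Invocation(inv1.code, inv1.data)   # same bundle, same name
```

Content-derived naming is what lets caching and retrieval techniques from ICN apply to computation, not just to data.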
KEYWORDS: Sensor networks, Sensors, Mathematical modeling, Network architectures, Data modeling, Wireless communications, Systems modeling, Chemical analysis, Relays, Data communications
In this paper, a novel mission-oriented sensor network architecture for military applications is proposed, involving multiple sensing missions with varying quality of information (QoI) requirements. A new concept of a mission QoI satisfaction index, indicating the degree of satisfaction for any mission in the network, is introduced. Furthermore, the 5WH (why, when, where, what, who, how) principle on the operational context of information is extended to capture the changes of QoI satisfaction indexes upon mission admission and completion. These allow modeling the whole network as a "black box" whose inputs are the QoI requirements of the existing and newly arriving missions and whose output is the QoI satisfaction index; on this basis, the new concept of sensor network capacity is introduced and mathematically described. The QoI-centric sensor network capacity is a key element of the proposed architecture and aids in controlling the admission of newly arriving missions in accordance with the QoI needs of all (existing and newly admitted) missions. Finally, the proposed architecture and its key design parameters are illustrated through the example of a sensor network deployed for detecting the presence of a hazardous chemical material.
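The black-box admission idea can be sketched as a simple gate: the network reports a QoI satisfaction index per mission, and a new mission is admitted only if every index, for existing and new missions alike, stays above a threshold. The capacity model below (uniform degradation when total demand exceeds a fixed budget) is a stand-in for illustration, not the paper's mathematical formulation.

```python
# Sketch: QoI-driven mission admission against a "black box" network model.
THRESHOLD = 0.8   # minimum acceptable QoI satisfaction index (assumed)
CAPACITY = 10.0   # abstract sensing-capacity units (hypothetical)

def satisfaction_indexes(demands):
    """Black-box stand-in: indexes degrade uniformly when overloaded."""
    total = sum(demands)
    scale = min(1.0, CAPACITY / total) if total else 1.0
    return [scale for _ in demands]

def admit(existing, new_demand):
    proposed = existing + [new_demand]
    return all(s >= THRESHOLD for s in satisfaction_indexes(proposed))

missions = [4.0, 3.0]
ok = admit(missions, 3.0)        # total demand 10: within capacity
too_big = admit(missions, 8.0)   # total demand 15: would degrade everyone
```

The admission test deliberately evaluates all missions, not just the newcomer, mirroring the requirement that admission respect the QoI needs of existing and newly admitted missions alike.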
This paper examines the practical challenges in applying the distributed network utility maximization (NUM) framework to the problem of resource allocation and sensor device adaptation in a mission-centric wireless sensor network (WSN) environment. By providing rich (multi-modal), real-time information about a variety of (often inaccessible or hostile) operating environments, sensors such as video, acoustic, and short-aperture radar enhance the situational awareness of many battlefield missions. Prior work on the applicability of the NUM framework to mission-centric WSNs has focused on tackling the challenges introduced by i) the definition of an individual mission's utility as a collective function of multiple sensor flows and ii) the dissemination of an individual sensor's data via a multicast tree to multiple consuming missions. However, the practical application and performance of this framework are influenced by several parameters internal to the framework as well as by implementation-specific decisions, and node mobility complicates matters further. In this paper, we use discrete-event simulations to study the effects of these parameters on the performance of the protocol in terms of speed of convergence, packet loss, and signaling overhead, thereby addressing the challenges posed by wireless interference and node mobility in ad-hoc battlefield scenarios. This study provides a better understanding of the issues involved in the practical adaptation of the NUM framework and helps identify potential avenues of improvement within the framework and protocol.
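The distributed NUM iteration whose convergence the paper studies can be sketched for a single shared link with logarithmic utilities: each flow i independently maximizes w_i*log(x_i) - p*x_i (giving the rate x_i = w_i/p), and the link updates its price p in proportion to excess demand. The weights, capacity, and step size are illustrative; the closed-form optimum for this utility family is x_i = w_i*C/Σw, which the iteration should approach.

```python
# Sketch: dual-decomposition NUM on one link, log utilities (illustrative).
weights = [1.0, 2.0, 3.0]   # mission/flow weights (assumed)
capacity = 6.0              # shared link capacity (assumed)
p, step = 0.5, 0.05         # initial price and price-update step size

for _ in range(2000):
    rates = [w / p for w in weights]            # each flow's local optimum
    p = max(1e-6, p + step * (sum(rates) - capacity))  # price follows excess demand

# Closed-form optimum for sum of w_i*log(x_i) s.t. sum(x_i) <= C:
optimal = [w * capacity / sum(weights) for w in weights]
```

The speed-of-convergence and signaling-overhead questions examined in the paper correspond to how many of these price-update rounds, and how many price messages, the real protocol needs, especially under wireless loss and mobility.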