Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard-to-replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks that are non-preemptive and aperiodic, and whose arrival times are uncertain. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
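The core idea behind deadline-constrained voltage scaling can be illustrated with a minimal sketch. Assuming a normalized convex power model P(s) ∝ s³ (so the energy for a workload of W cycles at speed s is W·s²), the energy-optimal speed for a single task is the slowest one that meets the hard deadline, and an intra-task controller simply re-solves this at every new arrival. The function names and the power model below are illustrative assumptions, not the paper's actual formulation.

```python
def min_energy_speed(workload, time_to_deadline):
    # With a convex power model P(s) ~ s^3, energy W * s^2 is minimized
    # by running as slowly as the hard deadline allows: s* = W / D.
    return workload / time_to_deadline

def energy(workload, speed):
    # Energy = power * time = s^3 * (W / s) = W * s^2 (normalized units).
    return workload * speed ** 2

def rescale_on_arrival(remaining_work, time_to_deadline):
    # Intra-task control: at each new task arrival, recompute the speed
    # for the *remaining* workload so the current deadline is still met.
    return min_energy_speed(remaining_work, time_to_deadline)
```

Running slower always saves energy under this model (energy grows quadratically in speed), which is why the deadline constraint is active at the optimum.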
We consider a setting where multiple UAVs form a team cooperating to visit multiple targets to collect rewards associated with them. The team objective is to maximize the total reward accumulated over a given time interval. Complicating factors include uncertainties regarding the locations of targets and the effectiveness of collecting rewards, differences among vehicle capabilities, and the fact that rewards are time-varying. We describe a Receding Horizon (RH) control scheme which dynamically assigns vehicles to targets and simultaneously determines associated trajectories. This scheme is based on solving a sequence of optimization problems over a planning horizon and executing them over a shorter action horizon. We also describe a simulated battlespace environment designed to test UAV team missions and to illustrate how the RH scheme can achieve optimal performance with high probability.
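The plan-long/execute-short structure of such an RH scheme can be sketched generically. Here `plan`, `step`, and the two horizons are placeholders standing in for the actual assignment-and-trajectory optimization; this is a structural sketch, not the scheme's implementation.

```python
def receding_horizon(plan, step, state, total_time,
                     planning_horizon, action_horizon):
    """Generic receding-horizon loop: solve an optimization over the long
    planning horizon, execute only the shorter action horizon, then replan
    from the newly observed state."""
    t = 0.0
    while t < total_time:
        trajectory = plan(state, t, planning_horizon)    # long-horizon plan
        state = step(state, trajectory, action_horizon)  # execute short prefix
        t += action_horizon
    return state
```

Replanning after each short action horizon is what lets the scheme absorb uncertainty: new target information is folded into the next optimization rather than being ignored by a fixed open-loop plan.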
A key component of the Joint Air Operations (JAO) environment is the dynamic control of resources in the presence of uncertainty. This control involves the allocation of resources (e.g., different aircraft types) to prosecute targets and collect information while accounting for uncertain future events and partial, imperfect observations. The objective is to maximize the reward associated with the effective prosecution of targets, which is contingent on information collection, while minimizing loss of resources. In this paper, we extend an earlier formulation of an optimal dynamic resource allocation problem to explicitly include the dynamics of information collection and to identify the complexities involved. We then describe a simulation-based approach that was developed to solve the dynamic JAO control problem in the presence of partial and imperfect information.
A key component of a Joint Air Operation (JAO) environment is the planning and dynamic control of missions in the presence of uncertainties. This involves the assignment of resources (e.g., different aircraft types) to targets while anticipating the effect of random future events and, subsequently, dynamic control in response to various controllable and uncontrollable events as missions are executed in a hostile and rapidly changing setting. The objective is to maximize the reward associated with targets while minimizing loss of resources. In this paper, we first formulate the problem of optimal mission assignment and identify the complexities involved due to its combinatorial and stochastic characteristics. We then describe a discrete event simulation tool developed to model the JAO environment and all of its dynamic and stochastic elements and to provide a testbed for several methods we are developing to solve the problem of agile mission control. We describe some of these methods, including approximate dynamic programming using rollout algorithms and optimal resource allocation schemes, and present some numerical results.
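The rollout idea mentioned above can be sketched in generic form: for each candidate action, apply it, then follow a heuristic base policy to the end of the horizon and score the result; the action with the best rollout value is executed. Everything here (the toy `simulate`/`base_policy` interface) is an illustrative assumption, not the paper's model.

```python
def rollout_action(state, actions, simulate, base_policy, horizon):
    """One-step lookahead with rollout: evaluate each candidate action by
    simulating it and then following the base policy for `horizon` steps.
    `simulate(state, action)` returns (next_state, reward)."""
    best_a, best_val = None, float('-inf')
    for a in actions(state):
        s, val = simulate(state, a)           # immediate reward of candidate
        for _ in range(horizon):
            s, r = simulate(s, base_policy(s))  # base policy completes the mission
            val += r
        if val > best_val:
            best_a, best_val = a, val
    return best_a
```

A standard property of rollout is that the resulting policy performs at least as well as the base policy it improves upon, which makes it attractive when only a reasonable heuristic is available.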
Simulation modeling of complex systems has received increasing research attention in recent years. In this paper, we discuss the basic concepts involved in multi-resolution simulation modeling of complex stochastic systems. We argue that, in many cases, using the average over all available high-resolution simulation results as the input to subsequent low-resolution modules is inappropriate and may lead to erroneous final results. Instead, high-resolution output data should be classified into groups that match underlying patterns or features of the system behavior before sending group averages to the low-resolution modules. We propose high-dimensional data clustering as a key interfacing component between simulation modules with different resolutions and use unsupervised learning schemes to recover the patterns from the high-resolution simulation results. We give some examples to demonstrate our proposed scheme.
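The cluster-before-averaging idea can be illustrated with a toy one-dimensional k-means on bimodal high-resolution outputs: the global average lands between the two behavioral modes and matches neither, while the per-cluster averages recover both. The clustering algorithm and data below are illustrative assumptions, not the paper's scheme.

```python
import random

def kmeans1d(points, k, iters=50, seed=0):
    """Toy 1-D k-means: group high-resolution outputs into k clusters
    so that cluster averages (rather than the global mean) are passed
    to the low-resolution module."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                       # assign to nearest center
            j = min(range(k), key=lambda i: abs(p - centers[i]))
            groups[j].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]  # recompute centers
    return centers, groups
```

For outputs clustered around 1.0 and 9.0, the global mean is 5.0, a value the high-resolution model essentially never produces; the two cluster means preserve the underlying bimodal behavior.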
Resource allocation problems arise in application domains such as logistics, networking, manufacturing, and C4I systems. The discrete nature of the resources to be allocated makes such problems combinatorially complex. In addition, uncertainties in the times when resources are requested and relinquished introduce additional complexities, often necessitating the use of simulation for modeling and analysis purposes. In this paper, we present two approaches for solving such problems, the first based on ordinal optimization and the second on the idea of replacing the original discrete allocation problem by a 'surrogate model' involving a continuous allocation problem. The latter is simpler to solve through gradient-based techniques and can be shown to recover the solutions of the original problem. Concurrent simulation is used to estimate the gradients required in this approach, leading to extremely fast solutions for many problems in practice.
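The surrogate idea can be sketched on a deliberately simple instance: relax a discrete budget allocation into a continuous one with a concave reward, and solve it by projected gradient ascent. The reward form (sum of r_i·log(1+x_i)) and all names are illustrative assumptions; in the paper the gradients come from concurrent simulation rather than a closed-form model.

```python
def surrogate_allocate(rewards, total, steps=5000, lr=0.05):
    """Continuous surrogate of a discrete allocation problem:
    maximize sum_i rewards[i] * log(1 + x_i)
    subject to sum_i x_i = total, via projected gradient ascent."""
    n = len(rewards)
    x = [total / n] * n                      # feasible uniform starting point
    for _ in range(steps):
        g = [r / (1 + xi) for r, xi in zip(rewards, x)]  # reward gradient
        m = sum(g) / n                       # project gradient onto the
        x = [xi + lr * (gi - m)              # sum-preserving subspace
             for xi, gi in zip(x, g)]
    return x
```

A discrete solution is then recovered from the continuous one, e.g. by rounding and greedily adjusting units by marginal gain; the point of the surrogate is that the continuous problem is smooth and fast to solve.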
Simulation of large complex systems for the purpose of evaluating performance and exploring alternatives is a computationally slow process, currently still out of the domain of real-time applications. To overcome this limitation, one approach is to obtain a 'metamodel' of the system, i.e., a 'surrogate' model which is computationally much faster than the simulator and yet is just as accurate. We describe the use of Neural Networks (NN) as metamodeling devices which may be trained to mimic the input-output behavior of a simulation model. In the case of discrete event system (DES) models, the process of collecting the simulation data needed to obtain a metamodel can also be significantly enhanced through concurrent estimation techniques which enable the extraction of information from a single simulation that would otherwise require multiple repeated simulations. We present applications to two benchmark problems in the C3I domain: a tactical setting involving electronic ground-based radar sites, and an aircraft refueling and maintenance system as a component of a typical Air Tasking Order. A comparative analysis with alternative metamodeling approaches indicates that a NN captures significant nonlinearities in the behavior of complex systems that may otherwise not be accurately modeled.
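The metamodeling workflow can be sketched end to end: run the (slow) simulator on a set of inputs once, then train a small neural network to reproduce its input-output map, after which the network answers queries at negligible cost. The one-hidden-layer tanh network, training loop, and toy "simulator" below are illustrative assumptions, not the paper's architecture.

```python
import math
import random

def train_metamodel(simulator, xs, hidden=8, epochs=4000, lr=0.05, seed=1):
    """Fit a one-hidden-layer tanh network to mimic simulator(x)
    for scalar inputs, using plain SGD on squared error."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    data = [(x, simulator(x)) for x in xs]   # one (slow) simulation per input
    for _ in range(epochs):
        for x, y in data:
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            pred = sum(w2[j] * h[j] for j in range(hidden)) + b2
            err = pred - y                   # dLoss/dpred for 0.5*err^2
            for j in range(hidden):
                dh = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * dh * x
                b1[j] -= lr * dh
            b2 -= lr * err

    def metamodel(x):
        # Cheap surrogate: evaluating this is far faster than the simulator.
        return sum(w2[j] * math.tanh(w1[j] * x + b1[j])
                   for j in range(hidden)) + b2
    return metamodel
```

Even this tiny network fits a nonlinear response such as y = x², which a linear-regression metamodel cannot capture; that is the nonlinearity argument the comparative analysis makes.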
Simulation of large complex systems for the purpose of evaluating performance and exploring alternatives is a computationally slow process, currently still out of the domain of real-time applications. This paper overviews advances in three directions aimed at overcoming this limitation. First, based on developments in the theory of discrete event systems, concurrent simulation enables the extraction of information from a single simulation that would otherwise require multiple repeated simulations. This effectively provides simulation speedups, possibly by orders of magnitude. A second direction attempts to use simulation for the purpose of obtaining a 'metamodel' of the actual system, i.e., an approximate 'surrogate' model which is computationally very fast, yet accurate. We specifically discuss the use of neural networks as metamodeling devices which may be trained through simulation. Finally, hierarchical simulation provides yet another means for speedup, a major challenge being the preservation of fidelity between hierarchical levels. In practice, using the statistical average of a high-resolution-level simulator output as the input for a lower-resolution level causes significant loss of stochastic fidelity. We present an approach in which we cluster the high-resolution simulation output into 'path bundles' as the input for the lower-resolution level. The paper includes applications of these new directions to areas such as combat simulation and the design of C3I systems.