This work investigates the behavior of a distributed team of agents on a dynamic distributed task allocation
problem. Previous work finds that a distributed decision-making process can assign tasks appropriately
to team members even when agents have only local information. We study this problem in a distributed
environment in which agents can move, thus causing local neighborhoods to change over time. Results indicate
that a higher level of adaptation is clearly required in the dynamic environment. Despite the increased difficulty,
the distributed team is able to achieve comparable behavior in both static and dynamic environments.
We examine the use of local decentralized decision-making methods for solving the problem of resource allocation.
Specifically, we study the problem of frequency coverage given a team of cooperating receivers. The decision
making process is decentralized in that receivers can only communicate locally. We use an extension of the
minority game approach to allocate receivers to current frequency coverage tasks.
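The abstract builds on an extension of the minority game. As background, here is a minimal sketch of the classic minority game, in which agents repeatedly choose one of two sides and those on the less-crowded side win; each agent keeps a few fixed random strategies scored by past success. All parameters and the representation below are illustrative assumptions, not the paper's actual extension:

```python
import random

def simulate_minority_game(n_agents=101, n_strategies=2, memory=3,
                           rounds=500, seed=0):
    """Classic minority game: each round every agent picks side 0 or 1,
    and agents on the minority side win.  Each agent holds a few fixed
    random strategies (lookup tables over the recent outcome history)
    and always plays its best-scoring one."""
    rng = random.Random(seed)
    n_histories = 2 ** memory
    # Each strategy maps every possible recent history to an action (0 or 1).
    agents = [[[rng.randrange(2) for _ in range(n_histories)]
               for _ in range(n_strategies)]
              for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history = 0  # last `memory` outcomes packed into an integer
    minority_sizes = []
    for _ in range(rounds):
        actions = []
        for a in range(n_agents):
            best = max(range(n_strategies), key=lambda s: scores[a][s])
            actions.append(agents[a][best][history])
        ones = sum(actions)
        minority = 1 if ones < n_agents - ones else 0
        minority_sizes.append(min(ones, n_agents - ones))
        # Reward every strategy that would have chosen the minority side.
        for a in range(n_agents):
            for s in range(n_strategies):
                if agents[a][s][history] == minority:
                    scores[a][s] += 1
        history = ((history << 1) | minority) % n_histories
    return minority_sizes
```

In a task-allocation reading, the two "sides" stand in for competing coverage tasks, and the reward for joining the undersubscribed side pushes the team toward a balanced assignment without any central coordinator.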
This work investigates the definition of a classification system for multi-agent search and tracking problems,
specifically those involving relatively small numbers of agents. We define the pack behavior search and tracking
classification (PBSTC) as a mapping to animal pack behaviors that regularly perform activities similar to search and
tracking, and we categorize small multi-agent problems based on these activities. From this, we use evolutionary
computation to evolve goal priorities for a team of cooperating agents. Our goal priorities are trained to generate
candidate parameter solutions for a search and tracking problem in an emitter/sensor scenario. We identify and isolate
several classifiers from the evolved solutions and examine how they reflect the agent control system's ability in the
simulation to solve a task subset of the search and tracking problem. We also isolate the types of goal vector parameters
that contribute to these classified behaviors, and categorize the limitations those parameters impose in these scenarios.
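The abstract describes evolving goal priorities that steer a team of cooperating agents. One common way such a goal vector can drive an agent's control decision is as a weighted blend of attractions toward each goal; the sketch below assumes that representation (the function name and weighting scheme are illustrative, not the paper's actual controller):

```python
import math

def blended_heading(agent_pos, goal_positions, goal_weights):
    """Combine several goals into one steering decision: each goal pulls
    the agent along the unit vector toward it, scaled by its evolved
    priority weight; the agent heads along the weighted resultant."""
    vx = vy = 0.0
    for (gx, gy), w in zip(goal_positions, goal_weights):
        dx, dy = gx - agent_pos[0], gy - agent_pos[1]
        dist = math.hypot(dx, dy)
        if dist > 1e-9:  # ignore goals the agent is already sitting on
            vx += w * dx / dist
            vy += w * dy / dist
    return math.atan2(vy, vx)  # heading in radians
```

Under this reading, the evolutionary computation searches over the weight vector itself, so that different evolved priorities produce qualitatively different pack-like behaviors from the same control rule.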
Improvements in sensor capabilities have driven the need for automated sensor allocation and management systems. Such systems provide a penalty-free test environment and valuable input to human operators by offering candidate solutions. These abilities lead, in turn, to savings in manpower and time. Determining an optimal team of cooperating sensors for military operations is a challenging task.
There is a tradeoff between the desire to decrease the cost and the need to increase the sensing capabilities of a sensor suite. This work focuses on unattended ground sensor networks consisting of teams of small, inexpensive sensors. Given a possible configuration of enemy radar, our goal is to generate sensor suites that monitor as many enemy radar as possible while minimizing cost. In previous work, we have shown that genetic algorithms (GAs) can be used to evolve successful teams of sensors for this problem. This work extends our previous work in two ways: we use an improved simulator containing a more accurate model of radar and sensor capabilities for our fitness evaluations, and we introduce two new genetic operators, insertion and deletion, which are expected to improve the GA's fine-tuning abilities.
Empirical results show that our GA approach produces near-optimal results under a variety of enemy radar configurations using sensors with varying capabilities. Detection percentage remains stable
regardless of changes in the enemy radar placements.
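The abstract introduces insertion and deletion as new genetic operators. For a variable-length genome that lists the sensors in a team, such operators can be sketched as follows; the representation (a list of sensor types) and function names are assumptions for illustration, not the paper's actual implementation:

```python
import random

def insertion(team, sensor_pool, rng):
    """Insertion operator: add one randomly chosen sensor type at a
    random position, letting the GA grow a team's coverage."""
    child = list(team)
    child.insert(rng.randrange(len(child) + 1), rng.choice(sensor_pool))
    return child

def deletion(team, rng):
    """Deletion operator: drop one sensor at random, letting the GA
    trim cost from an over-provisioned team."""
    if len(team) <= 1:
        return list(team)  # never shrink a team to nothing
    child = list(team)
    del child[rng.randrange(len(child))]
    return child
```

Because these operators change team size by exactly one sensor, they support the fine-tuning role the abstract attributes to them: small, local adjustments to a team that crossover alone tends not to produce.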
Conference Committee Involvement (1)
Evolutionary and Bio-Inspired Computation: Theory and Applications IV