In this paper, we are interested in exploiting the heterogeneity of a robotic network composed of ground and aerial agents to sense multiple targets in a cluttered environment. Maintaining wireless communication in this type of network is fundamentally important, especially for cooperative purposes. The proposed heterogeneous network consists of ground sensors, e.g., OctoRoACHes, and aerial routers, e.g., quadrotors. Adaptive potential field methods are used to coordinate the ground mobile sensors. Moreover, a reward function for the aerial mobile wireless routers is formulated to guarantee communication coverage among the ground sensors and a fixed base station. A sub-optimal controller is proposed based on an approximate control policy iteration technique. Simulation results of a case study are presented to illustrate the proposed methodology.
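The adaptive potential field coordination mentioned above can be sketched, in simplified form, as gradient descent on an attractive/repulsive potential. The gains `k_att` and `k_rep`, the influence radius `rho0`, and the step size `eta` below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def potential_gradient(pos, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0):
    """Gradient of a classic attractive/repulsive potential field.

    All gains and the influence radius are illustrative placeholders.
    """
    # Attractive term pulls the agent toward the goal (e.g., a target estimate).
    grad = k_att * (pos - goal)
    # Repulsive terms act only within the influence radius rho0 of each obstacle.
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 0.0 < rho < rho0:
            grad += k_rep * (1.0 / rho0 - 1.0 / rho) * diff / rho**3
    return grad

def step(pos, goal, obstacles, eta=0.05):
    """One gradient-descent step on the potential field."""
    pos = np.asarray(pos, dtype=float)
    goal = np.asarray(goal, dtype=float)
    obstacles = [np.asarray(o, dtype=float) for o in obstacles]
    return pos - eta * potential_gradient(pos, goal, obstacles)
```

Descending this potential moves an agent toward the goal while nearby obstacles push it away; an adaptive variant would modulate the gains online.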
This paper presents a hybrid approximate dynamic programming (ADP) method for the optimal control of hybrid dynamic systems (HDSs), a problem that arises in many complex unmanned systems implemented with a hybrid architecture, whether due to discrete robot modes or to a complex environment. The HDS considered in this paper is characterized by a well-known three-layer hybrid framework, which includes a discrete-event controller layer, a discrete-continuous interface layer, and a continuous-state layer. The hybrid optimal control problem (HOCP) is to find the optimal discrete-event decisions and the optimal continuous controls that minimize a deterministic scalar function of the system state and control over time. Because of the uncertainty of the environment and the complexity of the HOCP, the cost-to-go cannot be evaluated before the HDS explores the entire system state space; as a result, neither the continuous nor the discrete optimal control is available ahead of time. Therefore, ADP is adopted to learn the optimal control online, while the HDS is exploring the environment. Furthermore, ADP can break the curse of dimensionality that other optimization methods, such as dynamic programming (DP) and Markov decision process (MDP) solvers, face due to the high dimensionality of the HOCP.
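The core ADP idea described above, learning a value-function approximation online while exploring and then improving the policy greedily, can be illustrated on a toy problem. The chain dynamics, quadratic feature map, and learning rate below are hypothetical stand-ins, not the HDS formulation of the paper:

```python
import numpy as np

N = 10                      # toy chain with states 0..N; goal at N (illustrative)
GAMMA = 0.95

def features(s):
    """Quadratic polynomial features -- a stand-in for a generic approximator."""
    x = s / N
    return np.array([1.0, x, x * x])

def step_model(s, a):
    """Deterministic toy model: action a in {-1, +1}, cost of 1 until the goal."""
    s2 = min(max(s + a, 0), N)
    r = 0.0 if s2 == N else -1.0
    return s2, r

def evaluate(policy, w, episodes=200, alpha=0.1):
    """TD(0) policy evaluation, updating the linear weights w online."""
    for _ in range(episodes):
        s = np.random.randint(0, N)
        for _ in range(50):
            a = policy(s)
            s2, r = step_model(s, a)
            target = r + (0.0 if s2 == N else GAMMA * w @ features(s2))
            w += alpha * (target - w @ features(s)) * features(s)
            if s2 == N:
                break
            s = s2
    return w

def greedy(w):
    """Policy improvement: one-step lookahead through the known model."""
    def policy(s):
        vals = [r + (0.0 if s2 == N else GAMMA * w @ features(s2))
                for s2, r in (step_model(s, a) for a in (-1, +1))]
        return (-1, +1)[int(np.argmax(vals))]
    return policy

# Approximate policy iteration: alternate evaluation and greedy improvement.
np.random.seed(0)
w = np.zeros(3)
policy = lambda s: int(np.random.choice((-1, +1)))   # start from a random policy
for _ in range(5):
    w = evaluate(policy, w)
    policy = greedy(w)
```

After a few iterations the learned policy moves toward the goal, even though the cost-to-go was never tabulated over the full state space, which is the online advantage the abstract refers to.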
The problem of surveilling moving targets using mobile sensor agents (MSAs) is applicable to a variety of
fields, including environmental monitoring, security, and manufacturing. Several authors have shown that the
performance of a mobile sensor can be greatly improved by planning its motion and control strategies based
on its sensing objectives. This paper presents an information potential approach for computing the MSAs'
motion plans and control inputs based on the feedback from a modified particle filter used for tracking moving
targets. The modified particle filter, as presented in this paper implements a new sampling method (based
on supporting intervals of density functions), which accounts for the latest sensor measurements and adapts,
accordingly, a mixture representation of the probability density functions (PDFs) for the target motion. It is
assumed that the target motion can be modeled as a semi-Markov jump process, and that the PDFs of the
Markov parameters can be updated based on real-time sensor measurements by a centralized processing unit
or MSAs supervisor. Subsequently, the MSAs supervisor computes an information potential function that is
communicated to the sensors and used to determine their individual feedback control inputs, such that sensors with a bounded field of view (FOV) can follow and surveil the target over time.
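A minimal bootstrap particle filter conveys the predict/reweight/resample cycle that the modified filter builds on. Here the paper's supporting-interval sampling is replaced by standard systematic resampling, and the 1-D random-walk motion model and noise levels are illustrative:

```python
import numpy as np

def particle_filter_step(particles, weights, z, motion_std=0.5, meas_std=1.0):
    """One bootstrap particle-filter update for a 1-D target position.

    motion_std and meas_std are illustrative placeholders.
    """
    n = len(particles)
    # Predict: propagate each particle through a random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, n)
    # Update: reweight by the Gaussian likelihood of the measurement z.
    weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights = weights / weights.sum()
    # Resample (systematic) to concentrate particles where the posterior mass is.
    cum = np.cumsum(weights)
    cum[-1] = 1.0                                  # guard against float round-off
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.searchsorted(cum, positions)
    return particles[idx], np.full(n, 1.0 / n)

np.random.seed(0)
particles = np.random.uniform(-10.0, 10.0, 500)
weights = np.full(500, 1.0 / 500)
for z in [2.0, 2.1, 1.9, 2.0]:        # noisy measurements of a target near 2
    particles, weights = particle_filter_step(particles, weights, z)
estimate = particles.mean()           # posterior mean estimate of the position
```

The adaptive mixture sampling in the paper replaces the generic resampling step so that the particle set tracks abrupt changes in the semi-Markov target motion.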