There has recently been an explosion of interest in graph neural networks, which extend neural network methods to data recorded on graphs. Most of the work has focused on static tasks, where a feature vector is available at each node of a graph, often with an associated label, and the goal is node classification or regression, graph classification, or link prediction. Some work has recently emerged on processing sequences of data (time-series) on graphs with neural-network-based methods; most strategies combine Long Short-Term Memory units (LSTMs) or Gated Recurrent Units (GRUs) with graph convolution. However, most of these methods treat the observed graph as ground truth, failing to account for the uncertainty associated with the graph structure in the learning task. As a remedy, the recently proposed Bayesian graph convolutional neural networks treat the provided graph as a noisy observation of a true underlying graph, or as a realization of a random graph model, so that uncertainty in the identification of relationships between nodes can be modeled. We specify a joint posterior over the graph and the weights of the neural network, and perform inference through a combination of variational inference and Monte Carlo sampling. In this paper, we extend this Bayesian framework to a regression task for time-series on graphs.
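As an illustrative sketch of the inference scheme described above, the snippet below averages the output of a one-layer graph convolution over graphs and weights drawn from an assumed posterior. All sizes, names, the independent-Bernoulli edge model, and the Gaussian weight posterior are hypothetical illustrations, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 nodes, 2 features per node. edge_prob encodes an
# assumed posterior belief in each undirected edge (independent Bernoulli).
X = rng.normal(size=(4, 2))
edge_prob = np.array([[0.0, 0.9, 0.1, 0.0],
                      [0.9, 0.0, 0.8, 0.1],
                      [0.1, 0.8, 0.0, 0.9],
                      [0.0, 0.1, 0.9, 0.0]])
W_mean, W_std = rng.normal(size=(2, 1)), 0.1  # assumed variational weight posterior

def gcn_layer(A, X, W):
    # One graph-convolution layer with symmetric normalization of A + I.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    return np.tanh(A_norm @ X @ W)

# Monte Carlo estimate of the predictive mean: average the network output
# over sampled graphs A ~ p(A | data) and sampled weights W ~ q(W).
samples = []
for _ in range(200):
    A = (rng.random(edge_prob.shape) < edge_prob).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                               # keep the graph undirected
    W = W_mean + W_std * rng.normal(size=W_mean.shape)
    samples.append(gcn_layer(A, X, W))
pred_mean = np.mean(samples, axis=0)          # one prediction per node
```

The point of the sketch is only the structure of the estimator: predictions are marginalized over both graph and weight uncertainty rather than computed on a single fixed graph.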
In this paper we propose a Superpositional Marginalized δ-GLMB (SMδ-GLMB) filter for multi-target tracking and provide bootstrap and particle flow particle filter implementations. Particle filter implementations of the marginalized δ-GLMB filter are computationally demanding. As a first contribution, we show that for the specific case of superpositional observation models, a reduced-complexity update step can be achieved by employing a superpositional change of variables. The resulting SMδ-GLMB filter can be readily implemented using the unscented Kalman filter or particle filtering methods.
As a second contribution, we employ particle flow to produce a measurement-driven importance distribution that serves as the proposal in the SMδ-GLMB particle filter. In high-dimensional state spaces, or for highly informative observations, the generic particle filter often suffers from weight degeneracy or requires a prohibitively large number of particles. Particle flow avoids weight degeneracy by guiding particles to regions where the posterior is significant. Numerical simulations showcase the reduced complexity and improved performance of the bootstrap SMδ-GLMB filter with respect to the bootstrap Mδ-GLMB filter. The particle flow SMδ-GLMB filter further improves the accuracy of track estimates for highly informative measurements.
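A minimal sketch of the particle flow idea, using the exact Daum-Huang flow for a linear-Gaussian model: particles drawn from the prior are transported in pseudo-time λ from 0 to 1 so that they end up distributed according to the posterior, without any importance weighting. The dimensions, parameter values, and simple Euler discretization below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative linear-Gaussian setup: prior x ~ N(x_bar, P),
# measurement z = H x + v with v ~ N(0, R). All values are made up.
d = 2
x_bar = np.zeros(d)
P = np.eye(d)
H = np.eye(d)
R = 0.01 * np.eye(d)                  # highly informative measurement
z = np.array([2.0, -1.0])

def flow_coeffs(lam):
    # Exact Daum-Huang flow for the linear-Gaussian case:
    #   dx/dlam = A(lam) x + b(lam)
    S = lam * H @ P @ H.T + R
    A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
    b = (np.eye(d) + 2 * lam * A) @ (
        (np.eye(d) + lam * A) @ P @ H.T @ np.linalg.solve(R, z) + A @ x_bar)
    return A, b

# Migrate prior particles toward the posterior with Euler pseudo-time steps.
particles = rng.multivariate_normal(x_bar, P, size=500)
n_steps = 500
dlam = 1.0 / n_steps
for k in range(n_steps):
    A, b = flow_coeffs(k * dlam)
    particles = particles + dlam * (particles @ A.T + b)

# Kalman posterior mean, for comparison with the migrated particle cloud.
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
post_mean = x_bar + K @ (z - H @ x_bar)
```

After the flow, the particle cloud concentrates near `post_mean`, which is exactly the behavior that rescues the proposal when the likelihood is much narrower than the prior.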
Particle filter and Gaussian mixture implementations of random finite set filters have been proposed to tackle the problem of jointly estimating the number of targets and their states. The Gaussian mixture PHD (GM-PHD) filter has a closed-form expression for the PHD for linear, Gaussian target models, and extensions using the extended or unscented Kalman filter allow the GM-PHD filter to accommodate mildly nonlinear dynamics, but errors resulting from linearization or model mismatch are unavoidable. A particle filter implementation of the PHD filter (PF-PHD) is more suitable for nonlinear, non-Gaussian target models; however, particle filter implementations are much more computationally expensive, and performance can suffer when the proposal distribution is not a good match to the posterior. In this paper, we propose a novel implementation of the PHD filter named the Gaussian particle flow PHD filter (GPF-PHD). It employs a bank of particle flow filters to approximate the PHD; these play the same role as the Gaussian components in the GM-PHD filter but are better suited to nonlinear dynamics and measurement equations. Using the particle flow filter allows the GPF-PHD filter to migrate particles to the dense regions of the posterior, which leads to higher efficiency than the PF-PHD. We explore the performance of the new algorithm through numerical simulations.
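To make the closed-form GM-PHD measurement update concrete, here is a toy version with a single Gaussian intensity component and a single measurement: the update produces a missed-detection copy of the component plus a detection-updated copy whose weight is normalized against clutter. All model parameters below are invented for illustration:

```python
import numpy as np

# Toy GM-PHD update: one prior intensity component, one measurement.
p_d = 0.9                      # detection probability (illustrative)
clutter = 0.1                  # clutter intensity at the measurement
w, m = 0.8, np.array([0.0])    # prior component weight and mean
P = np.array([[1.0]])          # prior component covariance
H, R = np.array([[1.0]]), np.array([[0.5]])
z = np.array([1.2])

# Kalman quantities for this component.
S = H @ P @ H.T + R                        # innovation covariance
K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
nu = z - H @ m                             # innovation
q = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) \
    / np.sqrt((2 * np.pi) ** len(z) * np.linalg.det(S))  # likelihood N(z; Hm, S)

# Missed-detection term keeps the prior mean/covariance with reduced weight;
# the detection term is re-weighted against clutter plus all detection terms
# (here just one) and Kalman-updated.
w_miss = (1 - p_d) * w
w_det = p_d * w * q / (clutter + p_d * w * q)
m_det = m + K @ nu
P_det = (np.eye(1) - K @ H) @ P
```

In the full GM-PHD filter this re-weighting and Kalman update is applied to every (component, measurement) pair; the GPF-PHD replaces each Gaussian component with a particle flow filter so the same bookkeeping survives nonlinear models.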
We develop a distributed cardinalized probability hypothesis density (CPHD) filter that can be deployed in a sensor network to process measurements from multiple sensors whose observations are conditionally independent. In contrast to the majority of related work, which performs local filter updates and then exchanges data to fuse the local intensity functions and cardinality distributions, we strive to approximate the update step that a centralized multi-sensor CPHD filter would perform.
We propose, for the superpositional sensor scenario, a hybrid of the multi-Bernoulli filter and the cardinalized probability hypothesis density (CPHD) filter. We use a multi-Bernoulli random finite set (RFS) to model existing targets and an independent and identically distributed cluster (IIDC) RFS to model newborn targets and targets with a low probability of existence. Our main contributions are the update equations of the hybrid filter and computationally tractable approximations of them. We achieve this by defining conditional probability hypothesis densities (PHDs), where the conditioning is on one of the targets having a specified state. The filter performs an approximate Bayes update of the conditional PHDs. In parallel, we perform a cardinality update of the IIDC RFS component in order to estimate the number of newborn targets. We provide an auxiliary particle filter based implementation of the proposed filter and compare it with CPHD and multi-Bernoulli filters in a simulated multi-target tracking application.
The ensemble Kalman filter relies on a Gaussian approximation being a reasonably accurate representation of the filtering distribution. Reich recently introduced a Gaussian mixture ensemble transform filter that can address scenarios where the prior is modeled as a Gaussian mixture. Reich's derivation is suitable for a scalar measurement or a vector of uncorrelated measurements; we extend it to the case of vector observations with arbitrary correlations. We illustrate through numerical simulation that implementation is challenging, because the filter is prone to instability.
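For context, the Gaussian approximation underlying the plain ensemble Kalman filter can be sketched with a stochastic analysis step using perturbed observations; the mixture filters above generalize this single-Gaussian update. The scalar setup and all values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal stochastic EnKF analysis step for a scalar state (illustrative):
# forecast ensemble x ~ N(0, 1), observation z = x + v with v ~ N(0, R).
N = 2000
prior = rng.normal(0.0, 1.0, size=N)       # forecast ensemble
R = 0.5                                    # observation noise variance
z = 1.0                                    # observed value

# Gaussian approximation: use only the ensemble mean and variance.
P = prior.var(ddof=1)
K = P / (P + R)                            # Kalman gain from ensemble stats

# Each member assimilates its own perturbed copy of the observation, which
# gives the analysis ensemble the correct posterior spread on average.
perturbed = z + rng.normal(0.0, np.sqrt(R), size=N)
analysis = prior + K * (perturbed - prior)
```

For this linear-Gaussian toy case the analysis ensemble matches the exact posterior N(2/3, 1/3) up to sampling error; when the true filtering distribution is multimodal, this single-Gaussian summary is exactly what breaks down, motivating the mixture extension.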