There has recently been an explosion of interest in graph neural networks, which extend the application of neural networks to data recorded on graphs. Most of this work has focused on static tasks, in which a feature vector is available at each node of a graph, often with an associated label, and the goal is node classification or regression, graph classification, or link prediction. Some work has recently emerged that addresses the processing of sequences of data (time-series) on graphs using neural-network-based methods; most strategies combine Long Short-Term Memory units (LSTMs) or Gated Recurrent Units (GRUs) with graph convolution. However, most of these methods treat the observed graph as ground truth, thereby failing to account for the uncertainty associated with the graph structure in the learning task. The recently proposed Bayesian graph convolutional neural networks remedy this issue by treating the provided graph as a noisy observation of a true underlying graph, or as a realization of a random graph model. We can thus model uncertainty in the identification of relationships between nodes of the graph. We specify a joint posterior distribution over the graph and the weights of the neural network, and then perform inference through a combination of variational inference and Monte Carlo sampling. In this paper, we extend this Bayesian framework to address a regression task for time-series on graphs.
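To make the Monte Carlo step concrete, the following is a minimal sketch of how the predictive mean can be approximated by averaging graph-convolutional predictions over sampled graphs and sampled network weights. All names here (`sample_graph`, `mc_predict`, the edge-flip noise model, the two-layer GCN) are illustrative assumptions, not the paper's actual model: in particular, independent edge flips stand in for whatever graph posterior or random graph model is used, and the weight samples stand in for draws from a variational posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graph(A_obs, flip_prob=0.05):
    # Treat the observed adjacency matrix as a noisy observation:
    # independently flip each edge/non-edge with small probability
    # (a hypothetical noise model standing in for the graph posterior).
    noise = rng.random(A_obs.shape) < flip_prob
    A = np.triu(np.logical_xor(A_obs.astype(bool), noise), 1)
    return (A | A.T).astype(float)  # keep the graph symmetric, no self-loops

def norm_adj(A):
    # Symmetrically normalized adjacency with self-loops, as in a standard GCN.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def predict(A, X, W1, W2):
    # Two-layer graph convolutional network with a linear output
    # layer, suitable for node-level regression.
    S = norm_adj(A)
    H = np.maximum(S @ X @ W1, 0.0)  # ReLU hidden layer
    return S @ H @ W2

def mc_predict(A_obs, X, weight_samples, n_graphs=10):
    # Monte Carlo estimate of the predictive mean: average the GCN
    # output over sampled weights and sampled graphs.
    preds = [
        predict(sample_graph(A_obs), X, W1, W2)
        for W1, W2 in weight_samples
        for _ in range(n_graphs)
    ]
    return np.mean(preds, axis=0)
```

In a full implementation the weight samples would come from the variational posterior over the network weights, and the sampled graphs from the posterior (or random graph model) over adjacency matrices; the averaging structure, however, is exactly this nested Monte Carlo loop.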