Bayesian state estimators, unlike maximum likelihood estimators, produce both a state estimate and a probability density function (PDF) representing the predicted uncertainty of that estimate. While it is relatively straightforward to assess the accuracy of the state estimate itself, verifying the accuracy of the predicted uncertainty is more difficult, especially when the uncertainty is time-varying. In this work, we review two prior techniques for verifying the predicted uncertainty of an estimator and show that each verifies the accuracy of the estimator’s uncertainty by checking whether normalized state estimates follow a chi-squared distribution; if these normalized samples do not follow the correct chi-squared distribution, one can conclude that the predicted uncertainty is unreliable. We then propose using goodness-of-fit tests to determine whether normalized state estimates follow the correct distribution. Our results demonstrate that one of the prior techniques achieves superior performance when the true uncertainty is Gaussian; when the true uncertainty is non-Gaussian, however, our proposed goodness-of-fit method demonstrates higher discriminative power.
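The abstract's core check — normalized state estimates following a chi-squared distribution — can be illustrated with a short sketch. The example below is an illustrative assumption, not the paper's implementation: it simulates estimation errors whose true covariance matches the estimator's predicted covariance `P` (a made-up 2×2 matrix), computes the normalized estimation error squared (NEES) for each sample, and applies a Kolmogorov–Smirnov goodness-of-fit test against the chi-squared distribution with `dim` degrees of freedom using SciPy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dim = 2    # state dimension
n = 500    # number of estimation-error samples

# Hypothetical predicted covariance of a consistent estimator.
P = np.array([[2.0, 0.3],
              [0.3, 1.0]])

# Simulated estimation errors drawn from N(0, P), so the predicted
# uncertainty matches the true error distribution by construction.
errors = rng.multivariate_normal(np.zeros(dim), P, size=n)

# Normalized estimation error squared (NEES): e^T P^{-1} e per sample.
P_inv = np.linalg.inv(P)
nees = np.einsum('ni,ij,nj->n', errors, P_inv, errors)

# If the predicted uncertainty is accurate, the NEES samples follow a
# chi-squared distribution with `dim` degrees of freedom. A
# Kolmogorov-Smirnov goodness-of-fit test checks this hypothesis;
# a small p-value suggests the predicted uncertainty is unreliable.
stat, p_value = stats.kstest(nees, 'chi2', args=(dim,))
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.4f}")
```

Shrinking `P` before drawing the errors (an overconfident estimator) would inflate the NEES values and drive the p-value toward zero, which is precisely the miscalibration signal the goodness-of-fit approach is meant to detect.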
Clark N. Taylor and Shane Lubold, "Verifying the predicted uncertainty of Bayesian estimators," Proc. SPIE 10645, Geospatial Informatics, Motion Imagery, and Network Analytics VIII, 106450E (Presented at SPIE Defense + Security: April 16, 2018; Published: 27 April 2018); https://doi.org/10.1117/12.2304954.