Chapter 5:
Error Bounds for Maximum-Entropy Estimates

5.1 Introduction

In a parametric, or model-based, estimation approach, deriving error bounds on density estimates would be straightforward; the uncertainty of the parameter estimates would fully determine the uncertainty in the overall density estimate. In the absence of a parametric model, however, there is no canonical way to assess the quality of a density estimator. Consequently, we now detail the construction of error bounds and confidence intervals for ME density estimates.

Key to our approach is the representation of densities by finite-order exponential families. Using exponential families as probability models of uncertain systems offers many advantages. Often, if two independent random variables have densities belonging to the same exponential family, their joint density will also be a member of this family. As such, exponential family models are easily updated as new information becomes available. Moreover, if f(x) is infinitely differentiable, then, modulo certain pathological cases, it may be approximated with arbitrary precision by some density from an exponential family. In this light, the ME method is highly desirable; its estimates are always members of exponential families.
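To make the exponential-family form of an ME estimate concrete, the sketch below (an illustration, not taken from the text) builds a density f(x) ∝ exp(Σ_k λ_k T_k(x)) on a truncated grid and normalizes it by simple quadrature. With statistics T_1(x) = x, T_2(x) = x² and multipliers λ = (0, −1/2), the member of this two-parameter family is the standard normal, the well-known ME density under mean and variance constraints. The grid bounds, step size, and function names here are choices made for the example.

```python
import math

def exp_family_density(lams, stats, xs):
    """Exponential-family density exp(sum_k lam_k * T_k(x)),
    evaluated on the grid xs and normalized by a Riemann sum."""
    vals = [math.exp(sum(l * T(x) for l, T in zip(lams, stats)))
            for x in xs]
    dx = xs[1] - xs[0]
    z = sum(vals) * dx  # numerical normalizer (partition function)
    return [v / z for v in vals]

# Statistics T_1(x) = x and T_2(x) = x^2; lambda = (0, -1/2)
# gives the standard normal, the ME density with E[x] = 0, E[x^2] = 1.
stats = [lambda x: x, lambda x: x * x]
xs = [-8.0 + 0.001 * i for i in range(16001)]  # truncated support
f = exp_family_density([0.0, -0.5], stats, xs)

# Check the prescribed moments by quadrature.
dx = xs[1] - xs[0]
mean = sum(x * v for x, v in zip(xs, f)) * dx
second = sum(x * x * v for x, v in zip(xs, f)) * dx
```

Here `mean` comes out near 0 and `second` near 1, and f(0) matches the normal density value 1/√(2π), confirming that this ME estimate is exactly a member of the two-parameter exponential family.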

The linearly independent statistics sequence used to define the exponential family may include the sequence of monomials {x^k} and the sequences of orthogonal polynomials. We assume the unknown density is from a canonical exponential family of an unknown but finite order m, and only n moments on n linearly independent statistics {T_k(x)} are known. We consider two cases: m ≤ n and m > n. In the first case, we have an overdetermined model. In the second, underdetermined case, even when m is known, there may still be an infinite number of densities from an m-parameter exponential family having the same set of prescribed n moments.
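The underdetermined case m > n can be seen numerically. In the hypothetical sketch below, two distinct members of the m = 2 family exp(λ₁x + λ₂x² − A(λ)) are constructed that share the same single prescribed moment E[x] = 0 (n = 1), so the constraint alone does not pin down the density; the grid and helper name are assumptions of this example.

```python
import math

def moments(lams, n_mom, lo=-8.0, hi=8.0, steps=16000):
    """First n_mom raw moments of the normalized density
    exp(sum_k lam_k * x^(k+1)) computed by a Riemann sum on [lo, hi]."""
    dx = (hi - lo) / steps
    xs = [lo + dx * i for i in range(steps + 1)]
    w = [math.exp(sum(l * x ** (k + 1) for k, l in enumerate(lams)))
         for x in xs]
    z = sum(w) * dx
    return [sum((x ** p) * v for x, v in zip(xs, w)) * dx / z
            for p in range(1, n_mom + 1)]

# Two distinct m = 2 family members with the same prescribed
# n = 1 moment E[x] = 0 but different second moments:
m1 = moments([0.0, -0.5], 2)  # standard normal: E[x^2] = 1
m2 = moments([0.0, -1.0], 2)  # narrower normal: E[x^2] = 1/2
```

Both parameter vectors satisfy the single first-moment constraint, yet they define different densities; every λ₂ < 0 with λ₁ = 0 yields another such member, illustrating the infinite family of solutions in the underdetermined case.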

