The basic filtering problem in signal processing is to operate on an observed signal to estimate a desired signal. The immediate difficulty is obvious: how does one construct the filter when all one has is the observed signal? One might try some naïve approach, such as applying a standard low-pass filter, but why should that produce a good result? The proper formulation of the problem, as laid down by Norbert Wiener in the 1940s, is to treat the observed and desired signals as random functions that are jointly probabilistically related, in which case one can find a filter that produces the best estimate of the desired random signal based on the observed random signal, where optimality is relative to some probabilistic error criterion. When an actual signal is observed, the optimal filter is applied. It makes no sense to enquire about the accuracy of the filter relative to any single observation, since if we knew the desired signal that led to the observation, a filter would not be needed. It would be like processing data in the absence of any criterion beyond the data itself and asking whether the processing is beneficial. This would be a form of pre-scientific radical empiricism. As put by Hans Reichenbach (Reichenbach, 1971), “If knowledge is to reveal objective relations of physical objects, it must include reliable predictions. A radical empiricism, therefore, denies the possibility of knowledge.” Since knowledge is our goal and optimal operator design is our subject, we begin by defining a random function and considering the basic properties of such functions, including the calculus of random functions.
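To make the Wiener formulation concrete, the following is a minimal sketch, not taken from the text: it assumes the desired signal and the additive noise are zero-mean, uncorrelated, and wide-sense stationary with known power spectra, in which case the non-causal minimum-mean-square-error filter has frequency response H = S_dd / (S_dd + S_nn). All signal parameters below (the frequencies, the unit noise variance, the record length) are illustrative choices, and the desired-signal spectrum is computed from the realization purely for demonstration; in the Wiener theory it is a model quantity, not an estimate from data.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4096
t = np.arange(n)

# Desired signal: a narrowband random process (random-phase sinusoids).
freqs = [0.01, 0.02, 0.03]
desired = sum(np.cos(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
              for f in freqs)

# Observation: desired signal plus white noise of unit variance.
noise = rng.normal(scale=1.0, size=n)
observed = desired + noise

# Power spectra: S_dd from the realization (for illustration only),
# S_nn flat because the noise is white with variance 1.
S_dd = np.abs(np.fft.fft(desired)) ** 2 / n
S_nn = np.full(n, 1.0)

# Wiener gain per frequency bin; lies between 0 and 1.
H = S_dd / (S_dd + S_nn)

# Apply the filter in the frequency domain.
estimate = np.real(np.fft.ifft(H * np.fft.fft(observed)))

# Mean-square errors: the filtered estimate should beat the raw observation.
mse_raw = np.mean((observed - desired) ** 2)
mse_wiener = np.mean((estimate - desired) ** 2)
```

The gain H passes frequency bins where the desired signal dominates and attenuates those dominated by noise, which is exactly the sense in which optimality is probabilistic: the filter is best on average over the joint model, not for any single realization.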