Consider three commonplace imaging scenarios: (1) a document image is sent through a sequence of digital operations, say scanning, printing, copying, and then faxing; (2) an image is compressed to reduce the number of bytes needed for storage or transmission; and (3) geometric features are computed from an image to characterize the degree to which an industrial process is in control. In each scenario, characterization of the image processing cannot be based on the effects of processing a single image, or even of processing any finite number of images. For the document image, if one wishes to design a filter that restores it, then that filter must be designed in accordance with how the various stages of processing affect the class of images to be filtered, in particular, how the processing affects the probability distribution of the image class. For the compressed image, if one wishes to measure the degree of compression or to design a decompression filter, then both the compression and the fidelity of the decompression must be evaluated relative to the class of images to be compressed and decompressed; any particular image will occur only rarely, so the system must be designed and evaluated probabilistically. Finally, for feature generation, image observations will vary, the features will be random variables, and classification accuracy will depend on the joint distribution of the features. At their root, image and signal processing are applied disciplines within the domain of random processes.
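The point that a filter must be judged over a class of images, not a single image, can be illustrated with a minimal Monte Carlo sketch. The image model, the degradation (additive Gaussian noise), and the candidate restoration filter (a 3x3 moving average) below are all illustrative assumptions, not the text's specific models; the sketch only shows the general procedure of estimating the expected mean-square error over a random ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_filter3(img):
    # 3x3 moving-average filter, used here both to synthesize smooth
    # random images and as a (simplistic) candidate restoration filter.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return out / 9.0

def expected_mse(n_images=200, shape=(32, 32), noise_sigma=0.3):
    # Monte Carlo estimate of E[MSE] over a hypothetical image class:
    # smooth random fields degraded by additive Gaussian noise.
    errors = []
    for _ in range(n_images):
        ideal = mean_filter3(rng.random(shape))          # draw an image from the class
        observed = ideal + rng.normal(0.0, noise_sigma, shape)  # degradation model
        restored = mean_filter3(observed)                # candidate restoration filter
        errors.append(np.mean((restored - ideal) ** 2))
    return float(np.mean(errors))

print(expected_mse())
```

The single number returned is a property of the filter relative to the whole image class: repeating the estimate with a different random seed changes it only slightly, whereas the error on any one image can differ greatly from the ensemble average.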