Multiresolution neural networks
15 March 1994
Abstract
Supervised learning generally requires the induction of an input-output map from a training set of labeled examples. This induced map is then used in an attempt to correctly classify input features that were not part of the original training set. Current backpropagation neural network models, however, suffer from several shortcomings. These are due to the difficulty of estimating the number of necessary hidden units, the inherent problems of gradient descent computations, and the hyperplane classification implicit in most network models. We present a mathematically sound framework for neural network simulation in the form of multiresolution analysis. In these multiresolution neural networks (MRNNs), the neuron activation functions are chosen from a set of wavelet basis functions, so that the resulting network represents a wavelet expansion of the underlying input-output map. The networks are constructed by a modified recursive partitioning algorithm which we call receptive field partitioning (RFP). The RFP algorithm constructs a network by localizing a region of high error and adding nodes whose activation functions are taken from a higher resolution space than the current local nodes, and whose support falls within the region of high error. The combination of MRNNs and the RFP algorithm provides a solution to the problems associated with backpropagation networks.
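The abstract's description of RFP suggests a greedy refinement loop: fit the current wavelet nodes, locate the region of highest residual error, and add finer-resolution nodes supported there. The following is a minimal sketch of that idea in Python/NumPy, assuming a 1-D Haar wavelet basis and ordinary least squares in place of the paper's actual training procedure; the names `haar`, `psi`, `design`, and `rfp_fit` are hypothetical, not from the paper.

```python
import numpy as np

def haar(x):
    # Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere.
    return np.where((x >= 0) & (x < 0.5), 1.0,
                    np.where((x >= 0.5) & (x < 1.0), -1.0, 0.0))

def psi(x, j, k):
    # Dilated and translated Haar basis function at resolution j, shift k;
    # its support is the dyadic interval [k / 2**j, (k + 1) / 2**j).
    return 2 ** (j / 2) * haar(2 ** j * x - k)

def design(x, nodes):
    # Design matrix: a constant (coarsest scaling) column plus one
    # column per wavelet node (j, k).
    return np.stack([np.ones_like(x)] + [psi(x, j, k) for j, k in nodes],
                    axis=1)

def rfp_fit(x, y, max_nodes=32):
    """Greedy receptive-field-partitioning-style refinement (sketch).

    Start from the coarsest wavelet; repeatedly locate the sample with
    the largest residual error and add the two resolution-(j+1) children
    of the finest existing node whose support covers that sample.
    """
    nodes = [(0, 0)]                       # psi_{0,0} over [0, 1)
    while len(nodes) < max_nodes:
        # Least-squares fit of the current basis (stand-in for training).
        A = design(x, nodes)
        c, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ c
        i = int(np.argmax(resid ** 2))     # region of highest error
        # Finest existing node whose support contains x[i] ...
        j, k = max((n for n in nodes
                    if n[1] / 2 ** n[0] <= x[i] < (n[1] + 1) / 2 ** n[0]),
                   key=lambda n: n[0])
        # ... gains its two higher-resolution children in that region.
        new = [n for n in [(j + 1, 2 * k), (j + 1, 2 * k + 1)]
               if n not in nodes]
        if not new:
            break
        nodes.extend(new)
    A = design(x, nodes)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return nodes, c
```

Because the Haar children exactly partition their parent's support, each refinement step adds capacity only inside the high-error region, mirroring the localized node-addition the abstract describes.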
© (1994) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Toufic I. Boubez, Richard L. Peskin, "Multiresolution neural networks", Proc. SPIE 2242, Wavelet Applications, (15 March 1994); https://doi.org/10.1117/12.170063
12 pages.