Theory of networks for learning
1 August 1990
Barbara Moore
Abstract
Many neural networks are constructed to learn an input-output mapping from examples. This problem is closely related to classical approximation techniques, including regularization theory. Regularization is equivalent to a class of three-layer networks which we call regularization networks or Hyper Basis Functions. The strong theoretical foundation of regularization networks provides a better understanding of why they work and how best to choose a specific network and its parameters for a given problem. Classical regularization theory can be extended to improve the quality of learning performed by Hyper Basis Functions; for example, the centers of the basis functions and the norm weights can be optimized. Many Radial Basis Functions often used for function interpolation are provably Hyper Basis Functions.
© (1990) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Barbara Moore "Theory of networks for learning", Proc. SPIE 1294, Applications of Artificial Neural Networks, (1 August 1990); https://doi.org/10.1117/12.21153
KEYWORDS: Artificial neural networks, Neural networks, Artificial intelligence, Associative arrays, Evolutionary algorithms, Algorithms, Binary data
