Minimum number of hidden neurons does not necessarily provide the best generalization
30 March 2000
Abstract
The quality of a feedforward neural network that allows it to associate data not used in training is called generalization. A common method of creating the desired network is for the user to select the network architecture and allow a training algorithm to evolve the synaptic weights between the neurons. A popular belief holds that the network with the fewest hidden neurons that correctly learns a sufficient training set is the one that generalizes best. This paper contradicts that belief. Optimizing generalization requires that the network not assume information that does not exist in the training data. Unfortunately, a network with the minimum number of hidden neurons may be forced to assume information that is not present. Such a network skews the surface that maps the input space to the output space in order to accommodate the minimal architecture, and thereby sacrifices generalization.
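The effect described above can be sketched numerically. The following is an illustrative example, not the paper's experiment: it trains two small one-hidden-layer tanh networks, one narrow and one wider, on sparse noisy samples of an assumed target function and reports error on held-out points. All sizes, the target function, and hyperparameters are assumptions chosen for illustration.

```python
# Illustrative sketch only: architecture sizes, target function, and
# hyperparameters are assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(n_hidden, X, y, lr=0.1, epochs=2000):
    """Train a one-hidden-layer tanh network by batch gradient descent."""
    W1 = rng.normal(0, 0.5, (1, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)      # hidden-layer activations
        out = h @ W2 + b2             # linear output layer
        err = out - y
        # Backpropagate mean-squared-error gradients
        dW2 = h.T @ err / len(X)
        db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)
        dW1 = X.T @ dh / len(X)
        db1 = dh.mean(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

# Sparse, noisy training set; dense held-out test set
X_train = np.linspace(-2, 2, 10).reshape(-1, 1)
y_train = np.sin(2 * X_train) + rng.normal(0, 0.05, X_train.shape)
X_test = np.linspace(-2, 2, 200).reshape(-1, 1)
y_test = np.sin(2 * X_test)

for n in (2, 6):
    f = train_mlp(n, X_train, y_train)
    mse = float(np.mean((f(X_test) - y_test) ** 2))
    print(f"{n} hidden neurons: test MSE = {mse:.4f}")
```

When the narrow network lacks the capacity to represent the target, it can only fit the training points by distorting the input-output surface between them, which shows up as higher held-out error; the exact numbers depend on the seed and hyperparameters.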
© (2000) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jason M. Kinser, "Minimum number of hidden neurons does not necessarily provide the best generalization", Proc. SPIE 4055, Applications and Science of Computational Intelligence III, (30 March 2000); https://doi.org/10.1117/12.380567
PROCEEDINGS
7 PAGES

