Paper
12 April 2010
Multiple optimal learning factors for feed-forward networks
Sanjeev S. Malalur, Michael T. Manry
Abstract
A batch training algorithm for feed-forward networks is proposed which uses Newton's method to estimate a vector of optimal learning factors, one for each hidden unit. Backpropagation, using this learning factor vector, is used to modify the hidden units' input weights. Linear equations are then solved for the network's output weights. Elements of the new method's Gauss-Newton Hessian matrix are shown to be weighted sums of elements from the total network's Hessian. In several examples, the new method performs better than backpropagation and conjugate gradient, with similar numbers of required multiplies. The method performs as well as or better than Levenberg-Marquardt, with several orders of magnitude fewer multiplies due to the small size of its Hessian.
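The idea described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the toy data, network size, tanh activation, the small ridge term on the Hessian, and the least-squares output-weight solve are all assumptions made for the sake of a runnable example. It shows the three steps the abstract names: backprop directions for the hidden units' input weights, a Gauss-Newton solve for one learning factor per hidden unit, and linear equations for the output weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative only): N samples, n_in inputs, 1 output.
N, n_in, n_hid = 64, 3, 4
X = rng.normal(size=(N, n_in))
t = np.sin(X @ rng.normal(size=n_in))

Wi = rng.normal(scale=0.5, size=(n_hid, n_in))  # hidden units' input weights
Wo = rng.normal(scale=0.5, size=n_hid)          # output weights

def forward(Wi, Wo, X):
    net = X @ Wi.T          # hidden net functions, shape (N, n_hid)
    h = np.tanh(net)        # hidden activations
    return h, h @ Wo        # activations and network outputs

h, y = forward(Wi, Wo, X)
mse0 = np.mean((t - y) ** 2)  # initial error, for comparison

for _ in range(20):
    h, y = forward(Wi, Wo, X)
    e = t - y                                # output errors, shape (N,)
    dphi = 1.0 - h ** 2                      # tanh derivative
    # Backpropagation: negative-gradient direction D[k] for the k-th
    # hidden unit's input-weight vector (one row of Wi).
    delta = (e[:, None] * Wo) * dphi         # (N, n_hid)
    D = delta.T @ X / N                      # (n_hid, n_in)
    # Jacobian of the outputs w.r.t. the learning-factor vector z at z = 0:
    # dy_p/dz_k = Wo[k] * phi'(net_pk) * (D[k] . x_p)
    J = Wo * dphi * (X @ D.T)                # (N, n_hid)
    # Newton/Gauss-Newton solve for z: an n_hid x n_hid system, far smaller
    # than the full network Hessian (ridge term added for stability).
    H = J.T @ J
    z = np.linalg.solve(H + 1e-8 * np.eye(n_hid), J.T @ e)
    # One optimal learning factor per hidden unit scales its own direction.
    Wi += z[:, None] * D
    # Output weights from linear equations: least squares on the activations.
    h, _ = forward(Wi, Wo, X)
    Wo = np.linalg.lstsq(h, t, rcond=None)[0]

_, y = forward(Wi, Wo, X)
mse = np.mean((t - y) ** 2)
```

Note that the only matrix inverted each iteration is n_hid x n_hid, which is the source of the cost advantage the abstract claims over Levenberg-Marquardt, whose Hessian spans every weight in the network.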
© (2010) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Sanjeev S. Malalur and Michael T. Manry "Multiple optimal learning factors for feed-forward networks", Proc. SPIE 7703, Independent Component Analyses, Wavelets, Neural Networks, Biosystems, and Nanoengineering VIII, 77030F (12 April 2010); https://doi.org/10.1117/12.850873
CITATIONS
Cited by 19 scholarly publications.
KEYWORDS
Neural networks
Remote sensing
Chemical elements
Image processing
Matrices
Statistical analysis
Computer architecture
