Neural nets for massively parallel optimization
Laurence C. W. Dixon, David Mills
1 July 1992
Proceedings Volume 1710, Science of Artificial Neural Networks; (1992); doi: 10.1117/12.140088
Event: Aerospace Sensing, 1992, Orlando, FL, United States
Abstract
To apply massively parallel processing systems to the solution of large-scale optimization problems, it is desirable to be able to evaluate any function f(z), z ∈ Rⁿ, in a parallel manner. The theorem of Cybenko, Hecht-Nielsen, Hornik, Stinchcombe and White, and Funahashi shows that this can be achieved by a neural network with one hidden layer. In this paper we address the problem of the number of nodes required in the layer to achieve a given accuracy in the function and gradient values at all points within a given n-dimensional interval. The type of activation function needed to obtain nonsingular Hessian matrices is described, and a strategy for obtaining accurate minimal networks is presented.
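As an illustration of the setting the abstract describes (not the authors' own construction), the sketch below evaluates a one-hidden-layer network f̂(z) = c + Σⱼ vⱼ σ(wⱼ·z + bⱼ) together with its gradient with respect to z. A smooth activation such as tanh makes the gradient available in closed form, and each hidden node's contribution is independent, which is what lends itself to massively parallel evaluation. All weight names and shapes here are hypothetical choices for the example.

```python
import numpy as np

def sigma(t):
    return np.tanh(t)                  # smooth activation; derivative below

def sigma_prime(t):
    return 1.0 - np.tanh(t) ** 2       # d/dt tanh(t)

def net_value_and_gradient(z, W, b, v, c):
    """Evaluate f_hat(z) and its gradient with respect to z.

    W : (m, n) hidden-layer weights, b : (m,) hidden biases,
    v : (m,) output weights, c : scalar output bias.
    """
    pre = W @ z + b                        # (m,) hidden pre-activations
    value = c + v @ sigma(pre)             # scalar network output
    grad = W.T @ (v * sigma_prime(pre))    # chain rule: sum_j v_j * sigma'(.) * w_j
    return value, grad

# Example usage with arbitrary (hypothetical) weights:
rng = np.random.default_rng(0)
n, m = 3, 8                                # input dimension, hidden nodes
W, b = rng.normal(size=(m, n)), rng.normal(size=m)
v, c = rng.normal(size=m), 0.0
val, grad = net_value_and_gradient(rng.normal(size=n), W, b, v, c)
```

Since the per-node terms σ(wⱼ·z + bⱼ) are independent, both the value and the gradient reduce to sums that a parallel machine can compute with one node (or processor) per hidden unit.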
Laurence C. W. Dixon, David Mills, "Neural nets for massively parallel optimization", Proc. SPIE 1710, Science of Artificial Neural Networks (1 July 1992); https://doi.org/10.1117/12.140088
KEYWORDS
Neural networks, Artificial neural networks, Matrices, Data modeling, Tolerancing, Parallel processing, Array processing