Feedforward neural nets and one-dimensional representation (1 July 1992)
Feedforward nets can be trained to represent any continuous function, and training is equivalent to solving a nonlinear optimization problem. Unfortunately, training frequently leads to an error function whose Hessian matrix is effectively singular at the solution. Traditional quadratic-based optimization algorithms do not converge superlinearly on functions with a singular Hessian, but results on univariate functions show that, even so, they are more efficient and reliable than backpropagation. A feedforward net is used to represent a superposition of its own sigmoid activation function. The results identify some conditions under which the Hessian of the error function is effectively singular.
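The singularity phenomenon the abstract describes can be illustrated with a small numerical sketch (this is not the authors' experiment, just an assumed minimal setup): a net with two sigmoid hidden units is asked to represent a target that is a single sigmoid, i.e. a superposition of the net's own activation function. Because the second unit is redundant, there is a whole manifold of exact solutions, and at a solution where the second unit's output weight is zero the error function does not depend on that unit's input weight or bias at all, so the Hessian has zero eigenvalues. The finite-difference Hessian below makes that visible.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Sample points on which the squared-error function is evaluated.
x = np.linspace(-3.0, 3.0, 21)
target = sigmoid(x)  # target is one sigmoid: representable exactly

def net(p, x):
    """Two-hidden-unit feedforward net: v1*s(w1*x+b1) + v2*s(w2*x+b2).
    Parameter vector p = [v1, w1, b1, v2, w2, b2]."""
    v1, w1, b1, v2, w2, b2 = p
    return v1 * sigmoid(w1 * x + b1) + v2 * sigmoid(w2 * x + b2)

def error(p):
    r = net(p, x) - target
    return np.sum(r * r)

def fd_hessian(f, p, h=1e-4):
    """Central finite-difference Hessian of f at p."""
    n = len(p)
    H = np.zeros((n, n))
    for j in range(n):
        for k in range(j, n):
            ej = np.zeros(n); ej[j] = h
            ek = np.zeros(n); ek[k] = h
            if j == k:
                H[j, j] = (f(p + ej) - 2.0 * f(p) + f(p - ej)) / h**2
            else:
                H[j, k] = (f(p + ej + ek) - f(p + ej - ek)
                           - f(p - ej + ek) + f(p - ej - ek)) / (4.0 * h**2)
                H[k, j] = H[j, k]
    return H

# An exact solution: first unit reproduces the target, second unit is off.
p_star = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0])
assert error(p_star) < 1e-20  # residuals are exactly zero here

eigs = np.linalg.eigvalsh(fd_hessian(error, p_star))
print("Hessian eigenvalues:", eigs)
# The two eigenvalues associated with the redundant unit's (w2, b2)
# directions are (numerically) zero: the Hessian is singular, so
# quadratic-based methods cannot achieve superlinear convergence here.
```

The zero eigenvalues persist along the whole solution manifold, which is one concrete way a redundant parameterization produces the "effectively singular" Hessian discussed in the abstract.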
© 1992 Society of Photo-Optical Instrumentation Engineers (SPIE).
Laurence C. W. Dixon and David Mills, "Feedforward neural nets and one-dimensional representation", Proc. SPIE 1710, Science of Artificial Neural Networks (1 July 1992).
