Application of simulated annealing to the backpropagation model improves convergence
19 August 1993
Abstract
The Backpropagation technique for supervised learning of internal representations in multi-layer artificial neural networks is an effective gradient-descent approach to the training problem. However, because it is primarily deterministic, it follows the steepest path to the nearest minimum of the error surface, whether global or local. If a local minimum is reached, the network fails to learn or learns only a poor approximation of the solution. This paper describes a novel modification of the Backpropagation model based on Simulated Annealing, designed to provide an effective means of escape from local minima. The system is shown to converge more reliably and much faster than traditional noise-insertion techniques, and, owing to the characteristics of the cooling schedule, it also exhibits a more consistent training profile.
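The general idea the abstract describes, perturbing the backpropagation weight updates with noise whose magnitude decays under a cooling schedule, can be sketched as follows. This is a minimal illustration of annealed noise injection, not the authors' implementation; the network size, learning rate, initial temperature, and geometric cooling factor are all assumed values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem with local-minimum-prone error surface: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A 2-4-1 network (size is an assumption for the sketch).
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

lr = 0.5            # learning rate (assumed)
T = 0.1             # initial "temperature" scaling the injected noise (assumed)
cooling = 0.995     # geometric cooling schedule: T <- cooling * T each epoch (assumed)

losses = []
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    losses.append(float(np.mean(err ** 2)))

    # Standard backpropagation gradients for the mean-squared error.
    d_out = err * out * (1.0 - out)
    dW2 = h.T @ d_out;          db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    dW1 = X.T @ d_h;            db1 = d_h.sum(axis=0)

    # Annealed update: gradient step plus temperature-scaled Gaussian noise.
    # Early on (high T) the noise can kick the weights out of a local minimum;
    # as T cools toward zero the update reduces to plain backpropagation.
    W1 -= lr * dW1 + rng.normal(0.0, T, W1.shape)
    b1 -= lr * db1 + rng.normal(0.0, T, b1.shape)
    W2 -= lr * dW2 + rng.normal(0.0, T, W2.shape)
    b2 -= lr * db2 + rng.normal(0.0, T, b2.shape)
    T *= cooling
```

Because the noise variance is tied to the cooling schedule rather than held constant, the late phase of training is effectively deterministic, which is consistent with the more stable training profile the abstract reports.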
© (1993) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Charles B. Owen and Adel M. Abunawass, "Application of simulated annealing to the backpropagation model improves convergence", Proc. SPIE 1966, Science of Artificial Neural Networks II, (19 August 1993); https://doi.org/10.1117/12.152626
Proceedings paper, 8 pages.