Presentation + Paper
10 May 2019 Super-convergence: very fast training of neural networks using large learning rates
Abstract
In this paper, we describe a phenomenon, which we named “super-convergence”, where neural networks can be trained an order of magnitude faster than with standard training methods. The existence of super-convergence is relevant to understanding why deep networks generalize well. One of the key elements of super-convergence is training with one learning rate cycle and a large maximum learning rate. An insight that allows super-convergence training is that large learning rates regularize the training, hence requiring a reduction of all other forms of regularization in order to preserve an optimal regularization balance. We also derive a simplification of the Hessian Free optimization method to compute an estimate of the optimal learning rate. Experiments demonstrate super-convergence for Cifar-10/100, MNIST and Imagenet datasets, and resnet, wide-resnet, densenet, and inception architectures. In addition, we show that super-convergence provides a greater boost in performance relative to standard training when the amount of labeled training data is limited. The architectures and code to replicate the figures in this paper are available at github.com/lnsmith54/super-convergence.
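The one-cycle policy named in the abstract can be sketched as a single triangular schedule: the learning rate ramps linearly from a small base value up to a large maximum at mid-training, then back down. This is a minimal illustrative sketch, not the paper's exact implementation; the `base_lr` and `max_lr` values below are placeholders, and the authors' released code (linked above) is the authoritative reference.

```python
def one_cycle_lr(step, total_steps, base_lr=0.1, max_lr=3.0):
    """Learning rate at `step` of a single triangular cycle.

    Rises linearly from base_lr to max_lr over the first half of
    training, then falls linearly back to base_lr.
    """
    half = total_steps / 2.0
    if step <= half:
        frac = step / half                   # ramp up
    else:
        frac = (total_steps - step) / half   # ramp down
    return base_lr + (max_lr - base_lr) * frac
```

Frameworks now ship ready-made versions of this idea, e.g. PyTorch's `torch.optim.lr_scheduler.OneCycleLR`, which additionally anneals below the base rate at the end of training.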
Conference Presentation
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Leslie N. Smith and Nicholay Topin "Super-convergence: very fast training of neural networks using large learning rates", Proc. SPIE 11006, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, 1100612 (10 May 2019); https://doi.org/10.1117/12.2520589
CITATIONS
Cited by 187 scholarly publications and 1 patent.
KEYWORDS
Neural networks, Stochastic processes, Algorithms, Applied research, Artificial intelligence