Memory-optimal neural network approximation
24 August 2017
Abstract
We summarize the main results of a recent theory, developed by the authors, establishing fundamental lower bounds on the connectivity and memory requirements of deep neural networks as a function of the complexity of the function class to be approximated by the network. These bounds are shown to be achievable. Specifically, all function classes that are optimally approximated by a general class of representation systems, so-called affine systems, can be approximated by deep neural networks with minimal connectivity and memory requirements. Affine systems encompass a wealth of representation systems from applied harmonic analysis, such as wavelets, shearlets, ridgelets, α-shearlets, and, more generally, α-molecules. This result elucidates a remarkable universality property of deep neural networks and shows that they achieve the optimum approximation properties of all affine systems combined. Finally, we present numerical experiments demonstrating that the standard stochastic gradient descent algorithm generates deep neural networks that provide close-to-optimal approximation rates at minimal connectivity. Moreover, stochastic gradient descent is found to learn approximations that are sparse in the representation system that optimally sparsifies the function class on which the network is trained.
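The following is a minimal, illustrative sketch of the kind of experiment described in the last two sentences, not the authors' actual setup: a small ReLU network is trained with plain stochastic gradient descent to approximate a target function, and the approximation error is then measured against the number of nonzero weights ("connectivity") obtained by magnitude pruning at several thresholds. The target function, network width, learning rate, and pruning thresholds are all arbitrary placeholder choices.

```python
# Illustrative sketch only; not the authors' experimental setup.
# Trains a ReLU network with plain SGD on a 1-D target, then reports
# approximation error versus connectivity (nonzero weights) after pruning.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Arbitrary piecewise-smooth stand-in for the function classes in the paper.
def target(x):
    return torch.where(x < 0.0, torch.sin(4 * x), 0.5 * x ** 2)

# Small fully connected ReLU network.
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

# Plain SGD on freshly sampled points from [-1, 1].
for step in range(5000):
    x = 2 * torch.rand(256, 1) - 1
    loss = loss_fn(model(x), target(x))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Measure error and connectivity after magnitude pruning at several thresholds.
x_test = torch.linspace(-1, 1, 2000).unsqueeze(1)
with torch.no_grad():
    backup = [p.detach().clone() for p in model.parameters()]
    for tau in [0.0, 1e-3, 1e-2, 5e-2]:
        nnz = 0
        for p, b in zip(model.parameters(), backup):
            p.copy_(b)                    # restore trained weights
            p[p.abs() < tau] = 0.0        # prune small-magnitude weights
            nnz += int((p != 0).sum())
        err = loss_fn(model(x_test), target(x_test)).sqrt().item()
        print(f"threshold {tau:.0e}: nonzero weights = {nnz}, L2 error ~ {err:.4f}")
```

Tabulating the error against the nonzero-weight count across thresholds (and across network sizes) gives an empirical error-versus-connectivity curve of the type the abstract refers to when speaking of close-to-optimal approximation rates at minimal connectivity.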
© 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).
Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, and Philipp Petersen, "Memory-optimal neural network approximation", Proc. SPIE 10394, Wavelets and Sparsity XVII, 103940Q (24 August 2017); https://doi.org/10.1117/12.2272490
