Fast integer approximations in convolutional neural networks using layer-by-layer training
17 March 2017
Dmitry Ilin (1), Elena Limonova (2), Vladimir Arlazarov (2), Dmitry Nikolaev (3)
(1) Smart Engines Ltd. (Russian Federation); (2) Moscow Institute of Physics and Technology (Russian Federation); (3) Institute for Information Transmission Problems (Russian Federation)
This paper explores a layer-by-layer training method for neural networks that use approximate calculations and/or low-precision data types. The proposed method improves recognition accuracy while relying on standard training algorithms and tools. At the same time, it speeds up neural network computation by using fast approximate calculations and compact data types. We consider 8-bit fixed-point arithmetic as an example of such an approximation for image recognition problems. Finally, we show a significant accuracy increase for the considered approximation along with a processing speedup.
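To make the role of the 8-bit fixed-point approximation concrete, below is a minimal sketch, not taken from the paper, of how a layer's floating-point computation can be replaced by signed 8-bit fixed-point arithmetic with 32-bit accumulation. The quantization rule, the choice of fractional bits, and the function names are assumptions for illustration only.

# Minimal sketch (assumed, not the authors' code): map float inputs and
# weights to signed 8-bit fixed point, do the matrix product in 32-bit
# integers, then rescale the accumulator back to floating point.
import numpy as np

def to_fixed_point(x, frac_bits):
    # Quantize a float array to signed 8-bit fixed point with the given
    # number of fractional bits, saturating to the int8 range.
    scaled = np.round(x * (1 << frac_bits))
    return np.clip(scaled, -128, 127).astype(np.int8)

def fixed_point_dense(x_q, w_q, x_frac, w_frac):
    # Integer matrix product with 32-bit accumulation, then rescaling.
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)
    return acc.astype(np.float32) / (1 << (x_frac + w_frac))

# Compare the 8-bit approximation with the floating-point reference.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(4, 16)).astype(np.float32)
w = rng.uniform(-0.5, 0.5, size=(16, 8)).astype(np.float32)

x_q = to_fixed_point(x, frac_bits=7)   # values in [-1, 1)
w_q = to_fixed_point(w, frac_bits=8)   # values in [-0.5, 0.5)

approx = fixed_point_dense(x_q, w_q, 7, 8)
exact = x @ w
print("max abs error:", np.max(np.abs(approx - exact)))

The printed error illustrates the precision loss introduced by the approximation, which the layer-by-layer retraining described in the paper is intended to compensate for.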
Dmitry Ilin, Elena Limonova, Vladimir Arlazarov, Dmitry Nikolaev, "Fast integer approximations in convolutional neural networks using layer-by-layer training," Proc. SPIE 10341, Ninth International Conference on Machine Vision (ICMV 2016), 103410Q (17 March 2017); https://doi.org/10.1117/12.2268722