CPU architecture for a fast and energy-saving calculation of convolution neural networks
26 June 2017
Florian J. Knoll, Michael Grelcke, Vitali Czymmek, Tim Holtorf, Stephan Hussmann
Proceedings Volume 10335, Digital Optical Technologies 2017; 103351M (2017) https://doi.org/10.1117/12.2270282
Event: SPIE Digital Optical Technologies, 2017, Munich, Germany
Abstract
One of the most difficult problems in the use of artificial neural networks is the required computational capacity. While large search engine companies own specially developed hardware that provides the necessary computing power, the conventional user is left with the current state-of-the-art method, which is the use of a graphics processing unit (GPU) as the computational basis. Although these processors are well suited for large matrix computations, they consume a large amount of energy. Therefore, a new processor based on a field programmable gate array (FPGA) has been developed and optimized for deep learning applications. This processor is presented in this paper. It can be adapted to a particular application (in this paper, an organic farming application). Its power consumption is only a fraction of that of a GPU implementation, so it should be well suited for energy-saving applications.
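As a rough illustration of the workload such a processor must accelerate, the sketch below shows a direct 2D convolution written as a multiply-accumulate loop nest, the core operation of a convolutional neural network layer. This is only a minimal example and not the architecture described in the paper; the array dimensions, the 3x3 kernel, and the "valid" (no-padding) output size are assumptions chosen for demonstration.

/*
 * Illustrative sketch (not the paper's processor): the core CNN workload is
 * a multiply-accumulate loop nest over input pixels and kernel weights.
 * All sizes and the example filter below are assumptions for demonstration.
 */
#include <stdio.h>

#define IN_H  6
#define IN_W  6
#define K     3                      /* kernel size (assumed) */
#define OUT_H (IN_H - K + 1)         /* "valid" output size, no padding */
#define OUT_W (IN_W - K + 1)

/* Direct 2D convolution (cross-correlation), single channel, stride 1. */
static void conv2d(const float in[IN_H][IN_W],
                   const float kernel[K][K],
                   float out[OUT_H][OUT_W])
{
    for (int y = 0; y < OUT_H; ++y) {
        for (int x = 0; x < OUT_W; ++x) {
            float acc = 0.0f;
            for (int ky = 0; ky < K; ++ky)
                for (int kx = 0; kx < K; ++kx)
                    acc += in[y + ky][x + kx] * kernel[ky][kx];
            out[y][x] = acc;         /* one accumulated result per output pixel */
        }
    }
}

int main(void)
{
    float in[IN_H][IN_W];
    float kernel[K][K] = {
        { 1.0f, 0.0f, -1.0f },
        { 1.0f, 0.0f, -1.0f },
        { 1.0f, 0.0f, -1.0f },       /* simple vertical-edge filter (assumed) */
    };
    float out[OUT_H][OUT_W];

    /* Fill the input with a ramp so the output is easy to inspect. */
    for (int y = 0; y < IN_H; ++y)
        for (int x = 0; x < IN_W; ++x)
            in[y][x] = (float)(y * IN_W + x);

    conv2d(in, kernel, out);

    for (int y = 0; y < OUT_H; ++y) {
        for (int x = 0; x < OUT_W; ++x)
            printf("%6.1f ", out[y][x]);
        printf("\n");
    }
    return 0;
}

Every output pixel requires K*K multiply-accumulate operations, which is why dedicated hardware (GPU or FPGA) is attractive for this workload and why the energy cost per multiply-accumulate matters.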
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Florian J. Knoll, Michael Grelcke, Vitali Czymmek, Tim Holtorf, Stephan Hussmann, "CPU architecture for a fast and energy-saving calculation of convolution neural networks", Proc. SPIE 10335, Digital Optical Technologies 2017, 103351M (26 June 2017); https://doi.org/10.1117/12.2270282