CPU architecture for a fast and energy-saving calculation of convolution neural networks
26 June 2017
Abstract
One of the most difficult problems in the use of artificial neural networks is the required computational capacity. While large search-engine companies own specially developed hardware to provide the necessary computing power, conventional users are left with the state-of-the-art method: the use of a graphics processing unit (GPU) as the computational basis. Although these processors are well suited for large matrix computations, they consume a great deal of energy. Therefore, a new processor based on a field-programmable gate array (FPGA) has been developed and optimized for deep learning applications. This processor is presented in this paper. It can be adapted to a particular application (in this paper, an organic farming application). Its power consumption is only a fraction of that of a GPU implementation, so it should be well suited for energy-saving applications.
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Florian J. Knoll, Michael Grelcke, Vitali Czymmek, Tim Holtorf, and Stephan Hussmann, "CPU architecture for a fast and energy-saving calculation of convolution neural networks", Proc. SPIE 10334, Automated Visual Inspection and Machine Vision II, 103340P (26 June 2017); https://doi.org/10.1117/12.2270290
Proceedings paper, 9 pages

