26 June 2017 CPU architecture for a fast and energy-saving calculation of convolution neural networks
Proceedings Volume 10335, Digital Optical Technologies 2017; 103351M (2017)
Event: SPIE Digital Optical Technologies, 2017, Munich, Germany
One of the most difficult problems in the use of artificial neural networks is the required computational capacity. While large search engine companies own specially developed hardware that provides the necessary computing power, the conventional user is left with the current state-of-the-art method: using a graphics processing unit (GPU) as the computational basis. Although these processors are well suited for large matrix computations, they consume a great deal of energy. Therefore, a new processor based on a field-programmable gate array (FPGA) has been developed and optimized for deep learning applications. This processor is presented in this paper. It can be adapted to a particular application (in this paper, an organic farming application). Its power consumption is only a fraction of that of a GPU implementation, and it should therefore be well suited for energy-saving applications.
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Florian J. Knoll, Michael Grelcke, Vitali Czymmek, Tim Holtorf, and Stephan Hussmann "CPU architecture for a fast and energy-saving calculation of convolution neural networks", Proc. SPIE 10335, Digital Optical Technologies 2017, 103351M (26 June 2017);
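The large matrix computations the abstract refers to boil down to the convolution operation at the core of a CNN layer, which both GPUs and FPGA accelerators are built to speed up. As a minimal illustration (not the authors' FPGA implementation), a direct "valid"-mode 2-D convolution can be sketched as nested multiply-accumulate loops:

```python
def conv2d_valid(image, kernel):
    """Direct 2-D convolution ("valid" mode, no padding).

    image, kernel: 2-D lists of numbers.
    Returns an output of size (H - kH + 1) x (W - kW + 1).
    """
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            # One output pixel = sum of elementwise products
            # between the kernel and the image patch under it.
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Example: 3x3 image, 2x2 kernel -> 2x2 output.
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ker = [[1, 0],
       [0, 1]]
print(conv2d_valid(img, ker))  # [[6, 8], [12, 14]]
```

A CNN layer applies many such kernels over many channels, so the dominant cost is a huge number of multiply-accumulate operations; this is why dedicated hardware (GPU matrix units, or MAC arrays on an FPGA) pays off.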


