Paper
29 March 2023 Comparative study of DNN accelerators on FPGA
Yixuan Zhao, Feiyang Liu, Tanbao Yan, Han Gao
Proceedings Volume 12594, Second International Conference on Electronic Information Engineering and Computer Communication (EIECC 2022); 125940G (2023) https://doi.org/10.1117/12.2671578
Event: Second International Conference on Electronic Information Engineering and Computer Communication (EIECC 2022), 2022, Xi'an, China
Abstract
As an important technology and research direction for achieving AI, deep learning has been widely applied in computer vision, speech recognition, and natural language processing. How to effectively accelerate deep learning computation has long been a central focus of research. Among the various acceleration technologies, FPGAs offer reconfigurability, high performance, small size, and low latency. As more and more FPGA-based neural network accelerators are developed, we note the lack of a complete and detailed overview. In this paper, we present a comparative study of DNN accelerators on FPGA from the aspects of hardware structures, design ideas, and optimization strategies. We further compare the performance of different acceleration technologies across different models and discuss the prospects of FPGA accelerators for deep learning.
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yixuan Zhao, Feiyang Liu, Tanbao Yan, and Han Gao "Comparative study of DNN accelerators on FPGA", Proc. SPIE 12594, Second International Conference on Electronic Information Engineering and Computer Communication (EIECC 2022), 125940G (29 March 2023); https://doi.org/10.1117/12.2671578
KEYWORDS
Field programmable gate arrays, Performance modeling, Deep learning, Computer hardware, Speech recognition, Data modeling, Image processing