Paper
An efficient accelerator unit for sparse convolutional neural network
9 August 2018
Yulin Zhao, Donghui Wang, Leiou Wang
Proceedings Volume 10806, Tenth International Conference on Digital Image Processing (ICDIP 2018); 108061Z (2018) https://doi.org/10.1117/12.2503042
Event: Tenth International Conference on Digital Image Processing (ICDIP 2018), 2018, Shanghai, China
Abstract
Convolutional neural networks are widely used in image recognition, but the associated models are computationally demanding, and several solutions have been proposed to accelerate their computation. Sparsifying a neural network is an effective way to reduce its computational complexity; however, most existing acceleration designs do not fully exploit this property. In this paper, we design an accelerator unit using an FPGA as the hardware platform. The accelerator achieves parallel acceleration through multiple CU (computation unit) modules and eliminates unnecessary operations with a Match module to improve efficiency. The experimental results show that at ninety percent sparsity, performance improves by a factor of 3.2.
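To illustrate the idea behind the Match module described above, the following minimal Python sketch models zero-skipping in a 2-D convolution: the kernel is pre-filtered to its non-zero weights, so the inner loop only performs multiplications that contribute to the output. This is an illustrative software analogue under assumed behaviour, not the paper's hardware design; the function name sparse_conv2d and its interface are hypothetical.

import numpy as np

def sparse_conv2d(feature_map, kernel):
    """Zero-skipping convolution: a software analogue of the Match idea.

    Assumed simplification: the "match" step keeps only non-zero kernel
    weights, so the amount of skipped work grows with sparsity.
    """
    kh, kw = kernel.shape
    h, w = feature_map.shape
    out = np.zeros((h - kh + 1, w - kw + 1))

    # "Match" step: record only non-zero weights and their offsets.
    nonzero = [(i, j, kernel[i, j])
               for i in range(kh) for j in range(kw)
               if kernel[i, j] != 0]

    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            acc = 0.0
            for i, j, wgt in nonzero:  # only useful multiply-accumulates
                acc += wgt * feature_map[y + i, x + j]
            out[y, x] = acc
    return out

In hardware, the same filtering would let each parallel compute unit skip cycles that would otherwise be spent multiplying by zero, which is where the reported speedup at high sparsity comes from.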
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yulin Zhao, Donghui Wang, and Leiou Wang "An efficient accelerator unit for sparse convolutional neural network", Proc. SPIE 10806, Tenth International Conference on Digital Image Processing (ICDIP 2018), 108061Z (9 August 2018); https://doi.org/10.1117/12.2503042
KEYWORDS: Neural networks, Copper, Data modeling, Control systems, Convolution, Convolutional neural networks, Field programmable gate arrays