Performance analysis of real-time DNN inference on Raspberry Pi (14 May 2018)
Abstract
Deep Neural Networks (DNNs) have emerged as the reference processing architecture for the implementation of multiple computer vision tasks. They achieve much higher accuracy than traditional algorithms based on shallow learning. However, this accuracy comes at the cost of a substantial increase in computational resources, which constitutes a challenge for embedded vision systems performing edge inference as opposed to cloud processing. For this demanding scenario, several open-source frameworks have been developed, e.g. Caffe, OpenCV, TensorFlow, Theano, Torch and MXNet. All of these tools enable the deployment of various state-of-the-art DNN models for inference, though each one relies on particular optimization libraries and techniques, resulting in different performance behavior. In this paper, we present a comparative study of some of these frameworks in terms of power consumption, throughput and precision for some of the most popular Convolutional Neural Network (CNN) models. The benchmarking system is the Raspberry Pi 3 Model B, a low-cost embedded platform with limited resources. We highlight the advantages and limitations associated with the practical use of the analyzed frameworks, and provide guidelines for selecting a suitable tool according to prescribed application requirements.
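The throughput figure used in such a benchmark is typically obtained by timing repeated forward passes on the target device. A minimal, framework-agnostic sketch of this measurement is shown below; the `infer` callable and the warm-up count are illustrative assumptions, not details taken from the paper:

```python
import time

def measure_throughput(infer, frames, warmup=5):
    """Estimate inference throughput in frames per second.

    `infer` is any callable performing one forward pass (e.g. a
    framework-specific prediction function); `frames` is the input batch.
    """
    # Warm-up runs to exclude one-time costs (model loading, cache effects)
    for frame in frames[:warmup]:
        infer(frame)

    # Time the full sequence of inferences
    start = time.perf_counter()
    for frame in frames:
        infer(frame)
    elapsed = time.perf_counter() - start

    return len(frames) / elapsed
```

In practice the same loop would be paired with an external power meter reading to derive the power-consumption figures reported in the study.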
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Delia Velasco-Montero, Jorge Fernández-Berni, Ricardo Carmona-Galán, Ángel Rodríguez-Vázquez, "Performance analysis of real-time DNN inference on Raspberry Pi", Proc. SPIE 10670, Real-Time Image and Video Processing 2018, 106700F (14 May 2018); doi: 10.1117/12.2309763; https://doi.org/10.1117/12.2309763
Proceedings paper: 9 pages + conference presentation