In recent years, quantizing the weights of a deep neural network has drawn increasing attention in the area of network compression. An efficient and popular way to quantize the weight parameters is to replace a filter with the product of binary values and a real-valued scaling factor. However, the quantization error of such a binarization method grows as the number of parameters in a filter increases. To reduce the quantization error of existing network binarization methods, we propose group binary weight networks (GBWN), which divide the channels of each filter into groups so that every channel in the same group shares the same scaling factor. We binarize the popular network architectures VGG, ResNet and DenseNet, and verify the performance on the CIFAR10, CIFAR100, Fashion-MNIST, SVHN and ImageNet datasets. Experimental results show that GBWN achieves considerable accuracy improvements over recent network binarization methods, including BinaryConnect, Binary Weight Networks and Stochastic Quantization Binary Weight Networks.
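The group-wise scaling idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each group's scaling factor is the mean absolute value of the group's weights (the closed-form choice used in Binary Weight Networks, which this work builds on), and the function name `binarize_groupwise` is hypothetical.

```python
import numpy as np

def binarize_groupwise(filt, num_groups):
    """Approximate one filter of shape (channels, kh, kw) by binary
    values times a per-group scaling factor. Channels are split into
    num_groups groups; every channel in a group shares one factor."""
    channels = filt.shape[0]
    assert channels % num_groups == 0, "channels must divide evenly into groups"
    approx = np.empty_like(filt)
    for group in np.split(np.arange(channels), num_groups):
        w = filt[group]
        alpha = np.abs(w).mean()        # group scaling factor (BWN closed form)
        approx[group] = alpha * np.sign(w)  # binary weights scaled by alpha
    return approx

rng = np.random.default_rng(0)
filt = rng.normal(size=(8, 3, 3))  # toy 8-channel 3x3 filter

# num_groups=1 reduces to a single scaling factor per filter (plain BWN);
# more groups give finer-grained scaling and thus lower quantization error.
err_single = np.linalg.norm(filt - binarize_groupwise(filt, 1))
err_grouped = np.linalg.norm(filt - binarize_groupwise(filt, 4))
```

Because the mean-absolute-value factor is optimal for each group individually, the grouped approximation's error is never worse than the single-factor one, which is the motivation stated in the abstract.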