13 April 2018 Squeeze-SegNet: a new fast deep convolutional neural network for semantic segmentation
Proceedings Volume 10696, Tenth International Conference on Machine Vision (ICMV 2017); 106962O (2018) https://doi.org/10.1117/12.2309497
Event: Tenth International Conference on Machine Vision, 2017, Vienna, Austria
Abstract
Recent research in deep convolutional neural networks has focused on improving accuracy, yielding significant advances. While these networks were initially limited to classification tasks, contributions from the scientific community have made them very useful in higher-level tasks such as object detection and pixel-wise semantic segmentation. Brilliant ideas in deep-learning-based semantic segmentation have advanced the state of the art in accuracy, yet these architectures remain difficult to deploy on embedded systems, as is the case for autonomous driving. We present a new deep fully convolutional neural network for pixel-wise semantic segmentation, which we call Squeeze-SegNet. The architecture follows an encoder-decoder style: a SqueezeNet-like encoder, and a decoder built from our proposed squeeze-decoder module and upsampling layers that reuse the pooling indices from downsampling, as in SegNet, followed by a deconvolution layer that produces the final multi-channel feature map. On datasets such as CamVid and Cityscapes, our network achieves SegNet-level accuracy with 10 times fewer parameters than SegNet.
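A key mechanism mentioned above is the SegNet-style upsampling that reuses the max-pooling indices recorded by the encoder. The paper does not include code, so the following is a minimal NumPy sketch of that idea for a single-channel feature map; the function names and the 2x2/stride-2 configuration are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def max_pool_2x2_with_indices(x):
    """2x2, stride-2 max pooling that also records the flat index of
    each maximum, as an encoder in a SegNet-like network would."""
    h, w = x.shape
    out = np.zeros((h // 2, w // 2), dtype=x.dtype)
    idx = np.zeros((h // 2, w // 2), dtype=np.int64)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            win = x[i:i + 2, j:j + 2]
            k = int(np.argmax(win))  # position of the max within the window
            out[i // 2, j // 2] = win.flat[k]
            # convert the within-window position to a flat index in x
            idx[i // 2, j // 2] = (i + k // 2) * w + (j + k % 2)
    return out, idx

def max_unpool_2x2(x, idx, out_shape):
    """Sparse upsampling used by the decoder: place each pooled value
    back at the position recorded during pooling; all other positions
    stay zero (subsequent convolutions densify the map)."""
    out = np.zeros(out_shape, dtype=x.dtype)
    out.flat[idx.ravel()] = x.ravel()
    return out
```

Because only the indices (not the full pre-pooling feature maps) are stored, this kind of decoder needs far less memory than one that concatenates encoder features, which is consistent with the paper's emphasis on a small, embedded-friendly model.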
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Geraldin Nanfack, Azeddine Elhassouny, Rachid Oulad Haj Thami, "Squeeze-SegNet: a new fast deep convolutional neural network for semantic segmentation", Proc. SPIE 10696, Tenth International Conference on Machine Vision (ICMV 2017), 106962O (13 April 2018); https://doi.org/10.1117/12.2309497
Proceedings, 8 pages