Paper
6 May 2019 An improved YOLOv2 model with depth-wise separable convolutional layers for object detection
Proceedings Volume 11069, Tenth International Conference on Graphics and Image Processing (ICGIP 2018); 110693V (2019) https://doi.org/10.1117/12.2524181
Event: Tenth International Conference on Graphic and Image Processing (ICGIP 2018), 2018, Chengdu, China
Abstract
Object detection is a fundamental research direction in computer vision: it supplies the basic image information used by higher-level vision processing and analysis tasks. With the continuing advances in deep learning, convolutional neural network (CNN) models in particular have shown a strong ability to extract image features. We compress the parameter count of the CNN model by replacing the standard convolutional layers used in the traditional model with depth-wise separable convolutional layers. A depth-wise separable convolutional layer (DSCL) factorizes a standard convolutional layer into a depth-wise convolutional layer and a point-wise convolutional layer, extracting and then merging image features in two steps so that fewer parameters are needed. Replacing the standard convolutional layers with depth-wise separable ones reduces the number of parameters in the model's convolutional layers by 78.1%. We also adopt a feature pyramid network to fuse the image features extracted at each layer of the CNN, so that the detection model can apply fusion features matched to targets of different sizes and shapes. The average detection precision on the PASCAL VOC dataset increases to 77.5%.
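The parameter saving from the factorization described in the abstract can be sketched with simple counting. A standard k×k convolution with c_in input and c_out output channels holds k·k·c_in·c_out weights; a depth-wise separable replacement holds k·k·c_in (depth-wise) plus c_in·c_out (point-wise) weights, a ratio of 1/c_out + 1/k². The layer sizes below are illustrative, not the paper's actual YOLOv2 configuration, and bias terms are omitted for simplicity; the per-layer theoretical ratio also differs from the paper's 78.1% figure, which is measured over the whole model.

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return k * k * c_in * c_out

def dsc_params(k: int, c_in: int, c_out: int) -> int:
    # Depth-wise step: one k x k filter per input channel.
    # Point-wise step: a 1 x 1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernel, 256 -> 512 channels (not from the paper).
std = conv_params(3, 256, 512)   # 1,179,648 weights
dsc = dsc_params(3, 256, 512)    # 2,304 + 131,072 = 133,376 weights
print(f"standard: {std}, separable: {dsc}, reduction: {1 - dsc / std:.1%}")
```

For a 3×3 kernel with many output channels the per-layer reduction approaches 1 − 1/9 ≈ 88.9%; aggregated over a real network, whose layers include 1×1 convolutions that gain nothing from the factorization, the overall saving is smaller, consistent with the 78.1% the authors report.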
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Zhuo Han, Dongfei Wang, Aidong Men, and Yun Zhou "An improved YOLOv2 model with depth-wise separable convolutional layers for object detection", Proc. SPIE 11069, Tenth International Conference on Graphics and Image Processing (ICGIP 2018), 110693V (6 May 2019); https://doi.org/10.1117/12.2524181
PROCEEDINGS
8 PAGES

