Semantic segmentation, as a dense pixel-wise classification task, is central to scene understanding. Many approaches based on convolutional neural networks still suffer from two challenges: (1) insufficient semantic information causes semantic confusion between similar categories, and (2) loss of spatial information leads to inaccurate localization of inconspicuous objects. To tackle these challenges, we design a network with an encoder–decoder architecture built on two proposed modules: a global pyramid attention module (GPAM) and a pyramid decoder module (PDM). Specifically, GPAM exploits an attention mechanism as global prior knowledge to adaptively capture discriminative features and enhance semantic representation, while PDM employs small convolutions connected in parallel to predict adjacent-position relationships and refine spatial information. A series of ablation experiments demonstrates the effectiveness of our designs; our network achieves a mean intersection-over-union score of 83.4% on the PASCAL VOC 2012 dataset and 78.5% on the Cityscapes dataset.
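The abstract does not specify the internals of GPAM or PDM, so the following is only a minimal PyTorch sketch of one plausible reading: GPAM is rendered as pyramid-pooled global context fused into an attention map that reweights encoder features, and PDM as parallel small (3x3) convolutions whose summed output is upsampled to the target resolution. The module names come from the abstract; every layer choice, pool size, and branch count below is an illustrative assumption, not the authors' published design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GPAM(nn.Module):
    """Hypothetical sketch of a global pyramid attention module.

    Assumption: features pooled at several pyramid scales are upsampled,
    concatenated, and mapped to an attention map that reweights the input.
    """

    def __init__(self, channels, pool_sizes=(1, 2, 4)):
        super().__init__()
        self.pool_sizes = pool_sizes  # assumed pyramid scales
        self.fuse = nn.Sequential(
            nn.Conv2d(channels * len(pool_sizes), channels, kernel_size=1),
            nn.Sigmoid(),  # attention weights in [0, 1]
        )

    def forward(self, x):
        n, c, h, w = x.shape
        # Pool to each pyramid scale, then restore spatial size so the
        # multi-scale global context can be concatenated channel-wise.
        ctx = [
            F.interpolate(F.adaptive_avg_pool2d(x, s), size=(h, w),
                          mode="bilinear", align_corners=False)
            for s in self.pool_sizes
        ]
        attn = self.fuse(torch.cat(ctx, dim=1))  # (N, C, H, W)
        return x * attn  # reweight features with global prior


class PDM(nn.Module):
    """Hypothetical sketch of a pyramid decoder module.

    Assumption: small 3x3 convolutions connected in parallel model
    adjacent-position relationships; their sum is upsampled bilinearly.
    """

    def __init__(self, in_channels, out_channels, branches=3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
            for _ in range(branches)
        )

    def forward(self, x, size):
        # Sum the parallel branch outputs, then restore full resolution.
        y = sum(b(x) for b in self.branches)
        return F.interpolate(y, size=size, mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)            # stand-in encoder features
    gpam = GPAM(64)
    pdm = PDM(64, 21)                             # 21 PASCAL VOC classes
    logits = pdm(gpam(feats), size=(256, 256))    # dense per-pixel scores
    print(logits.shape)                           # torch.Size([1, 21, 256, 256])
```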