Semantic image segmentation approaches based on convolutional neural networks require a large amount of pixel-level training data, but the labeling process is time-consuming and laborious. In this paper, we propose a semi-supervised semantic segmentation method that leverages unlabeled data during model training to reduce the labeling burden. A novel GAN framework composed of a generator network and a dual discriminator network is proposed, and the entire network is trained by coupling the standard multi-class cross-entropy loss with the adversarial loss. To further improve the localization of object boundaries, a self-attention layer is added to the generator network to model long-range dependencies in images, and a skip layer is also added to combine deep layers, which carry highly abstract information, with shallow layers, which retain detailed appearance information. The dual discriminator network includes a fully convolutional discriminator and a typical GAN discriminator, so that the input image can be discriminated at both the pixel level and the image level. For semi-supervised semantic segmentation, the predicted segmentation results of unlabeled images are selected by the image-level discriminator, and their trustworthy regions are then generated by the pixel-level discriminator to provide additional supervisory signals. Extensive experiments on the PASCAL VOC 2012 dataset demonstrate that our approach outperforms existing semi-supervised semantic image segmentation methods in accuracy.
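The self-attention layer mentioned above can be illustrated with a minimal sketch. The paper's exact layer is not specified here, so this assumes the common SAGAN-style formulation: features are projected to queries, keys, and values, pairwise affinities between all spatial positions are softmax-normalized, and the attended output is mixed back into the input through a learnable residual scale `gamma` (the function and weight names below are hypothetical):

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv, gamma=0.0):
    """SAGAN-style self-attention over a feature map (illustrative sketch).

    x:  (C, H, W) input feature map.
    Wq, Wk: (C', C) query/key projections (1x1 convs as matrices).
    Wv: (C, C) value projection.
    gamma: learnable residual scale; 0 means the layer starts as identity.
    Returns a (C, H, W) map mixing information across all positions,
    which models the long-range dependencies described in the abstract.
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)                    # (C, N), N spatial positions
    q = Wq @ flat                                 # (C', N) queries
    k = Wk @ flat                                 # (C', N) keys
    v = Wv @ flat                                 # (C, N) values
    energy = q.T @ k                              # (N, N) pairwise affinities
    energy -= energy.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(energy)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over key positions
    out = v @ attn.T                              # (C, N) attended values
    return (gamma * out + flat).reshape(C, H, W)  # residual connection
```

Initializing `gamma` to zero lets the network first rely on local convolutional features and gradually learn to weight non-local evidence, which is the usual motivation for the residual form.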