Translation-aware semantic segmentation via conditional least-square generative adversarial networks
Abstract
Semantic segmentation has recently made rapid progress in remote sensing and computer vision. However, many leading approaches cannot simultaneously translate label maps back to plausible source images when only a limited number of training images is available. The core issues are insufficient adversarial information to learn this inverse process and the lack of a proper objective loss function to overcome the vanishing gradient problem. We propose conditional least-squares generative adversarial networks (CLS-GAN) to delineate visual objects and address these problems. We train the CLS-GAN network for semantic segmentation to discriminate dense predictions drawn either from training images or from the generative network. We show that the optimal objective function of CLS-GAN is a special case of f-divergence and yields a generator that lies on the decision boundary of the discriminator, which mitigates vanishing gradients. We also demonstrate the effectiveness of the proposed architecture at translating images from label maps during learning. Experiments on a limited number of high-resolution images, including close-range and remote sensing datasets, indicate that the proposed method improves semantic segmentation accuracy and can simultaneously generate high-quality images from label maps.
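The least-squares adversarial objective the abstract refers to builds on the standard LSGAN formulation, which replaces the log-loss of the original GAN with squared errors so that gradients do not vanish for samples far from the decision boundary. A minimal sketch is below; the target codes a, b, c and the toy score arrays are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    """Least-squares discriminator loss: pull real scores toward b
    and fake scores toward a (common choice: a=0, b=1)."""
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def lsgan_g_loss(d_fake, c=1.0):
    """Least-squares generator loss: pull the discriminator's scores
    on generated samples toward c (the 'real' target)."""
    return 0.5 * np.mean((d_fake - c) ** 2)

# Illustrative scores from a hypothetical discriminator
d_real = np.array([0.9, 0.8])   # scores on real training images
d_fake = np.array([0.1, 0.2])   # scores on generated images
print(lsgan_d_loss(d_real, d_fake))  # small when D separates real/fake well
print(lsgan_g_loss(d_fake))          # large until G fools D
```

Because the penalty grows quadratically with distance from the target, even confidently misclassified generated samples still produce a useful gradient, which is the property the abstract invokes to reduce vanishing gradients.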
© 2017 Society of Photo-Optical Instrumentation Engineers (SPIE)
Mi Zhang, Xiangyun Hu, Like Zhao, Shiyan Pang, Jinqi Gong, Min Luo, "Translation-aware semantic segmentation via conditional least-square generative adversarial networks," Journal of Applied Remote Sensing 11(4), 042622 (23 December 2017). https://doi.org/10.1117/1.JRS.11.042622
Received: 17 June 2017; Accepted: 22 November 2017; Published: 23 December 2017
JOURNAL ARTICLE
15 PAGES

