ECGNet: edge and class guided semantic segmentation network for remote sensing urban scene images
Hongrong Liu, Minghua Liu, Shuhua Song, Guolong Guo, Zhengyi Yuan, Kai Chen, Shuai Yang, Jiangfeng Yu, Hongwei Zhang
Abstract

Semantic segmentation of remote sensing images in urban scenes suffers from blurred multi-scale target boundaries, insufficient use of global context, and classification errors caused by high intra-class variance and low inter-class variance. We therefore propose a semantic segmentation network with edge and class guidance (ECGNet). First, ECGNet introduces multi-scale edge prior knowledge to address blurred target boundaries. Second, ECGNet applies synergistic class augmented attention to inject class prior knowledge while retaining rich spatial localization information, alleviating the classification errors caused by high intra-class variance and low inter-class variance. Finally, the multi-scale large receptive field attention in ECGNet simulates a large convolutional kernel to capture multi-scale global context. Experiments conducted on the ISPRS Vaihingen and ISPRS Potsdam datasets show that the proposed method is competitive.
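
The abstract does not detail the module design, but the "multi-scale large receptive field attention" idea can be illustrated with a minimal sketch: a large convolutional kernel is approximated by a depthwise convolution followed by a dilated depthwise convolution, and branches with different effective kernel sizes are fused into an attention map. All module and parameter names below are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of a multi-scale large-receptive-field attention block.
import torch
import torch.nn as nn


class LargeKernelBranch(nn.Module):
    """Approximates a large depthwise convolution with a small depthwise
    convolution followed by a dilated depthwise convolution."""

    def __init__(self, channels: int, small_k: int, dilated_k: int, dilation: int):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, small_k,
                            padding=small_k // 2, groups=channels)
        self.dw_dilated = nn.Conv2d(channels, channels, dilated_k,
                                    padding=(dilated_k // 2) * dilation,
                                    dilation=dilation, groups=channels)

    def forward(self, x):
        return self.dw_dilated(self.dw(x))


class MultiScaleLargeReceptiveFieldAttention(nn.Module):
    """Fuses branches with different effective receptive fields and uses the
    fused result to gate the input features."""

    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            LargeKernelBranch(channels, small_k=3, dilated_k=5, dilation=2),  # ~11x11 effective
            LargeKernelBranch(channels, small_k=5, dilated_k=7, dilation=3),  # ~23x23 effective
        ])
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        attn = sum(branch(x) for branch in self.branches)
        attn = self.fuse(attn)
        return x * torch.sigmoid(attn)  # attention-weighted features


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)
    module = MultiScaleLargeReceptiveFieldAttention(64)
    print(module(feats).shape)  # torch.Size([1, 64, 32, 32])
```

Decomposing the large kernel this way keeps the parameter count close to that of small convolutions while enlarging the receptive field, which is the general motivation for simulating large kernels in segmentation backbones.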

© 2024 Society of Photo-Optical Instrumentation Engineers (SPIE)
Hongrong Liu, Minghua Liu, Shuhua Song, Guolong Guo, Zhengyi Yuan, Kai Chen, Shuai Yang, Jiangfeng Yu, and Hongwei Zhang "ECGNet: edge and class guided semantic segmentation network for remote sensing urban scene images," Journal of Applied Remote Sensing 18(3), 034518 (5 September 2024). https://doi.org/10.1117/1.JRS.18.034518
Received: 17 January 2024; Accepted: 12 August 2024; Published: 5 September 2024
KEYWORDS: Image segmentation, Semantics, Prior knowledge, Remote sensing, Convolution, Visualization, Transformers