12 June 2020 Generative image inpainting with residual attention learning
Proceedings Volume 11519, Twelfth International Conference on Digital Image Processing (ICDIP 2020); 115190O (2020) https://doi.org/10.1117/12.2573109
Event: Twelfth International Conference on Digital Image Processing, 2020, Osaka, Japan
Abstract
Recent deep learning approaches have shown significant improvements on the challenging task of image inpainting. However, these methods may generate blurry output and distorted textures. In this paper, we propose an efficient end-to-end two-stage network for image inpainting. In the coarse stage, we employ residual dense blocks (RDBs) together with short and long skip connections to fully exploit features from all convolutional layers and produce a globally rough reconstruction. In the refinement stage, we propose a local and global residual network built on a channel and spatial attention block (CSAB) that adaptively weighs both channel-wise and spatial-wise features, focusing on the most meaningful information, and generates a locally fine-detailed image. Experiments on the Paris StreetView and DTD textures datasets demonstrate the effectiveness and efficiency of our method. Results show that our method outperforms the baseline techniques both quantitatively and qualitatively.
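To make the channel-and-spatial attention idea concrete, here is a minimal pure-Python sketch of sequential channel-then-spatial feature gating. This is an illustration only, not the authors' CSAB: the paper's block would operate on learned convolutional features inside a deep network, whereas here the gates are simple parameter-free sigmoids over pooled statistics, and the feature map is a plain nested list `feat[C][H][W]`.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def attention_gate(feat):
    """Apply simplified channel-then-spatial attention to feat[C][H][W].

    Illustrative stand-in for a CSAB-style block: each channel is first
    scaled by a gate derived from its global average (channel attention),
    then each spatial position is scaled by a gate derived from its
    cross-channel mean (spatial attention).
    """
    C, H, W = len(feat), len(feat[0]), len(feat[0][0])
    # Channel attention: one gate per channel from global average pooling.
    ch_gate = [sigmoid(sum(sum(row) for row in feat[c]) / (H * W))
               for c in range(C)]
    gated = [[[feat[c][i][j] * ch_gate[c] for j in range(W)]
              for i in range(H)] for c in range(C)]
    # Spatial attention: one gate per position from the cross-channel mean.
    sp_gate = [[sigmoid(sum(gated[c][i][j] for c in range(C)) / C)
                for j in range(W)] for i in range(H)]
    return [[[gated[c][i][j] * sp_gate[i][j] for j in range(W)]
             for i in range(H)] for c in range(C)]
```

In a real network both gates would be produced by small learned subnetworks (e.g. an MLP over pooled channel statistics and a convolution over channel-aggregated maps), so that the attention weights adapt to image content during training.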
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Fang Wan, Yuesheng Zhu, and Zehua Cai "Generative image inpainting with residual attention learning", Proc. SPIE 11519, Twelfth International Conference on Digital Image Processing (ICDIP 2020), 115190O (12 June 2020); https://doi.org/10.1117/12.2573109