Synthetic aperture radar (SAR) images and optical images have found widespread applications in various remote sensing fields. Although optical images are readily perceptible to the human visual system, they are susceptible to weather conditions and sunlight illumination. SAR can operate day and night and in all weather conditions, but SAR images suffer from geometric distortions, shadows, and speckle noise, which severely hinder human interpretation. SAR-to-optical (S2O) image translation can integrate the advantages of the two image types to meet the practical requirements of remote sensing applications. Existing methods for S2O image translation often produce low-quality translated images with notable deficiencies in color distribution and detail preservation. To address these problems, we propose a domain transfer generative adversarial network (DTGAN) based on U-Net and the transformer for S2O image translation. The generator of DTGAN employs U-Net as its framework and incorporates two modules: a spatial transformer module and a multi-scale channel transformer module. We additionally apply a multi-scale discriminator to train the generator. Extensive experiments comparing our method with other state-of-the-art methods show, through both visual and quantitative evaluation, that our method not only achieves the best metrics but also produces translated images that are better suited to observation and analysis.