In this work we use generative adversarial networks (GANs) to synthesize realistic transformations of multispectral remote sensing imagery. Although the transformed images appear perceptually realistic at first glance, we show that a deep learning classifier can easily be trained to differentiate between real and GAN-generated images, likely due to subtle but pervasive artifacts introduced by the GAN during synthesis. We also show that a very low-amplitude adversarial attack can easily fool this classifier, although such attacks can be partially mitigated via adversarial training. Finally, we explore the features the classifier uses to distinguish real images from GAN-generated ones, and how adversarial training shifts its attention toward different, lower-frequency features.
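The abstract does not name the specific attack or framework used. As a rough illustration only, a low-amplitude attack in the style of the fast gradient sign method (FGSM) could be sketched in PyTorch as follows; the model interface, the epsilon value, and the [0, 1] pixel range are assumptions for the sketch, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.005):
    # Fast Gradient Sign Method: nudge every pixel by at most +/- epsilon
    # in the direction that increases the classifier's loss.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()
    # Keep the perturbed image inside the assumed valid pixel range.
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical adversarial-training step: mix perturbed examples into each
# batch so the real-vs-GAN detector learns to resist such perturbations.
# adv = fgsm_attack(detector, images, labels)
# loss = F.cross_entropy(detector(torch.cat([images, adv])),
#                        torch.cat([labels, labels]))
```

A small epsilon keeps the perturbation visually imperceptible, which matches the "very low-amplitude" attack described in the abstract.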