In this work we use generative adversarial networks (GANs) to synthesize realistic transformations of multispectral remote sensing imagery. Despite the perceptual realism of the transformed images at first glance, we show that a deep learning classifier can easily be trained to differentiate between real and GAN-generated images, likely due to subtle but pervasive artifacts introduced by the GAN during synthesis. We also show that a very low-amplitude adversarial attack can easily fool this classifier, although such attacks can be partially mitigated via adversarial training. Finally, we explore the features the classifier uses to differentiate real images from GAN-generated ones, and how adversarial training shifts the classifier's focus toward different, lower-frequency features.
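The abstract does not name the attack used against the real-vs-GAN classifier; a standard low-amplitude choice is the fast gradient sign method (FGSM). The sketch below applies FGSM to a toy logistic-regression stand-in for such a classifier; the model, data, and epsilon are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a real-vs-GAN classifier: logistic regression on
# flattened "images" of dimension d (illustrative only).
d = 64
w = rng.normal(size=d)
b = 0.0

def predict_prob(x):
    """Predicted probability that x is GAN-generated (class 1)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    """Fast gradient sign method: one L-infinity-bounded step that
    increases the classifier's loss for the true label y."""
    p = predict_prob(x)
    grad_x = (p - y) * w          # d(binary cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

x = rng.normal(size=d)            # a "GAN-generated" sample, true label 1
x_adv = fgsm(x, y=1.0, eps=0.05)  # low-amplitude perturbation

print(predict_prob(x), predict_prob(x_adv))
```

Because the perturbation is bounded by `eps` per pixel, the attacked image is visually indistinguishable from the original, yet the classifier's confidence that it is GAN-generated drops; adversarial training would augment the training set with such perturbed samples.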
Global climate warming is rapidly reducing Arctic sea ice volume and extent. The associated loss of perennial sea ice has economic and global security implications for Arctic Ocean navigability, since sea ice cover dictates whether an Arctic route is open to shipping. Understanding changes in sea ice thickness, concentration, and drift is therefore essential for operational planning and routing. However, changes in sea ice cover on scales of up to a few days and kilometers are challenging to detect and forecast; current sea ice models may not capture the quickly-changing conditions needed for navigation on short timescales. Keeping these predictive models accurate through data assimilation requires frequent, high-resolution morphological information about the ice pack, which is operationally difficult to obtain. We propose to mitigate this challenge by using machine learning (ML) to interpret satellite-based synthetic aperture radar (SAR) imagery. In this study, we derive ML models for the analysis of SAR data to improve short-term local sea ice monitoring at high spatial resolution, enabling more accurate analysis of Arctic navigability. We develop classifiers that analyze Sentinel-1 SAR imagery with the potential to inform operational sea ice forecasting models, focusing on two sea ice features of interest to Arctic navigability: ridges and leads (fractures in the sea ice cover). These can be considered local extremes of ice thickness, a crucial parameter for navigation. Both our ridge and lead detection models perform as well as, if not better than, state-of-the-art methods. These models demonstrate Sentinel-1's ability to capture sea ice conditions, suggesting the potential for Sentinel-1 global-coverage imagery to inform sea ice forecasting models.
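Leads appear as dark, linear features in SAR backscatter because calm open water reflects the radar signal away from the sensor. As a point of reference for what the trained models improve upon, here is a crude threshold-based lead detector on a synthetic backscatter scene; the gamma-distributed speckle, scene layout, and threshold rule are all illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic SAR-like backscatter scene (illustrative only): speckled
# pack ice with one dark linear "lead" of open water crossing it.
img = rng.gamma(shape=4.0, scale=1.0, size=(128, 128))   # speckle-like ice returns
img[:, 60:64] *= 0.15                                    # dark lead, columns 60-63

def detect_leads(backscatter, k=1.5):
    """Crude lead mask: flag pixels far below the scene's typical
    backscatter. A trained ML classifier would replace this rule."""
    mu, sigma = backscatter.mean(), backscatter.std()
    return backscatter < mu - k * sigma

mask = detect_leads(img)
lead_frac = mask[:, 60:64].mean()   # detection rate inside the true lead
bg_frac = mask[:, :60].mean()       # false-alarm rate on the ice background

print(lead_frac, bg_frac)
```

Even this baseline flags most lead pixels while keeping the background false-alarm rate low, but it cannot separate leads from other dark features (e.g., wind-roughened water or smooth first-year ice), which is where a learned classifier earns its keep.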
In this work we demonstrate that generative adversarial networks (GANs) can generate realistic pervasive changes in RGB remote sensing imagery, even in an unpaired training setting. We investigate transformation quality metrics based on deep embeddings of the generated and real images; these metrics enable visualization and understanding of the GAN's training dynamics, and provide a useful quantitative measure of how distinguishable the generated images are from real ones. We also identify artifacts introduced by the GAN into the generated images, which likely contribute to the differences observed between real and generated samples in the deep embedding feature space, even when the two appear perceptually similar.
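The abstract does not specify the embedding-based quality metric; one common family measures the Fréchet distance between Gaussian fits of the real and generated embedding clouds (as in the FID score). Below is a minimal sketch, simplified to diagonal covariances so it needs only NumPy; the embedding dimensions and distributions are illustrative, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(2)

def frechet_distance_diag(e_real, e_gen):
    """Frechet distance between Gaussian fits of two embedding sets,
    simplified to diagonal covariances (FID-style; 0 = identical fits)."""
    mu1, mu2 = e_real.mean(axis=0), e_gen.mean(axis=0)
    v1, v2 = e_real.var(axis=0), e_gen.var(axis=0)
    return np.sum((mu1 - mu2) ** 2) + np.sum((np.sqrt(v1) - np.sqrt(v2)) ** 2)

# Illustrative "deep embeddings" of real and generated image batches.
real = rng.normal(0.0, 1.0, size=(1000, 32))
gen_early = rng.normal(0.5, 1.2, size=(1000, 32))  # early training: distinguishable
gen_late = rng.normal(0.0, 1.0, size=(1000, 32))   # late training: near-matching

d_early = frechet_distance_diag(real, gen_early)
d_late = frechet_distance_diag(real, gen_late)
print(d_early, d_late)
```

Tracking this distance over training iterations gives exactly the kind of training-dynamics curve the abstract describes: it shrinks as the generated distribution approaches the real one, but residual GAN artifacts keep it from reaching zero.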
Combining multiple satellite remote sensing sources provides a far richer, more frequent view of the Earth than any single source; the challenge lies in distilling these petabytes of heterogeneous sensor imagery into meaningful characterizations of the imaged areas. Meeting this challenge requires effective algorithms for combining heterogeneous data to identify subtle but important changes amid the intrinsic data variation. The major obstacle to using heterogeneous satellite data to monitor anomalous changes across time is this: subtle but real changes on the ground can be overwhelmed by artifacts that are simply due to the change in modality. Here, we implement a joint-distribution framework for anomalous change detection that can effectively "normalize" for these changes in modality, and does not require any phenomenological resampling of the pixel signal. This flexibility enables the use of satellite imagery from different sensor platforms and modalities. We use the multi-year construction of the Los Angeles Stadium at Hollywood Park (in Inglewood, CA) as our testbed, and exploit synthetic aperture radar (SAR) imagery from Sentinel-1 and multispectral imagery from both Sentinel-2 and Landsat 8. We explore results for anomalous change detection between Sentinel-2 and Landsat 8 over time, and also show results for anomalous change detection between Sentinel-1 SAR imagery and Sentinel-2 multispectral imagery.
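A common Gaussian instantiation of joint-distribution anomalous change detection scores each pixel pair by how unusual it is under the joint distribution of the two modalities, relative to how unusual each half is under its own marginal. The sketch below implements that Gaussian score with NumPy; the band counts, correlation structure, and injected anomaly are illustrative assumptions, not the paper's data or exact detector.

```python
import numpy as np

rng = np.random.default_rng(3)

def acd_scores(X, Y):
    """Gaussian joint-distribution anomalous-change score for paired
    pixels X (modality 1, n x d1) and Y (modality 2, n x d2):
        A(x, y) = xi_xy(x, y) - xi_x(x) - xi_y(y)
    where each xi is a Mahalanobis distance. Large A flags a pair that
    is jointly unusual even if x and y are individually plausible --
    i.e., a change not explained by the modality difference."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Z = np.hstack([X, Y])
    def maha(M):
        C = np.cov(M, rowvar=False) + 1e-6 * np.eye(M.shape[1])
        Ci = np.linalg.inv(C)
        return np.einsum('ij,jk,ik->i', M, Ci, M)
    return maha(Z) - maha(X) - maha(Y)

# Co-registered pixel pairs from two modalities with a strong cross-
# modality relationship; pixel 0's relationship is deliberately broken.
n, d = 2000, 3
X = rng.normal(size=(n, d))
Y = X + 0.1 * rng.normal(size=(n, d))   # modality 2 tracks modality 1
Y[0] = -X[0]                            # anomalous change at pixel 0

scores = acd_scores(X, Y)
print(scores[0], np.median(scores))
```

Because the score subtracts the marginal Mahalanobis terms, pervasive differences between the sensors (e.g., SAR versus multispectral radiometry) are absorbed into the fitted joint distribution, and only pairs that violate the learned cross-modality relationship stand out.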