Generative adversarial networks based super resolution of satellite aircraft imagery
13 May 2019
Generative Adversarial Networks (GANs) are among the most popular machine learning algorithms developed in recent years and form a class of neural networks used in unsupervised machine learning. The advantage of unsupervised approaches such as GANs is that they do not require large amounts of labeled data, which are costly and time consuming to obtain. GANs can be applied to a variety of tasks, including image synthesis, semantic image editing, style transfer, image super-resolution, and classification. In this work, GANs are used to solve the single-image super-resolution problem. This approach, referred to in the literature as the super-resolution GAN (SRGAN), employs a perceptual loss function consisting of an adversarial loss and a content loss. The adversarial loss pushes the solution toward the natural image manifold using a discriminator network trained to distinguish super-resolved images from original photo-realistic images, while the content loss is motivated by perceptual similarity rather than similarity in pixel space. This paper presents an implementation of SRGAN using a deep convolutional network applied to both aerial and satellite imagery of aircraft. The resulting SRGAN estimates are compared against traditional super-resolution methods using the peak signal-to-noise ratio (PSNR) and the structural similarity index metric (SSIM). The PSNR and SSIM of the SRGAN estimates are similar to those of traditional methods such as bicubic interpolation, but the traditional methods often lack high-frequency detail and are perceptually unsatisfying in that they fail to match the fidelity expected at the higher resolution.
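The perceptual loss described in the abstract (content loss plus adversarial loss) can be sketched as follows. This is a minimal illustration in PyTorch, assuming a VGG19 feature-space content loss and a standard non-saturating adversarial term with a small weight; the layer choice, weighting, and network details are assumptions and do not reproduce the authors' exact implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19


class PerceptualLoss(nn.Module):
    """Sketch of an SRGAN-style perceptual loss: content loss on VGG19
    feature maps plus a weighted adversarial loss (illustrative values)."""

    def __init__(self, adv_weight=1e-3):
        super().__init__()
        # Content loss is computed on deep VGG19 features rather than raw
        # pixels, so the generator is rewarded for perceptual similarity.
        features = vgg19(pretrained=True).features[:36].eval()
        for p in features.parameters():
            p.requires_grad = False
        self.vgg = features
        self.mse = nn.MSELoss()
        self.adv_weight = adv_weight

    def forward(self, sr, hr, d_sr_logits):
        # Content loss: distance between VGG features of the super-resolved
        # image and the original high-resolution image.
        content = self.mse(self.vgg(sr), self.vgg(hr))
        # Adversarial loss: push the generator toward outputs that the
        # discriminator classifies as real.
        adversarial = nn.functional.binary_cross_entropy_with_logits(
            d_sr_logits, torch.ones_like(d_sr_logits))
        return content + self.adv_weight * adversarial
```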
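The PSNR/SSIM comparison against a bicubic baseline, as reported in the paper, might be computed along the following lines using scikit-image; the bicubic upsampling step, value range, and image layout are illustrative assumptions.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.transform import resize


def compare_to_bicubic(hr, lr, sr):
    """Report PSNR (dB) and SSIM of a bicubic upsample and an SRGAN estimate
    against the high-resolution reference (arrays in [0, 1], HWC layout)."""
    # Bicubic baseline: upsample the low-resolution input to the HR size.
    bicubic = resize(lr, hr.shape, order=3, anti_aliasing=False)
    for name, estimate in [("bicubic", bicubic), ("SRGAN", sr)]:
        psnr = peak_signal_noise_ratio(hr, estimate, data_range=1.0)
        ssim = structural_similarity(hr, estimate, data_range=1.0,
                                     channel_axis=-1)
        print(f"{name}: PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```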
Jonathan Chin, Asif Mehmood, "Generative adversarial networks based super resolution of satellite aircraft imagery," Proc. SPIE 10995, Pattern Recognition and Tracking XXX, 109950W (13 May 2019); https://doi.org/10.1117/12.2524720