Obtaining sharp images of space targets requires high-accuracy restoration of the degraded images produced by an adaptive optics (AO) system. Existing algorithms rely mainly on physical constraints on both the image and the point-spread function (PSF), which are estimated in an alternating, iterative manner, so restoring blurred images takes a long time. We propose an end-to-end blind restoration method for ground-based space-target images based on a conditional generative adversarial network that requires no PSF estimation. The network consists of two parts, a generator and a discriminator, which together learn the atmospheric degradation process and generate restored images. To train the network, we construct a simulated AO image dataset of 4800 sharp–blur image pairs from 80 three-dimensional models of space targets combined with atmospheric-turbulence degradation. Experimental results demonstrate that the proposed method improves both the restoration accuracy and the restoration efficiency for single-frame object images.
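The construction of sharp–blur training pairs can be sketched as follows. This is a minimal illustration, not the paper's actual turbulence simulation: it stands in for the atmospheric degradation with a simple Gaussian PSF and additive noise, whereas the paper's dataset is generated from a physical turbulence model.

```python
import math
import random

def gaussian_psf(size, sigma):
    # Build a normalized 2-D Gaussian kernel as a crude stand-in for a
    # long-exposure atmospheric-turbulence PSF (assumption: the paper
    # simulates the PSF from a turbulence model, not a Gaussian).
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def degrade(sharp, psf, noise_std=0.0):
    # Convolve the sharp image with the PSF (zero-padded borders) and
    # add optional Gaussian noise, yielding one blurred sample of a
    # sharp-blur training pair.
    h, w = len(sharp), len(sharp[0])
    kh, kw = len(psf), len(psf[0])
    ch, cw = kh // 2, kw // 2
    blur = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for j in range(kh):
                for i in range(kw):
                    yy, xx = y + j - ch, x + i - cw
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += sharp[yy][xx] * psf[j][i]
            blur[y][x] = acc + random.gauss(0.0, noise_std)
    return blur
```

In the proposed method, such pairs supervise the generator (which maps the blurred image back to a sharp one) while the discriminator judges whether a restored image is distinguishable from the true sharp image; no explicit PSF estimate is needed at inference time.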