Astronomical images collected by ground-based telescopes suffer from degradation and perturbations attributed to atmospheric turbulence. We investigate the application of convolutional neural networks (CNNs) to ground-based satellite imaging to address two existing problems. First, the multiframe blind deconvolution (MFBD) algorithms that can extract well-resolved images from these degraded data frames are computationally expensive, requiring supercomputing infrastructure for even relatively fast performance, and cannot currently run in real time. As a result, it is difficult to optimize collection parameters to maximize the likelihood of producing a resolved image with MFBD. Second, the space-object National Imagery Interpretability Rating Scale (SNIIRS) allows human analysts to provide a quantitative score of image quality based on identification of target features. This scoring process is naturally difficult to automate, not only because the scale is based on identifiable features but also because the images may be in an almost-resolved quality regime that is difficult for traditional computer vision techniques to handle. For both applications, we present our results using CNNs on data collected at the Maui Space Surveillance Site, as well as a new synthetic dataset we introduce containing over a million SNIIRS-rated pairs of perturbed and pristine ground-based satellite images.
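To make the second task concrete, the sketch below shows one plausible shape for an automated SNIIRS scorer: a small CNN that regresses a scalar quality score directly from a degraded frame. This is a minimal illustration assuming PyTorch; the architecture, input resolution, and loss are our assumptions for exposition, not the network described in the article.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SniirsRegressor(nn.Module):
    """Hypothetical CNN mapping a single-channel satellite frame to a
    scalar SNIIRS-style quality score (a regression, since scores lie
    on a continuous interpretability scale)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # tolerates varying frame sizes
        )
        self.head = nn.Linear(128, 1)

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale telescope frames
        z = self.features(x).flatten(1)
        return self.head(z).squeeze(1)  # predicted quality score


model = SniirsRegressor()
frames = torch.randn(4, 1, 256, 256)        # stand-in for degraded frames
targets = torch.full((4,), 4.0)             # stand-in for analyst SNIIRS ratings
loss = F.mse_loss(model(frames), targets)   # train against rated pairs
```

In the article's setting, such a model would be trained on pairings of perturbed images and their SNIIRS ratings, like those in the synthetic dataset described above; the same convolutional backbone concept could also be repurposed to predict MFBD resolvability from raw frames, supporting real-time tuning of collection parameters.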
Keywords: Satellites, Satellite imaging, Earth observing sensors, Machine learning, Sensors, Data modeling, Image quality