Instrumental aberrations strongly limit high-contrast imaging of exoplanets, especially when they produce quasi-static speckles in the science images. Building on recent advances in deep learning, in previous works we developed an approach that applies convolutional neural networks (CNNs) to estimate pupil-plane phase aberrations from point spread functions (PSFs). In this work we take a step further by incorporating into the deep learning architecture the physical simulation of the optical propagation occurring inside the instrument. This is achieved with an autoencoder architecture that uses a differentiable optical simulator as the decoder. Because this unsupervised learning approach reconstructs the PSFs, knowledge of the true phase is not needed to train the models, making it particularly promising for on-sky applications. We show that the performance of our method is almost identical to that of a standard CNN approach, and that the models are stable with respect to both training and robustness. We notably illustrate how the simulator-based autoencoder architecture can be exploited by quickly fine-tuning the models on a single test image, achieving much better performance when the PSFs contain more noise and aberrations. These early results are very promising, and future steps have been identified to apply the method to real data.
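The core idea of the architecture can be sketched in a few lines: a CNN encoder maps an observed PSF to a pupil-plane phase estimate, and the decoder is not a learned network but a differentiable Fraunhofer propagation (FFT of the complex pupil field), so the reconstruction loss on the PSF back-propagates into the encoder without ever needing the true phase. The following is a minimal PyTorch illustration with assumed toy choices (grid size, encoder depth, aberration amplitude, learning rate); it is not the authors' actual model.

```python
import torch
import torch.nn as nn

N = 64  # pupil grid size (illustrative assumption)

# Circular aperture mask defining the pupil support
yy, xx = torch.meshgrid(torch.linspace(-1, 1, N),
                        torch.linspace(-1, 1, N), indexing="ij")
aperture = ((xx**2 + yy**2) <= 1.0).float()

def psf_from_phase(phase):
    """Differentiable decoder: pupil-plane phase -> focal-plane PSF."""
    field = aperture * torch.exp(1j * phase)           # complex pupil field
    focal = torch.fft.fftshift(torch.fft.fft2(field))  # far-field amplitude
    psf = focal.abs() ** 2                             # intensity PSF
    return psf / psf.sum()                             # normalise total flux

class PhaseEncoder(nn.Module):
    """Toy CNN mapping a PSF image to a pupil-plane phase map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )
    def forward(self, psf):
        return self.net(psf.unsqueeze(0).unsqueeze(0)).squeeze()

# Unsupervised training step: the loss compares PSFs only,
# so the true phase is never used as a label.
encoder = PhaseEncoder()
true_phase = aperture * 0.3 * torch.randn(N, N)  # unknown aberration
psf_obs = psf_from_phase(true_phase).detach()

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
phase_hat = encoder(psf_obs)
loss = ((psf_from_phase(phase_hat) - psf_obs) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

The same structure also suggests how the single-image fine-tuning mentioned above works: since the loss only requires the observed PSF, the encoder weights can be further optimised on one test image with a few such gradient steps.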