From Event: SPIE Defense + Commercial Sensing, 2019
Image-based wavefront sensing uses a physical model of an aberrated pupil to simulate a point-spread function (PSF) that attempts to match measured data. Nonlinear optimization is used to update the parameters describing the wavefront. If the starting guess for the wavefront is too far from the true solution, these nonlinear optimization techniques are unlikely to converge. We trained a convolutional neural network (CNN) based on Google's Inception v3 architecture to predict Zernike coefficients from simulated, noisy PSF images. These predicted coefficients were then used as starting guesses for the nonlinear optimization. We performed a Monte Carlo analysis comparing the CNN's predictions against 30 random starting guesses for total root-mean-square (RMS) wavefront errors (WFE) ranging from 0.25 waves to 4.0 waves. We found that, for RMS WFE larger than 0.5 waves, optimizations started from the CNN's predictions were more likely to converge than those started from the 30 random guesses.
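As a rough illustration of the pipeline described in the abstract, the sketch below builds a monochromatic PSF from a handful of Zernike coefficients via an FFT of the pupil function and then refines a starting guess with nonlinear least squares. This is not the authors' code: the grid size, the particular Zernike terms, the noise level, and the `cnn_guess` array (standing in for an Inception v3 prediction) are illustrative assumptions, and SciPy's `least_squares` substitutes for whatever optimizer the paper actually uses.

```python
# Minimal sketch (not the authors' implementation): a forward PSF model
# parameterized by Zernike coefficients, and a nonlinear least-squares fit
# seeded with a starting guess. In the paper, the starting guess would come
# from a trained Inception v3 CNN; here a hypothetical `cnn_guess` array,
# made by perturbing the true coefficients, stands in for that prediction.
import numpy as np
from scipy.optimize import least_squares

N = 128                                    # grid size (pixels), assumed
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]  # normalized pupil coordinates
r, th = np.hypot(x, y), np.arctan2(y, x)
pupil = (r <= 1.0).astype(float)           # circular, unobscured aperture

# A few low-order Zernike polynomials (defocus, astigmatism, coma) -- enough
# to illustrate the parameterization, not the full basis used in the paper.
zernikes = np.stack([
    2 * r**2 - 1,                  # defocus
    r**2 * np.cos(2 * th),         # astigmatism 0/90
    r**2 * np.sin(2 * th),         # astigmatism 45
    (3 * r**3 - 2 * r) * np.cos(th)  # coma x
])

def psf(coeffs):
    """Monochromatic PSF for Zernike coefficients given in waves."""
    phase = np.tensordot(coeffs, zernikes, axes=1)
    field = pupil * np.exp(2j * np.pi * phase)
    p = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return p / pupil.sum()**2      # unaberrated peak normalized to 1

rng = np.random.default_rng(0)
true_coeffs = rng.normal(scale=0.5, size=len(zernikes))
measured = psf(true_coeffs) + rng.normal(scale=1e-2, size=(N, N))  # noisy "data"

# Starting guess: in the paper this comes from the CNN; here we perturb the
# truth to stand in for a reasonably accurate prediction.
cnn_guess = true_coeffs + rng.normal(scale=0.1, size=len(zernikes))

fit = least_squares(lambda c: (psf(c) - measured).ravel(), cnn_guess)
print("true coefficients:", np.round(true_coeffs, 3))
print("fitted coefficients:", np.round(fit.x, 3))
```

The point of the seeding step is that a starting guess close to the true coefficients lets the optimizer descend to the correct solution, whereas a poor guess tends to stagnate in a local minimum of the PSF-matching error metric.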
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Scott W. Paine and James R. Fienup, "Machine learning for avoiding stagnation in image-based wavefront sensing," Proc. SPIE 10980, Image Sensing Technologies: Materials, Devices, Systems, and Applications VI, 109800T (Presented at SPIE Defense + Commercial Sensing: 16 April 2019; Published: 13 May 2019); https://doi.org/10.1117/12.2519884.