Practical imaging systems form images with spatially varying blur, making it challenging to deblur them and recover critical scene features. To address such systems, we introduce SeidelNet, a deep-learning approach for spatially varying deblurring that learns to invert an imaging system's blurring process from a single calibration image. SeidelNet leverages the rotational symmetry present in most imaging systems by incorporating the primary Seidel aberration coefficients into the deblurring pipeline. We train and test SeidelNet on synthetically blurred images from the CARE fluorescence microscopy dataset and find that, despite having relatively few parameters, SeidelNet outperforms both analytical methods and a standard deblurring neural network.