KEYWORDS: Deblurring, Imaging systems, Analytic models, Monochromatic aberrations, Point spread functions, Education and training, Deep learning, Calibration, Systems modeling, Data modeling
Practical imaging systems form images with spatially-varying blur, making it challenging to deblur them and recover critical scene features. To address such systems, we introduce SeidelNet, a deep-learning approach for spatially-varying deblurring that learns to invert an imaging system's blurring process from a single calibration image. SeidelNet leverages the rotational symmetry present in most imaging systems by incorporating the primary Seidel aberration coefficients into the deblurring pipeline. We train and test SeidelNet on synthetically blurred images from the CARE fluorescence microscopy dataset and find that, despite having relatively few parameters, SeidelNet outperforms both analytical methods and a standard deblurring neural network.
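To make the notion of rotationally symmetric, spatially-varying blur concrete, here is a minimal sketch of how such blur can be simulated. This is not SeidelNet or the paper's forward model: it blends a few rotationally symmetric Gaussian PSFs (a simple stand-in for Seidel-aberration PSFs) according to each pixel's distance from the optical axis, so blur grows toward the image corners. The function names and sigma values are illustrative assumptions.

```python
import numpy as np

def gaussian_psf(sigma, radius=7):
    """Rotationally symmetric Gaussian PSF, normalized to unit sum."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def convolve_same(img, psf):
    """'Same'-size 2-D convolution via zero-padded FFTs."""
    h, w = img.shape
    kh, kw = psf.shape
    pad = (h + kh - 1, w + kw - 1)
    out = np.fft.irfft2(np.fft.rfft2(img, pad) * np.fft.rfft2(psf, pad), pad)
    return out[kh // 2:kh // 2 + h, kw // 2:kw // 2 + w]

def spatially_varying_blur(img, sigmas=(0.5, 1.5, 3.0)):
    """Blend several field-constant blurs by normalized radial distance
    from the image center (the optical axis), mimicking rotationally
    symmetric, field-dependent aberrations."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)
    r = r / r.max()  # 0 on-axis, 1 at the corners
    stack = np.stack([convolve_same(img, gaussian_psf(s)) for s in sigmas])
    # Piecewise-linear interpolation between the blurred copies along r.
    t = r * (len(sigmas) - 1)
    lo = np.floor(t).astype(int)
    hi = np.minimum(lo + 1, len(sigmas) - 1)
    frac = t - lo
    return (1 - frac) * stack[lo, yy, xx] + frac * stack[hi, yy, xx]
```

Applying this to an impulse at the center versus one near a corner produces a sharp on-axis PSF and a much broader off-axis PSF, which is exactly the spatial variation a shift-invariant deconvolution cannot model.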