Resolution enhancement with deblurring by pixel reassignment

Improving the spatial resolution of a fluorescence microscope has been an ongoing challenge in the imaging community. To address this challenge, a variety of approaches have been taken, ranging from instrumentation development to image postprocessing. An example of the latter is deconvolution, where images are numerically deblurred based on a knowledge of the microscope point spread function. However, deconvolution can easily lead to noise-amplification artifacts. Deblurring by postprocessing can also lead to negativities or fail to conserve local linearity between sample and image. We describe here a simple image deblurring algorithm based on pixel reassignment that inherently avoids such artifacts and can be applied to general microscope modalities and fluorophore types. Our algorithm helps distinguish nearby fluorophores, even when these are separated by distances smaller than the conventional resolution limit, helping facilitate, for example, the application of single-molecule localization microscopy in dense samples. We demonstrate the versatility and performance of our algorithm under a variety of imaging conditions.


Introduction
The spatial resolution of conventional fluorescence microscopy is limited to about half the emission wavelength because of diffraction.1 This limit can be surpassed using a variety of superresolution approaches. For example, techniques such as STED,2-6 SIM,7-13 or ISM14-17 generally require some form of scanning, either of a single excitation focus or of excitation patterns. Alternatively, scanning-free superresolution can be achieved by exploiting the blinking nature of certain fluorophores, as popularized initially by PALM18 and STORM.19 In this latter approach, individual molecules are localized one by one based on the premise, typically, that the most likely location of a molecule is at the centroid of its emission point spread function (PSF). A distinct advantage of single-molecule localization microscopy (SMLM) is that it can be implemented with a conventional camera-based fluorescence microscope, meaning that its barrier to entry is low and it can fully benefit from the most recent advances in camera technology (high quantum efficiency, low noise, massive pixel numbers, etc.). However, a key requirement of SMLM is that the imaged molecules are sparsely distributed, as ensured, for example, by photoactivation. This sparsity requirement implies, in turn, that several raw images, each sparse, are required to synthesize a final, less sparse, superresolved image. Efforts have been made to partially alleviate this sparsity constraint in SMLM, such as SOFI,20 3B analysis,21 DeconSTORM,22 and MUSICAL.23 Additional approaches not requiring molecular blinking have involved sparsifying the illumination, for example, with speckle illumination,24 or sparsifying the sample itself by tissue expansion (ExM).25,26 However, in all cases (excluding ExM) the requirement remains that several images, often thousands, are needed to produce an acceptable final image. As such, live-cell imaging is precluded, and sparsity-based superresolution approaches have been almost always limited to imaging fixed samples (though see Refs. 27-29).
In recent years, it has been noted that the sparsity constraint can be partially alleviated by pre-sharpening the raw images.
Example algorithms are SRRF30,31 and MSSR,32 which are freely available and easy to use. In contrast to DeconSTORM, these algorithms make only minimal assumptions about the emission PSF (radiality in the case of SRRF; convexity in the case of MSSR), and their application can substantially reduce the number of raw images required for SMLM. Indeed, when applied to denser images, only a few images or even a single raw image can produce results quite comparable to much more time-consuming superresolution approaches. However, these algorithms are not without drawbacks.30-32 Moreover, when applied to samples that are too dense, such as samples that exhibit features locally extended in 2D, those features tend to be hollowed out and only their edges or spines are preserved, meaning that SRRF and MSSR are most applicable to samples containing only point- or line-like features smaller than the PSF, but provide poor fidelity otherwise.
We present an alternative image-sharpening approach that is similar to SRRF and MSSR, but has the advantage of inherently preserving image intensities and being more generally applicable. Like SRRF and MSSR, our approach can be applied to a wide variety of fluorescence microscopes, where we make only minimal assumptions about the emission PSF, namely, that the PSF centroid is located at its peak. Also, like SRRF and MSSR, our approach can be applied to a sequence of raw images, allowing a temporal analysis of blinking or fluctuation statistics, or it can be applied to only a few or even a single image. Our approach is based on postprocessing by pixel reassignment, producing a deblurring effect similar to deconvolution but without the drawbacks associated with conventional deconvolution algorithms. We describe the basic principle of deblurring by pixel reassignment (DPR) and compare its performance to SRRF and MSSR both experimentally and with simulated data. Our DPR algorithm is made available as a MATLAB function.

Principle of DPR
Fundamental to any linear imaging technique is the concept of a PSF: point sources in a sample produce light distributions at the imaging plane that are blurred by a convolution operation with the PSF. Because the width of the PSF is finite, so too is the image resolution. In principle, if the PSF is known exactly, the blurring caused by convolution can be undone numerically by deconvolution; however, in practice, such deblurring is hampered by fundamental limitations. For one, the Fourier transform of the PSF (or optical transfer function, OTF) provides a spatial frequency support inherently limited by the finite size of the microscope pupil, meaning that spatial frequencies beyond this diffraction limit are identically zero and cannot be recovered by deconvolution, even in principle (unless aided by assumptions,33-35 such as sample analyticity, continuity, and sparsity). Another limitation, no less fundamental, is the problem of noise. In conventional fluorescence microscopy, the OTF tapers to very small values as it approaches the diffraction limit and falls below the shot-noise level generally well below this limit. As such, any attempt to amplify the high-frequency content of the OTF by deconvolution only ends up amplifying noise. This problem is particularly egregious with Wiener deconvolution,36 where noise amplification easily leads to unacceptable image mottling, and also with Richardson-Lucy (RL) deconvolution37,38 when implemented with too many iterations. Regularization is required to dampen such noise-induced artifacts, often to the point that, when applied to conventional fluorescence microscopy, deconvolution only marginally improves resolution if at all (note that deconvolution fares better with nonconventional microscopies, such as SIM or ISM, where the OTF tapering near the diffraction limit is less severe).
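The noise-amplification mechanism can be made concrete with a toy per-frequency calculation. The Python sketch below is our own illustration (not part of the authors' MATLAB package): it evaluates the Wiener gain H/(H² + ε) for a real-valued OTF, showing how the unregularized gain (ε = 0) blows up wherever the OTF is tiny, while a nonzero regularization term caps the gain at the cost of suppressing signal.

```python
def wiener_gain(otf_value, reg=0.0):
    """Per-frequency Wiener gain H / (H^2 + reg) for a real-valued OTF.

    With reg = 0 this reduces to plain inverse filtering (1 / H), which
    amplifies whatever noise sits at frequencies where H is tiny; a
    nonzero reg (a noise-to-signal estimate) caps the gain but also
    suppresses the signal at those frequencies.
    """
    return otf_value / (otf_value ** 2 + reg)

# Near the diffraction limit the OTF may be ~0.01 of its DC value:
unregularized = wiener_gain(0.01)          # 100x amplification of noise
regularized = wiener_gain(0.01, reg=0.01)  # gain capped below 1
```

The regularization value here is an arbitrary example; in practice it is tied to an estimate of the noise-to-signal power ratio.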
The purpose of DPR is to perform PSF sharpening similar to deconvolution, but in a manner less prone to noise-induced artifacts and without the requirement of a full model for the PSF. Unlike Wiener deconvolution, which is performed in Fourier space using a division operation, DPR operates entirely in real space with no division operation that can egregiously amplify noise. Unlike RL deconvolution, DPR is noniterative and can be performed in a single pass, without the need for an arbitrary iteration-termination criterion. DPR relies solely on pixel reassignment. As such, no negativities are possible in the final image reconstruction, as is often encountered, for example, with Wiener deconvolution or image sharpening with a Laplacian filter.39 Moreover, intensity levels are rigorously conserved, with no requirement of additional procedures to ensure local linearity, as needed, for example, with SRRF, MSSR, or even SOFI.40
The basic principle of DPR is schematically shown in Fig. 1 and described in more detail in Sec. 5. In brief, raw fluorescence images are first preconditioned by (1) performing global background subtraction, (2) normalizing to the overall maximum value in the image, and (3) re-mapping by interpolation to a coordinate system with a grid period of roughly one-eighth of the full width at half-maximum (FWHM) of the PSF. The purpose of such preconditioning is to standardize the raw images prior to the application of DPR. The actual sharpening of the image is then performed by pixel reassignment, where intensities (pixel values) at each grid location (pixel) are reassigned to neighboring locations according to the direction and magnitude of the locally normalized image gradient (or, equivalently, the log-image gradient), scaled by a gain parameter. Because pixels are generally reassigned to off-grid locations, their pixel values are distributed to the nearest on-grid reassigned locations as weighted by their proximity (see Fig. S1 in the Supplementary Material). Finally, an assurance is included that pixels can be displaced no farther than 1.25 times the PSF FWHM.
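A minimal one-dimensional sketch may help fix ideas. The Python below is our own illustration, not the authors' MATLAB implementation: the finite-difference log-gradient, the shift cap, and the linear weighting to the two nearest grid points mirror the description above, but the parameter names and the cap value are illustrative choices. Note that total intensity is conserved by construction and no negative values can arise.

```python
import math

def dpr_1d(image, gain=1.0, max_shift=2.0):
    """Toy 1-D sketch of deblurring by pixel reassignment (DPR).

    Each pixel's intensity is moved along the local log-image gradient,
    scaled by `gain` and capped at `max_shift` pixels, then distributed
    to the two nearest grid points by linear weighting.
    """
    n = len(image)
    out = [0.0] * n
    eps = 1e-12  # guards the division for near-zero pixels
    for i in range(n):
        # centred finite-difference gradient of log(image), edge-clamped
        left = image[max(i - 1, 0)]
        right = image[min(i + 1, n - 1)]
        grad = (right - left) / (2.0 * (image[i] + eps))
        shift = max(-max_shift, min(max_shift, gain * grad))
        pos = i + shift
        lo = int(math.floor(pos))
        w = pos - lo
        # distribute the pixel value to the two nearest on-grid locations
        if 0 <= lo < n:
            out[lo] += (1.0 - w) * image[i]
        if 0 <= lo + 1 < n:
            out[lo + 1] += w * image[i]
    return out
```

Applied to a sampled Gaussian peak, this sketch concentrates intensity toward the peak (sharpening it) while leaving the summed intensity unchanged.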
As a simple example, consider imaging a point source with a Gaussian PSF of root-mean-square (RMS) width σ. The gradient of the log-PSF is then linear. That is, the pixels are reassigned toward the PSF center exactly in proportion to their distance from the center, where the proportionality factor is selected by a gain parameter. The resultant sharpening of the PSF is substantial. For DPR gains 1 and 2, we find that the PSF widths are reduced by factors of 4 and 7, respectively (see Fig. S2 in the Supplementary Material).
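The linearity of the log-gradient follows directly from differentiating the Gaussian:

```latex
G(x) \propto \exp\!\left(-\frac{(x - x_0)^2}{2\sigma^2}\right)
\quad\Longrightarrow\quad
\frac{d}{dx}\,\log G(x) = -\frac{x - x_0}{\sigma^2},
```

so each pixel is reassigned toward the center $x_0$ in strict proportion to its distance from it, with $\sigma^{-2}$ setting the natural scale of the proportionality.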
Conventionally, the resolution of a microscope is defined by its capacity to distinguish two point sources. More specifically, it is defined by the minimum separation distance required for two points to be resolved based on a predefined criterion, such as the Sparrow or Rayleigh criterion. We again consider the example of a Gaussian PSF, but now with two point sources. According to the Sparrow and Rayleigh criteria, the two points would have to be separated by 2.2σ and 2.8σ, respectively, to be resolvable. With the application of DPR, we find that this separation distance can be reduced. A clear dip between the two points by a factor of 0.74 is observed at a separation distance of 1.66σ for DPR gain 1, and at an even smaller separation of 1.43σ for DPR gain 2. Indeed, we find that the two points remain resolvable down to separation distances of 1.36σ and 1.20σ for gains 1 and 2, respectively (Sparrow criterion), corresponding to resolution enhancements of 0.62 and 0.55 relative to the Sparrow limit, or, equivalently, 0.59 and 0.51 relative to the Rayleigh limit [see Fig. S3(b) in the Supplementary Material]. It should be noted, however, that this enhanced capacity to resolve two nearby points is not entirely error-free. For example, when DPR is applied to points separated by less than about 1.9σ, the points begin to appear somewhat closer to each other than they actually are, with a relative error that increases with gain [see Fig. S3(c) in the Supplementary Material]. That is, the choice of using DPR gain 1 or 2 (or other gain parameters) should be made at the user's discretion, bearing in mind this trade-off between resolution capacity and accuracy.
Similar resolution enhancement results are obtained when DPR is applied to two line objects. Here, we use raw data acquired by an Airyscan microscope obtained from Refs. 32 and 41 [Fig. 2(b)]. In the raw image, lines separated by 150 nm cannot be resolved, whereas after the application of DPR with gains 1 and 2, they can be resolved at separations of 90 and 30 nm, respectively. The intensity profiles across the full set of line pairs for raw, DPR gain 1, and DPR gain 2 images are shown in Fig. S4(a) in the Supplementary Material. DPR images of the same sample acquired by conventional confocal microscopy32,41 are shown in Fig. S4(b) in the Supplementary Material. In this case, lines separated by 210 nm cannot be resolved in the raw data set, whereas after application of DPR with gains 1 and 2, they can be resolved at separations of 120 and 90 nm, respectively.
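The dip-type criterion used in the two-point analysis above is easy to reproduce numerically. The Python sketch below is our own illustration (grid size and span are arbitrary choices): it builds the sum of two unit-amplitude Gaussians and reports the ratio of the midpoint intensity to the peak intensity, where a ratio below 1 indicates a resolvable dip.

```python
import math

def two_point_profile(sep, sigma, n=201, span=10.0):
    """Intensity profile of two unit Gaussians of width sigma separated by sep."""
    xs = [span * (i / (n - 1) - 0.5) for i in range(n)]
    profile = [math.exp(-((x - sep / 2) ** 2) / (2 * sigma ** 2))
               + math.exp(-((x + sep / 2) ** 2) / (2 * sigma ** 2)) for x in xs]
    return xs, profile

def midpoint_dip(sep, sigma):
    """Ratio of the intensity midway between the two points to the peak value.

    A ratio of 1 means the profile has no dip (unresolved); a ratio
    clearly below 1 means the two points are resolvable.
    """
    _, profile = two_point_profile(sep, sigma)
    return profile[len(profile) // 2] / max(profile)
```

For example, at the Rayleigh-type separation of 2.8σ the midpoint dips to roughly 0.74 of the peak, while at 1.0σ the profile is single-peaked and no dip exists.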
To gain an appreciation of the effect of noise on DPR, we again simulated images of two point objects and two line objects, this time separated by 160 nm and imaged with a Gaussian PSF of RMS width 84.93 nm. To these images we added shot noise (Poissonian) and additive camera readout noise (Gaussian) of different strengths, leading to SNR values of 5.0, 7.7, 14.1, and 20.3 (see Fig. S5 in the Supplementary Material). DPR gain 1 was applied to a stack of images, each with a different noise realization. The resulting resolution-enhanced images were then averaged over different numbers of frames (10, 20, and 40). Manifestly, the final image quality improves with increasing SNR and/or increasing numbers of frames averaged, as expected. Accordingly, the error in the measured separation between the two point objects and the two line objects, as inferred from the separation between their peaks in the images, also decreases [see Fig. S5(c) in the Supplementary Material]. The images of line objects were less sensitive to noise, as evidenced by their relatively stable separation errors across various SNRs, but they exhibited somewhat higher separation errors compared to the images of the two point objects. These results are qualitative only. Nevertheless, they provide a rough indication of the increase in enhancement fidelity with SNR.
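The noise model used in such simulations can be sketched as follows (our Python illustration, not the authors' code; the Gaussian approximation to Poisson shot noise is adequate at the photon counts considered here, and the read-noise level is an arbitrary example value):

```python
import math
import random

def add_detection_noise(image, read_sigma=2.0, seed=0):
    """Add shot noise and camera read noise to an image of expected photon counts.

    Shot noise is approximated as Gaussian with variance equal to the mean
    (a good approximation above ~10 photons per pixel); read noise is
    additive zero-mean Gaussian with standard deviation read_sigma.
    """
    rng = random.Random(seed)
    return [rng.gauss(mu, math.sqrt(max(mu, 0.0))) + rng.gauss(0.0, read_sigma)
            for mu in image]
```

Generating a stack of frames with different seeds yields independent noise realizations, as used for the frame-averaging comparison above.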

DPR Applied to Single-Molecule Localization Imaging
To demonstrate the resolution enhancement capacity of DPR with experimental data, we applied it to SMLM images.We used raw images made publicly available through the SMLM Challenge 2016, 42 as these provide a convenient standardization benchmark.The experimental data consisted of a 4000-frame sequence of STORM images of microtubules labeled with Alexa567.
We applied DPR separately to each frame. Similar to SRRF and MSSR, we included in our DPR algorithm the possibility of temporal analysis of DPR-enhanced images. Here, the temporal analysis is simple and consists either of averaging the DPR-enhanced images in time or of calculating their variance in time (as is done, for example, with SOFI imaging of order 2). The results are shown in Figs. 3(a) and 3(b); gain 2 leads to greater resolution enhancement than gain 1. Moreover, as expected, the temporal variance analysis leads to enhanced image contrast, since it preferentially preserves fluctuating signals while removing nonfluctuating backgrounds. However, it should be noted that temporal variance analysis, as opposed to temporal averaging, no longer preserves a linearity between sample and image strengths. Interestingly, when a temporal average was applied to the raw images prior to the application of DPR [i.e., when the order of DPR and averaging was reversed; Figs. 3(c) and 3(d)], DPR continued to provide resolution enhancement, but not as effectively as when DPR was applied separately to each raw frame. The reason for this is clear: DPR relies on the presence of spatial structure in the image, which is largely washed out by averaging. In other words, similar to SRRF and MSSR, DPR is most effective when imaging sparse samples, as indeed is a requirement for SMLM.
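The temporal analysis itself is straightforward to sketch. The Python below is our illustration of per-pixel mean and variance over a frame stack (frame count and pixel values are arbitrary):

```python
def temporal_stats(stack):
    """Per-pixel temporal mean and variance over a stack of frames.

    Each frame is a flat list of pixel values.  Variance analysis
    preferentially keeps fluctuating (blinking) signal and suppresses a
    constant background, analogous to second-order SOFI; unlike the
    mean, it is no longer linear in the sample brightness.
    """
    n = len(stack)
    means, variances = [], []
    for pixel_series in zip(*stack):
        m = sum(pixel_series) / n
        means.append(m)
        variances.append(sum((v - m) ** 2 for v in pixel_series) / n)
    return means, variances
```

A constant-background pixel thus maps to zero in the variance image, while a blinking fluorophore survives.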

DPR Maintains Imaging Fidelity
DPR reassigns pixels according to their gradients. If the gradients are zero, the pixels remain in their initial positions. That is, when imaging structures larger than the PSF, which present gradients only around their edges but not within their interiors, DPR sharpens only the edges while leaving the structure interiors unchanged. This differs, for example, from SRRF or MSSR, which erroneously erode or hollow out the interiors. DPR can thus be applied to more general imaging scenarios where samples contain both small and large structures. This is apparent, for example, when imaging a Siemens star target, as shown in Fig. S6 in the Supplementary Material, where neither SRRF nor MSSR accurately represents the widening of the star spokes. For example, when we applied NanoJ-SQUIRREL43 to the DPR-enhanced, MSSR-enhanced, and SRRF-enhanced Siemens star target images, we found resolution-scaled errors43 (RSEs) of 53.5, 95.4, and 102.6, respectively, and resolution-scaled Pearson coefficients43 (RSPs) of 0.92, 0.54, and 0.75, respectively. This improved fidelity is apparent, also, in other imaging scenarios. In Fig. 4, we show results obtained from the image of Alexa488-labeled bovine pulmonary artery endothelial (BPAE) cells (ThermoFisher, FluoCells) acquired with a conventional laser scanning confocal microscope (Olympus FLUOVIEW FV3000; objective, 40× air, 0.9 NA; confocal pinhole set to 0.23 Airy units; PSF FWHM, 256.4 nm; pixel size, 73.3 nm). While the intensity profiles along a single F-actin filament (red segments in Fig. 4) are sharpened roughly equally by SRRF, MSSR, and DPR, differences begin to appear for intensity profiles spanning nearby F-actin filaments (yellow segments in Fig. 4) or for the imaging of larger structures (e.g., the cluster in the blue box in Fig. 4).
A difficulty when evaluating image fidelity is the need for a ground truth as a reference. To serve as a surrogate ground truth, we obtained images of BPAE cells with a state-of-the-art Nikon CSU-W1 spinning disk microscope equipped with a superresolution module (SoRa) based on optical pixel reassignment44 (objective, 60× oil, 1.42 NA; PSF FWHM, 162.5 nm in the conventional configuration and 114.9 nm in the SoRa configuration; pixel size, 108.3 nm in the conventional configuration and 27.1 nm in the SoRa configuration), to which we additionally applied 20 iterations of RL deconvolution using the software supplied by the manufacturer. The same BPAE cells were also imaged at conventional (2× lower) resolution without the SoRa module. We then applied DPR, SRRF, and MSSR to the conventional resolution image for comparison (Fig. 5).
As shown in the zoomed-in regions in Fig. 5(b), conventional confocal microscopy is not able to resolve two closely separated filaments, even after the application of RL deconvolution. In contrast, DPR is easily able to resolve the filaments at both gains 1 and 2, providing images similar to the SoRa superresolution ground-truth images. SRRF and MSSR also sharpened the filaments, but in the case of SRRF, the filaments remained difficult to resolve, while in both cases there was significant intensity dropout where the filaments disappeared altogether. When we applied NanoJ-SQUIRREL to compare the image fidelities of DPR, SRRF, and MSSR, we found the RSEs to be 11.35

DPR Applied to Engineered Cardiac Tissue Imaging
To demonstrate the ability of DPR to enhance image information, we performed structural imaging of engineered cardiac micro-bundles derived from human-induced pluripotent stem cells (hiPSCs), which have recently gained interest as model systems to study human cardiomyocytes (CMs).45,46 We first imaged a monolayer of green fluorescent protein (GFP)-labeled hiPSC-CMs with a confocal microscope of sufficient resolution to reveal the z-discs of sarcomeres. This image serves as a ground-truth reference. We then simulated a series of 45 conventional wide-field images by numerically convolving the ground-truth image with a low-resolution wide-field PSF and adding simulated detection noise (shot noise and additive camera noise). DPR was applied to the conventional images, leading to the deblurred image shown in Fig. 6(a). Manifestly, this deblurred image much more closely resembles the ground-truth image, as confirmed by the structural similarity index (SSIM):47 the SSIM between the conventional and ground-truth images was 0.4, whereas between the DPR and ground-truth images it was enhanced to 0.6. This increased fidelity is further validated by the pixel-wise error maps shown in Fig. 6(a), and by the line profiles through a sarcomere (cyan rectangle) showing that the application of DPR leads to better resolution of the z-discs, with the number and location of the z-discs being consistent with the ground-truth image.
We also performed imaging of hiPSC cardiomyocyte tissue organoids (hiPSC-CMTs). Such imaging is more challenging because of the increased thickness of the organoids (about 400 μm), which led to increased background and scattering-induced blurring, and also because of their irregular shapes, which led to aberrations. Again, we performed confocal imaging, this time at both low and high resolutions [Figs. 6(b) and 6(c)], with lateral resolutions measured to be 1.5 and 0.5 μm, respectively, based on the FWHM of subdiffraction-sized fluorescent beads. As expected, low-resolution imaging failed to clearly resolve the z-discs (the separation between z-discs is in the range of 1.8 to 2.0 μm, depending on sarcomere maturity). However, when DPR was applied to the low-resolution image, the z-discs became resolvable [Fig. 6(b)]. DPR was further applied to the high-resolution image, resulting in an even greater enhancement of resolution [Fig. 6(c)].

DPR Applied to Volumetric Zebrafish Imaging
In recent years, there has been a push to develop microscopes capable of imaging populations of cells within extended volumes at high spatiotemporal resolution. One such microscope is based on confocal imaging with multiple axially distributed pinholes, enabling simultaneous multiplane imaging.48,49 However, in its most simple implementation, multi-z confocal microscopy is based on low-NA illumination and provides only limited spatial resolution, roughly 2.6 μm lateral and 15 μm axial. While such resolution is adequate for distinguishing neuronal somata in animals such as mice and zebrafish, it is inadequate for distinguishing, for example, nearby dendritic processes. To demonstrate the applicability of DPR to different types of microscopes, we applied it here to zebrafish images acquired with a multi-z confocal microscope essentially identical to that described in Ref. 48. When DPR is applied, the axons in both the brain and tail regions become deblurred and clearly resolvable, enabling a cleaner separation between image planes [Figs. 7(a) and 7(d)]. We note that these results are more qualitative than quantitative, since no ground truth was available for reference. Nevertheless, we did compare our DPR results with raw images obtained by a different multi-z system equipped with a diffractive optical element in the excitation path to enable higher-resolution imaging (0.50 μm lateral and 3.6 μm axial).50 A comparison is shown in Fig. 7(d) (different fish, different tail regions; see also Fig. S9 in the Supplementary Material), illustrating the qualitative similarity between low-resolution multi-z images enhanced with DPR and directly acquired higher-resolution images.

Discussion
The purpose of DPR is to help counteract the blurring induced by the PSF of a fluorescence microscope. The underlying assumption of DPR is that fluorescent sources are located by their associated PSF centroids, which are found by hill climbing directed by local intensity gradients.51,52 When applied to individual fluorescence images, DPR helps distinguish nearby sources, even when these are separated by distances smaller than the Sparrow or Rayleigh limit. In other words, DPR can provide resolution enhancement even in densely labeled samples. Such resolution enhancement is akin to image sharpening, with the advantage that DPR is performed in real space rather than Fourier space, and that local intensities are inherently preserved and negativities are impossible (Table S1 in the Supplementary Material).
To define what is meant by the term local here, we can directly compare the intensities of raw and DPR-enhanced images. When both images are spatially filtered by average blurring, the differences between their intensities become increasingly small with increasing kernel size of the average filter (see Fig. S10 in the Supplementary Material). Indeed, the relative differences, as characterized by the standard deviations of the differences, drop to <7.7% when the kernel size is larger than about 4.5 times the PSF FWHM (corresponding to about 36 subpixels). In other words, on scales larger than 4.5 PSF widths, the raw and DPR images are essentially identical. It is only on scales smaller than 4.5 PSF widths that deviations between the two images begin to appear, owing to the image sharpening induced by DPR. The image sharpening, which is inherently local, thus preserves intensities beyond this scale. If, in addition, the sample can be regarded as sparse, either by assumption or because of fluorophore intensity fluctuation statistics (imposed or passive), the enhanced capacity of DPR to distinguish fluorophores can help reduce the number of images required for SMLM-type superresolution.
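This coarse-scale comparison can be sketched as follows (our 1-D Python illustration; the kernel sizes and the toy "reassigned" profile are arbitrary). Because pixel reassignment only moves intensity locally, box-blurred versions of the raw and sharpened profiles converge as the kernel grows:

```python
import math

def box_blur(img, k):
    """1-D moving average with half-width k (windows clamped at the edges)."""
    out = []
    for i in range(len(img)):
        lo, hi = max(0, i - k), min(len(img), i + k + 1)
        out.append(sum(img[lo:hi]) / (hi - lo))
    return out

def blurred_rms_difference(img_a, img_b, k):
    """RMS difference between the two box-blurred profiles, relative to the
    mean blurred level -- the kind of check described in the text."""
    a, b = box_blur(img_a, k), box_blur(img_b, k)
    rms = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
    return rms / (sum(a) / len(a))
```

For two profiles with equal total intensity, the relative difference is large at kernel size zero (the sharpening is visible) and shrinks as the kernel exceeds the scale over which intensity was moved.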
Of course, no deblurring strategy is immune to noise, and the same is true for DPR.However, DPR presents the advantage that noise cannot be amplified as it can be, for example, with Wiener or RL deconvolution, both of which require some form of noise regularization (in the case of RL, iteration termination is equivalent to regularization).DPR requires neither regularization nor even an exact model of the PSF.As such, DPR resembles SRRF and MSSR, but with the advantage of simpler implementation and more general applicability to samples possessing extended features.
Finally, our DPR algorithm is made available here as a MATLAB function compatible with either Windows or macOS. As an example, processing a stack of 128 × 128 × 100 images obtained with a microscope whose PSF FWHM is 2 pixels (upscaled to 533 × 533 × 100) takes 2.32 s when run on an Intel i7-9800X computer equipped with an NVIDIA GeForce GTX 970 GPU. A similar run time is found when using a MacBook Pro with an Apple M1 Pro (2.08 s with MATLAB).
Because of its ease of use, speed, and versatility, we believe DPR can be of general utility to the bio-imaging community.
The PSF FWHM was estimated from the conventional Rayleigh resolution limit, given as55 δ = 0.61 λ_em / NA_obj, where λ_em is the emission wavelength and NA_obj is the numerical aperture of the objective.
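As a worked example of this estimate (our Python sketch; the wavelength and NA values are illustrative, e.g., green emission through the 0.9 NA air objective mentioned above):

```python
def rayleigh_limit_nm(wavelength_nm, na):
    """Conventional Rayleigh resolution limit: delta = 0.61 * lambda_em / NA."""
    return 0.61 * wavelength_nm / na

# e.g., 520 nm emission with a 0.9 NA objective gives roughly 352 nm:
delta = rayleigh_limit_nm(520.0, 0.9)
```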

Simulated Data
Simulated wide-field images of two point objects and two line objects separated by 160 nm were used to evaluate the separation accuracy of DPR, using a Gaussian PSF of standard deviation 84.93 nm. The images were rendered on a 40 nm grid. Poisson noise and Gaussian noise were added to simulate different SNRs. A temporal stack of 45 image frames with independent noise realizations was generated for each SNR. Simulated wide-field images of the sarcomere ground truth were produced based on a Gaussian PSF model of standard deviation 0.85 μm. Poisson noise and Gaussian noise were added to the images. A temporal stack of 45 image frames was then generated.

Engineered Heart Tissue Preparation
hiPSCs from the PGP1 parent line (derived from the PGP1 donor of the Personal Genome Project) with an endogenous GFP tag on the sarcomere gene TTN56 were maintained in mTeSR1 (StemCell) on Matrigel (Fisher) mixed 1:100 in DMEM/F-12 (Fisher) and split using accutase (Fisher) at 60% to 90% confluence. hiPSCs were differentiated into monolayer hiPSC-CMs via the Wnt signaling pathway.57 Once the cells were beating, hiPSC-CMs were purified using RPMI no-glucose media (Fisher) with 4 mmol/L sodium DL-lactate solution (Sigma) for 2 to 5 days. Following selection, the cells were replated and maintained in RPMI with 1:50 B-27 supplement (Fisher) on 10 μg/mL fibronectin (Fisher)-coated plates until day 30.

Zebrafish Preparation
All procedures were approved by the Institutional Animal Care and Use Committee (IACUC) at Boston University, and practices were consistent with the Guide for the Care and Use of Laboratory Animals and the Animal Welfare Act. For the in vivo structural imaging of zebrafish, transgenic zebrafish embryos (isl2b:Gal4 UAS:Dendra) expressing GFP were maintained in filtered aquarium water at 28.5°C on a 14 h:10 h light-dark cycle. Zebrafish larvae at 9 days postfertilization (dpf) were used for imaging. The larvae were embedded in 5% low-melting-point agarose (Sigma) in a 55 mm petri dish. After agarose solidification, the petri dish was filled with filtered water from the aquarium.

hiPSC-CMTs Imaging
The hiPSC-CMTs were imaged with a custom confocal microscope, essentially identical to that described in Ref. 48, but with adjustable illumination NA (0.2 NA with a 300 μm confocal pinhole for low-resolution imaging and 0.8 NA with a 150 μm confocal pinhole for high-resolution imaging) and single-plane detection. The measured PSF FWHMs were 1.5 μm for low-resolution imaging and 0.5 μm for high-resolution imaging using the 16× objective (Nikon CFI LWD Plan Fluorite Water 16×, 0.8 NA). Pixel sizes were 0.3 μm.

DPR, MSSR, and SRRF Parameters
The parameters used for DPR, MSSR, and SRRF for our results can be found in Table S2 in the Supplementary Material.

Error Map Calculation
Error maps are calculated with a custom script written in MATLAB R2021b. Pixel-wise differences between the images and the ground truth are directly measured by subtraction and saved as an error map.

SSIM Calculation
The SSIM calculation is realized with the SSIM function in MATLAB R2021b. Exponents for the luminance, contrast, and structural terms are set to [1, 1, 1] (default values). The standard deviation of the isotropic Gaussian weighting function is set to 1.5 (default value).

Code, Data, and Materials Availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. The DPR MATLAB package is available on GitHub (https://github.com/biomicroscopy/DPR-Project).

Fig. 1
Fig. 1 Principle of DPR. (a) From left to right: simulations of Gaussian PSF intensity and gradient maps (amplitude and direction), pixel reassignments, and the deblurred PSF image after application of DPR. (b) DPR workflow.

Fig. 3
Fig. 3 SMLM Challenge 2016. (a) DPR applied to each frame in the raw image stack, followed by temporal mean or variance. (i) Raw image stack, (ii) mean of raw images, (iii) DPR gain 1 followed by mean, (iv) DPR gain 2 followed by mean, (v) DPR gain 1 followed by variance, and (vi) DPR gain 2 followed by variance. Scale bar, 650 nm. (b) Expanded regions of interest (ROIs) indicated by the green square in (a). Bottom left, intensity distribution along the red line in the ROIs. Bottom right, intensity distribution along the green line in the ROIs. Scale bar, 200 nm. (c) Image mean followed by DPR: (vii) gain 1, (viii) gain 2. Scale bar, 500 nm. (d) Expanded ROIs indicated by the yellow squares in (c) and in (ii), (iii), and (iv) of (a). Right, intensity distribution along the cyan line in the ROIs. Scale bar, 150 nm. PSF FWHM, 2.7 pixels. Local-minimum filter radius, 5 pixels.

Fig. 7
Fig. 7 Multi-z confocal zebrafish imaging. (a) In vivo raw and DPR-enhanced (gain 1) multiplane images of the brain region of a zebrafish at 9 dpf. Left, four image planes. Right, merged, with colors corresponding to depth. ROIs indicated by the red and yellow rectangles in (a) are shown in Fig. S8 in the Supplementary Material. Scale bar, 25 μm. (b) Raw and DPR-enhanced multiplane images of the zebrafish tail region. Scale bar, 25 μm. (c) ROI indicated by the cyan rectangle in (b), and intensity profile along the cyan dashed line. The ROI indicated by the magenta rectangle in (b) is shown in Fig. S8 in the Supplementary Material. Scale bar, 10 μm. (d) Merged multiplane images of the tail region of a zebrafish. Left, raw low-resolution multiplane image. Middle, DPR-enhanced low-resolution multiplane image. Right, raw high-resolution multiplane image (different fish and tail regions). PSF FWHM, 5 pixels; local-minimum filter radius, 25 pixels. Scale bar, 20 μm. Planes 1 to 4, deepest to shallowest. Interplane separation, 20 μm.