Accelerating 3D single-molecule localization microscopy using blind sparse inpainting

Abstract. Significance: Single-molecule localization-based super-resolution microscopy has enabled the imaging of microscopic objects beyond the diffraction limit. However, the technique is limited by the requirement to image an extremely large number of frames of a biological sample to generate a super-resolution image, which prolongs the acquisition time. Processing such a large image sequence also lengthens the data processing time. Accelerating image acquisition and processing in single-molecule localization microscopy (SMLM) has therefore been of perennial interest. Aim: To accelerate three-dimensional (3D) SMLM imaging by leveraging a computational approach without compromising resolution. Approach: We used blind sparse inpainting to reconstruct high-density 3D images from low-density ones. The low-density images are generated using far fewer frames than usually needed, thus requiring a shorter acquisition and processing time. Our technique therefore accelerates 3D SMLM without changing the existing standard SMLM hardware system or labeling protocol. Results: The performance of blind sparse inpainting was evaluated on both simulated and experimental datasets. Superior reconstructions of 3D SMLM images were achieved using up to 10-fold fewer frames in simulation and up to 50-fold fewer frames in experimental data. Conclusions: We demonstrate the feasibility of fast 3D SMLM imaging using a computational approach to reduce the number of acquired frames. We anticipate our technique will enable future real-time live-cell 3D imaging to investigate complex nanoscopic biological structures and their functions.

1 Introduction
Single-molecule localization microscopy (SMLM) 3,4 and other variants [5][6][7][8] have extended the imaging resolution of conventional optical fluorescence microscopy beyond the diffraction limit (∼250 nm). In these methods, a random and sparse subset of fluorophores in the sample is imaged in each diffraction-limited frame, and a large number of such frames are obtained sequentially. The detected individual fluorophores in each frame are then precisely localized, and finally, all the localization positions from these frames are assembled to generate the super-resolution image. Three-dimensional (3D) SMLM 9-16 requires additional axial (z-axis) information, which is obtained by using a z-dependent point spread function (PSF). 17 Optically engineered PSFs such as astigmatic, 9 double-helix, 18 biplane, 19 interferometric, 20 airy-beam, 21 and tetrapod 12 are commonly used in existing 3D SMLM imaging to encode the axial information of blinking fluorescent molecules. PSF shapes are generally engineered by introducing optical elements such as a cylindrical lens, 9 a phase mask, 22 or a deformable mirror 15 into the imaging pathway of the microscope. In both 2D and 3D SMLM imaging, achieving sufficiently dense localizations to reveal a biological sample's details requires a large number of sequential diffraction-limited frames (typically >10⁴), implying a long acquisition time. Such slow imaging makes potential live-cell and high-throughput imaging more challenging. Practically, the acquisition of such long frame sequences also degrades image quality due to photobleaching of the dyes. Furthermore, processing such a large number of image frames requires considerable processing time. 23 Therefore, a faster SMLM technique is always desirable.
Several approaches have been reported to accelerate the imaging and data processing of SMLM. One is to increase the fluorophore blinking kinetics using a high-power laser and to use a high-speed camera (with a higher frame rate) to capture the fast-blinking single-molecule events. 10,24 Huang et al. 25 achieved video-rate SMLM using scientific complementary metal-oxide-semiconductor (sCMOS) cameras. These acceleration methods provide faster imaging at the cost of image-quality degradation. 10,26 Specifically, high excitation laser intensity and fast detection decrease the photon count per localization, degrading localization precision and resolution. 26 Another approach is to increase the number of active fluorophores per frame. 27,28 However, high activation density causes fluorescent spots to overlap in the diffraction-limited images, making it more difficult to precisely localize the fluorophores. 28 Despite this challenge, most of the existing techniques 29-31 use a higher molecular density per frame to increase the imaging speed. Recently, deep learning has been used to accelerate SMLM methods. Typically, deep learning is implemented to precisely localize the 2D or 3D positions (or to separate colors in the case of multicolor imaging) of blinking single-molecule PSFs in each frame. [32][33][34][35][36][37][38][39] These methods accelerate the data processing of SMLM but still require a large number of frames. Further, deep learning has been leveraged by Ouyang et al. 40 to accelerate 2D SMLM and by Gaire et al. 41 to accelerate 2D multicolor spectroscopic SMLM using very few frames. However, a limitation of deep learning methods is that they require a large quantity of training data with similar structures.
Here, we present a computational approach to accelerate 3D SMLM imaging. The experimental setup, data acquisition procedure, and localization methods remain the same as those of standard 3D SMLM, except that very few diffraction-limited frames are acquired, which reduces the acquisition time and ultimately accelerates imaging. The data processing time is also reduced accordingly. With the standard 3D SMLM method, the final image rendered from very few frames is sparse and provides too little information to extract the biological sample's fine structures. Our approach can recover those unresolved structures in the sparse, low-emitter-density image and reconstruct a high-quality 3D super-resolution image. High-density estimation for 2D SMLM imaging using blind sparse inpainting has been reported in detail previously. 42 Here, we extend it to accelerate 3D SMLM imaging by introducing a sparsifying transform appropriate for the 3D structure. In our previous work, high-density 2D SMLM images were reconstructed by solving an l₁-minimization problem using the alternating direction method of multipliers (ADMM) 43 with the curvelet transform 44 as the sparsifying transform. Here, we also use ADMM but combine the curvelet transform with an additional total variation (TV) regularization along the depth direction. We confirm the efficacy of the proposed algorithm using both simulated and experimental 3D SMLM datasets. The preliminary results of this article were reported in Ref. 45. This expanded article includes additional simulation, experimental, and quantitative evaluation results and their analysis.

Reconstruction Approach
In standard 3D SMLM, a large number of diffraction-limited frames (suppose N frames) are imaged with a total acquisition time of NΔt, where Δt is the time to acquire a single frame (typically 10 to 30 ms), and processed to produce a high-density 3D super-resolution image. A smaller number of frames (suppose Q frames, with Q ≪ N) acquired in a much shorter time QΔt will generate a low-density 3D image (Fig. 1). Our goal is to reduce the acquisition time by reconstructing the high-density 3D super-resolution image from a low-density 3D image acquired using fewer frames, which is sparse and incomplete. For reconstruction, we need to restore the unknown fluorophore localization positions based on the available fluorophore localization points in the low-density 3D image. Thus, the restoration problem can be formulated as an image inpainting task that aims to restore the missing regions of the corrupted image and reconstruct the original image.
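As a back-of-the-envelope illustration of the acquisition-time savings (illustrative numbers only, using the typical exposure times quoted above):

```python
dt_ms = 15              # single-frame exposure time in ms (typical range: 10 to 30 ms)
n_frames = 10_000       # frames for a standard high-density acquisition
q_frames = 1_000        # 10-fold fewer frames for the low-density input

t_full = n_frames * dt_ms / 1000   # 150.0 s for the full acquisition (N * dt)
t_fast = q_frames * dt_ms / 1000   # 15.0 s for the accelerated acquisition (Q * dt)
speedup = t_full / t_fast          # 10.0-fold reduction in acquisition time
```

The same Q/N ratio carries over to the localization-processing time, since Q-fold fewer frames must be fitted.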
Mathematically, the relationship between the vectorized low-density 3D image x_Q, rendered from the localization emitters acquired in Q frames, and the desired high-density 3D image vector x can be modeled as

x_Q = P_Q x,  (1)

where P_Q is a diagonal matrix whose elements are 1 for acquired locations and 0 for missing locations. To solve Eq. (1), we first need to estimate the unknown measurement matrix P_Q (hence "blind") based on the low-density 3D image and then reconstruct x from x_Q. The estimation of P_Q is challenging in the sense that a zero-valued pixel in x_Q can be background without any fluorophore or a location with a fluorophore that was not detected in the acquired Q frames. The locations of fluorescent molecules captured in Q frames are determined by performing hard-thresholding on the low-density image x_Q. After P_Q is obtained, x can be estimated from x_Q, which is still nontrivial because of the infinitely many possible solutions. Prior information has to be exploited as a constraint to obtain a unique reconstruction with good fidelity to the true structures. Here, we reconstruct the desired high-density 3D image by employing sparseness as an image prior. Specifically, the high-density 3D image is reconstructed by solving the following unconstrained minimization problem

x̂ = arg min_x (λ₁/2)‖P_Q x − x_Q‖₂² + ‖Φx‖₁ + λ₂ TV(x),  (2)

where ‖·‖₁ and ‖·‖₂ represent the l₁ and l₂ norms, respectively, λ₁ and λ₂ are the weight parameter and regularization parameter, respectively, Φ represents a sparsifying transform, and TV(·) is a total variation regularization. The first term enforces data consistency, the second term enforces sparsity in the transform domain, and the third term promotes the piecewise smoothness of the image.

Fig. 1 Comparison of the blind sparse inpainting method with the existing 3D SMLM method. The 3D super-resolution image in standard SMLM is obtained by imaging and processing a large number of diffraction-limited single-molecule frames (suppose N frames). The proposed method uses very few diffraction-limited frames (suppose Q frames, with Q ≪ N), which results in a low-density 3D image. The high-density 3D image is then reconstructed using blind sparse inpainting.

The choice of sparsifying transform depends on the image content and plays a crucial role in image reconstruction. Many biological structures, such as microtubules, are of an anisotropic, curvelike nature. Therefore, we use the curvelet transform as the sparsifying transform in the lateral plane. It provides sparsity along with excellent directional sensitivity and anisotropy. Thus, the curvelet transform can efficiently characterize anisotropic features such as curves, edges, and arcs. 46 The discrete curvelet transform was implemented using CurveLab 47 with the curvelets-via-wrapping approach. It includes four steps: 2D fast Fourier transform (FFT), windowing, frequency wrapping, and 2D inverse FFT. 44 TV regularization is used in the depth direction only. TV is defined as TV(x) = ‖Gx‖₁, where G is the first-order finite-difference operator along the depth direction and ‖·‖₁ denotes the l₁ norm. More detail about the optimization algorithm is given in the next section.
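The blind estimation of P_Q by hard-thresholding, together with the forward model of Eq. (1), can be sketched as follows (a minimal NumPy illustration; the threshold value is a hypothetical choice, not one from the paper):

```python
import numpy as np

def estimate_sampling_mask(x_q, tau=0.05):
    """Blind estimate of the diagonal of P_Q by hard-thresholding the
    low-density image: pixels above tau are treated as acquired (1),
    all others as missing (0). tau is a hypothetical threshold."""
    return (x_q > tau).astype(float)

# Forward model of Eq. (1): the low-density image keeps only acquired pixels.
x = np.array([0.0, 0.8, 0.3, 0.0, 0.6])   # toy vectorized high-density image
p_q = estimate_sampling_mask(x)           # -> [0., 1., 1., 0., 1.]
x_q = p_q * x                             # element-wise, since P_Q is diagonal
```

Note the ambiguity the text describes: a zero in `x_q` cannot distinguish "no fluorophore" from "fluorophore not captured in the Q frames", which is why the mask is only an estimate.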

Optimization Algorithm
The convex optimization problem of Eq. (2) is a standard l₁-minimization problem. It can be solved using efficient approaches such as variable splitting and the augmented Lagrangian method (ALM). 48,49 In this paper, we use a specific variant of ALM called ADMM. 43 We first introduce the auxiliary variables d = Φx and e = Gx in Eq. (2) to decouple the l₁ terms from the other parts and obtain the following equivalent form

min_{x,d,e} (λ₁/2)‖P_Q x − x_Q‖₂² + ‖d‖₁ + λ₂‖e‖₁  subject to  d = Φx, e = Gx.  (3)

The scaled form of the augmented Lagrangian function (ALF) of Eq. (3) can be written as

L(x, d, e, u, v) = (λ₁/2)‖P_Q x − x_Q‖₂² + ‖d‖₁ + λ₂‖e‖₁ + (ρ/2)‖Φx − d + u‖₂² + (μ/2)‖Gx − e + v‖₂²,  (4)

where u and v are Lagrangian multipliers representing scaled dual variables, and ρ and μ are the penalty parameters. The ADMM iteration scheme is

x^{k+1} = arg min_x (λ₁/2)‖P_Q x − x_Q‖₂² + (ρ/2)‖Φx − d^k + u^k‖₂² + (μ/2)‖Gx − e^k + v^k‖₂²,  (5)

d^{k+1} = arg min_d ‖d‖₁ + (ρ/2)‖Φx^{k+1} − d + u^k‖₂²,  (6)

e^{k+1} = arg min_e λ₂‖e‖₁ + (μ/2)‖Gx^{k+1} − e + v^k‖₂²,  (7)

u^{k+1} = u^k + Φx^{k+1} − d^{k+1},  (8)

v^{k+1} = v^k + Gx^{k+1} − e^{k+1}.  (9)

The x-subproblem has the closed-form solution

x^{k+1} = (λ₁ P_Q^T P_Q + ρ Φ^H Φ + μ G^T G)^{−1} [λ₁ P_Q^T x_Q + ρ Φ^H (d^k − u^k) + μ G^T (e^k − v^k)],  (10)

where the superscripts H and T denote the Hermitian transpose and the transpose of a matrix, respectively. The optimal solutions of the d-subproblem and e-subproblem are obtained through the element-wise shrinkage operator 48

d^{k+1} = shrink(Φx^{k+1} + u^k, 1/ρ),  (11)

e^{k+1} = shrink(Gx^{k+1} + v^k, λ₂/μ),  (12)

where shrink(·) is defined as

shrink(a, κ) = sign(a) · max(|a| − κ, 0).  (13)

The algorithm terminates when the predefined maximum number of iterations is reached. The proposed ADMM optimization algorithm for blind sparse inpainting is summarized in Algorithm 1.
The algorithm was implemented in MATLAB® R2018a.
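The full implementation is in MATLAB with CurveLab; the sketch below is a minimal NumPy rendition of the same ADMM loop, with a generic orthonormal matrix standing in for the curvelet transform and illustrative (untuned) parameter defaults:

```python
import numpy as np

def shrink(a, kappa):
    """Element-wise soft-thresholding (shrinkage) operator."""
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

def blind_inpaint_admm(x_q, p, phi, g, lam1=20.0, lam2=1e-6, rho=50.0, mu=0.1,
                       n_iter=200):
    """Minimal ADMM sketch for the blind sparse inpainting objective (Eq. (2)).

    x_q : vectorized low-density image
    p   : diagonal entries (0/1) of the estimated sampling matrix P_Q
    phi : orthonormal sparsifying transform (stand-in for the curvelet frame)
    g   : first-order finite-difference operator along the depth direction
    """
    n = x_q.size
    d, u = np.zeros(n), np.zeros(n)
    e, v = np.zeros(g.shape[0]), np.zeros(g.shape[0])
    # System matrix of the x-subproblem; phi.T @ phi = I for orthonormal phi.
    a_mat = lam1 * np.diag(p) + rho * np.eye(n) + mu * (g.T @ g)
    for _ in range(n_iter):
        rhs = lam1 * p * x_q + rho * phi.T @ (d - u) + mu * g.T @ (e - v)
        x = np.linalg.solve(a_mat, rhs)       # closed-form x-update
        d = shrink(phi @ x + u, 1.0 / rho)    # sparsity (transform-domain) update
        e = shrink(g @ x + v, lam2 / mu)      # axial TV update
        u = u + phi @ x - d                   # scaled dual updates
        v = v + g @ x - e
    return x
```

For a real 3D stack, Φ would be the curvelet transform applied slice-wise and G the finite-difference operator along z; here both are small dense matrices purely for illustration, and the fixed iteration count mirrors the paper's stopping rule.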
All the parameters in our implementation were tuned heuristically, and the best results obtained from the quantitative evaluations are presented. In general, the weight parameter λ₁ balances the sparsity/smoothness constraint against data consistency. Typically, a smaller λ₁ yields a smoother image, while a larger λ₁ penalizes violations of data consistency more heavily (preserving more of the acquired information). This balance between the sparsity constraint and data consistency in the lateral direction is also affected by the value of ρ. Similarly, the smoothness and data consistency in the axial direction are controlled by the parameter μ. Because intensity and density vary from image to image, a single value of these parameters may not work for all images. To simplify the parameter-tuning process, the maximum intensity of each image was truncated to 255, and the intensity values were then rescaled to the interval [0, 1]. In our implementation, we used values of λ₁ in the range of 10 to 80 and ρ in the range of 10 to 150. Similarly, λ₂ = 10⁻⁶ and μ = 0.1 or 0.01 were used. The results are insensitive to small changes in the values of these parameters.
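The intensity normalization used before parameter tuning amounts to a one-liner:

```python
import numpy as np

def normalize_intensity(img):
    """Truncate the maximum intensity to 255 and rescale to [0, 1],
    as done before parameter tuning so that one parameter range
    works across images of varying intensity and density."""
    return np.minimum(np.asarray(img, dtype=float), 255.0) / 255.0
```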

Simulation Results
To demonstrate the performance of blind sparse inpainting reconstruction, we used two sets of simulated localization data.
For the first one, we generated a simulated 3D SMLM image in the shape of a knot as the "ground-truth" specimen. The knot occupied a volume of 4.02 × 4.02 × 0.18 μm³. The localization list was simulated by randomly selecting locations in the knot to mimic activated molecules with an activation density of approximately ten molecules per frame (0.62 molecules/μm² per frame). 16 We directly recorded the localized coordinates (x, y, z) and intensities of the blinking molecules in each camera frame. Since the localization emitters were obtained directly from the true image, there were no localization errors or background noise. The localization list was then used to render the 3D image. Images of increasing density can be synthesized by combining these localization points over an increasing number of frames. The resulting high-density super-resolution 3D image (Video 1) has lateral and axial resolutions of ∼20 and ∼17 nm, respectively [Figs. 2(e) and 2(g)]. We used fewer frames to generate the low-density 3D image and then applied our algorithm to reconstruct the high-density 3D image.
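Rendering a localization list into a discrete 3D image amounts, in its simplest form, to 3D histogramming of the (x, y, z) coordinates; the sketch below is a simplified stand-in (actual SMLM renderers such as ThunderSTORM also apply Gaussian blurring and intensity weighting):

```python
import numpy as np

def render_z_stack(locs, fov_xy, pixel_size, z_range, n_slices):
    """Render a localization list into a z-stack by 3D histogramming:
    localizations are grouped into n_slices bins along z and binned into
    lateral pixels of size pixel_size.

    locs : array of shape (n_points, 3) with (x, y, z) coordinates in nm
    """
    nx = int(np.ceil(fov_xy[0] / pixel_size))
    ny = int(np.ceil(fov_xy[1] / pixel_size))
    stack, _ = np.histogramdd(
        locs,
        bins=(nx, ny, n_slices),
        range=((0, fov_xy[0]), (0, fov_xy[1]), z_range))
    return stack.transpose(2, 1, 0)   # reorder to (n_slices, ny, nx)
```

Using more rows of `locs` (i.e., localizations from more frames) directly produces the denser renderings described above.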
To reconstruct the high-density 3D image from the low-density image, we constructed 22 z-slices of the low-density 3D image by grouping the localization data along the z axis with a slice thickness of 8 nm. ThunderSTORM, 50 an open-source SMLM data analysis plugin for Fiji, 51 was used to computationally render the z-stack with the 3D simulated localization list as an input. Due to the simultaneous reconstruction of multiple z-slices (lateral and axial directions), the reconstruction of the 3D SMLM image is much more complicated than the reconstruction of the 2D SMLM image in Ref. 42. The result in Fig. 2(b) shows that the blind sparse inpainting reconstruction of the low-density image, rendered with Q = 1000 frames and containing 15,910 localization points, significantly improved the density and is visually equivalent to the ground-truth [Fig. 2(c)] rendered with N = 10,000 frames and a total of 96,203 fluorophore localization points. The 3D projection of Figs. 2(a)-2(c) is presented in Video 2. Additionally, the volume visualization of the simulated low-density, blind-inpainting reconstruction, and ground-truth 3D images using the Volume Viewer 52 plugin in Fiji is shown in Fig. 3. Most of the incomplete and missing structures in the low-density image were restored in the reconstruction. The FWHM values of the reconstructed image, for both the lateral and axial directions, are similar (≈2.5 nm higher) to those of the ground-truth image, indicating that the inpainted reconstruction preserves the resolution of the 3D structure. Additionally, we performed a quantitative evaluation of the reconstruction by calculating the root-mean-squared error (RMSE) between the reconstructed image and the ground-truth image, as shown in Fig. 4. The RMSE value for each reconstruction is the average of the RMSE values over all slices. Since the localization list was generated randomly, we conducted 10 simulations and calculated the average RMSE of each reconstruction for different numbers of frames.
The unit of the RMSE is the same as the intensity (photons) of the image. The curve [Fig. 4] shows significant improvement in the reconstruction with >800 frames, suggesting that additional frames improve the fidelity of the reconstructed structures. The RMSE value for the reconstruction of Fig. 2 using 1000 frames was 0.0748.

Fig. 3 Volume visualization of the simulated 3D image. (a) Low-density; (b) reconstructed; and (c) ground-truth images, respectively. The low-density image was rendered using 1000 frames and the ground-truth image was obtained using 10,000 frames.

Under experimental conditions, localization microscopy images are corrupted by noise sources such as false detections arising from background noise due to unbound dyes, out-of-focus light, or nonspecific binding of antibodies. 40 To test our method's performance under realistic simulation conditions, we used publicly available realistic 3D simulation data of microtubules from the École Polytechnique Fédérale de Lausanne (EPFL) 3D SMLM software benchmarking. 53 The Alexa 647-labeled STORM data "MT0.N1.LD" consist of 19,996 frames with a molecule density of 0.2 molecules per μm². We adopted the 3D Double-Helix datasets and used SMLocalizer 54 to process the diffraction-limited frames. Once the localization list was obtained, we used 5000 frames to generate the low-density 3D image, as shown in Fig. 5(a). To reconstruct the 3D high-density image from the low-density image, we constructed 90 z-slices of the low-density 3D image by grouping the localization data along the z axis with a slice thickness of 12.5 nm. The field of view (FOV) of the images in Fig. 5 was 5.62 × 5.15 μm². The overall axial range was 1.125 μm. Figure 5(b) shows the reconstruction using 5000 frames, which is much smoother with improved density in both the lateral and axial directions. The result is comparable to the high-density image rendered using all frames [Fig. 5(c)]. The ground-truth image is shown in Fig. 5(d).
The RMSE values (average of all slices) of the low-density, reconstructed, and high-density images were 0.0167, 0.0144, and 0.0202, respectively. The RMSE values show that our reconstruction has much less deviation from the ground-truth image than the high-density image obtained using 19,996 frames.
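The slice-averaged RMSE used for these comparisons can be sketched as follows (assuming stacks stored as [slice, row, column] arrays):

```python
import numpy as np

def average_slice_rmse(recon, reference):
    """RMSE computed per z-slice and averaged over slices.
    Both stacks have shape (n_slices, height, width); the RMSE carries
    the same unit as the image intensity (photons)."""
    per_slice = np.sqrt(np.mean((recon - reference) ** 2, axis=(1, 2)))
    return float(per_slice.mean())
```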

Experimental Results
To demonstrate the performance of blind sparse inpainting reconstruction for real 3D SMLM images, we used publicly available localization lists of two microtubules image data and one mitochondrial image data.
The first data set was from the EPFL SMLM software benchmarking. 53 Details about the sample preparation and microscopy setup can be found in Ref. 55. In brief, microtubules in U-2 OS cells were labeled with anti-alpha-tubulin primary and Alexa Fluor 647-coupled secondary antibodies. The diffraction-limited frames (with an exposure time of 15 ms) were imaged using a dSTORM optical setup with a cylindrical lens. We used the wobble- and drift-corrected "Tubulin-A647-3D" localization list obtained from 112,683 frames and processed using the Super-resolution Microscopy Analysis Platform (SMAP)-2018. 56 Since the localization list was already available, we did not process the diffraction-limited frame data but used the list directly. Isolated localization points due to background noise were filtered using density filtering. When all 112,683 frames with about 1.7 million localization points were used, we obtained a high-density super-resolution 3D image as a reference image [Fig. 6(c)]. The low-density image [Fig. 6(a)] was synthesized using 2254 frames, i.e., 50-fold fewer frames, with about 34 thousand localization points from the same localization list. To reconstruct the 3D high-density image from the low-density image, we constructed 23 z-slices of the low-density 3D image with an FOV of 37.5 × 33.4 μm² by grouping the localization list data along the z axis with a slice thickness of 40 nm. The overall axial range was 920 nm. The microtubule filaments could be seen in the low-density image, but structural details were hard to discern. To reconstruct the high-density 3D image, our blind sparse inpainting algorithm was applied to the low-density 3D image. The reconstructed image is shown in Fig. 6(b). The color in Figs. 6(a)-6(c) indicates the depth in the z direction. Visual observation shows that blind sparse inpainting reconstruction significantly improves the localization density of the low-density image.
The microtubule filament structures are much denser and more clearly revealed in the reconstruction. Additionally, the reconstruction for a region of interest (ROI) (12 × 12 μm²) of the same data set with a much smaller pixel size (24 nm) and z-slice width (Δz = 25 nm) is shown in Fig. 7. The superior reconstruction result shows much denser and smoother microtubule structures in both the lateral and axial directions.
For the quantitative evaluation of the reconstructed images of experimental data, we used the multiscale structural similarity index (MS-SSIM), 57 a perceptually motivated metric, between the reference high-density image and the reconstructed image. Since the ground-truth was not available for the experimental data, the high-density 3D images rendered with all available frames were used as reference images. It is worth noting that this reference high-density image may still deviate from the ground-truth (as seen in Sec. 4.1). Thus, the RMSE with respect to the reference image is not a proper metric for the quantitative evaluation of reconstruction, as the pixel-value difference can be large even for a perfect reconstruction. 42 Instead, we used MS-SSIM to evaluate the capability of the reconstruction to capture the structural information across the slices of the reference image of the experimental data sets. The MS-SSIM index lies between 0 and 1, with 1 being a perfect match with the reference image; a higher MS-SSIM value indicates a better match of structural information. Figure 8 shows the improvement in the MS-SSIM index of the slices of the reconstructed 3D image [Fig. 6(b)] compared to that of the input low-density 3D image [Fig. 6(a)]. It demonstrates that our method is capable of recovering the structures of microtubules with high similarity to the reference high-density image. The MS-SSIM index of the edge slices (slices 1 to 3 and 21 to 23) remains low because those slices have very low localization densities with wide gaps between the fluorophore localizations.
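MS-SSIM computes windowed luminance/contrast/structure statistics across multiple scales (Ref. 57); as a much-simplified illustration of per-slice similarity scoring — not the metric actually used here — a global single-scale SSIM can be written as:

```python
import numpy as np

def global_ssim(a, b, data_range=1.0):
    """Simplified SSIM between two slices using global (whole-slice)
    statistics; the full MS-SSIM instead uses local windows at several
    scales. Constants follow the usual SSIM convention."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def per_slice_similarity(recon, reference):
    """One similarity score per z-slice for stacks of shape (n_slices, H, W)."""
    return [global_ssim(r, s) for r, s in zip(recon, reference)]
```

Plotting the per-slice scores against the slice index gives a curve analogous to Fig. 8: edge slices with sparse localizations score lower than well-populated central slices.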
To further evaluate the blind sparse inpainting reconstruction for 3D SMLM experimental data, we used another publicly available microtubule localization list from the Zernike Optimized Localization Approach in 3D (ZOLA-3D). 58 Details about the sample preparation, imaging setup, and processing steps can be found in Ref. 13. In brief, the microtubules in a U-373 MG cell were labeled with anti-alpha-tubulin primary and Alexa-647-conjugated secondary antibodies. A total of 87,959 frames were acquired using the saddle-point PSF with a variable exposure time of 30 ms (in the early stage) to 100 ms (in the later stage). Since the localization list was already available, we used it directly. Isolated localization points due to background noise were filtered using density filtering. The high-density 3D super-resolution image [Fig. 9(c)] was generated using all frames with around 899,600 localization points, visualizing the whole cell with an axial range of 2.3 μm. The low-density 3D image [Fig. 9(a)] was generated using 4400 frames, i.e., 20-fold fewer frames, with approximately 57,500 localization points from the same localization data. For reconstruction, we constructed 46 z-slices of the low-density image by grouping the localization data along the z axis with a slice thickness of 50 nm and an FOV of 51.58 × 37.62 μm². The low-density image was then given as an input to our blind sparse inpainting algorithm. The reconstructed image is shown in Fig. 9(b). The color in Figs. 9(a)-9(c) indicates the depth in the z direction. Microtubule structures are more clearly revealed in the reconstruction, with much higher localization densities comparable to the reference high-density image. Superior reconstruction can be observed at the edges of the cell. The improvement in the MS-SSIM index, shown in Fig. 10, also verifies higher similarity with the high-density reference image after reconstruction.
However, some fine features of the high-density image with dense or closely spaced structures were not appropriately resolved (red arrow) because the localization data in those regions were more isolated.
Similarly, we also evaluated the reconstruction of another 3D SMLM image from ZOLA-3D. The 3D mitochondrial image in COS7 cells was obtained using the saddle-point PSF. The high-density 3D image of Fig. 11(c) was rendered using 81,578 frames (≈175,000 localizations after density filtering). For reconstruction, we used 5500 frames (≈19,500 localizations) to generate the low-density 3D image [Fig. 11(a)]. The reconstructed 3D image in Fig. 11(b) shows improvement in the density of the mitochondrial structures in both the lateral and axial directions. Due to the tubular structure of the mitochondria, the curvelet transform performed well, giving a superior reconstruction. The result demonstrates the versatility of our method in reconstructing high-quality 3D super-resolution images with a reduced number of frames.

Fig. 10 Plot of the MS-SSIM index versus z-slice, comparing the reconstruction of microtubule structures for the data from ZOLA-3D of Fig. 9.

Conclusion
We present a computational method based on blind sparse inpainting to reconstruct high-density 3D images from low-density 3D images synthesized using very few camera frames of standard 3D SMLM data. We demonstrate high-quality reconstructions with up to a 10-fold reduction in the number of frames for simulated 3D SMLM images and up to a 50-fold reduction for experimental microtubule 3D SMLM images. Thus, the acquisition time is reduced considerably by using fewer camera frames, and 3D imaging is accelerated without compromising resolution. Furthermore, no change to the existing optical setup or labeling protocol is needed. Additionally, our method can be applied on top of any existing localization algorithm. We expect that our method can offer further improvement in acquisition time when integrated with existing higher-molecular-density labeling methods.
However, the proposed method has several limitations. First, because of the use of the curvelet transform, it may be restricted to filament structures such as microtubules. For noncurvature structures, an appropriate sparsifying transform, such as the wavelet transform, can be used instead. Second, the reconstruction also depends on the localization algorithm: if there are artefacts due to background noise or incorrect localizations, such artefacts propagate through the reconstruction. Third, when the input image quality is limited due to scarcity of localization points, increased noise, or nonuniform localizations, the reconstructed images may misrepresent the actual structures (e.g., broken structures). Such misrepresentation can be alleviated by improving the input image quality using more frames, but at the cost of reduced acceleration. Finally, since missing localization positions are estimated blindly, there may be errors in predicting P_Q, which may introduce artefacts or a loss of resolution. Again, these limitations can be alleviated by using more frames of data. We anticipate that combining super-resolution optical microscopy with our blind inpainting method will enable future real-time live-cell and high-throughput 3D imaging to investigate complex nanoscopic biological structures and their functions.

Disclosures
The authors declare no conflicts of interest.