Structural anisotropy quantification improves the final superresolution image of localization microscopy
Yina Wang, Zhen-li Huang
Abstract
Superresolution localization microscopy initially produces a dataset of fluorophore coordinates instead of a conventional digital image. Therefore, superresolution localization microscopy requires additional data analysis to present a final superresolution image. However, methods that employ the structural information within the localization dataset to improve data analysis performance remain poorly developed. Here, we quantify the structural information in a localization dataset using structural anisotropy, and propose to use it as a figure of merit for localization event filtering. Using simulated data as well as experimental data from a biological specimen, we demonstrate that exploiting structural anisotropy allows us to obtain superresolution images with a much cleaner background.

1. Introduction

Superresolution localization microscopy (hereafter referred to as localization microscopy), including (fluorescence) photo-activated localization microscopy1,2 and (direct) stochastic optical reconstruction microscopy,3,4 has become a powerful imaging tool to reveal ultrastructures and understand the mechanisms behind cellular functions. In localization microscopy, a small subset of densely labeled fluorophores is switched “on” so that only sparsely distributed emitters appear in a single frame; the central position of each emitter is then determined with nanometer precision by a suitable localization algorithm, and these fluorophores are switched “off.” By repeating this process, the spatially densely labeled fluorophores are temporally isolated. After accumulating localization events from thousands of imaging frames, a final reconstructed superresolution image can be obtained with up to 10 times better spatial resolution compared with conventional diffraction-limited fluorescence microscopy.5

From the principle of localization microscopy mentioned above, it is clear that localization microscopy does not produce a conventional digital image, which comprises arrays of camera-recorded pixels with values representing the fluorescence intensity at those locations. Instead, the raw dataset in localization microscopy is a list of coordinates of the localized fluorophores (called localization events). Therefore, localization microscopy requires several data analysis steps to present a final superresolution image. First, because of the relatively long data acquisition time, sample drift must be corrected to guarantee a high spatial resolution.6,7 Second, since there is inherent background noise (originating from autofluorescence, out-of-focus fluorescence, or nonspecifically labeled fluorescent molecules) inside a superresolution image, the corresponding localization events need to be filtered out to provide a clean background.8,9 Third, due to the pointillist nature of the localization dataset, localization microscopy requires a critical image rendering step to translate the localization dataset into a final reconstructed image.10

A number of techniques for processing a localization dataset have been developed, among which methods utilizing structural information usually outperform others. For example, using the inherent structural information within subsets of localization events imaged at different times, several highly accurate methods for sample drift correction have been proposed to overcome the shortcomings of introducing fiducial markers for drift correction.6 Moreover, benefiting from structural pattern averaging, nanometer11 or even subnanometer localization precision12 has been achieved. However, methods for localization event filtering using structural information remain poorly explored.

In localization microscopy, background noise within the localization dataset usually appears as nonpolymeric localizations or nonspecific clusterings.9 First, due to autofluorescence, out-of-focus fluorescence, or camera noise in data acquisition,1 nonpolymeric localizations with poor localization precision and low localization density are usually observed. Second, nonspecific labeling is a common problem in localization microscopy and has been discussed in the literature.13,14 More specifically, although antibodies usually have high specificity, there are still some reactions between antibodies and nonspecific antigens. Moreover, the fixation, blocking, and washing in sample preparation procedures, as well as the antibody concentration, could all affect the degree of nonspecific labeling.9,13,14 Nonspecific labeling in localization microscopy usually appears as clusterings.9 In current methods for localization event filtering, the localization events associated with poor localization precision or low localization density are discarded,8,15 where the cutoff value is usually determined manually. This scheme is effective in filtering out backgrounds with nonpolymeric localizations. However, it usually fails to filter out backgrounds with nonspecific clusterings, because the clusters usually exhibit normal localization precision and density.

In this paper, we present a new method for localization event filtering in localization microscopy. The method is based on the fact that most biological structures intrinsically exhibit anisotropic characteristics (hereafter called structural anisotropy). We demonstrate that structural anisotropy can naturally act as a metric for differentiating data of interest from background noise, thus providing a cleaner superresolution image.

2. Methods

2.1. Structural Anisotropy Quantification

First, we quantify the structural anisotropy of a biological structure using an anisotropy coefficient (denoted by g), which indicates the anisotropic strength of the local structure. To calculate g accurately, the local statistics of the localization events need to be explored to adaptively quantify local structure features. In structure-adaptive anisotropic filtering of magnetic resonance imaging (MRI),16 the local anisotropy of the underlying structure is exploited using the fact that locally oriented patterns have parallel level contours, so that their energy clusters along a line through the origin of the corresponding power spectrum in the Fourier domain. Note that this cluster is perpendicular to the dominant spatial orientation. Here, we explore whether the same idea can be applied to quantify the structural anisotropy in a localization dataset; although both localization datasets and MRI images generally provide high contrast of the underlying structures, the signal levels in localization microscopy are much lower than those in MRI.

We start with histogram binning, f(x), of a localization dataset. First, the entire field-of-view associated with a localization dataset is divided into a set of spatial bins. Then the number of localizations that fall into each spatial bin is counted and used to assign an intensity value to the corresponding bin. Note that the spatial bin size should be smaller than half of the finest structure feature that needs to be resolved (according to the Nyquist sampling theorem). On the other hand, the spatial bin size should also be large enough that each bin contains a reasonable number of localization points, so as to maintain a sufficient signal-to-noise ratio in the final reconstructed image. In this paper, the spatial bin size is set to 10 nm. Because a histogram image is often noisy due to the low signal-to-noise ratio per spatial bin, we preprocess the histogram image by blurring it with a radially symmetric Gaussian kernel whose standard deviation is usually set to the averaged localization precision.
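As a rough illustration, this binning and pre-blurring step could look like the following Python/NumPy sketch (the function name and the fov_nm parameter are our own naming, not from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def render_histogram(xy_nm, fov_nm, bin_nm=10.0, loc_precision_nm=None):
    """Bin localization coordinates (nm) into a 2-D histogram image f(x).

    xy_nm: (N, 2) array of localization coordinates.
    fov_nm: (width, height) of the field of view in nm.
    bin_nm: spatial bin size (10 nm in this paper).
    loc_precision_nm: if given, pre-blur with a radially symmetric Gaussian
        whose standard deviation equals the averaged localization precision.
    """
    nx = int(np.ceil(fov_nm[0] / bin_nm))
    ny = int(np.ceil(fov_nm[1] / bin_nm))
    img, _, _ = np.histogram2d(xy_nm[:, 0], xy_nm[:, 1], bins=(nx, ny),
                               range=((0, fov_nm[0]), (0, fov_nm[1])))
    if loc_precision_nm is not None:
        # Suppress shot noise caused by the low count per spatial bin.
        img = gaussian_filter(img, sigma=loc_precision_nm / bin_nm)
    return img
```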

In the reported anisotropic filtering technique in MRI,16 the anisotropy characteristics of a local structure, fΩ(x), are calculated by estimating the orientation that minimizes the second moment function in the Fourier domain. Note that Ω is a square local neighborhood of a spatial bin x=(x1,x2). In this paper, the size of Ω is twice the size of the object of interest for quantifying structural anisotropy and is set to 100×100 nm². Because the second moment matrix R (a 2×2 matrix for two-dimensional images) of the power spectrum is symmetric, the second moment minimization problem can be solved by eigenvalue decomposition. Therefore, in the implementation of structural anisotropy quantification, we first determine the second moment matrix R. The calculation of R can be mathematically simplified using the partial derivatives of fΩ(x) in the spatial domain, based on the properties of the Fourier transform

Eq. (1)

R_{ij} = \frac{1}{4\pi^{2}} \int_{\Omega} \left(\frac{\partial f_{\Omega}}{\partial x_{i}}\right) \left(\frac{\partial f_{\Omega}}{\partial x_{j}}\right) \mathrm{d}x_{i}\,\mathrm{d}x_{j}, \qquad (i,j = 1,2).

Second, we calculate the maximum and minimum eigenvalues of R, λmax and λmin, which describe the anisotropic strength along and perpendicular to the dominant spatial orientation, respectively. The anisotropy coefficient, g, is then defined by the following equation:

Eq. (2)

g(\mathbf{x}) = \left(\frac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}}\right)^{2}.

The closer g is to 1, the stronger the local anisotropy.
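The following Python sketch shows one way to evaluate Eqs. (1) and (2) on the blurred histogram image. The function name, the neighborhood size expressed in bins, and the use of a uniform filter to approximate the integral over Ω are our assumptions; the constant 1/4π² prefactor is omitted because it cancels in g:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def anisotropy_map(img, omega_bins=10, eps=1e-12):
    """Per-bin anisotropy coefficient g, following Eqs. (1) and (2).

    img: blurred histogram image f(x) (axis 0 taken as x1, axis 1 as x2).
    omega_bins: side length of the neighborhood Omega in bins
        (10 bins = 100 nm at a 10 nm bin size).
    """
    # Partial derivatives of f in the spatial domain.
    f1, f2 = np.gradient(img)
    # Second-moment matrix entries, averaged over the local neighborhood Omega
    # (constant prefactors cancel in g and are omitted).
    r11 = uniform_filter(f1 * f1, size=omega_bins)
    r12 = uniform_filter(f1 * f2, size=omega_bins)
    r22 = uniform_filter(f2 * f2, size=omega_bins)
    # Closed-form eigenvalues of the symmetric 2x2 matrix [[r11, r12], [r12, r22]].
    trace = r11 + r22
    root = np.sqrt((r11 - r22) ** 2 + 4.0 * r12 ** 2)
    lam_max = 0.5 * (trace + root)
    lam_min = 0.5 * (trace - root)
    # Anisotropy coefficient, Eq. (2): g -> 1 for strongly oriented structures,
    # g -> 0 for locally isotropic regions.
    return ((lam_max - lam_min) / (lam_max + lam_min + eps)) ** 2
```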

2.2. Localization Event Filtering

Biological structures usually exhibit distinct morphology or molecular density distributions in different directions, and thus present intrinsic anisotropy characteristics. On the other hand, background clusters in the same images are locally isotropic. Therefore, the anisotropy coefficient, g, could act as an effective metric for localization event filtering: the foreground would have a relatively high g value, while the background would exhibit a very low g value. However, it is worth noting that some intersection points of the structures are also locally isotropic and would exhibit low anisotropic strength, indicating that additional analysis is necessary to prevent these intersection points from being filtered out. Moreover, we noticed that the local neighborhoods of intersections with low anisotropic strength and of the true background are different: the former are usually surrounded by or directly connected to foreground localization events with high anisotropic strength, while the latter usually form isolated clusters. Therefore, we can apply an average image filter to enhance the anisotropy strength of the intersections.

Here, we propose a new localization event filtering method based on structural anisotropy (SALEF), which is realized by the following procedure: (1) the anisotropy coefficient map (g map) of the localization dataset is established; (2) the g map is smoothed using an average filter; (3) the averaged g map is used to associate each localization event with a g value; and (4) localization events associated with a g value lower than a semiempirical threshold (discussed later) are identified as fluorescence background and thus discarded.

In this localization event filtering method, there are three adjustable parameters: (1) the size of the local neighborhood, Ω, which is used to calculate the structural anisotropy; (2) the size of the average filter, which should be comparable to the size of the object of interest and is set to 50×50 nm² in this study; and (3) the semiempirical threshold for identifying background. We recommend examining the g histogram of all localizations and then choosing a g value that efficiently separates the low-g region (background) from the rest (foreground). In this way, we obtained a semiempirical threshold of 0.05 for this study.
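A minimal sketch of steps (2)–(4) of SALEF, assuming the g map from the previous sketch and the parameter values quoted above (50 nm average filter, g threshold of 0.05); the function and parameter names are ours:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def salef_filter(xy_nm, g_map, bin_nm=10.0, avg_bins=5, g_threshold=0.05):
    """Smooth the g map, assign a g value to each event, and threshold.

    xy_nm: (N, 2) localization coordinates in nm (same origin as the histogram).
    g_map: anisotropy coefficient map on the histogram grid (axis 0 = x1).
    avg_bins: average-filter size in bins (5 bins = 50 nm here), which lets
        intersections inherit high g values from neighboring foreground.
    """
    # (2) Smooth the g map with an average filter.
    g_smooth = uniform_filter(g_map, size=avg_bins)
    # (3) Look up the smoothed g value of the bin containing each localization.
    i1 = np.clip((xy_nm[:, 0] / bin_nm).astype(int), 0, g_smooth.shape[0] - 1)
    i2 = np.clip((xy_nm[:, 1] / bin_nm).astype(int), 0, g_smooth.shape[1] - 1)
    g_per_event = g_smooth[i1, i2]
    # (4) Events below the semiempirical threshold are treated as background.
    keep = g_per_event >= g_threshold
    return xy_nm[keep], keep
```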

2.3. Simulated Dataset

We simulated two representative kinds of biological structures (filament and ring) to evaluate the effectiveness of our method. First, we generated a ground-truth dataset with no localization error or background. The dataset consists of a realistic structure of eight microtubules and 30 ring structures whose radii range from 50 to 150 nm and whose structure diameter is 25 nm. Note that the filament structure was generated according to the ground-truth information of an open training dataset with microtubule structures.17 The localization density, ρ, is set to 100,000 molecules/μm². Next, we generated localization datasets with various localization precision (σ) and localization density (ρ): a Poisson-distributed number of points was sampled from the ground truth to generate a dataset with an average density equal to ρ; these points were then randomly displaced according to a Gaussian probability density with a standard deviation equal to σ. In each simulated dataset, we randomly distributed 200 background clusters over the whole field-of-view to represent false localization events due to background fluorescence. These clusters are Gaussian distributed with a random FWHM between 10 and 20 nm. The localization density of the background clusters is the same as that of the foreground.
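For illustration, the dataset generation could be sketched as follows. The helper names are hypothetical, and the number of points per background cluster is an illustrative assumption (in the simulations the cluster density matches the foreground density):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_localizations(ground_truth_nm, keep_fraction, sigma_nm):
    """Draw a noisy localization dataset from dense ground-truth coordinates.

    keep_fraction: fraction of ground-truth points to sample so that the
        average density equals the target rho (the count is Poisson distributed).
    sigma_nm: localization precision, used as the standard deviation of the
        random Gaussian displacement.
    """
    n = min(rng.poisson(keep_fraction * len(ground_truth_nm)), len(ground_truth_nm))
    idx = rng.choice(len(ground_truth_nm), size=n, replace=False)
    return ground_truth_nm[idx] + rng.normal(0.0, sigma_nm, size=(n, 2))

def add_background_clusters(fov_nm, n_clusters=200, fwhm_nm=(10.0, 20.0),
                            points_per_cluster=50):
    """Scatter Gaussian-shaped background clusters over a square field of view."""
    clusters = []
    for _ in range(n_clusters):
        center = rng.uniform(0.0, fov_nm, size=2)
        sigma = rng.uniform(*fwhm_nm) / 2.355       # FWHM -> standard deviation
        clusters.append(center + rng.normal(0.0, sigma, size=(points_per_cluster, 2)))
    return np.vstack(clusters)
```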

2.4. Experimental Dataset

The microtubule imaging experiments were performed with a home-built microscope consisting of an Olympus IX-71 inverted microscope, an oil immersion objective (Olympus UAPON 100XO, NA 1.40), and an electron multiplying charge coupled device (EMCCD) camera (Andor iXon 897). The activation laser at 405 nm and the excitation laser at 640 nm (both from CNI Laser, China) were combined before entering the microscope through a home-built illuminator. Data were acquired by the EMCCD camera and its bundled software. The pixel size at the sample plane was 160 nm. The dataset was analyzed with a maximum likelihood estimator, the MaLiang method.18 Sample drift was corrected.6

The sample was fixed BS-C-1 cells. The plated cells were first washed twice with warm phosphate buffered saline (PBS), fixed with warm paraformaldehyde (PFA) (4% in PBS, 10 min), washed three times with PBS, then permeabilized and blocked (0.5% Triton X-100, 3% bovine serum albumin in PBS, 30 min). The cells were then incubated with primary antibody (mouse anti-β-tubulin, Sigma-Aldrich, T8328) for 2 h at room temperature. Later on, the cells were washed three times (10 min each, PBS), incubated with secondary antibody (goat antimouse, labeled with Alexa Fluor 647, Invitrogen, A21235) for 1 h at room temperature, washed three times (10 min each, PBS), and postfixed (3% PFA and 0.1% glutaraldehyde in PBS, 10 min). Finally, the sample was washed three times (PBS) and stored at 4°C before imaging.

3. Results and Discussion

First, we evaluate the performance of the structural anisotropy quantification described above using an ideal simulated dataset. The test dataset comprises three different types of structures: filament, ring, and randomly distributed cluster [Fig. 1(a)]. The former two types of anisotropic structures are referred to as foreground, while the clusters are referred to as background. We calculated the anisotropy coefficient of the dataset. As a result, the g map of the dataset [Fig. 1(b)] intuitively separates the background from the foreground, even when they are very close to each other [Figs. 1(a) and 1(b)]. The g histogram also differentiates the foreground from the background [Fig. 1(c)]. Additionally, when two filaments are nearly perpendicular to each other, we observe low anisotropy strength at the intersections [Fig. 1(a)].

Fig. 1

The performance of the adaptive structural anisotropy quantification in analyzing simulated ground-truth datasets. (a) A simulated ground-truth dataset containing filament and ring structures, as well as background clusters. (b) The anisotropy coefficient map (g map) of (a). The color map is shown in the lower right. (c) The histogram of g of the simulated ground-truth dataset. Scale bar: 500 nm in (a) and (b).


We further investigate the performance of the proposed method on imperfect simulated datasets with various localization precision (σ) and localization density (ρ). We generated localization datasets with σ ranging from 0 to 30 nm and ρ ranging from 5000 to 100,000 μm⁻². The simulation covers typical experimental scenarios. As shown in Figs. 2(a) and 2(b), although the performance of the structure differentiation slightly degrades with poorer localization precision and lower localization density, structural anisotropy effectively separates the background from the foreground under various conditions. For the dataset with a typical σ (15 nm) and ρ (50,000 μm⁻²), our method exhibits a result similar to that of the ground-truth dataset [Figs. 2(c) and 2(e)]. In the scenario of low ρ [5000 μm⁻², see Figs. 2(d) and 2(f)], our method is still effective in differentiating the background from the foreground, even though the absolute anisotropy strength of the foreground is lowered by the poor sampling density. These results indicate that structural anisotropy is an effective metric for describing biological structures in localization microscopy.

Fig. 2

The performance of adaptive structural anisotropy quantification in analyzing simulated datasets with various localization precision (σ) and localization density (ρ). (a) The averaged g value of the filament, ring, and background clusters as a function of σ under ρ=50,000 μm⁻². (b) The averaged g value of the same structures as in (a) as a function of ρ under σ=15 nm. The error bars indicate the standard deviation from five independent simulations. (c) and (d) Close-up views of the datasets with ρ=50,000 μm⁻² and ρ=5000 μm⁻², respectively; σ is 15 nm in both (c) and (d). (e) and (f) The corresponding g maps of (c) and (d).


Then we use both simulated and experimental datasets to evaluate the performance of using structural anisotropy in localization event filtering. First, we use the imperfect simulated datasets with various σ and ρ (the same datasets used in Fig. 2) to quantify the performance of the SALEF method. The quantification is characterized by two parameters: detection rate and false-positive rate. The detection rate is the ratio between the number of correctly identified background localization events and the total number of simulated background events, while the false-positive rate is the ratio between the localization events that are falsely recognized as background and the total number of identified background localization events. We show that SALEF provides robust performance under various σ and ρ conditions [Figs. 3(a) and 3(b)]. The detection rate is larger than 80% for typical σ and ρ conditions. When ρ is lowered to 5000 μm⁻², the detection rate decreases to 60%. The false-positive rate is lower than 10% for typical σ and ρ conditions and increases with poorer localization precision and lower localization density. Intuitively, for a typical dataset with σ=15 nm and ρ=50,000 μm⁻², SALEF provides a cleaner image than the original one [Figs. 3(c) and 3(d); note that the same regions are shown in Figs. 1(c) and 1(d)]. Because of the average filtering, the localization events at the intersections of two perpendicular filaments, which exhibited low anisotropy strength, are well preserved. However, the averaging process also has a side effect: a small fraction of the background located close to the foreground cannot be filtered out. This side effect slightly reduces the detection rate.
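The two rates can be computed directly from boolean masks, as in the following sketch (the function and variable names are ours):

```python
import numpy as np

def background_filtering_rates(is_true_background, flagged_as_background):
    """Detection rate and false-positive rate as defined in the text.

    is_true_background: boolean mask, True for simulated background events.
    flagged_as_background: boolean mask, True for events discarded by SALEF.
    """
    n_flagged = np.count_nonzero(flagged_as_background)
    n_correct = np.count_nonzero(is_true_background & flagged_as_background)
    detection_rate = n_correct / max(np.count_nonzero(is_true_background), 1)
    false_positive_rate = (n_flagged - n_correct) / max(n_flagged, 1)
    return detection_rate, false_positive_rate
```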

Fig. 3

The performance of SALEF in analyzing simulated datasets. (a) The detection and false-positive rates as a function of σ in recognizing background events under ρ=50,000 μm⁻². (b) The detection and false-positive rates as a function of ρ in recognizing background events under σ=15 nm. (c) A close-up view of a simulated dataset with ρ=50,000 μm⁻² and σ=15 nm. (d) The resulting image of (c) after applying the SALEF method. The images are generated by histogram binning. Scale bar: 500 nm.


We further investigate the performance of SALEF using real experimental images of microtubule structures. In the microtubule dataset, a total of 303,221 fluorophores were identified from a field-of-view of 419 μm². The structure surface was estimated to be 8.87 μm², and thus the localization density was ∼34,000 molecules/μm². The average localization precision of the dataset was calculated to be 12.7 nm using a theoretical equation19 that accounts for the excess noise in an EMCCD camera.20,21
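A quick consistency check of the quoted density (variable names are ours):

```python
# Localization density from the reported counts and structure area.
n_localizations = 303_221
structure_area_um2 = 8.87
density_per_um2 = n_localizations / structure_area_um2   # ~= 34,185, i.e., ~34,000 molecules/um^2
```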

A reconstructed superresolution image from the dataset suffers from a nonspecific background that is randomly distributed over the field-of-view {Fig. 4[a(i)]}. By calculating the structural anisotropy coefficient from the image, we found that structural anisotropy could effectively discriminate the foreground from the background [Figs. 4(a)–4(c)]. Note that there is an abnormal peak in the g value between 0 and 0.05 [see the g histogram in Fig. 4(d)]. Therefore, we filter out the localization events whose g values are smaller than 0.05, and obtain a clean superresolution image [Figs. 4(a)–4(c)]. In contrast, if we use a popular density-based localization event filtering method, which discards 5% of the localization events with the lowest localization density, the background cannot be effectively filtered out [Figs. 4(a)–4(c)]. Note that the localization dataset in Fig. 4[a(i)] is filtered using a localization precision threshold of 50 nm. The image resulting from SALEF [Figs. 4(a)–4(c)] presents a Fourier ring correlation (FRC) resolution22 of 56.1 nm, which is better than that of the original image {57.1 nm in Fig. 4[a(i)]}, but worse than that from the density-based localization event filtering method (51.9 nm). The reason is that SALEF only filters out the background and preserves the foreground, while the density-based localization event filtering method filters out low localization density components in both the foreground and the background. Note that the FRC resolution appears better when there are fewer low localization density components.

Fig. 4

The performance of SALEF in analyzing an experimental dataset. (a) The original localization dataset (i), its anisotropy coefficient map (ii), the reconstructed image after applying SALEF (iii), and the reconstructed image after density-based localization event filtering (iv), respectively. The color map of [a(ii)] is shown in the lower right. (b) and (c) Close-up views of the boxed regions in each column of (a). (d) The g histogram for the image in [a(ii)]. (e) The FRC resolution curves of the datasets in (a). The estimated FRC resolutions are 57.1 nm [a(i)], 56.1 nm [a(iii)], and 51.9 nm [a(iv)], respectively. All the images are histogram binning images of the corresponding localization dataset. Scale bar: 5 μm in (a); 500 nm in (b); 250 nm in (c).


4. Conclusion

We investigated the structural anisotropy characteristics of the datasets in superresolution localization microscopy, and presented a method for localization event filtering. Using both simulated and experimental images, we verified that SALEF, our new localization event filtering method based on structural anisotropy, is able to provide a much cleaner superresolution image than previous localization filtering methods.

However, the method presented in this study is not applicable to isotropic structures, e.g., the cell membrane proteins that are organized in small and radially symmetric clusters, simply because there is no structural anisotropy difference between the background and the foreground. In this case, other image processing techniques, including segmentation23 and cluster analysis,24 can be used.

Acknowledgments

This work was supported by National Basic Research Program of China (Grant No. 2015CB352003), National Natural Science Foundation of China (Grant Nos. 91332103 and 81427801), the Program for New Century Excellent Talents in University of China (Grant No. NCET-10-0407), and the Science Fund for Creative Research Group of China (Grant No. 61421064). We thank Dr. Zhe Hu for providing the experimental dataset.

References

1. E. Betzig et al., “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). http://dx.doi.org/10.1126/science.1127344

2. S. T. Hess, T. P. K. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91(11), 4258–4272 (2006). http://dx.doi.org/10.1529/biophysj.106.091116

3. M. Heilemann et al., “Subdiffraction-resolution fluorescence imaging with conventional fluorescent probes,” Angew. Chem. Int. Ed. 47(33), 6172–6176 (2008). http://dx.doi.org/10.1002/anie.200802376

4. M. J. Rust, M. Bates, and X. W. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–795 (2006). http://dx.doi.org/10.1038/nmeth929

5. G. Sluder and D. Wolf, Digital Microscopy, 4th ed., Academic Press (2013).

6. Y. N. Wang et al., “Localization events-based sample drift correction for localization microscopy with redundant cross-correlation algorithm,” Opt. Express 22(13), 15982–15991 (2014). http://dx.doi.org/10.1364/oe.22.015982

7. T. Zhao et al., “A user-friendly two-color super-resolution localization microscope,” Opt. Express 23(2), 1879–1887 (2015). http://dx.doi.org/10.1364/oe.23.001879

8. S. A. Jones et al., “Fast, three-dimensional super-resolution imaging of live cells,” Nat. Methods 8(6), 499–505 (2011). http://dx.doi.org/10.1038/nmeth.1605

9. D. R. Whelan and T. D. M. Bell, “Image artifacts in single molecule localization microscopy: why optimization of sample preparation protocols matters,” Sci. Rep. 5, 7924 (2015). http://dx.doi.org/10.1038/srep07924

10. R. P. J. Nieuwenhuizen, S. Stallinga, and B. Rieger, “Visualization and resolution in localization microscopy,” in Cell Membrane Nanodomains: From Biochemistry to Nanoscopy, pp. 409–430, CRC Press (2014). http://dx.doi.org/10.1201/b17634-23

11. A. Löschberger et al., “Correlative super-resolution fluorescence and electron microscopy of the nuclear pore complex with molecular resolution,” J. Cell Sci. 127(20), 4351–4355 (2014). http://dx.doi.org/10.1242/jcs.156620

12. A. Szymborska et al., “Nuclear pore scaffold structure analyzed by super-resolution microscopy and particle averaging,” Science 341(6146), 655–658 (2013). http://dx.doi.org/10.1126/science.1240672

13. R. A. John, T. R. Stephen, and W. D. Michael, “Single molecule localization microscopy for superresolution,” J. Opt. 15(9), 094001 (2013). http://dx.doi.org/10.1088/2040-8978/15/9/094001

14. J. R. Allen, S. T. Ross, and M. W. Davidson, “Sample preparation for single molecule localization microscopy,” Phys. Chem. Chem. Phys. 15(43), 18771–18783 (2013). http://dx.doi.org/10.1039/c3cp53719f

15. T. Pengo, S. J. Holden, and S. Manley, “PALMsiever: a tool to turn raw data into results for single-molecule localization microscopy,” Bioinformatics 31(5), 797–798 (2015). http://dx.doi.org/10.1093/bioinformatics/btu720

16. G. Z. Yang et al., “Structure adaptive anisotropic image filtering,” Image Vision Comput. 14(2), 135–145 (1996). http://dx.doi.org/10.1016/0262-8856(95)01047-5

17. D. Sage et al., “Quantitative evaluation of software packages for single-molecule localization microscopy,” Nat. Methods 12(8), 717–724 (2015). http://dx.doi.org/10.1038/nmeth.3442

18. T. W. Quan et al., “Ultra-fast, high-precision image analysis for localization-based super resolution microscopy,” Opt. Express 18(11), 11867–11876 (2010). http://dx.doi.org/10.1364/oe.18.011867

19. T. Quan, S. Zeng, and Z.-L. Huang, “Localization capability and limitation of electron-multiplying charge-coupled, scientific complementary metal-oxide semiconductor, and charge-coupled devices for superresolution imaging,” J. Biomed. Opt. 15(6), 066005 (2010). http://dx.doi.org/10.1117/1.3505017

20. K. I. Mortensen et al., “Optimized localization analysis for single-molecule tracking and super-resolution microscopy,” Nat. Methods 7(5), 377–381 (2010). http://dx.doi.org/10.1038/nmeth.1447

21. B. Rieger and S. Stallinga, “The lateral and axial localization uncertainty in super-resolution light microscopy,” ChemPhysChem 15(4), 664–670 (2014). http://dx.doi.org/10.1002/cphc.201300711

22. R. P. J. Nieuwenhuizen et al., “Measuring image resolution in optical nanoscopy,” Nat. Methods 10(6), 557–562 (2013). http://dx.doi.org/10.1038/nmeth.2448

23. F. Levet et al., “SR-Tesseler: a method to segment and quantify localization-based super-resolution microscopy data,” Nat. Methods 12(11), 1065–1071 (2015). http://dx.doi.org/10.1038/nmeth.3579

24. P. Rubin-Delanchy et al., “Bayesian cluster identification in single-molecule localization microscopy data,” Nat. Methods 12(11), 1072–1076 (2015). http://dx.doi.org/10.1038/nmeth.3612

Biography

Yina Wang received her MS degree in bioinformation technology at Huazhong University of Science and Technology in 2011 and now is a PhD student in biomedical engineering at the same university under the supervision of Dr. Zhen-Li Huang. Her doctoral thesis focuses on the development of new image analysis methods for superresolution localization microscopy. She was a recipient of the China National Scholarship, and was selected to attend the 65th Lindau Nobel Laureate Meeting.

Zhen-li Huang joined Huazhong University of Science and Technology in 2003 and now is a professor in Biomedical Engineering and Optical Engineering. He received his BSc degree in chemistry from Nankai University and PhD in optics from Zhongshan University. He obtained his postdoctoral training at the University of Central Florida, and recently worked as a visiting professor at the University of Colorado, Boulder. He is focusing on the development and applications of superresolution localization microscopy.

© 2016 Society of Photo-Optical Instrumentation Engineers (SPIE). 1083-3668/2016/$25.00
Yina Wang and Zhen-li Huang "Structural anisotropy quantification improves the final superresolution image of localization microscopy," Journal of Biomedical Optics 21(7), 076011 (19 July 2016). https://doi.org/10.1117/1.JBO.21.7.076011
Published: 19 July 2016