Open Access
27 June 2022
Semiautomated analysis of an optical ATP indicator in neurons
Taher Dehkharghanian, Arsalan Hashemiaghdam, Ghazaleh Ashrafi
Abstract

Significance: The firefly enzyme luciferase has been used in a wide range of biological assays, including bioluminescence imaging of adenosine triphosphate (ATP). The biosensor Syn-ATP utilizes subcellular targeting of luciferase to nerve terminals for optical measurement of ATP in this compartment. Manual analysis of Syn-ATP signals is challenging due to signal heterogeneity and cellular motion in long imaging sessions. Here, we have leveraged machine learning tools to develop a method for analysis of bioluminescence images.

Aim: Our goal was to create a semiautomated pipeline for analysis of bioluminescence imaging to improve measurements of ATP content in nerve terminals.

Approach: We developed an image analysis pipeline that applies machine learning toolkits to distinguish neurons from background signals and excludes neural cell bodies, while also incorporating user input.

Results: Side-by-side comparison of manual and semiautomated image analysis demonstrated that the latter improves precision and accuracy of ATP measurements.

Conclusions: Our method streamlines data analysis and reduces user-introduced bias, thus enhancing the reproducibility and reliability of quantitative ATP imaging in nerve terminals.

1. Introduction

In nature, living organisms as diverse as bacteria, fireflies, copepods, and sea pansies produce natural light through bioluminescence.1 These organisms express the enzyme luciferase that emits visible light when it catalyzes the oxidation of its substrate luciferin powered by adenosine triphosphate (ATP). Bioluminescence imaging is a sensitive technique that relies on the detection of light emitted from the luciferase reaction.2 At saturating luciferin concentrations, luciferase light emission is proportional to ATP level, thus luciferase can be used as a sensitive cellular ATP sensor. Similar to fluorescence, electrons are excited to a higher energy level and emit photons as they return to their resting level.3 However, the excitation energy in bioluminescence is provided by the chemical reaction rather than by exogenous illumination, as in fluorescence [Fig. 1(a)]. As a result, bioluminescence does not suffer from photobleaching of excited molecules or phototoxicity. In fluorescence imaging, biological materials emit significant endogenous fluorescence signals (autofluorescence), particularly in the green emission range. In contrast, bioluminescence does not suffer from this limitation as autoluminescence of most cells is negligibly low.4 Therefore, any signal that is reliably detected can be attributed to luminescence rather than background noise. As such, bioluminescence imaging, despite dimmer signals, is ideally suited for sensitive assays of biological activity.

Fig. 1

Bioluminescence imaging of cytosolic ATP in nerve terminals. (a) The bioluminescence chemical reaction in which the enzyme luciferase uses luciferin and ATP to produce light denoted as hν. (b) Schematic of a hippocampal nerve terminal expressing Syn-ATP in which luciferase is anchored to synaptic vesicles with synaptophysin (physin) and mCherry is used as an inert fluorophore. (c) An optimized dual fluorescence and luminescence microscopy setup (bottom) where a long-pass 590-nm filter replaces an emission filter to maximize luminescence photon collection (top). (d) Representative luminescence and mCherry fluorescence images of a hippocampal neuron (top) and an axon bearing several nerve terminals (bottom). Scale bar, 30  μm.

The first practical application of bioluminescence was the development of a reporter for gene expression using the North American firefly (Photinus pyralis) luciferase, which emits yellow-green light (emission peak at 557 nm).5,6 Since then, mutant bioluminescent reporters emitting red light (>600  nm) have been engineered to improve tissue penetration for in vivo bioluminescence imaging.7 Further modifications to luciferase thermostability and catalytic activity have led to the development of ATP sensors for monitoring subcellular ATP levels.8 Capitalizing on these improvements, a presynaptic ATP sensor, “Syn-ATP,” was developed to monitor ATP levels in the nerve terminals of cultured hippocampal neurons, particularly to investigate the energetic demands of electrical activity.9 Syn-ATP is a genetically encoded optical reporter of ATP, available on Addgene (plasmid # 51819; RRID: Addgene_51819) in which luciferase is targeted to synaptic vesicles through fusion with synaptophysin and additionally tagged with the fluorophore mCherry to normalize for reporter expression level [Fig. 1(b)].

Since its development, imaging data from Syn-ATP assays have been analyzed manually with the software ImageJ and Microsoft Excel. In this method, regions of interest (ROIs) corresponding to individual nerve terminals are individually selected, followed by calculation of fluorescence and luminescence signal intensities over multiple time frames. Raw signal intensities are then subjected to background subtraction. Background-corrected luminescence intensities of individual terminals are then normalized to mCherry fluorescence to correct for variability in Syn-ATP expression and/or changes in the focal plane during imaging. To optimize performance, users need to select the maximal number of mCherry-positive terminals in a recorded field while excluding cell bodies or large cellular clumps. As described above, manual analysis is undesirable because it is time-consuming and subject to user bias. In addition, cellular motion and frame-to-frame movement of the selected ROIs in lengthy (several minutes) experiments complicate data analysis. To address the limitations of manual analysis, we have developed a semiautomated analysis pipeline based on machine-learning algorithms that measures Syn-ATP luminescence signals in individual neurons with appropriate background correction and normalization to fluorescence signals.

2. Materials and Methods

All animal experiments were performed using wild-type rats of the Sprague-Dawley strain in accordance with protocols approved by the IACUC at Washington University School of Medicine in St. Louis. Hippocampi were dissected from 0- to 2-day-old neonatal rats of a mixed (male and female) litter, dissociated, and plated on poly-ornithine-coated coverslips as previously described.10 Hippocampal neurons at DIV 14-20 were mounted in a laminar flow perfusion chamber, maintained at 37°C with an OkoLab stage-top incubator in Tyrode's buffer containing (in mM) 119 NaCl, 2.5 KCl, 2 CaCl2, 2 MgCl2, 50 HEPES (pH 7.4), 2 D-luciferin potassium salt (Gold Biotechnology), 1.25 lactate, and 1.25 pyruvate, supplemented with 10 μM 6-cyano-7-nitroquinoxaline-2,3-dione (CNQX) and 50 μM DL-2-amino-5-phosphonovaleric acid (APV) to inhibit postsynaptic responses. Live imaging of the hippocampal neurons was performed on a custom-built, inverted Olympus IX83 epifluorescence microscope equipped for luminescence and fluorescence imaging. Fluorescence excitation of mCherry was achieved with the TTL-controlled Cy3 channel of a Lumencor Aura III light engine. Both mCherry emission and Syn-ATP luminescence were directed through a Chroma 590-nm long-pass filter and an Olympus UPlan Fluorite 40× 1.3 NA objective. Image acquisition was performed with an Andor iXon Ultra 897 camera cooled to −95°C to minimize camera detection noise. Platinum-iridium electrodes were used to evoke action potentials with 1-ms electrical pulses creating field potentials of 10 V/cm. In each experiment, data were collected from at least 10 coverslips from three independent cultures prepared from separate litters. Unless otherwise indicated, all chemicals were obtained from Sigma-Aldrich. Image analysis was performed with the ImageJ Time Series Analyzer (manual analysis) or the proposed semiautomated algorithm written in Python, using Jupyter Notebook and the scikit-learn machine learning library.11,12 Data visualization and statistical analysis were performed in GraphPad Prism v9.0.

3. Results

Hippocampal neurons expressing the Syn-ATP sensor were imaged for mCherry fluorescence using Cy3 excitation light (10 s at a 2-Hz frame cycle, camera exposure: 20 ms), followed by a single luminescence frame collected with an exposure time of 60 s. Emission light from both luminescence and fluorescence was directed through a 590-nm long-pass filter, instead of a conventional emission filter, to maximize luminescence photon collection [Fig. 1(c)]. This alternating acquisition of fluorescence and luminescence images was repeated throughout the experiment to generate multiple time points. The 20-frame fluorescence movie was averaged into a single image, hereafter referred to as the fluorescence image [Fig. 1(d)].
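For illustration only, a minimal sketch of this frame-averaging step is given below; it is not the published code, and the array shapes, function name, and synthetic data are our own assumptions.

import numpy as np

def average_fluorescence_stack(stack):
    """Average a (frames, height, width) fluorescence stack into one image.

    The stack is assumed to hold the 20 frames acquired at 2 Hz over 10 s;
    how frames are loaded from disk is not specified here.
    """
    return np.asarray(stack, dtype=np.float64).mean(axis=0)

# Example with synthetic data: 20 frames of 512x512 pixels.
fluo_stack = np.random.poisson(100, size=(20, 512, 512))
fluorescence_image = average_fluorescence_stack(fluo_stack)  # shape (512, 512)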

Given that a large, labeled dataset of bioluminescent cells was not publicly available, we implemented unsupervised machine learning algorithms for analysis of Syn-ATP images rather than supervised algorithms for detection of individual nerve terminals. Our analysis pipeline is composed of three main steps: (1) background detection, (2) cell body detection, and (3) signal estimation. To improve consistency, user input is requested to modify the model's output at several steps of the process. Our images have two channels (mCherry fluorescence and luminescence); we used the luminescence channel for background and cell body detection because of its low background signal and then applied the resulting background and cell body masks to the fluorescence images. Details of this semiautomated image analysis pipeline and evaluation of its performance are outlined below.

3.1. Development of a Semiautomated Analysis Pipeline

The first step of the proposed pipeline was to distinguish neurons from the background. The luminescence frame was used as the input image and was downsampled from its original size of 512×512 pixels to 128×128 pixels. Given the camera's physical pixel size of 16.4 μm and the 40× objective, nerve terminals (about 1.5 μm in diameter) span roughly 4 pixels; downsampling therefore reduces the likelihood of misidentifying noise as nerve terminals. The resultant image was then divided into regions of 4×4 pixels, corresponding to 6.4×6.4 μm squares, and each region was represented by its median pixel value.
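As a minimal sketch of one plausible implementation of this downsampling, the snippet below reduces a 512×512 frame by taking the median of each 4×4 block of pixels; the published code may implement the reduction differently.

import numpy as np

def downsample_median(image, block=4):
    """Reduce a square image by replacing each block x block tile with its median.

    For a 512x512 luminescence frame and block=4 this yields the 128x128 image
    used in the subsequent clustering step.
    """
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    tiles = img.reshape(h // block, block, w // block, block)
    return np.median(tiles, axis=(1, 3))

# Example: downsample a synthetic 512x512 frame to 128x128.
lum_frame = np.random.poisson(5, size=(512, 512)).astype(float)
lum_small = downsample_median(lum_frame)  # shape (128, 128)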

In the next step, pixel intensities were transformed into a one-dimensional vector of size 16,384 (128×128). K-means clustering from the scikit-learn Python library was applied to this vector to produce two clusters, one corresponding to the image background and the other to the ROIs.12,13 Since K-means is an unsupervised machine learning algorithm, it does not require any training prior to application. We observed that this clustering scheme accurately distinguished background from ROIs [Fig. 2(a)]. We used luminescence images to create a background mask and applied the same mask to the fluorescence images [Fig. 2(b)]. The background signal in each channel was then calculated as the average intensity of the background pixels.
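The sketch below illustrates this clustering step with scikit-learn's KMeans, which the pipeline is described as using; how the background cluster is identified, and the variable names, are our assumptions rather than the published interface.

import numpy as np
from sklearn.cluster import KMeans

def background_mask_and_level(image):
    """Split a downsampled image into background and signal clusters with k=2
    K-means and return a boolean background mask plus the mean background level."""
    pixels = image.reshape(-1, 1)                        # 16,384-element intensity vector
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
    labels = labels.reshape(image.shape)
    cluster_means = [image[labels == k].mean() for k in (0, 1)]
    bg_mask = labels == int(np.argmin(cluster_means))    # dimmer cluster = background
    return bg_mask, float(image[bg_mask].mean())

# Example on a synthetic 128x128 "luminescence" frame with a few bright spots.
rng = np.random.default_rng(0)
frame = rng.poisson(2, size=(128, 128)).astype(float)
frame[40:44, 60:64] += 50.0                              # mock nerve terminals
bg_mask, lum_background = background_mask_and_level(frame)
# The same mask would then be applied to the matching fluorescence image.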

Fig. 2

An image analysis pipeline for background signal determination and cell body removal. (a) The luminescence image of a neuron was downsampled from 512×512  pixels to 128×128  pixels. K-means clustering algorithm was implemented on pixel values to produce two complementary clusters of background and desired signals. A background mask was applied to remove background signals from the image (black and white panel). Next, the region with the highest total signal intensity was detected and deemed as the cell body. Both background and cell body were removed from further analysis. (b) Background and cell body masks generated from the luminescence image were applied to the fluorescence image.

Syn-ATP is first synthesized in neural cell bodies, where it enters the secretory pathway for trafficking to terminals. However, inclusion of Syn-ATP data from the cell body may be confounding because ATP metabolism in this compartment may differ from that of nerve terminals. Therefore, the second step in our algorithm was to detect the neural cell body and exclude it from further analysis. In this step, the user may set a custom width (d) for the cell body; otherwise, a default value of 32 pixels, corresponding to 50 μm in our setup, is used. A square of size d×d pixels was then moved across the image one pixel at a time, and the mean pixel intensity was calculated for each position. The region with the highest mean intensity was defined as the cell body [Fig. 2(a)]. Because neural cell bodies vary in shape and size, and some images contain no cell body at all, the user is asked to confirm whether the cell body has been detected accurately; if not, the user can manually place a custom-sized bounding box over the cell body. Once detected, the cell body was masked, as was done for the background, and omitted from further analysis in both luminescence and fluorescence images.
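A minimal sketch of this sliding-window search is shown below, assuming the downsampled 128×128 image and the 32-pixel default width; user confirmation and manual bounding boxes are omitted, and the function name and demo data are our own.

import numpy as np

def detect_cell_body(image, d=32):
    """Slide a d x d window one pixel at a time and return a boolean mask marking
    the window with the highest mean intensity (the putative cell body)."""
    h, w = image.shape
    best_mean, best_pos = -np.inf, (0, 0)
    for r in range(h - d + 1):
        for c in range(w - d + 1):
            m = image[r:r + d, c:c + d].mean()
            if m > best_mean:
                best_mean, best_pos = m, (r, c)
    mask = np.zeros(image.shape, dtype=bool)
    r, c = best_pos
    mask[r:r + d, c:c + d] = True
    return mask

# Example: find a bright "soma" region in a synthetic 128x128 frame.
rng = np.random.default_rng(1)
frame = rng.poisson(2, size=(128, 128)).astype(float)
frame[10:42, 10:42] += 30.0                   # mock cell body
cell_body_mask = detect_cell_body(frame)      # excluded from further analysis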

The final step in our pipeline was the determination of signal intensity from masked images. The total intensity of pixels in each of the luminescence and fluorescence images was calculated, followed by subtraction of background intensity to determine net luminescence and fluorescence intensities. This process was iterated for each imaging time point. The L/F value was plotted against time to represent the ATP content of nerve terminals in a single neuron over time:

$$\frac{L}{F} = \frac{\mathrm{Luminescence} - \mathrm{Background}}{\mathrm{Fluorescence} - \mathrm{Background}}.$$
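Under the same assumptions (precomputed background and cell body masks), the net L/F of a single time point could be computed roughly as follows; subtracting the mean background from each retained pixel is our interpretation of the description above, and the stand-in masks are synthetic.

import numpy as np

def net_lf(lum_img, fluo_img, bg_mask, body_mask):
    """Background-subtracted luminescence over fluorescence for one time point,
    excluding background and cell body pixels from the signal."""
    keep = ~bg_mask & ~body_mask                        # pixels treated as nerve-terminal signal
    lum_bg = lum_img[bg_mask].mean()                    # mean background per channel
    fluo_bg = fluo_img[bg_mask].mean()
    net_l = (lum_img[keep] - lum_bg).sum()              # net luminescence
    net_f = (fluo_img[keep] - fluo_bg).sum()            # net fluorescence
    return net_l / net_f

# Example with synthetic images and stand-in masks (in practice the masks
# produced by the previous steps would be used).
rng = np.random.default_rng(2)
lum = rng.poisson(2, size=(128, 128)).astype(float)
fluo = rng.poisson(50, size=(128, 128)).astype(float)
lum[60:64, 80:84] += 40.0                               # mock terminals, bright in both channels
fluo[60:64, 80:84] += 400.0
bg = lum < 10                                           # stand-in background mask
body = np.zeros_like(bg)
body[0:32, 0:32] = True                                 # stand-in cell body mask
lf_value = net_lf(lum, fluo, bg, body)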

Luminescence or fluorescence signals that are too close to the detection limit of the camera are unreliable. Therefore, the pipeline includes a quality-control check on the reported signals: the check is passed if the values classified as signal by K-means clustering are at least three times the variance of the dark current of the EMCCD camera. Because this threshold depends on the specific camera used in the experiments, it can be modified by the user.
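One way such a check might be written is sketched below, assuming the user supplies the dark-current variance of their camera; the factor of three follows the text, while comparing the mean of the signal values (rather than, say, the minimum) is our choice.

import numpy as np

def passes_quality_check(signal_values, dark_current_variance, factor=3.0):
    """Return True if the pixels classified as signal by K-means are, on average,
    at least `factor` times the camera's dark-current variance."""
    return float(np.mean(signal_values)) >= factor * dark_current_variance

# Example: hypothetical signal pixels and a camera-specific variance estimate.
signal_pixels = np.array([12.0, 15.0, 9.0, 20.0])
print(passes_quality_check(signal_pixels, dark_current_variance=3.5))  # True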

It is important to note that while L/F is directly proportional to the ATP concentration in nerve terminals, the absolute L/F value depends on image acquisition parameters that may need to be modified during a project. For modifications that produce a linear change in signal intensity, such as adjustments to the camera's electron-multiplying (EM) gain, we introduced a simple true/false argument that rescales the calculated ATP levels by a user-defined coefficient.
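A minimal sketch of such a flag is given below; the argument names and the idea of a single multiplicative coefficient are assumptions about the interface, not the published one.

def corrected_lf(lf_value, apply_gain_correction=False, gain_coefficient=1.0):
    """Optionally rescale an L/F value by a user-supplied coefficient, e.g. to
    compensate for a linear change in signal when the camera EM gain is altered."""
    return lf_value * gain_coefficient if apply_gain_correction else lf_value

# Example: a recording acquired at half the usual EM gain.
print(corrected_lf(0.42, apply_gain_correction=True, gain_coefficient=2.0))  # 0.84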

3.2. Performance Evaluation

To evaluate the performance of our analysis pipeline, raw fluorescence and luminescence images from neural samples (n=26 neurons) were analyzed both manually and with our semiautomated tool. The experiment consisted of four baseline 1-min time points, followed by 1 min of electrical stimulation at 10 Hz applied between time points 4 and 5. Neurons were imaged for three additional time points after stimulation. The reduction in L/F values [Fig. 3(a)] during electrical stimulation was previously attributed to acidification of the cytosol, which reduces the enzymatic activity of luciferase given that its pKa (7.03) is close to the cytosolic pH (6.8). Indeed, both manual and semiautomated analysis methods detect a decline in L/F values during activity, which can be fully corrected by taking into account the pH effects using previously described correction factors [Fig. 3(b)].9

Fig. 3

Quantitative comparison of Syn-ATP image analysis by manual and semiautomated methods. Hippocampal neurons expressing Syn-ATP (n=26) were imaged for 8 min and were electrically stimulated for 1 min at 10 Hz frequency (crimson bar). (a) Average traces of Syn-ATP luminescence normalized by fluorescence intensity (L/F) analyzed by manual and semiautomated methods (n=52 neurons). (b) L/F traces were corrected for cytosolic pH changes that occur during electrical stimulation. (c) Baseline prestimulation L/F values obtained from semiautomated analysis were significantly higher than manual analysis (paired t-test: p=0.0006). (d) Semiautomated analysis yielded higher background fluorescence values than manual analysis (paired Wilcoxon test: p=0.002, n=10 neurons) while not affecting luminescence background determination (paired Wilcoxon test: p=0.084, n=10 neurons). (e) Measurement variability of prestimulus L/F values was determined as % deviation from the mean of each neuron (ΔL/F), indicating lower variability with semiautomated analysis (paired t-test: p=0.001, n=52 data points). (f) Measurement validity of the semiautomated method was assessed by comparing z-scores of two populations of control and mutant neurons with different baseline L/F values, indicating significantly lower z-scores for the mutant (unpaired t-test, p=0.001, ctrl=32 neurons, mutant=10 neurons).

Compared with manual analysis, our semiautomated method generated higher L/F values [Fig. 3(c)]. We speculated that this was at least partly due to more precise determination of background signals by our clustering algorithm. We examined this hypothesis with a subset of randomly selected neurons from our dataset and found that background fluorescence values calculated with our semiautomated pipeline were 20% higher than those obtained manually (manual: 921±87 units; semiautomated: 1120±142 units; p-value=0.002) [Fig. 3(d)]. In contrast, background luminescence values were not significantly different between the two methods (p-value=0.08) [Fig. 3(d)]. The higher fluorescence background values result in lower background-subtracted F values when using the semiautomated pipeline, thus raising the L/F compared with manual measurements, as we observed [Fig. 3(c)].

A major challenge in manual analysis of Syn-ATP is the high variability in L/F measurements of the same neuron over time. To determine whether semiautomated analysis yields more consistent values, we compared the variability of L/F measurements obtained from our semiautomated pipeline to that of manual analysis. In our experiments, the time points prior to stimulation represent baseline ATP levels with minimal biological variation. We calculated the L/F values of the initial three time points using both methods and determined the measurement variability (ΔL/F) as the % deviation from the mean baseline L/F value for each neuron. The semiautomated approach significantly decreased measurement variability (p-value=0.001), thus increasing the reliability of Syn-ATP analysis. The variation among the three baseline data points across cells (n=52) was 3.4±0.9% and 6.8±0.6% for semiautomated and manual analysis, respectively [Fig. 3(e)]. These findings demonstrate that our pipeline improves measurement consistency by minimizing user sampling bias.
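As a sketch of this variability metric, assuming ΔL/F is the mean absolute % deviation of a neuron's baseline L/F values from their mean (the exact definition is not spelled out above), one could write:

import numpy as np

def baseline_variability_percent(baseline_lf):
    """Mean absolute % deviation of a neuron's baseline L/F values from their mean."""
    lf = np.asarray(baseline_lf, dtype=float)
    return float(np.mean(np.abs(lf - lf.mean())) / lf.mean() * 100)

# Example: three pre-stimulation L/F measurements from one neuron.
print(round(baseline_variability_percent([0.52, 0.55, 0.53]), 1))  # ~2.1% deviation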

We then sought to assess the validity of the semiautomated pipeline in correctly identifying differences in L/F levels between distinct neural populations. Syn-ATP experiments were performed in a population of control neurons and neurons carrying a mitochondrial mutation that impairs mitochondrial ATP production. Following determination of baseline L/F values as in Fig. 3(c), the two populations were combined, the mean and standard deviation of baseline L/F values were calculated, and the z-score of each neuron was determined. Comparison of z-scores for control and mutant neurons revealed significantly lower z-scores in the mutant (control: 0.027±0.17; mutant: −1.81±0.17; p-value=0.001). Therefore, we conclude that our semiautomated method successfully distinguishes differences in Syn-ATP L/F values of distinct genotypes [Fig. 3(f)].
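A sketch of this comparison is shown below, assuming the z-scores are computed against the mean and standard deviation of the pooled baseline L/F values and the groups are compared with an unpaired t-test; the data here are synthetic.

import numpy as np
from scipy import stats

def population_zscores(control_lf, mutant_lf):
    """Z-score each neuron's baseline L/F against the pooled mean and SD of both
    populations, then compare the two groups with an unpaired t-test."""
    control_lf = np.asarray(control_lf, dtype=float)
    mutant_lf = np.asarray(mutant_lf, dtype=float)
    pooled = np.concatenate([control_lf, mutant_lf])
    mu, sigma = pooled.mean(), pooled.std(ddof=1)
    z_ctrl, z_mut = (control_lf - mu) / sigma, (mutant_lf - mu) / sigma
    _, p_value = stats.ttest_ind(z_ctrl, z_mut)
    return z_ctrl, z_mut, p_value

# Example with synthetic baseline L/F values (arbitrary units).
rng = np.random.default_rng(3)
ctrl = rng.normal(1.0, 0.1, size=32)      # control neurons
mut = rng.normal(0.6, 0.1, size=10)       # mutant neurons with impaired ATP production
_, _, p = population_zscores(ctrl, mut)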

4. Discussion and Conclusions

Syn-ATP represents a powerful and robust application of bioluminescence imaging for measurement of ATP levels in nerve terminals. However, analysis of Syn-ATP data has been challenging due to the potential for user selection bias. Standardizing image analysis through advanced computational methods, particularly machine learning, would improve data accuracy and reproducibility. Here, we developed a semiautomated pipeline that facilitates the analysis of dual fluorescence and luminescence images. First, our pipeline enables the user to determine background signals in both channels in an unbiased manner. Second, it enables the user to mask specified regions such as the cell body of a neuron. In imaging sessions that last for several minutes, axonal movement shifts the position of nerve terminals in the field of view and poses challenges for manual tracking of several data points in an image stack. Our approach circumvents this problem by analyzing the entire image rather than individual user-selected ROIs representing nerve terminals.

Despite its advantages, we acknowledge that our semiautomated approach has its own limitations. For example, sudden changes in the background signal interfere with the code and reduce the reliability of the results. Furthermore, user input is required to validate selection of the cell body. Our future endeavors with expanded datasets would be directed toward full automation of the code as well as resolving technical issues that arise from faulty image acquisition.

In summary, we have developed a semiautomated pipeline for analysis of dual fluorescence/luminescence imaging of ATP in nerve terminals. Our pipeline analyzes signals in the entire field of view thereby reducing user sampling error. It also standardizes background signal measurement and reduces variability in measurement of ATP level in nerve terminals.

The code is publicly available on the GitHub repository: https://github.com/ashrafilab/SynATP-Analysis and can be modified for customized bioluminescence image analysis.

Disclosure

The authors declare no competing interests.

Acknowledgments

This study was supported by the McDonnell Center for Cellular and Molecular Neuroscience Small Grants Program and the Klingenstein-Simons Fellowship Award in Neuroscience. The authors would like to thank Javid Dadashkarimi for proofreading the manuscript and Marissa Laramie for preparation of neural cultures. Figure 1 was prepared using the BioRender software (Biorender.com). T.D. and A.H. share co-first authorship. G.A., A.H., and T.D. conceptualized and designed the method and wrote the manuscript. T.D. wrote the code in Python. G.A. and A.H. performed experiments and collected the data. G.A. is the corresponding author and supervised this study.

5. Code, Data, and Materials Availability

The image analysis code is available at the Ashrafi Lab's GitHub repository (https://github.com/ashrafilab/SynATP-Analysis).

References

1. T. Wilson and J. W. Hastings, "Bioluminescence," Annu. Rev. Cell Dev. Biol. 14, 197–230 (1998). https://doi.org/10.1146/annurev.cellbio.14.1.197

2. D. K. Welsh and T. Noguchi, "Cellular bioluminescence imaging," Cold Spring Harb. Protoc. 2012(8), pdb.top070607 (2012). https://doi.org/10.1101/pdb.top070607

3. C. E. Badr and B. A. Tannous, "Bioluminescence imaging: progress and applications," Trends Biotechnol. 29(12), 624–633 (2011). https://doi.org/10.1016/j.tibtech.2011.06.010

4. T. Troy et al., "Quantitative comparison of the sensitivity of detection of fluorescent and bioluminescent reporters in animal models," Mol. Imaging 3(1), 9–23 (2004). https://doi.org/10.1162/153535004773861688

5. J. R. de Wet et al., "Firefly luciferase gene: structure and expression in mammalian cells," Mol. Cell. Biol. 7(2), 725–737 (1987). https://doi.org/10.1128/mcb.7.2.725-737.1987

6. B. R. Branchini et al., "Thermostable red and green light-producing firefly luciferase mutants for bioluminescent reporter applications," Anal. Biochem. 361(2), 253–262 (2007). https://doi.org/10.1016/j.ab.2006.10.043

7. B. R. Branchini et al., "Red-emitting luciferases for bioluminescence reporter and imaging applications," Anal. Biochem. 396(2), 290–297 (2010). https://doi.org/10.1016/j.ab.2009.09.009

8. P. Jain et al., "Bioluminescence microscopy as a method to measure single cell androgen receptor activity heterogeneous responses to antiandrogens," Sci. Rep. 6, 33968 (2016). https://doi.org/10.1038/srep33968

9. V. Rangaraju, N. Calloway, and T. A. Ryan, "Activity-driven local ATP synthesis is required for synaptic function," Cell 156(4), 825–835 (2014). https://doi.org/10.1016/j.cell.2013.12.042

10. T. A. Ryan, "Inhibitors of myosin light chain kinase block synaptic vesicle pool mobilization during action potential firing," J. Neurosci. 19(4), 1317–1323 (1999). https://doi.org/10.1523/JNEUROSCI.19-04-01317.1999

11. T. Kluyver et al., "Jupyter Notebooks – a publishing format for reproducible computational workflows," in Positioning and Power in Academic Publishing: Players, Agents and Agendas, pp. 87–90, IOS Press (2016).

12. F. Pedregosa et al., "Scikit-learn: machine learning in Python," J. Mach. Learn. Res. 12, 2825–2830 (2011). https://doi.org/10.5555/1953048.2078195

13. S. Lloyd, "Least squares quantization in PCM," IEEE Trans. Inf. Theory 28(2), 129–137 (1982). https://doi.org/10.1109/TIT.1982.1056489

Biography

Taher Dehkharghanian is a postdoctoral research fellow at McMaster University. He earned his MD from the Tehran University of Medical Sciences in 2016 and completed his master's in computer science at Ontario Tech University. He is currently doing research at the intersection of artificial intelligence and medical image analysis. His research interest is in AI ethics, particularly explainable and interpretable AI.

Arsalan Hashemiaghdam completed his medical degree at Tehran University of Medical Sciences and continued his research at Massachusetts General Hospital, where he studied the role of ER stress in glioblastoma. Afterward, he moved to Yale University to investigate the role of microglia in neurodegeneration. He joined the Ashrafi lab in 2020 to study regulatory mechanisms of mitochondrial ATP production in nerve terminals. He is currently pursuing clinical residency in neurology at Tufts Medical Center.

Ghazaleh Ashrafi is an assistant professor in the Departments of Cell Biology and Physiology, and Genetics, at Washington University School of Medicine in St. Louis, and her laboratory studies energy metabolism in nerve terminals. She received her PhD from Harvard University investigating the role of Parkinson's disease-related genes in axonal mitochondrial trafficking and turnover. In her postdoctoral fellowship at Weill Cornell Medicine, she studied how glycolytic and mitochondrial energy production is regulated in firing synapses.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Taher Dehkharghanian, Arsalan Hashemiaghdam, and Ghazaleh Ashrafi "Semiautomated analysis of an optical ATP indicator in neurons," Neurophotonics 9(4), 041410 (27 June 2022). https://doi.org/10.1117/1.NPh.9.4.041410
Received: 5 October 2021; Accepted: 3 June 2022; Published: 27 June 2022
Keywords: Luminescence, Acquisition tracking and pointing, Neurons, Nerve, Bioluminescence, Image analysis, Biological research
