Significance: Cerebral blood flow is an important biomarker of brain health and function, as it regulates the delivery of oxygen and substrates to tissue and the removal of metabolic waste products. Moreover, blood flow changes in specific areas of the brain are correlated with neuronal activity in those areas. Diffuse correlation spectroscopy (DCS) is a promising noninvasive optical technique for monitoring cerebral blood flow and for measuring cortical activation during functional tasks. However, adoption of current state-of-the-art DCS is hindered by a trade-off between sensitivity to the cortex and signal-to-noise ratio (SNR).
Aim: We aim to develop a scalable method that increases the sensitivity of DCS instruments.
Approach: We report on a multispeckle DCS (mDCS) approach that is based on a 1024-pixel single-photon avalanche diode (SPAD) camera. Our approach is scalable to >100,000 independent speckle measurements since large-pixel-count SPAD cameras are becoming available, owing to the investments in LiDAR technology for automotive and augmented reality applications.
Results: We demonstrated a 32-fold increase in SNR with respect to traditional single-speckle DCS.
Conclusion: An mDCS system based on a SPAD camera serves as a scalable method toward high-sensitivity DCS measurements, thus enabling both high sensitivity to the cortex and high SNR.
Measuring cerebral blood flow noninvasively and with high sensitivity is critical for clinical applications such as measuring the oxygen metabolic rate1,2 and monitoring intracranial pressure.3,4 Furthermore, although neuroscience applications such as functional activation mapping5,6 and noninvasive brain–computer interfaces7,8 have been pursued primarily using functional magnetic resonance imaging (fMRI) and functional near-infrared spectroscopy (fNIRS), such applications could in principle benefit from functional cerebral blood flow measurements.9–11 Diffuse correlation spectroscopy (DCS)12 is a promising noninvasive optical technique for monitoring cerebral blood flow13,14 and for measuring cortical functional activation during finger tapping9 and visual stimulation10,11 tasks. DCS measures deep-tissue dynamics by coupling coherent light into the subject and measuring the fluctuations in the speckle field created by the light diffusing out of the subject.12,15,16 Increasing the source–detector separation (ρ) of DCS optodes increases the proportion of detected photons that travel beneath the scalp and skull, deep into the brain cortex.17–19 However, the increase in sensitivity to deep tissue comes at the expense of a decrease in the signal-to-noise ratio (SNR) of the measured signal. Indeed, fewer diffuse photons reach the detector when the detector is farther away from the source due to absorption, scattering, and radial spread from the source. To increase the brain sensitivity of DCS, we developed multispeckle DCS (mDCS), a method that can extend ρ without sacrificing SNR by performing thousands of independent speckle measurements in parallel.
When we inject coherent light into a dynamic scattering medium such as those shown in Figs. 1(a) and 1(e), a dynamic speckle pattern emerges, as shown in Fig. 1(b). DCS estimates the speed of the scatterers by measuring the correlation time constant of the speckles (τc), which depends on the speed of the scatterers.16 We can estimate τc by measuring the intensity of a speckle versus time and by calculating the corresponding intensity autocorrelation function, g2(τ), as shown in Figs. 1(c) and 1(d).
Based on the correlation noise model,21,22 we can increase the SNR of the g2(τ) curve [defined as the ratio between the mean and the standard deviation of g2(τ) calculated over several integration periods] by increasing the detected photon count rate (n) and the integration time (the time over which the intensity of the speckle field is recorded for each g2 calculation, Tint). However, Tint is limited by the time scales of the dynamics we need to measure (for example, Tint must be short enough to resolve pulsatility), and n is limited by the skin and eye maximum permissible exposure (MPE) to laser radiation.23 Therefore, to increase the SNR we need to consider parameters other than n and Tint.
Traditional DCS systems employ single-mode fibers (SMFs) to couple one speckle onto a single-photon detector. There have been attempts to increase n using a multimode fiber (MMF) to couple multiple speckles onto the same detector.21,24,25 Measuring multiple speckles on a single detector indeed increases n linearly with the number of detected speckles, but at the same time it decreases the coherence factor β of g2(τ) by the same factor. Under typical experimental conditions for DCS, the increase in n and the decrease in β compensate each other, so measuring multiple speckles with the same detector does not improve the SNR of g2(τ) (see Sec. 3.2 for more details on the SNR of DCS measurements). Therefore, we will henceforth refer to the above systems (one speckle/one detector and multiple speckles/one detector) as single-speckle DCS systems. mDCS is a promising new method to improve the SNR of DCS through multiple independent measurements of the speckle field. The mDCS technique has been pursued in various ways using 28 stand-alone single-photon avalanche diodes (SPADs),10,26 8 stand-alone SPADs,27 4 pixels of a SPAD array,28 and interferometric near-infrared spectroscopy detecting 20 speckles with a CMOS camera.29 While these prior approaches are important proofs of principle of mDCS, the largest improvement in SNR reported so far is an SNR gain (defined as the ratio between the SNR achieved with mDCS and with single-speckle DCS using detectors with the same performance) of 5.26 Here, we report on a method to perform 1024 independent DCS measurements using a kilopixel SPAD camera, which provides an SNR gain of 32. Our technique allows for DCS measurements with 36 times more speckles and approximately six times higher SNR gain with respect to the current state-of-the-art for mDCS. Our technique is up to 32 times more sensitive than single-speckle DCS using state-of-the-art high-efficiency single-photon detectors, depending on the count rate.
With the ongoing development of megapixel SPAD technology,30,31 our approach offers a scalable implementation toward mDCS instruments with orders of magnitude more speckles, and correspondingly higher SNR gain, than both the current state-of-the-art mDCS and single-speckle DCS.
Materials and Methods
Figures 1(a) and 1(e) show our experimental setup. We coupled a 785-nm wavelength CW laser to the scattering media with an SMF. We collected the light diffusing out of the scattering media with an MMF coupled to a SPAD camera (PF32, Photon-Force). The scattering media were a diffuser plate to calibrate the diameter of the speckles [Fig. 1(a)] and a liquid phantom to measure the SNR gain [Fig. 1(e)] (see Appendix A for details on the phantom setup).
To ensure that each pixel of the SPAD camera provided an independent measurement of the speckle field, we adjusted the average diameter of the speckles on the SPAD camera [d, see Fig. 1(b)] to be equal to or smaller than the size of the pixels of the SPAD camera (s; see Appendix A for details on the SPAD camera). The average diameter of a speckle obeys the following relationship:32

d = λz/D, (1)

where λ is the laser wavelength, z is the fiber-SPAD distance, and D is the detector fiber core diameter. We calibrated d using the setup in Fig. 1(a) (see Appendixes B and C for details on the speckle diameter calibration). Based on Eq. (1), the diameter does not depend on the particular medium used, as long as z and D are fixed.
We measured the SNR gain of our mDCS system using a liquid phantom in a reflection geometry, as shown in Fig. 1(e), to better mimic the dynamics and optical properties of human tissue. The fiber-SPAD distance was set so that the average speckle diameter matched the pixel size (see Appendix B). For the SNR gain measurements, the SPAD camera recorded the dynamics of the speckle field for up to 8 s.
Calculating the Intensity Autocorrelation Functions
We calculated the intensity autocorrelation function as a function of time lag τ for each pixel j of the camera:33,34

g2,j(τ) = ⟨Ij(t) Ij(t + τ)⟩ / ⟨Ij(t)⟩², (2)

where Ij(t) is the photon count of pixel j at time t and ⟨·⟩ denotes a time average over the integration period. We calculated the ensemble average of the g2,j(τ) curves measured over Npx pixels as

g2(τ) = (1/Npx) Σj g2,j(τ). (3)
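As an illustration, the per-pixel autocorrelation and ensemble average defined above can be computed from an array of photon counts (frames × pixels). The sketch below is a minimal NumPy implementation; the function and variable names are ours, not from the instrument software:

```python
import numpy as np

def g2_per_pixel(counts, max_lag):
    """Intensity autocorrelation g2_j(tau) for one pixel's photon-count trace.

    counts  : 1D array of photon counts per frame (intensity vs. time).
    max_lag : number of time-lag bins (in frames).
    Returns g2 at lags 1..max_lag, normalized by the squared mean intensity.
    """
    I = counts.astype(float)
    mean_sq = I.mean() ** 2
    g2 = np.empty(max_lag)
    for k in range(1, max_lag + 1):
        g2[k - 1] = np.mean(I[:-k] * I[k:]) / mean_sq
    return g2

def g2_ensemble(frames, max_lag):
    """Ensemble-average g2(tau) over all pixels.

    frames : array of shape (n_frames, n_pixels), one column per pixel.
    """
    n_pixels = frames.shape[1]
    return np.mean(
        [g2_per_pixel(frames[:, j], max_lag) for j in range(n_pixels)], axis=0
    )
```

For a constant (perfectly uncorrelated) trace this returns g2 = 1 at all lags, as expected from Eq. (2).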
Results and Discussion
SNR Gain using a SPAD Camera
Figure 2(a) shows a representative g2,j(τ) curve, and Fig. 2(b) shows the ensemble average measured over all of the pixels, g2(τ). We measured g2 with Tint = 50 ms and repeated the measurement for up to 8 s, for a total of 160 measurements per pixel. We calculated the time average and standard deviation of g2 over all of the 160 integration periods. The standard deviation of g2,j(τ) [Fig. 2(a)] was about 32 times larger than that of g2(τ) [Fig. 2(b)] for the same Tint. Figure 2(c) shows the SNR measured for increasing ensemble size (solid circles) and the SNR calculated using our mDCS noise model (to be discussed below) with no fitting parameters (solid lines). The SNR increased with increasing ensemble size, which is consistent with the prediction of our model. We estimated the SNR gain by calculating the ratio between the SNR of g2(τ) and that of g2,j(τ) at the first time-lag bin. Figure 2(d) shows that the measured SNR gain (solid circles) increased with the number of pixels as √Npx (solid line) and reached a maximum value of 32 when we used all of the 1024 pixels of the camera, in agreement with our prediction.
Multispeckle DCS Noise Model
Zhou et al.21,22 reported a noise model for single-speckle DCS that allows for estimating the SNR of g2(τ) from several experimental parameters that can be measured independently. To evaluate our findings quantitatively, we extended the noise model to mDCS by assuming that each speckle of the speckle field is an independent realization of the same ergodic random process (see below for details of the model). For ergodic random processes, ensemble and time statistics are the same, so observing one speckle for Npx × Tint is equivalent to observing Npx speckles for Tint. Our model predicts that the SNR of g2 at the first time-lag bin is proportional to n√(Npx Tint) in the shot-noise limited regime, in which most in vivo DCS measurements operate (see below for details of the noise components of the speckle field).
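The ergodicity argument can be checked numerically. The sketch below (our own toy model, not the paper's code) simulates each speckle as the intensity of a complex Gaussian AR(1) field and compares the SNR of g2 at the first time lag for Npx pixels over one integration period against one pixel observed Npx times longer:

```python
import numpy as np

def speckle_intensity(n_frames, tau_c, rng):
    """Intensity of one speckle: |E|^2, with E a complex AR(1) Gaussian field
    whose field correlation time is tau_c (in frames)."""
    a = np.exp(-1.0 / tau_c)
    w = np.sqrt(1.0 - a ** 2)
    E = rng.standard_normal() + 1j * rng.standard_normal()
    I = np.empty(n_frames)
    for t in range(n_frames):
        I[t] = np.abs(E) ** 2
        E = a * E + w * (rng.standard_normal() + 1j * rng.standard_normal())
    return I

def g2_first_lag(I):
    return np.mean(I[:-1] * I[1:]) / np.mean(I) ** 2

def snr_first_lag(n_pixels, frames_per_period, n_periods, tau_c, rng):
    """Mean/std of the ensemble-averaged g2 at the first lag over many periods."""
    vals = [
        np.mean([g2_first_lag(speckle_intensity(frames_per_period, tau_c, rng))
                 for _ in range(n_pixels)])
        for _ in range(n_periods)
    ]
    return np.mean(vals) / np.std(vals)

rng = np.random.default_rng(7)
snr_1px = snr_first_lag(1, 200, 100, 10.0, rng)   # 1 speckle, one Tint
snr_4px = snr_first_lag(4, 200, 100, 10.0, rng)   # 4 speckles, one Tint
snr_long = snr_first_lag(1, 800, 100, 10.0, rng)  # 1 speckle, 4x longer Tint
# Ergodicity: snr_4px and snr_long should agree, both ~2x snr_1px.
```

The parameter values (correlation time, trace lengths) are arbitrary; the point is only that pixel-ensemble averaging and time averaging yield the same √-law SNR improvement.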
Under the assumption of speckle ergodicity, we extended the single-speckle DCS noise model21,22 to mDCS by including the number of independent speckle field measurements (Npx) alongside the integration time (Tint) in the expression for the standard deviation of g2(τ) [Eq. (4); see Appendix E for more details on the functional forms of g2(τ)].
We rewrote Eq. (4) in a form that makes the dependence on Npx and on the count rate n explicit [Eq. (5)], which we used to generate the model predictions shown in Fig. 3(b).
The mDCS noise model is based on the assumption that the dominant source of noise is the intrinsic statistical temporal fluctuation of the speckle field, which is due to shot noise and speckle noise. Thus, the validity of the noise model is limited to measurements with negligible extrinsic noise sources, such as dark counts from the detector, after-pulsing, pixel cross-talk, and background photons. Depending on the relative magnitudes of the extrinsic and intrinsic noises, increasing the number of speckles coupled to a pixel may have different effects on the SNR of g2. If shot noise is dominant, measuring multiple speckles on a single detector increases n linearly with the number of detected speckles, but at the same time it decreases the coherence factor β by the same factor. Since these two effects approximately compensate each other, measuring multiple speckles with the same detector does not improve the SNR of g2. If speckle noise is dominant, the number of speckles coupled to a pixel does not affect the SNR because, in this regime, the SNR does not depend on n or β. Finally, if extrinsic noise is dominant, coupling multiple speckles to a pixel may increase the SNR by increasing n.
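The trade-off described above for the shot-noise regime (n up by a factor m, β down by the same factor) can be seen in a toy simulation: summing m independent speckle intensities on one "detector" reduces the measured g2(τ) − 1 by roughly 1/m. This is our own illustrative model, not the paper's code:

```python
import numpy as np

def speckle_intensity(n_frames, tau_c, rng):
    """Intensity of one speckle: |E|^2 of a complex AR(1) Gaussian field
    with field correlation time tau_c (in frames)."""
    a = np.exp(-1.0 / tau_c)
    w = np.sqrt(1.0 - a ** 2)
    E = rng.standard_normal() + 1j * rng.standard_normal()
    I = np.empty(n_frames)
    for t in range(n_frames):
        I[t] = np.abs(E) ** 2
        E = a * E + w * (rng.standard_normal() + 1j * rng.standard_normal())
    return I

def beta_first_lag(I):
    """g2 at the first lag minus 1: an estimate of the coherence factor."""
    return np.mean(I[:-1] * I[1:]) / np.mean(I) ** 2 - 1.0

rng = np.random.default_rng(3)
n_frames, tau_c = 40000, 10.0
one = speckle_intensity(n_frames, tau_c, rng)
four = sum(speckle_intensity(n_frames, tau_c, rng) for _ in range(4))
b1 = beta_first_lag(one)   # ~exp(-2/tau_c) for a single speckle
b4 = beta_first_lag(four)  # ~b1/4: four speckles summed on one detector
```

The fourfold count rate of the summed trace would reduce the relative shot noise by the same factor that β shrinks, which is why the SNR of g2 is left unchanged.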
Since τc in brain tissue is in the range of tens of microseconds, and becomes shorter for larger source–detector distances ρ,26 most in vivo DCS measurements are likely to operate in the shot-noise limited regime. Indeed, for typical values of τc, β, and Tint, the threshold count rate separating the shot-noise and speckle-noise regimes is over an order of magnitude larger than the count rates reported for DCS measurements on an adult human head at large ρ (one speckle per detector).26
To support our ergodicity assumption and validate our model experimentally, we compared the SNR of g2,j and g2 at the first time-lag bin for increasing Tint or n. Figure 3(a) shows the SNR versus Tint for ensemble sizes Npx = 1, 32, and 1024. Increasing the ensemble size by 32 times from 1 to 32 at fixed Tint (see arrow from point A to point B) resulted in the same SNR gain that would be achieved with one pixel and 32 times longer Tint (see arrow from point A to point C), as predicted by our model. This result supports our ergodicity hypothesis that pixel ensemble averaging and time averaging result in the same SNR gain. Similarly, the SNR versus n curves in Fig. 3(b) show that increasing the ensemble size by 878 times is equivalent to increasing the count rate of one pixel by about 30 (√878) times. The solid lines in Fig. 3 are predictions based on our model with no fitting parameters.
The measurements shown in Fig. 3(b) also illustrate how our ensemble averaging technique is applicable to photon-starved applications, such as the detection of deep-tissue dynamics. Figure 3(b) shows that the SNR measured with only one pixel at a high photon count rate (point D) is comparable to the SNR measured by averaging over 878 pixels with as few as four detected photons per pixel per Tint (point E). mDCS reached the predicted SNR gain even in the case of an unprecedentedly small number of detected photons available to calculate g2 for each pixel.
In summary, Figs. 2(c), 2(d), 3(a), and 3(b) show that our mDCS noise model (solid lines) predicted the experimental results (solid circles) well without any fitting parameters. In particular, we confirmed experimentally the mDCS noise model prediction that the SNR is proportional to n√(Npx Tint) in the shot-noise limit. Our results confirm that we can increase the SNR beyond the limits imposed by the integration time and the photon count rate by increasing the ensemble size Npx.
Conclusions and Outlook
We have demonstrated a scalable method for mDCS measurements that enhances the SNR by a factor of 32 with respect to single-speckle DCS using a kilopixel SPAD camera. Our technique can be used to achieve higher sensitivity to the cortex than traditional DCS by using a larger source–detector distance (ρ), while maintaining high SNR. We have also extended the DCS noise model to the case of multiple independent speckle field measurements and validated our model experimentally. Our results indicate that our mDCS model works well even in extremely photon-starved conditions (down to four detected photons per pixel available to calculate g2), thus enabling measurements at a large ρ.
Due to the recent investments in LiDAR technology for automotive and consumer electronics applications, high-performance large-pixel-count SPAD cameras with improved detection efficiency are rapidly becoming more available and less expensive.30,37 While the first megapixel SPAD array was recently reported,31 for mDCS to take full advantage of the extraordinary advances of SPAD camera technology, we envision the need for real-time data compression schemes implemented in the read-out FPGA of the SPAD camera or directly on chip.30 Finally, our mDCS technique can also be implemented in the time domain to enhance sensitivity to deep tissue.38–41
Appendix A: Experimental Setup
In our experiments, we used a high-coherence-length, 785-nm wavelength CW laser (Thorlabs, DBR785P) that was coupled to dynamic scattering media with an SMF (730-nm cutoff wavelength). The source power was adjusted by an attenuator to produce up to 33 mW at the output of the SMF. We collected the light diffusing out of the dynamic scattering media with an MMF (0.22 NA) coupled to a SPAD camera (Photon-Force, PF32). The SPAD camera consisted of 1024 SPADs arranged in a 32 × 32 array. Each pixel was surrounded by the electronics necessary to bias and quench the SPAD, as well as to record photon detection events, with the detection efficiency specified at an excess bias of 1.7 V and a wavelength of 785 nm. About 86% of the pixels had an average dark count rate (DCR) of 24 cps (see Appendix D). Each pixel was configured to record between 0 and 127 detection events per frame. We ran the measurements in photon-counting mode using frame exposure times in the range of 4 to 10 μs, with corresponding frame rates between 250 and 100 kfps. As shown in Fig. 1, our scattering media were a diffuser plate to calibrate the diameter of the speckles [Fig. 1(a)] and a liquid phantom to measure the SNR gain [Fig. 1(e)].
The diffuser plate consisted of a 1-in.-diameter ground glass diffuser (Thorlabs, DG10-120). The source fiber (SMF), the diffuser phantom, and the detector fiber (MMF) were arranged in a transmission geometry with a 10-mm gap between the phantom and the detector optode. The other end of the detector fiber was coupled to the SPAD camera with a translating lens tube that allowed us to adjust the MMF-SPAD distance up to 200 mm. The image of the speckles was recorded using the SPAD camera. The diameter of the speckles was determined using two-dimensional (2D) spatial autocorrelation of the speckle pattern, as described in Appendix B. To measure the speckle turnover time trace, we mounted the diffuser plate on a motorized rotation stage and rotated the plate at a fixed angular speed.
The liquid phantom was a mixture of milk and water with a volume ratio of 1:8 in a black Noryl plastic container at room temperature; we did not actively stabilize the temperature of the phantom. We brought the connectors of the source (SMF) and detector (MMF) fibers into contact with the surface of the mixture, with a fixed source–detector distance. The other end of the MMF optode was coupled to the SPAD camera at an MMF-SPAD distance chosen to match the speckle diameter to the pixel size (d = s; see Appendix B).
Appendix B: Calibrating the Diameter of the Speckles Using 2D Spatial Autocorrelation
In this experiment, we used a diffuser plate in transmission geometry with a 785-nm CW laser source, an SM source fiber optode (0.13 NA), and an MM detector fiber optode (0.50 NA). The size of the speckles could be adjusted by tuning the fiber-SPAD distance via the well-known relationship32 d = λz/D, where d is the average diameter of the speckles, λ is the laser wavelength, z is the fiber-SPAD distance, and D is the MMF detector fiber core diameter. Figure 4(a) shows three images of the speckle pattern recorded by the SPAD camera. As predicted, the average size of the speckles increased when we increased z. We quantified the average diameter of the speckles by calculating the 2D spatial autocorrelation function of each image, as shown in Fig. 4(b). Figure 4(c) shows a linecut from panel (b) and a Gaussian curve fit that gives the estimated d from the width of the Gaussian. We repeated the measurement and fitting procedure at increasing z for three different MMF detector fiber core diameters. As shown in Fig. 4(d), the measured diameters (solid circles) were in good agreement with the diameters calculated from d = λz/D (solid lines). The use of the 2D spatial autocorrelation was limited to speckle diameters larger than the pixel pitch (50 μm). We used the calibrated relationship to estimate smaller speckle diameters. We applied this calibration to the milk phantom experiments, for which we adjusted z to obtain a speckle diameter matching the length of the pixel active area (d = s).
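The calibration relationship can be wrapped in a small helper for choosing the fiber-SPAD distance. The numerical values below (core diameter, target speckle size) are illustrative stand-ins, not the exact experimental parameters:

```python
# Speckle-size calibration sketch based on d = lambda * z / D (Appendix B).
WAVELENGTH = 785e-9  # laser wavelength [m], from the setup described above

def speckle_diameter(z, core_diameter, wavelength=WAVELENGTH):
    """Average speckle diameter at fiber-SPAD distance z [m] for an MMF
    with the given core diameter [m]."""
    return wavelength * z / core_diameter

def distance_for_speckle(d_target, core_diameter, wavelength=WAVELENGTH):
    """Fiber-SPAD distance z [m] that yields the target speckle diameter."""
    return d_target * core_diameter / wavelength

# Example with illustrative values: a 400-um core fiber and a 7-um target
# speckle diameter call for a fiber-SPAD distance of a few millimeters.
z = distance_for_speckle(7e-6, 400e-6)
```

As Eq. (1) implies, the required distance scales linearly with both the target speckle size and the fiber core diameter.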
Appendix C: Effects of Speckle Size per Pixel on SNR
Since the fiber core diameter determines the diameter of the speckles on the SPAD pixel, it is important to investigate how using different fiber core diameters, and thus different speckle diameters, affects the ensemble SNR of g2. We used the liquid phantom setup to perform the SNR versus fiber core diameter study. Figure 5(a) shows that larger fiber core diameters resulted in higher SNR than smaller ones for the same count rate n. The SNR increased with decreasing speckle diameter for all of the fibers we tested. For the largest core fiber that we tested, the SNR saturated at high count rates.
The maximum SNR was achieved when we tuned the diameter of the speckles to be comparable to the length of the pixel active area (s). We examined this by plotting the ensemble SNR of g2 (from all fiber core diameters) against the number of speckles per pixel active area, as shown in Fig. 5(b). The number of speckles per pixel active area, m, was calculated as the ratio between the length of the pixel active area and the diameter of the speckles. Figure 5(b) shows that all of the data points fall on a single curve. This suggests that, for a given count rate (n) and integration period (Tint), the SNR in mDCS depends solely on the number of speckles per pixel active area. The SNR gradually increased until it reached one or more speckles per pixel active area [Fig. 5(b)]. While β decreased with increasing m, the standard deviation of g2 also decreased with increasing m due to the larger n from having more detected speckles, thus keeping the SNR the same for m ≥ 1. We observed that β increased when we increased the diameter of the speckles, as predicted, and saturated beyond the Nyquist spatial sampling rate, which we reached when the diameter of the speckles was larger than twice the length of the pixel active area.35,36 This is consistent with the observed onset of SNR saturation for the largest fiber core diameter [Fig. 5(a)]. The results presented in this work were achieved using a fiber core diameter and fiber-SPAD distance at which the diameter of the speckles matched the length of the pixel active area (d = s).
Appendix D: Effects of Hot Pixels in SPAD Camera on SNR
Physical defects in SPAD pixels can effectively increase the DCR due to trapped carriers and after-pulsing. We refer to pixels that have anomalously high DCR as hot pixels and those with low DCR as cool pixels. We identified the hot pixels in the kilopixel SPAD camera and investigated their effects on the intensity autocorrelation function g2(τ) and its SNR.
We characterized the DCR of the SPAD camera by recording the count rate while the camera's aperture was covered with a blank. Figures 6(a)–6(c) show the distribution of DCR across 146 hot pixels (white) and 878 cool pixels (blue). The hot pixels make up about 14% of the total pixels. Figure 6(d) shows that the mean DCR of the hot pixels was 26.6 kcps and that of the cool pixels was 24 cps, with a mean DCR of 3.8 kcps across all pixels.
We measured the ensemble average of g2(τ) − 1 at the first time-lag bin, which we refer to as the coherence factor β [Fig. 6(e)], and the SNR of g2 [Fig. 6(f)] from cool pixels (blue), all pixels (orange), and hot pixels (yellow) at decreasing count rate per pixel (n). The integration time was Tint = 50 ms, and the ensemble average was performed over 160 integration periods. Figure 6(e) shows that the ensemble average of the cool pixels (blue circles) resulted in β values that remained constant over a wide range of n. However, the ensemble average of the hot pixels (yellow circles) resulted in β values that abruptly decreased with decreasing n. The ensemble average of all pixels (orange) resulted in β values that were slightly lower than those of the cool pixels. The minimum usable count rates were different for the three ensemble averages and were set by the mean DCR values shown in Fig. 6(d). The corresponding SNR values [Fig. 6(f)] showed a clear decrease in magnitude when the count rates approached these minimum values. The SNR versus n plot shows the importance of DCR characterization of the SPAD camera, in particular for DCS measurements at very low n.
The decrease in β with decreasing n was due to the DCR contribution from each pixel, which is expressed as

βj = β0 [nj / (nj + DCRj)]², (6)

where β0 is the intrinsic coherence factor, nj is the detected signal count rate of pixel j, and DCRj is its dark count rate.
We calculated the predicted β values for each pixel using Eq. (6); they are plotted in Fig. 6(e) (solid lines) for the corresponding ensemble averages of cool pixels, hot pixels, and all pixels. The calculated values are consistent with the measurements. In particular, the DCR contribution to the decrease in β is apparent for the ensemble average of the hot pixels.
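Assuming the dark counts are uncorrelated with the signal, their effect on the measured coherence factor is a dilution of g2 − 1 by the squared signal fraction, as in Eq. (6). A minimal sketch, using the mean DCR values quoted above and an illustrative signal rate:

```python
def beta_measured(beta_intrinsic, n_signal, dcr):
    """Coherence factor measured when an uncorrelated dark count rate (dcr)
    is added to the detected signal count rate (n_signal), both in cps."""
    return beta_intrinsic * (n_signal / (n_signal + dcr)) ** 2

# At a 5-kcps signal rate (illustrative), a cool pixel (24 cps DCR) barely
# perturbs beta, while a hot pixel (26.6 kcps DCR) suppresses it severely.
cool = beta_measured(1.0, 5000, 24)
hot = beta_measured(1.0, 5000, 26600)
```

This quadratic suppression is why the hot-pixel ensemble in Fig. 6(e) loses contrast so abruptly once n approaches the mean DCR.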
Appendix E: Fitting Functions of
The functional form of the intensity autocorrelation function is determined by the dynamic and optical properties of the scattering media and by the geometry of the light source and the detector. In principle, we can predict the functional form of g2(τ) either by directly estimating the resulting phase factor of all of the scattered electric fields for simple systems or by solving the correlation diffusion equation for more elaborate systems. The most commonly used dynamic scattering media in DCS experiments involve scatterers undergoing random flow motion (ballistic) or Brownian motion (diffusive). It has been shown15,16,20,21 that, in the case of ballistic motion, the functional form can be simplified at small time lags to a Gaussian decay, g2(τ) − 1 ≈ β exp[−(τ/τc)²], where the correlation time τc depends on the absorption and reduced scattering coefficients of the medium (μa and μs'), the wavenumber of the light (k0), the fraction of light scattering events from moving scatterers (α), and the standard deviation of the speed of the scatterers (σv). Similarly, in the case of diffusive motion, the functional form at small time lags is an exponential decay, g2(τ) − 1 ≈ β exp(−τ/τc), where τc additionally depends on the diffusion coefficient of the moving scatterers (Db).
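As an illustration of the two small-lag forms, the sketch below generates a synthetic diffusive-type g2(τ) and recovers β and τc by linearizing log[g2(τ) − 1]; the synthetic parameter values are arbitrary, not fitted experimental values:

```python
import numpy as np

def g2_ballistic(tau, beta, tau_c):
    """Small-lag form for random ballistic motion: Gaussian decay in tau."""
    return 1.0 + beta * np.exp(-((tau / tau_c) ** 2))

def g2_diffusive(tau, beta, tau_c):
    """Small-lag form for Brownian motion: exponential decay in tau."""
    return 1.0 + beta * np.exp(-tau / tau_c)

# Synthetic diffusive data with beta = 0.5 and tau_c = 100 us.
tau = np.linspace(1e-6, 5e-4, 100)
data = g2_diffusive(tau, 0.5, 1e-4)

# log(g2 - 1) = log(beta) - tau/tau_c is linear in tau, so a first-order
# polynomial fit recovers both parameters.
slope, intercept = np.polyfit(tau, np.log(data - 1.0), 1)
tau_c_fit = -1.0 / slope
beta_fit = float(np.exp(intercept))
```

With noisy data, the same linearization gives good starting values for a nonlinear least-squares fit of either functional form.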
In our work, the scattering media were a rotating diffuser plate [Fig. 1(a)] and a liquid phantom [Fig. 1(e)]; the details of the experimental setups were discussed in Appendix A. Figure 7 shows the normalized g2(τ) − 1 (solid circles) measured from the rotating diffuser plate [Fig. 7(a)] and the liquid phantom [Fig. 7(b)]. We also plotted the functional forms of g2(τ) used to fit the measured data: the ballistic, Gaussian-decay form (red curve) and the diffusive, exponential-decay form (blue curve). As shown in Fig. 7, the intensity autocorrelation function of the rotating diffuser plate is fitted well by the ballistic functional form and that of the liquid phantom by the diffusive functional form over a wide range of time lags. The fitting results are consistent with the fact that the scatterers in the rotating diffuser plate behave like ballistic scatterers and those in the liquid phantom behave like diffusive scatterers.
We would like to thank Michael Choma, Patrick Mineault, Emily Mugler, and Professor David Boas for the valuable technical discussions. We are also grateful for the support and feedback from Facebook Reality Labs (FRL) Research, Brain-Computer Interface (BCI) team at Facebook. This research was supported by funding from Facebook.
Code, Data, and Materials Availability
The data that support the findings of this study are available from the corresponding author on reasonable request.
Edbert J. Sie is a research scientist at the Brain–Computer Interface (BCI), Facebook Reality Labs (FRL), developing high-sensitivity optical systems to measure neural activity. He received his PhD in physics from Massachusetts Institute of Technology (MIT) in 2017 and a postdoctoral fellowship from Stanford University, where he led research in ultrafast optics and imaging techniques, time-of-flight detection, quantum materials, and atomically thin semiconductors.
Hui Chen is an optical engineer in the BCI team at FRL Research at Facebook. He received his PhD in physics from the University of Connecticut in 2014. After that, he worked as an optical engineer in Calmar Laser and then as a development scientist at Corning Incorporated, before joining Facebook.
E-Fann Saung is an optomechanical engineer at FRL Research. He received his BS and MS degrees in mechanical engineering from Illinois Institute of Technology and San Diego State University, respectively, and is a licensed professional engineer in machine design and materials.
Ryan Catoen is a software engineer at FRL Research. Previously, he studied computer science at the University of Waterloo. His research interests focus on accelerating the development of next-generation noninvasive brain–computer interfaces.
Tobias Tiecke received his PhD in experimental quantum physics from the University of Amsterdam, the Netherlands, in 2009 and was a postdoctoral researcher in quantum optics at Harvard University. He currently works as a research scientist manager at Facebook, Inc.
Mark A. Chevillet is a research director at FRL leading research into a wearable silent speech interface featured in The Economist, IEEE Spectrum, and The Wall Street Journal. Previously, he directed research programs in BCI, artificial intelligence, and cognitive performance at the Johns Hopkins University Applied Physics Laboratory, with joint appointments in neuroscience and cognitive science, the Science of Learning Institute, and the Kavli Neuroscience Discovery Institute.
Francesco Marsili is a research scientist manager at FRL developing diffuse optical techniques for the measurement of biosignals. Previously, he led research in single photon detectors, optical communication, quantum optics, superconductivity, and nanofabrication at the Jet Propulsion Laboratory, the National Institute of Standards and Technology, and MIT. He received his PhD in physics from the École Polytechnique Fédérale de Lausanne, Switzerland, in 2009.