17 August 2018 Lightweight sCMOS-based high-density diffuse optical tomography
Abstract
Though optical imaging of human brain function is gaining momentum, widespread adoption is restricted in part by a tradeoff among cap wearability, field of view, and resolution. To increase coverage while maintaining functional magnetic resonance imaging (fMRI)-comparable image quality, optical systems require more fibers. However, these modifications drastically reduce the wearability of the imaging cap. The primary obstacle to optimizing wearability is cap weight, which is largely determined by fiber diameter. Smaller fibers collect less light and lead to challenges in obtaining adequate signal-to-noise ratio. Here, we report on a design that leverages the exquisite sensitivity of scientific CMOS cameras to use fibers with ∼30× smaller cross-sectional area than current high-density diffuse optical tomography (HD-DOT) systems. This superpixel sCMOS DOT (SP-DOT) system uses 200-μm-diameter fibers that facilitate a lightweight, wearable cap. We developed a superpixel algorithm with pixel binning and electronic noise subtraction to provide high dynamic range (>10^5), high frame rate (6 Hz), and a low effective detectivity threshold (∼200 fW/√Hz-mm2), each comparable with previous HD-DOT systems. To assess system performance, we present retinotopic mapping of the visual cortex (n = 5 subjects). SP-DOT offers a practical solution to providing a wearable, large field-of-view, and high-resolution optical neuroimaging system.

1.

Introduction

Optical imaging has long held promise as a bedside neuroimaging technique. However, image quality has been a persistent challenge, particularly in comparison with the gold standard of functional brain imaging—functional magnetic resonance imaging (fMRI). Recently, high-density diffuse optical tomography (HD-DOT) has improved image quality dramatically.1,2 When matched within subjects against fMRI, HD-DOT now can obtain localization errors <5  mm and point spread functions with <15-mm full width at half maximum (FWHM), sufficient to localize functions to gyri.3 In addition to task activations, HD-DOT has been used for mapping distributed resting state functional connectivity (fcDOT) patterns. Although initial fcDOT reports were confined to sensory networks, such as visual and motor,4,5 recent results demonstrate the feasibility of mapping spatially distributed higher-level cognitive networks, including the dorsal attention and default mode networks.2

Despite these advances, HD-DOT has limited coverage and wearability due to mechanical challenges: increasing the coverage requires increasing the number of fibers, sources, and detectors, which results in heavier imaging caps and larger system sizes.5 Current large field-of-view HD-DOT caps, therefore, require significant infrastructure to support fiber weight in the cap. For routine and wide application, HD-DOT needs to become more wearable. One way to improve the wearability of the imaging cap is to reduce the fiber size; however, this directly reduces the amount of light transmitted to the detector. This poses a major challenge because HD-DOT requires low system noise and large dynamic range (e.g., DNR > 10^5) due to the high attenuation coefficients of blood in the brain and the need for optode separations >2 cm to sample the cortex.1 Useful metrics for quantifying the detector noise requirements for HD-DOT are the noise equivalent power (NEP) of the detector (e.g., NEP < 20 fW/√Hz for a 3-mm detector) and detectivity of the system (e.g., D = 46 fW/√Hz-mm2), which are defined in Sec. 2.1.6

These performance requirements have led to discrete detector designs, generally avalanche photodiodes (APDs), as evidenced by the designs of current HD-DOT systems for imaging brain activity.1,7–11 A decrease in the fiber diameter in HD-DOT requires a detection scheme capable of achieving the same signal-to-noise ratio (SNR) and DNR specifications reported in previous systems with a significant decrease in photon flux. Here, we present a superpixel HD-DOT (SP-DOT) system that utilizes advancements in megapixel CMOS sensor technology to overcome these technical challenges.

The reported SP-DOT enables HD-DOT imaging of brain activity with a lightweight cap. To compare specifications across multiple detectors, we discuss the concept of detectivity for optical neuroimaging systems and explore the effects of system parameters, such as frame rate, fiber size, and encoding strategy. System specifications and performance are tested with evaluations of raw data quality, via analyses of the pulse waveform SNR in each channel, and with assessments of retinotopic mapping of visual activations in healthy adult volunteers, a benchmark for neuroimaging technologies. Collectively, these studies demonstrate the feasibility of SP-DOT and offer a practical solution to imaging brain activity using a lightweight wearable cap with HD-DOT spatial resolution.

2.

Methods

2.1.

Optical Neuroimaging System Specifications

To create a wearable, whole-head imaging system that weighs <1  lb, the commonly used 2.5-mm-diameter optical fibers need to be 10-fold smaller in diameter. Commercially available 200-μm-diameter optical fibers are closest to matching this requirement (NA=0.5, FP200URT, Thorlabs, New Jersey).2 However, the major challenge in redesigning HD-DOT with 200-μm-diameter fibers is the decrease in SNR that results from reducing the size of the fiber. To determine a detection design that enables the use of 200-μm-diameter fibers for HD-DOT, we begin by characterizing the light detection capabilities of APDs and CMOS sensors.

A useful metric for comparing these two detectors is detectivity, which is defined as the NEP divided by the area of the light collection

Eq. (1)

D=NEP/A,
where D is the detectivity and A is the area. Because detectivity and NEP are typically defined for APD modules, our analysis also requires establishing these metrics for CMOS sensors. NEP relates the bandwidth of a signal measured by a photodetector to the optical power required to achieve an SNR equal to 1

Eq. (2)

P0 = NEP·√BW,
where P0 is the optical power required for an SNR of 1 and BW is the bandwidth of the signal. Consider two systems each with equal illumination, which have a fiber collecting light from tissue and delivering the light to detectors with the same NEP. Assuming the area of the light on the detector is equal to the area of the fiber tip, these two systems will have different detectivities depending on the diameter of the fiber used to collect light. For example, the system that uses a fiber that is 10-fold smaller in area will have a 10-fold higher detectivity according to Eq. (1).
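As a quick numerical sketch of Eqs. (1) and (2), the snippet below (our illustration, with an illustrative NEP value) shows that the minimum detectable power scales with the square root of the bandwidth, and that a fiber with 10-fold smaller collection area yields a 10-fold higher (worse) detectivity for the same detector NEP.

```python
import math

def min_detectable_power(nep_fw_per_rthz: float, bandwidth_hz: float) -> float:
    """Optical power (fW) giving SNR = 1 at a given bandwidth [Eq. (2)]."""
    return nep_fw_per_rthz * math.sqrt(bandwidth_hz)

def detectivity(nep_fw_per_rthz: float, area_mm2: float) -> float:
    """Detectivity in fW/sqrt(Hz)-mm^2 [Eq. (1)]."""
    return nep_fw_per_rthz / area_mm2

# Two systems with the same detector NEP but different fiber areas
# (a 2.5-mm-diameter fiber vs. one with 10-fold smaller area).
nep = 20.0                              # fW/sqrt(Hz), illustrative
area_large = math.pi * (2.5 / 2) ** 2   # mm^2
area_small = area_large / 10.0
assert math.isclose(detectivity(nep, area_small),
                    10 * detectivity(nep, area_large))
```
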

A detectivity of 100 fW/√Hz-mm2 is needed for high-quality HD-DOT.2 We compared the effective detectivity of six commercially available APDs to the superpixel sCMOS detection (Appendix B). The Hamamatsu C12703-01 had the lowest baseline NEP and the lowest effective detectivity with 200-μm-diameter fibers for HD-DOT (Table 2 and Appendix B). For this reason, we compared all the superpixel values with the C12703-01 APD module. Although the Hamamatsu C12703-01 module has the lowest detectivity of the detectors available on the market and is currently used for HD-DOT imaging with 2.5-mm-diameter optical fibers, its detectivity is still 70-fold too high for use with 200-μm-diameter fibers.2 Short source–detector separation measurements that sample the scalp and skull would have high SNR, but the longer separation measurements that sample the brain would have insufficient SNR. To acquire these longer separation measurements, we require a detector with lower detectivity for a 200-μm-diameter collection area.

Intriguingly, CMOS cameras have a 10,000-fold lower NEP at the single-pixel level. Although the detectivity of a single pixel is sixfold higher than that of the APDs due to its small size, a fiber delivering light at 1:1 magnification will form a 200-μm-diameter circle on the sensor. Summing pixels within that 200-μm-diameter circle to create a “simple-binned” detector is predicted to match the requirements for HD-DOT imaging.

The rest of this section compares the CMOS characteristics to the commercially available APD module with the lowest NEP (Hamamatsu C12703-01) and is organized as follows: Sec. 2.1.1 explains the NEP, detectivity, and DNR specifications of the CMOS camera for individual pixels and simple-binned pixels at a standardized specification bandwidth of 1 Hz. Section 2.1.2 documents the 1-Hz specifications using a superpixel algorithm to sum the pixels while removing noise sources. Section 2.1.3 explores the effect of HD-DOT system parameters such as frame rate and encoding on the “effective” specifications of the superpixel system. Lastly, Sec. 2.1.4 details the specifications for the APD-based HD-DOT imaging system. All specifications discussed below are documented in Table 1.

Table 1

HD-DOT system specifications. Theoretical predictions and experimental measurements of specifications for HD-DOT at 830 nm. Effective NEP and detectivity for the 200-μm superpixel HD-DOT are calculated using 6-Hz frame rate, 25 encoding steps, and 1 wavelength. Effective specifications for the APD are calculated using 16 timesteps at 10 Hz. The area for simple-binned and superpixel calculations was determined by multiplying a single-pixel area by the number of pixels (N=697).

Detector | Diameter or side (mm) | Area (mm2) | DNR | NEP (fW/√Hz) | NEPeff (fW/√Hz) | D (fW/√Hz-mm2) | Deff (fW/√Hz-mm2)
HD-DOT requirements | — | — | >1×10^5 | — | — | — | <100
APD (large fibers) | 2.5 | 4.91 | 1×10^5 | 20 | 113 | 8.1 | 46.1
APD (small fibers) | 0.2 | 0.031 | 1×10^5 | 20 | 113 | 1273 | 7202
Single pixel (theoretical) | 6.5×10^-3 | 4.2×10^-5 | 1.2×10^4 | 2.2×10^-3 | 0.2 | 52 | 4251
Simple binning and superpixel (theoretical) | 0.2 | 0.029 | 3.2×10^5 | 0.058 | 4.7 | 2.0 | 161
Simple binning (experimental) | 0.2 | 0.029 | 1.5×10^5 | 0.1 | 8.2 | 3.4 | 277
Superpixel (experimental) | 0.2 | 0.029 | 2.6×10^5 | 0.07 | 5.7 | 2.4 | 194

2.1.1.

Pixel and simple-binning NEP, DNR, and detectivity at 1 Hz

The single-pixel NEP at 1 Hz (NEPpix) is calculated by relating the 1-Hz noise floor of the pixel to the optical power incident upon the pixel (Appendix A). Throughout our calculations, we assume that the primary noise source contributing to the NEP of CMOS pixels is read-out noise. At 830 nm, the Zyla 5.5 sCMOS NEPpix is 0.0022 fW/√Hz, four orders of magnitude lower than the APD NEP of 20 fW/√Hz. By summing N pixels together, the NEP of the summed pixels (NEPsum) is related to the NEP of a single pixel by

Eq. (3)

NEPsum(N) = √N·NEPpix.

The NEPsum increases as the square root of N because the variance of the read-out noise increases by a factor of N [Fig. 1(a) and Appendix A]. Using Eq. (3) with N=697 pixels, corresponding to the size of the optical fiber on the CMOS chip, the theoretical NEPsum = 0.058 fW/√Hz (Table 1), 350-fold lower than the NEP of the C12703-01 APD modules (NEPAPD = 20 fW/√Hz).

Fig. 1

SP-DOT system specifications as a function of the superpixel size. (a) The NEP should increase as the square root of the number of pixels summed if the noise source is limited by read-noise (blue dashed line). Without subtracting off the correlated row noise (red line), the noise floor begins to deviate from the theoretical values at about 90 pixels. The size of the fiber tip on the CMOS sensor imaged at 1:1 magnification is 200  μm, denoted by the vertical dashed line. The (b) effective detectivity (Deff) and (c) DNR that are calculated based on the NEP are subsequently decreased and increased with larger N, respectively. After removing the correlated row noise with the superpixel algorithm (green line), the NEP, DNR, and Deff are within a factor of 2 of the theoretical values. In addition, the DNR satisfies the required specifications for HD-DOT neuroimaging denoted by the green-shaded areas. The Deff value is 4× higher than what is required for HD-DOT but this can be accommodated by a 4× increase in light input.


The 1-Hz detectivity of a pixel (Dpix) is defined as the NEPpix divided by the area of the pixel (Apix). At 830 nm for the Zyla 5.5 sCMOS, Dpix = 52 fW/√Hz-mm2, 24-fold lower than the detectivity of the APD system using 200-μm-diameter fibers (D = 1270 fW/√Hz-mm2).2 When summing pixels, the area increases linearly with the number of pixels summed. Combining this with Eq. (3) provides the detectivity of summed pixels (Dsum)

Eq. (4)

Dsum(N) = √N·NEPpix/(N·Apix) = Dpix/√N.

Thus, the detectivity is inversely proportional to √N [Fig. 1(b)]. Using Eq. (4), the theoretical Dsum for a 200-μm-diameter circle of pixels (N=697) is 2.0 fW/√Hz-mm2 (Table 1), 636-fold lower than the detectivity of the APD modules with 200-μm-diameter fibers (DAPD = 1273 fW/√Hz-mm2).

The single-pixel DNR (DNRpix) is defined as the full well of the pixel (w) divided by the noise floor (σ). A single pixel has a DNRpix of 1.2×104, two orders of magnitude lower than the APD and one order of magnitude too low for HD-DOT imaging. As the full well increases linearly with the number of pixels summed and the noise floor increases as the square root, the DNRsum increases as the square root of the number of pixels summed [Fig. 1(c)]

Eq. (5)

DNRsum(N) = N·w/(√N·σ) = √N·DNRpix.

Using Eq. (5), the theoretical DNRsum for a 200-μm-diameter circle of pixels on the sCMOS (N=697) is 3.2×105 (Table 1), approximately threefold higher than the required DNR for HD-DOT.2
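The square-root scalings of Eqs. (3)–(5) can be checked numerically. The sketch below (our helper function, not published code) uses the single-pixel specifications quoted above and reproduces the theoretical summed-pixel entries of Table 1.

```python
import math

# Single-pixel specifications quoted in the text (Zyla 5.5 at 830 nm).
NEP_PIX = 0.0022      # fW/sqrt(Hz)
A_PIX = 6.5e-3 ** 2   # mm^2 (6.5-um-square pixel)
DNR_PIX = 1.2e4

def summed_specs(n_pixels: int):
    """Scaling when n_pixels are summed: read-noise variances add, so
    NEP grows as sqrt(N); area grows as N, so detectivity falls as
    1/sqrt(N); full well grows as N, so DNR grows as sqrt(N)."""
    nep_sum = math.sqrt(n_pixels) * NEP_PIX
    d_sum = nep_sum / (n_pixels * A_PIX)
    dnr_sum = math.sqrt(n_pixels) * DNR_PIX
    return nep_sum, d_sum, dnr_sum

# 697 pixels cover the 200-um fiber spot at 1:1 magnification.
nep_sum, d_sum, dnr_sum = summed_specs(697)
# nep_sum ~ 0.058 fW/sqrt(Hz), d_sum ~ 2.0 fW/sqrt(Hz)-mm^2, dnr_sum ~ 3.2e5
```
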

To test Eqs. (3)–(5) experimentally, we measured the specifications for a sCMOS sensor (Zyla 5.5, Andor Technologies) using a 200-μm-diameter fiber relayed to the CMOS sensor at 1:1 magnification (Table 1). By experimentally summing N=697 pixels (6.5-μm × 6.5-μm pixels) in a circular shape that corresponded to the 200-μm-diameter optical fiber, we were able to measure the NEP of the summed pixels and therefore calculate the summed-pixel detectivity and DNR. The detectivity of the summed pixels is a factor of 2 higher than the theoretical value (experimental Dsum = 3.4 fW/√Hz-mm2 versus theoretical Dsum = 2.0 fW/√Hz-mm2). Similarly, the DNR of the summed pixels is a factor of 2 lower than the theoretical value (experimental DNRsum = 1.5×10^5 versus theoretical DNRsum = 3.2×10^5). The discrepancy between theoretical and experimental values of NEP and DNR is exacerbated at large fiber diameters (Fig. 1), limiting the use of fibers to below 100 μm. To use 200-μm-diameter optical fibers, we developed a superpixel referencing algorithm to improve the detectivity and DNR of the CMOS sensor.

2.1.2.

Superpixel NEP, detectivity, and DNR at 1 Hz

Although simple-binning of pixels improves the CMOS sensor’s detectivity and DNR, our experimental measurements of detectivity and DNR were worse than our theoretical predictions (Table 1 and Fig. 1). This mismatch between experimental results and Eqs. (3)–(5) was primarily caused by structured, non-Gaussian noise on the CMOS sensor. In addition to the shot-noise measured across the entire sensor (e.g., as measured in a background image), there is correlated noise within each row of the CMOS sensor caused by a voltage offset after each row readout [Fig. 2(a)]. Upon removal of this row noise, the temporal noise decreases by a factor of 2 [Fig. 2(b)].

Fig. 2

Correlated row noise. (a) Due to the readout structure of the sCMOS cameras, pixels within the same row have a correlated offset value after each readout. This can be demonstrated by plotting the values of two pixels as a function of time, which are either within the same column (pixels a and b) or same row (pixels c and d). The values of pixels a and b are not temporally correlated (r=0.1). The values of pixels c and d are highly correlated over time (r=0.7). (b) This correlated row noise can be removed by subtracting an average of multiple pixels within a row, reducing the temporal noise by 2×.


We calculated and removed the correlated row noise using a “superpixel algorithm.” The CMOS image of each fiber tip [Fig. 3(a)] was segmented into a core, buffer, and reference region using a system-specific mask [Fig. 3(b)]. Although the fiber tips required segmentation, the fiber tip locations and sizes did not change from experiment to experiment and thus the same segmentation mask was used run to run and day to day. The pixels segmented as the core regions are all pixels within a 100-μm-radius circle around a manually designated center pixel. This method based on distances ensured that the same number of pixels was summed for all fiber tips (N=697  pixels). The core region corresponds to the image of the fiber tip, the reference region provides measurements for determining the correlated row noise, and the buffer region serves to avoid any optical or electronic crosstalk from the core pixels into the reference region.

Fig. 3

Superpixel algorithm. (a) A single raw sCMOS image of 10 detector fiber tips is background subtracted. (b) The image is partitioned into super pixels around each fiber tip and the values of the reference pixels are used for calculating the correlated row noise. (c) After subtracting the row noise, all core values are summed to create the final superpixel value per detector.


The row noise was removed using the following algorithm. Per row (j), the row noise (Rj) is defined by averaging the pixel values in the reference region of that row. To remove the row noise, we needed to scale the row noise by the number of pixels (N) in that row within the core. Thus, the scaled row noise is N*Rj. The superpixel value per row of the core (SPj) is

Eq. (6)

SPj = ∑_{i=1}^{N} Ci − N·Rj.

Note that because the fiber tip is circular, the number of pixels summed (N) for the core varies per row. The total superpixel value for a single fiber tip is then the sum of all j rows of the core

Eq. (7)

SPtotal = ∑_{j=1}^{J} SPj.

After applying the superpixel algorithm, a single CMOS image of 10 fiber tips is condensed into 10 superpixel values [Fig. 3(c)].
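A minimal implementation of the row-noise correction described above might look like the following sketch (the mask construction and array names are our assumptions, not the authors' code): per row, the row noise is estimated from the reference pixels, scaled by the number of core pixels in that row, and subtracted from the summed core pixels.

```python
import numpy as np

def superpixel_value(image, core_mask, ref_mask):
    """Row-noise-corrected superpixel sum for one fiber tip.

    image:     2-D array, a background-subtracted CMOS frame
    core_mask: boolean mask of pixels imaging the fiber tip
    ref_mask:  boolean mask of reference pixels (outside the buffer)

    Per row j: estimate row noise R_j as the mean of the reference
    pixels in that row, subtract N_j * R_j from the summed core
    pixels, then sum over rows.
    """
    total = 0.0
    for j in range(image.shape[0]):
        n_core = core_mask[j].sum()
        if n_core == 0:
            continue  # row does not intersect the circular core
        row_noise = image[j, ref_mask[j]].mean()
        total += image[j, core_mask[j]].sum() - n_core * row_noise
    return total
```

Because the fiber image is circular, the number of core pixels differs per row, which is why the row-noise estimate must be rescaled row by row.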

After removing these noise structures from the CMOS images, the noise [Fig. 2(b)], and therefore the NEP and detectivity of the 200-μm-diameter superpixel, are decreased by 2× compared with simply summing pixels. The DNR increased approximately twofold using the superpixel algorithm. Now, the DNR (2.6×10^5) and detectivity (D = 2.4 fW/√Hz-mm2) are comparable with the same specifications of the APD using 2.5-mm-diameter optical fibers with 50% packing fractions, and closely match our theoretical predictions. Thus, with the superpixel summing algorithm, the DNR and detectivity of sCMOS detectors are promising for use in HD-DOT imaging when quoted at a 1-Hz bandwidth.

2.1.3.

Superpixel encoding: effective NEP and detectivity

HD-DOT requires higher data collection rates for temporal encoding of sources and to prevent sampling artifacts of biological signals. Therefore, the relatively slow data collection rates of CMOS cameras present challenges for successfully collecting the required amount of data for HD-DOT.1 All superpixel detectors collect light while a single source illuminates the tissue, which results in a single CMOS image acquisition for each source required for temporal encoding. To keep an overall “DOT frame rate” of the industry standard of 1 Hz, all K temporal encoding steps (number of source positions) are acquired in 1 s. As each temporal encoding step contains unique source–detector pair information, the amount of light collected per second is split evenly among all steps. The number of unique temporal steps, therefore, divides the flux into K different exposures. Additionally, each source is only turned on for a fraction of each temporal step, which is specified by the duty cycle (d).

Although there are multiple ways to account for this temporal encoding effect, here we assume a maximum illumination level from the sources. We therefore treat decreases in flux as effectively increasing the noise floor, thereby increasing the NEP. Because this treatment of NEP is a measure of the system, we are calling this value an effective NEP (NEPeff_SP). NEPeff_SP increases by a factor of K/d in comparison with the NEP of a single pixel because the photon flux decreases by a factor of K/d while the read-out noise remains the same

Eq. (8)

NEPeff_SP(N,K,d) = (K/d)·√N·NEPpix.

HD-DOT data are typically collected at frame rates faster than 1 Hz to improve signal detection and avoid aliasing of hemodynamic signals. For example, to properly sample the heart rate, which typically ranges between 50 and 80 beats per minute during quiet rest, HD-DOT data should be collected at or above 2 Hz. This means that all K temporal encoding steps need to be collected at least twice in 1 s. The number of times per second that all K temporal encoding steps are collected will be termed the “DOT sampling rate” of the system. Therefore, for a DOT sampling rate of f Hz, f·K total CMOS images will be collected. After sampling the entire encoding cycle at f Hz, the data are low-pass filtered and temporally resampled back down to 1 Hz. Because the data are resampled, the total flux per DOT frame does not change with f; however, the number of reads per pixel changes (i.e., the variance of the read-out noise increases by a factor of f). The effective NEP is therefore given as

Eq. (9)

NEPeff_SP(N,K,d,f) = (K/d)·√(fN)·NEPpix.

The resulting “DOT NEP” of the sCMOS 200-μm superpixel HD-DOT system at 6 Hz, 75% duty cycle, and 25 encoding steps is experimentally measured to be NEPeff_SP = 5.7 fW/√Hz (Table 1).
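The effective-NEP scaling can be checked numerically. The sketch below (our function, with parameter values taken from the Table 1 caption) combines the linear K/d flux penalty with the square-root read-noise penalties for frame rate and pixel count, and reproduces the theoretical Table 1 entry.

```python
import math

def effective_nep(nep_pix, n_pixels, k_steps, duty, frame_rate):
    """Effective superpixel NEP: photon flux is divided by K/d (a
    linear penalty), while read-noise variance grows with both the
    DOT sampling rate f and the number of pixels N (sqrt penalties)."""
    return (k_steps / duty) * math.sqrt(frame_rate * n_pixels) * nep_pix

# 25 encoding steps, 75% duty cycle, 6-Hz DOT sampling rate, 697 pixels.
nep_eff = effective_nep(0.0022, 697, 25, 0.75, 6.0)
# ~4.7 fW/sqrt(Hz), the theoretical value in Table 1
```
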

As the detectivity is defined as the NEP divided by the collection area, the effective superpixel detectivity including all the parameters to run the HD-DOT system is

Eq. (10)

Deff_SP(N,K,d,f) = (K/d)·√(f/N)·Dpix.

The APD system with 2.5-mm-diameter fibers and 50% packing fraction produces high-SNR HD-DOT data because the APD’s effective detectivity is Deff = 46 fW/√Hz-mm2 (Table 1). We, therefore, require the HD-DOT CMOS Deff to be comparable to or lower than that of the APD. With the superpixel algorithm, the Zyla 5.5 CMOS yields Deff = 194 fW/√Hz-mm2 for a 200-μm-diameter fiber [Fig. 1(b)], predicting that the CMOS camera would have fourfold higher Deff than the APD system if the same amount of light were used. We can compensate for this factor of 4 by using laser diodes to increase the light level fourfold, allowable by ANSI limits. In comparison, the Deff of the HD-DOT APD system using 200-μm-diameter fibers instead of 2.5 mm is 7202 fW/√Hz-mm2, two orders of magnitude too high for neuroimaging even with illumination increased to the ANSI limit. Comparing the effective detectivity of commercially available APD modules and the superpixel system shows that, while the Hamamatsu C12703-01 has the lowest detectivity of the APD modules, the superpixel system has 40-fold lower detectivity (200-μm Hamamatsu C12703-01 Deff = 7202 fW/√Hz-mm2 versus superpixel Deff = 194 fW/√Hz-mm2).
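Plugging the Table 1 parameters into the effective superpixel detectivity (a sketch; the single-pixel detectivity is the rounded value quoted in the text) reproduces the theoretical Table 1 entry:

```python
import math

# Effective superpixel detectivity: effective NEP divided by the
# summed-pixel area. Parameter values follow the Table 1 caption.
D_PIX = 52.0          # fW/sqrt(Hz)-mm^2, single pixel at 830 nm
K, d, f, N = 25, 0.75, 6.0, 697

d_eff_sp = (K / d) * math.sqrt(f / N) * D_PIX
# ~161 fW/sqrt(Hz)-mm^2, the theoretical value in Table 1
```
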

2.1.4.

APD NEP and detectivity

The signal generated by APD detectors is fundamentally different from that of CMOS cameras, and therefore we must derive expressions for the effective NEP and detectivity of APDs that account for the frame rate, number of encoding steps (K = number of source positions), and duty cycle. The effective NEP of APDs is defined with reference to the NEP provided by manufacturers, which is quoted at 1 Hz (NEPAPD_1 Hz).

In contrast to CMOS cameras, the NEP of APDs is limited by shot noise. The signal collected by the APD is proportional to the collection time, which decreases by a factor of K/d; because the shot noise scales as the square root of the signal, the NEP increases by a factor of √(K/d). Because the APD detector module is analog, there is no added noise from increasing the frame rate if the data are downsampled to 1 Hz after collection (i.e., the effective APD NEP is independent of f). Therefore, the overall NEP for an APD HD-DOT system is given as

Eq. (11)

NEPeff_APD(K,d) = √(K/d)·NEPAPD_1 Hz.

The resulting effective “DOT NEP” of the APD 2.5-mm-diameter HD-DOT system at 10 Hz, 16 encoding steps, and 50% duty cycle is 113 fW/√Hz (Table 1).

The effective detectivity of the APD (Deff_APD) is calculated in the same manner as the CMOS: the 1-Hz detectivity (DAPD_1  Hz) is multiplied by the effects of encoding. Because we are calculating the effective detectivity of the APD HD-DOT system, the area needs to incorporate the packing fraction of the fiber bundles. The Deff_APD accounting for all the parameters to run the HD-DOT system is

Eq. (12)

Deff_APD(K,d) = √(K/d)·DAPD_1 Hz.

The resulting effective “DOT detectivity” of the APD 2.5-mm-diameter HD-DOT system at 10 Hz, 16 temporal encoding steps, and 50% duty cycle is Deff_APD = 46 fW/√Hz-mm2 (Table 1).
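The APD figures can be reproduced from the parameters quoted above (a sketch; the 50% bundle packing fraction is folded into the collection area as described for the 1-Hz detectivity):

```python
import math

# Effective APD specifications: shot-noise limited, so the K/d loss in
# collection time degrades the NEP by sqrt(K/d); the frame rate drops
# out after downsampling to 1 Hz. Parameters follow Table 1.
NEP_APD = 20.0            # fW/sqrt(Hz), C12703-01 at 1 Hz
K, duty = 16, 0.5         # encoding steps, 50% duty cycle
fiber_d, packing = 2.5, 0.5   # mm, 50% bundle packing fraction

nep_eff = math.sqrt(K / duty) * NEP_APD          # ~113 fW/sqrt(Hz)
area = packing * math.pi * (fiber_d / 2) ** 2    # effective mm^2
d_eff = nep_eff / area                           # ~46 fW/sqrt(Hz)-mm^2
```
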

2.2.

Superpixel DOT System

To establish the feasibility of SP-DOT, we developed a 24-source and 28-detector system using a Zyla 5.5 sCMOS. The sources and detectors are located over the visual cortex to test the system performance with retinotopic mapping. This retinotopy system provides 264 usable source–detector measurements with a cap that weighs <1 lb.

To relay the light from the detector fibers to the sCMOS chip, we designed dedicated detector boxes. The SMA end of the detector fibers screws into female SMA adapters on the custom-designed light-tight boxes. Inside the box, 0.75-m-long 200-μm-diameter fibers (NA=0.5, FP200URT, Thorlabs, New Jersey) transmit light from the detector fibers to a dedicated aluminum block containing the cleaved fiber tips [Fig. 4(a)]. A 4f relay (consisting of two Nikon 50-mm f/1.2 Nikkor Ai-S manual lenses) provides a 1:1 magnification of the cleaved fiber tips onto the Zyla 5.5 sCMOS chip (Andor, Belfast, UK). To take advantage of the horizontal readout speed of the sCMOS camera, the 28 detector fiber tips were positioned horizontally in the fiber array. The fibers were separated by 200 μm of aluminum to minimize optical crosstalk on the sCMOS chip, resulting in rows of 23 detectors.

Fig. 4

Superpixel HD-DOT system components. (a) The detector fibers terminate in an aluminum fiber array holder with the fibers separated by 200  μm (top). To take advantage of the readout speed of an sCMOS, the fibers are in rows of 36. The fiber array holder is imaged with a 1:1 magnification via a two-lens system (bottom). (b) The cap has bent fiber tips protruding through it that can slide in and out of the cap to match the curvature of the head. The source fibers have on average 2 mm of space to allow for the beam to expand for ANSI compliance. The detector fibers are flush with the scalp. Rubber stripping is used on the outside of the cap to provide a force toward the center of the cap. (c) The superpixel imaging cap consists of 24 sources and 28 detectors that cover the visual cortex. Due to the low weight of the fibers, the subject can support the cap and move around freely.


To optimize the amount of light reaching the head, we used laser diode sources (50 mW; 830-nm HL8338MG and 690-nm HL6750MG; Thorlabs, New Jersey). The current delivered to the lasers was modulated by dedicated, high-bandwidth (20 MHz) digital input/output lines (PCI-6534; National Instruments, Austin, Texas). A single-source box contains 32 laser diode source channels and electronics in one standard 19-in. rack unit (height=1RU or 1.75 in.). To focus as much light as possible onto the 200-μm-diameter fibers (NA 0.5, FP200URT, Thorlabs, New Jersey), a custom-designed, three-dimensional (3-D) printed laser coupler was attached to the source box containing a 2.5-mm ball lens (N-BK7, Edmund Optics) placed between the diode and the fiber SMA tip (SMA905, 10230A, Thorlabs, New Jersey). The laser coupler transmitted 60% of the laser light into the fiber.

To improve the fiber coupling, ergonomics, and comfort of the imaging system cap, we bent the fibers into a 90-deg curve just before entering the cap [Fig. 4(b)]. Both source and detector fibers had an SMA905 tip on one end and a bent tip on the other to sit in the cap. All right-angle tips were custom-made and polished. To maintain rigidity within the cap, a clear plastic sleeve was glued around the fiber tip that passed through the cap. To provide a safe and comfortable surface that sits against the scalp, biocompatible, translucent epoxy (UV10MED, Masterbond, New Jersey) covered the tip of the fiber [Fig. 4(b)]. An average of 2 mm of the clear epoxy was placed on the tip of the source fibers to allow the light to expand to a beam diameter of 2 mm on entering the skin (0.2 mW/mm2). Detector fibers had the minimum amount of epoxy needed to sufficiently cover the tip, averaging <0.2 mm. In total, the retinotopy SP-DOT system consisted of 76 optical fibers, each 5 m long, with an SMA connector on one end and a 90-deg curve on the other.

The imaging cap determines the spatial location of the source and detector fibers and maintains the coupling between the fibers and the scalp [Fig. 4(c)]. To provide the general shape of the cap, the outermost layer of the cap is composed of rigid thermoplastic molded to an anatomically correct adult-sized head model. A layer of soft neoprene on the inside of the thermoplastic provides comfort and supports the fibers. The array of sources and detectors is placed in two interlaced rectangular arrays with first-through third-nearest neighbor separations of 1.3, 3.0, and 3.9 cm, respectively. Fibers are kept perpendicular and pressed inward toward the scalp by rigid spacers and rubber stripping on the outside of the cap. To improve coupling of the fibers with the scalp, the fiber tips extend 7  mm through the interior of the cap to facilitate combing of the fibers through the hair. Lastly, the cap is held in place against the head with hook-and-loop straps across the forehead of the subject.

2.3.

Superpixel DOT Acquisition and Algorithm

To successfully use the Andor sCMOS for HD-DOT neuroimaging, the system utilizes spatial and temporal encoding, noise reduction algorithms, and superpixel summing. Individual camera images are 2560 pixels wide by 175 pixels tall [Fig. 3(a)]. This asymmetric region of interest is used to permit faster frame rates. Sets of 25 images (24 source positions and one dark frame) are collected at 6 Hz resulting in a camera frame rate of 150 Hz. The image exposure length is, therefore, 6.7 ms and the laser exposure time with a 75% duty cycle is 5 ms. The light collected by the detector fiber is spread across the CMOS chip as a 31-pixel diameter circle [Fig. 3(b)]. The superpixel algorithm converts individual pixel counts to source–detector pair light levels.
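The timing budget above follows from simple arithmetic, sketched here with the acquisition parameters just described:

```python
# 24 source positions plus one dark frame per DOT frame, sampled at 6 Hz.
n_images = 24 + 1                 # encoding steps per DOT frame
dot_rate = 6                      # DOT frame rate, Hz
camera_fps = n_images * dot_rate  # required camera frame rate: 150 Hz
exposure_ms = 1000 / camera_fps   # ~6.7 ms per image
laser_on_ms = 0.75 * exposure_ms  # 75% duty cycle -> 5 ms laser exposure
```
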

2.4.

Subjects and Stimulus Protocol

The research was approved by the Human Research Protection Office of the Washington University School of Medicine, and informed consent was obtained from all participants before scanning. We recruited five healthy adult subjects (four females and one male). Subjects sat in an adjustable chair facing a 19-in. liquid crystal display at a 90-cm viewing distance.3 We positioned the SP-DOT imaging cap such that the optode array was centered circumferentially on the back of the head with the center of the bottom row of fibers on the inion.

All presented visual stimuli were radial, reversing, black-and-white grids (10-Hz reversal) on a 50% gray background. The paradigm consisted of an angularly swept radial grid wedge that rotated at 10  deg/s to complete a sweep of the entire visual field every 36 s. The grid rotated clockwise three times per stimulus run. The stimulus began with the grid at the bottom of the screen, resulting in a left visual field stimulus at 10 s and a right visual field stimulus at 35 s accounting for the hemodynamic delay. Gray screens were presented for the 30 s before and 15 s after the complete sweep sequence to allow for the visual cortex to return to baseline. The total length of one run with three wedge rotations was, therefore, just under 3 min.

2.5.

Functional Data Analysis

Changes in absorption coefficient were calculated by normalizing each source–detector pair measurement by its mean and applying a log transformation. The SNR in the heart rate frequency band was calculated as the average power in the pulse frequencies (0.5 to 1.5 Hz) normalized by the total power in a higher-frequency noise band (2 to 3 Hz). The first-nearest-neighbor (1-nn) pulse SNR per source or detector is calculated as the mean pulse SNR of all the 1-nn measurements of that particular source or detector.
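A minimal version of this band-power SNR estimate might use a simple periodogram, as sketched below (the authors' exact spectral estimator is not specified, so this is an illustrative assumption):

```python
import numpy as np

def pulse_snr_db(signal, fs):
    """Band-power SNR: mean power in the pulse band (0.5-1.5 Hz)
    over mean power in the noise band (2-3 Hz), in dB.
    signal: 1-D time course; fs: sampling rate in Hz (>= 6 Hz so the
    noise band lies below Nyquist)."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    power = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    pulse = power[(freqs >= 0.5) & (freqs <= 1.5)].mean()
    noise = power[(freqs >= 2.0) & (freqs <= 3.0)].mean()
    return 10 * np.log10(pulse / noise)
```
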

Log-ratio measurement channels were bandpass filtered to 0.008 to 0.2 Hz. Superficial and systemic hemodynamic artifacts were removed by regressing the averaged 1-nn measurements from all other measurements. First, second, and third nearest-neighbor measurements that exhibited a standard deviation <2.5% of the mean signal were used for the image reconstruction. Volumetric reconstructions of absorption coefficients at 830 nm were obtained using a single sensitivity matrix based on the segmented head model of the MNI atlas.12,13 Sensitivity matrices were inverted using a Tikhonov regularization constant of λ=0.01 and a spatially variant regularization parameter of β=0.1.2
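The inversion step can be sketched as below, assuming the common spatially variant Tikhonov formulation used in prior HD-DOT work (the weighting matrix L and the function name are illustrative; the authors' exact operator may differ):

```python
import numpy as np

def reconstruct(A, y, lam=0.01, beta=0.1):
    """Tikhonov inversion with spatially variant regularization.

    A: (n_meas, n_voxels) sensitivity matrix; y: (n_meas,) log-ratio data.
    One common HD-DOT form: column-normalize A with
    L = sqrt(diag(A^T A) + beta * max(diag(A^T A))), then apply a
    Tikhonov-regularized pseudoinverse in measurement space.
    """
    diag_ata = np.sum(A * A, axis=0)               # diag(A^T A), per voxel
    L = np.sqrt(diag_ata + beta * diag_ata.max())  # spatially variant weights
    At = A / L                                     # A L^-1 (scale columns)
    G = At @ At.T + lam * np.eye(A.shape[0])       # A~ A~^T + lambda I
    x = (At.T @ np.linalg.solve(G, y)) / L         # L^-1 A~^T G^-1 y
    return x
```

The spatially variant term beta prevents the normalization from amplifying noise in deep voxels where the sensitivity diag(A^T A) is tiny.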

3.

Results

3.1.

Superpixel DOT Algorithm and Specifications

The superpixel algorithm leverages the millions of pixels on a single CMOS chip to increase the DNR and decrease the effective detectivity. The full set of specifications for SP detection is shown in Table 1.

3.2.

Superpixel DOT Data Quality

The SP-DOT system and cap produce high-quality data, as measured by a light fall-off curve and the high SNR of the pulse physiology. The light fall-off curve, which plots source–detector measurement light levels as a function of separation distance, shows the desired log-linear fall-off characteristic of light propagation through biological tissue [Fig. 5(a)]. A cap that does not couple the fibers well to the head would instead show a shallower, flatter fall-off and larger variation among measurements at similar source–detector separations.
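One simple way to quantify this behavior is to fit log-intensity against distance; the following sketch (the function name and goodness-of-fit check are illustrative, not the authors' analysis) returns the fitted slope and r²:

```python
import numpy as np

def falloff_slope(distances_mm, intensities):
    """Fit log10(intensity) vs. source-detector distance.

    Returns (slope per mm, r^2). For good scalp coupling the fall-off
    should be close to log-linear, i.e. r^2 near 1.
    """
    d = np.asarray(distances_mm, dtype=float)
    logy = np.log10(np.asarray(intensities, dtype=float))
    slope, intercept = np.polyfit(d, logy, 1)
    pred = slope * d + intercept
    ss_res = np.sum((logy - pred) ** 2)
    ss_tot = np.sum((logy - logy.mean()) ** 2)
    return slope, 1 - ss_res / ss_tot
```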

Fig. 5

SP-DOT data quality. (a) Data from a healthy volunteer (red measurements) show the expected log-linear decay of light levels with source–detector distance. The spread in the nearest neighbor pairs is less than two orders of magnitude, indicating that there is good coupling between the scalp and all fibers. The first, second, and third nearest-neighbor pairs have SNR>20  dB based on the intensity values compared with the noise floor (blue measurements). (b) Qualitatively, individual detector channels have high enough SNR to show the pulse waveform in the time domain and the frequency domain at 1  Hz. (c) A map of this pulse SNR for all source and detector fibers ensures that all fibers are well coupled to the head with SNR>10  dB.


Each individual detector has sufficient SNR to measure the pulse waveform in both the temporal and frequency domains [Fig. 5(b)]. The physiological signal at each detector should have an SNR greater than the experimentally derived threshold of 10 dB. We calculate this SNR by dividing the maximum power at the pulse frequencies by the average power in the higher-frequency noise band. These values, plotted for each source and detector for both first- and second-nearest neighbors, demonstrate a satisfactory cap fit [Fig. 5(c)].

3.3.

Visual Activations

For the quadrant-arranged visual stimulations, the images acquired with the SP-DOT system demonstrate neural activations that are both temporally and spatially resolved. The left and right visual cortices are spatially resolved in all participants (n=5) by block averaging only three stimulus repetitions per subject [Fig. 6(a)]. Note that all activations are flipped up-down and left-right due to the anatomy of the visual system. When the stimulus is in the bottom right-hand corner of the visual field, a positive change in the absorption coefficient is seen in the left visual cortex [Fig. 6(a), top row]. Similarly, when the stimulus is in the bottom left-hand corner of the visual field, a positive change is seen in the right visual cortex [Fig. 6(a), bottom row]. All activation volumes are normalized to their respective maximum values and thresholded to show only values >40% of the maximum.

Fig. 6

In-vivo retinotopic mapping. Subjects viewed a rotating, wedge-shaped checkerboard to stimulate the visual cortex. (a) Left and right block-averaged (n=3 blocks) visual activations can be separately resolved in all five subjects and in the group average (n=5 subjects). (b) Binary mask of the voxels with a value above 40% of the maximum. The overlap map shows the number of subjects with an activated voxel for either the left or right activation, showing moderate overlap incidence between subjects.


Binary masks of the voxels with a change >40% show symmetrical activations from left and right stimuli [Fig. 6(b)]. Similar spatial results are observed in all five subjects, with intersubject variability comparable to previous HD-DOT systems.3 Although an incidence map of the activation overlap shows variability across individual subjects, the activations are generally colocalized [Fig. 6(b)]. Intersubject comparisons might be improved by ensuring consistent cap placement with better cap design and by acquiring subject-specific coregistration of the data to anatomical images. The localization differences measured could also be due to variability in the functional architecture of each subject’s brain.2,13,14

4.

Discussion

Our results show that the superpixel approach to detection using sCMOS cameras can reduce the weight of HD-DOT imaging arrays by >30-fold compared with standard APD-based HD-DOT systems. Although groups have previously developed CCD-based DOT systems,15–18 these systems have either been too slow to record hemodynamic activity (frame rate <0.01 Hz) or have been used in geometries that require only limited DNR, such as small volumes (e.g., mouse15,18–20) or transmission-mode measurements,15,21 which are of very limited use for human neuroimaging applications. HD-DOT systems that have successfully measured human neuronal activity at near-infrared wavelengths have all used discrete detectors.8 The superpixel approach instead leverages a monolithic imaging sensor and uses a combination of pixel summing, electronic noise reduction, and spatiotemporal encoding to obtain high DNR (>1×10⁵) and low effective detectivity (Deff ≈ 200 fW/√Hz/mm²) at a high frame rate (>6 Hz) using 200-μm-diameter fibers. In addition, our current retinotopic imaging cap weighs only 0.16 lb including 1 m of fiber length, 30 times lighter than standard imaging caps used in APD-based HD-DOT systems with the same number of source–detector pairs.

Our results show that the SP-DOT imaging system and cap provide high-quality in-vivo retinotopy activations across multiple subjects. Requiring only three stimulus blocks to produce block-averaged maps in five subjects demonstrates the utility of the SP-DOT system. The SP-DOT imaging cap is designed to both improve data quality and prevent discomfort by pre-shaping the cap to match typical head curvature, poking the fiber tips through the cap to comb through the hair, and enabling easy cap placement with hook-and-loop fastenings.

The reported SP-DOT system demonstrates the potential of sCMOS-based detection to improve fiber-based DOT methods. A benefit of CMOS cameras over other detectors such as APDs is that they can be significantly cheaper. The Andor Zyla sCMOS camera used in the SP-DOT system costs $10,000, but market demand, driven primarily by cellphone cameras, is steadily decreasing CMOS sensor costs and improving their performance.22 Demand for higher image resolution and smaller cellphones drives sensor miniaturization, while demand for better picture quality, including higher DNR in low light, drives sensitivity and performance. All of these improvements will benefit cellphone cameras as well as scientific systems like SP-DOT.

Fiberless systems have been developed by placing the source and essential detector components directly on the subject’s head. Although these fiberless caps may be more wearable owing to the lack of optical fibers, the systems reported thus far have very few measurement channels, in the range of 6 to 86.23–26 The superpixel system outperforms those systems with 264 measurement channels, which translates into higher spatial resolution and/or a larger field of view. Fiberless systems may eventually be scaled up to whole-head coverage, but this capability has not yet been demonstrated.

There are several natural extensions to this prototype system, including multiple wavelengths for spectroscopy and larger channel counts for a larger field of view. To measure changes in oxy-hemoglobin concentration instead of only the absorption coefficient, an additional wavelength must be used.27 The diameter of the right-angle housing is wide enough to accommodate two 200-μm-diameter fibers, each delivering one wavelength. Using a second wavelength will only increase the effective detectivity by a factor of 2 [Eq. (10)]. To increase the field of view of the current system, more source and detector fibers must be added with appropriate encoding and decoding. Similar to previously published HD-DOT systems,2 the encoding and decoding design of the current SP-DOT system allows for easy expansion of the field of view. The temporal encoding of the 24 source fibers, arranged in rectangular panels of four by six fibers, separates the panels enough that they can operate simultaneously without crosstalk; adding more sources to a cap therefore does not slow the acquisition or the overall frame rate. Similarly, the groups of detector fibers imaged onto the sCMOS camera are separated on the chip enough to prevent crosstalk between two panels. A single image (2560 pixels × 175 pixels) can accommodate up to 111 detector fibers per camera. A sCMOS-based superpixel system using lasers can, therefore, accommodate 111 detectors and 120 dual-wavelength sources with an effective detectivity below 100 fW/√Hz/mm².

An HD-DOT system would need approximately 96 sources and 92 detectors to achieve a field of view large enough for mapping language-processing areas of the brain.2 A SP-DOT system with this configuration would require only two detector boxes (7 RU, or 12 in. of height) and 6 RU (10.5 in. of height) worth of source boxes. Including a computer, source and detector components, and encoding and synchronization hardware, such a system would take up only 12 ft³ (4-ft height × 19-in. width × 2-ft depth), compared with the 44 ft³ required for a 96-source, 92-detector APD-based system (two racks, each 7-ft height × 19-in. width × 2-ft depth).2 The superpixel sCMOS approach thus allows a roughly fourfold reduction in HD-DOT system size compared with APD-based systems without sacrificing system resolution. Additionally, a whole-head cap with 96 detector fibers and 184 dual-wavelength source fibers would weigh only 0.64 lb.

Combined with HD-DOT image quality, the portability and wearability of the SP-DOT system could significantly improve studies requiring a naturalistic environment.28 In-person social interactions could be investigated while the subjects sit comfortably on a couch. SP-DOT could also be used in the clinic for continuous, longitudinal imaging of hospital patients while they lie in their bed, which could impact clinical care decisions and improve patient outcomes. In addition, SP-DOT could enhance applications to newborn and childhood medicine including monitoring in neonatal intensive care units and cooperation from toddlers to study their neural development.29

Appendices

Appendix A:

Calculating the NEP of a Single Pixel

The minimum power required to achieve an SNR of 1 can be related to the photoelectrons generated by the pixel. First, we express the optical power in terms of photoelectrons

Eq. (13)

$$P_0 = \frac{hc\,\phi}{\lambda} = \frac{hc}{\lambda}\left(\frac{Q}{t\,\eta_\lambda}\right),$$
where P0 is the optical power, h is Planck’s constant, ϕ is the photon flux in photons/s, λ is the wavelength of the photon, Q is the number of photoelectrons generated, t is the exposure time, and ηλ is the quantum efficiency at wavelength λ. Substituting Eq. (13) into Eq. (2), the NEP can be expressed as a function of photoelectrons:

Eq. (14)

$$\mathrm{NEP} = \frac{Q\,hc}{\lambda\, t\, \eta_\lambda\,\sqrt{BW}}.$$

To derive an expression for the NEP of a single pixel, we need to determine the number of photoelectrons required to achieve an SNR of 1. For a cooled CMOS sensor, the primary noise sources are shot noise, dark noise, and read-out noise:

Eq. (15)

$$\mathrm{SNR} = \frac{Q}{\sqrt{Q + i_d\, t + N_r^2}},$$
where id is the dark current, t is the integration time, and Nr is the rms read-out noise. Setting the SNR equal to one and assuming the shot noise and dark current are negligible at low light levels, we arrive at an expression for the photoelectrons required for an SNR of 1:

Eq. (16)

$$Q = N_r.$$

Finally, the NEP at 1 Hz of a single pixel is given by

Eq. (17)

$$\mathrm{NEP} = \frac{N_r\, hc}{\lambda\, \eta_\lambda}.$$

The Zyla 5.5 sCMOS has an rms read-out noise of 2.5 electrons and a quantum efficiency of 27.5% at 830 nm, which results in an NEP of 0.0022 fW/√Hz. If N pixels are summed, the variance of the read-out noise increases by a factor of N, so the total number of photoelectrons required to achieve an SNR of 1 increases by a factor of √N.
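Plugging the Zyla 5.5 numbers into Eq. (17) reproduces this figure (a quick numerical check; physical constants rounded):

```python
# Single-pixel NEP of the Zyla 5.5 at 830 nm, per Eq. (17):
# NEP = N_r * h * c / (lambda * eta)
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 830e-9  # m
eta = 0.275          # quantum efficiency at 830 nm
N_r = 2.5            # rms read-out noise, electrons

nep_w = N_r * h * c / (wavelength * eta)  # W/sqrt(Hz)
nep_fw = nep_w * 1e15                     # fW/sqrt(Hz)
print(round(nep_fw, 4))                   # -> 0.0022
```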

Appendix B:

Comparing Commercially Available APD Modules

We compared five commercially available APD modules: the Hamamatsu C12703, Hamamatsu C12703-01, Excelitas 902-200, Excelitas 954-200, and Laser Components S500-10. The module with the lowest specified NEP is the C12703-01, a 3-mm-diameter APD module. Consequently, this module also has the lowest effective detectivity when used with 200-μm-diameter fibers [Table 2, Eq. (12), temporal encoding steps K=16, duty cycle d=0.5, and 50% packing fraction]. Custom module designs using 200-μm-diameter APDs and cooling may be possible but have not yet been realized in a commercially available product.

Table 2

APD module comparison. The Hamamatsu C12703-01 detector has the lowest effective detectivity of commercially available modules.

Module | Diameter (mm) | NEP (fW/√Hz) | D (fW/√Hz/mm²) | Deff (fW/√Hz/mm²)
Excelitas 902-200 | 0.5 | 42 | 1338 | 15,133
Excelitas 954-200 | 0.5 | 110 | 3503 | 39,634
Laser Components S500-10 | 0.5 | 30 | 955 | 10,809
Hamamatsu C12703 | 1.5 | 200 | 6369 | 72,062
Hamamatsu C12703-01 | 3 | 20 | 637 | 7202
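The D column can be reproduced by dividing each module's NEP by the collection area of a 200-μm-diameter fiber; a sketch (Deff additionally folds in the encoding and duty-cycle factor of Eq. (12), which is not reproduced here):

```python
import math

FIBER_DIAMETER_MM = 0.2  # 200-um detector fiber

def detectivity(nep_fw_per_rthz):
    """Detectivity referenced to the fiber face: NEP / fiber area.

    Sketch of Table 2's D column; the effective detectivity Deff further
    scales D by the encoding/duty-cycle factor of Eq. (12), omitted here.
    """
    area_mm2 = math.pi * (FIBER_DIAMETER_MM / 2) ** 2  # ~0.0314 mm^2
    return nep_fw_per_rthz / area_mm2

print(round(detectivity(20)))  # Hamamatsu C12703-01 -> 637
```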

Disclosures

The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Acknowledgments

This study was supported by the National Institutes of Health [R01NS090874 (JPC), K01MH103594 (ATE), and R21MH109775 (ATE)] and a fellowship from the Spencer T. and Ann W. Olin Fellowship at Washington University in St. Louis (KMB). The authors would like to thank David Muccigrosso and Annie Bice for help building the optical fibers in addition to Andrew K. Fishell, Matthew D. Reisman, and Patrick W. Wright for helpful discussions and support.

References

1. B. W. Zeff et al., “Retinotopic mapping of adult human visual cortex with high-density diffuse optical tomography,” Proc. Natl. Acad. Sci. U. S. A. 104(29), 12169–12174 (2007). https://doi.org/10.1073/pnas.0611266104

2. A. T. Eggebrecht et al., “Mapping distributed brain function and networks with diffuse optical tomography,” Nat. Photonics 8(6), 448–454 (2014). https://doi.org/10.1038/nphoton.2014.107

3. A. T. Eggebrecht et al., “A quantitative spatial comparison of high-density diffuse optical tomography and fMRI cortical mapping,” NeuroImage 61(4), 1120–1128 (2012). https://doi.org/10.1016/j.neuroimage.2012.01.124

4. B. R. White et al., “Mapping the human brain at rest with diffuse optical tomography,” Conf. Proc. IEEE Eng. Med. Biol. Soc. 2009, 4070–4072 (2009). https://doi.org/10.1109/IEMBS.2009.5333199

5. B. R. White et al., “Bedside optical imaging of occipital resting-state functional connectivity in neonates,” NeuroImage 59(3), 2529–2538 (2012). https://doi.org/10.1016/j.neuroimage.2011.08.094

6. P. L. Richards, “Bolometers for infrared and millimeter waves,” J. Appl. Phys. 76(1), 1–24 (1994). https://doi.org/10.1063/1.357128

7. N. M. Gregg et al., “Brain specificity of diffuse optical imaging: improvements from superficial signal regression and tomography,” Front. Neuroenerg. 2, 14 (2010). https://doi.org/10.3389/fnene.2010.00014

8. C. Habermehl et al., “Somatosensory activation of two fingers can be discriminated with ultrahigh-density diffuse optical tomography,” NeuroImage 59(4), 3201–3211 (2012). https://doi.org/10.1016/j.neuroimage.2011.11.062

9. S. P. Koch et al., “High-resolution optical functional mapping of the human somatosensory cortex,” Front. Neuroenerg. 2, 12 (2010). https://doi.org/10.3389/fnene.2010.00012

10. D. K. Joseph et al., “Diffuse optical tomography system to image brain activation with improved spatial resolution and validation with functional magnetic resonance imaging,” Appl. Opt. 45(31), 8142–8151 (2006). https://doi.org/10.1364/AO.45.008142

11. A. Bluestone et al., “Three-dimensional optical tomography of hemodynamics in the human head,” Opt. Express 9(6), 272–286 (2001). https://doi.org/10.1364/OE.9.000272

12. J. Mazziotta et al., “A probabilistic atlas and reference system for the human brain: International Consortium for Brain Mapping (ICBM),” Philos. Trans. R. Soc. Lond. B Biol. Sci. 356(1412), 1293–1322 (2001). https://doi.org/10.1098/rstb.2001.0915

13. X. Wu et al., “Quantitative evaluation of atlas-based high-density diffuse optical tomography for imaging of the human visual cortex,” Biomed. Opt. Express 5(11), 3882–3900 (2014). https://doi.org/10.1364/BOE.5.003882

14. S. L. Ferradal et al., “Atlas-based head modeling and spatial normalization for high-density diffuse optical tomography: in vivo validation against fMRI,” NeuroImage 85(Pt 1), 117–126 (2014). https://doi.org/10.1016/j.neuroimage.2013.03.069

15. J. P. Culver et al., “Three-dimensional diffuse optical tomography in the parallel plane transmission geometry: evaluation of a hybrid frequency domain/continuous wave clinical system for breast imaging,” Med. Phys. 30(2), 235–247 (2003). https://doi.org/10.1118/1.1534109

16. S. V. Patwardhan and J. P. Culver, “Quantitative diffuse optical tomography for small animals using an ultrafast gated image intensifier,” J. Biomed. Opt. 13(1), 011009 (2008). https://doi.org/10.1117/1.2830656

17. V. Ntziachristos et al., “Fluorescence molecular tomography resolves protease activity in vivo,” Nat. Med. 8(7), 757–760 (2002). https://doi.org/10.1038/nm729

18. V. Ntziachristos and R. Weissleder, “Charge-coupled-device based scanner for tomography of fluorescent near-infrared probes in turbid media,” Med. Phys. 29(5), 803–809 (2002). https://doi.org/10.1118/1.1470209

19. R. E. Nothdurft et al., “In vivo fluorescence lifetime tomography,” J. Biomed. Opt. 14(2), 024004 (2009). https://doi.org/10.1117/1.3086607

20. M. D. Reisman et al., “Structured illumination diffuse optical tomography for noninvasive functional neuroimaging in mice,” Neurophotonics 4(2), 021102 (2017). https://doi.org/10.1117/1.NPh.4.2.021102

21. R. Choe et al., “Differentiation of benign and malignant breast tumors by in-vivo three-dimensional parallel-plate diffuse optical tomography,” J. Biomed. Opt. 14(2), 024020 (2009). https://doi.org/10.1117/1.3103325

22. Z. S. Ballard and A. Ozcan, “Wearable optical sensors,” in Mobile Health: Sensors, Analytic Methods, and Applications, pp. 313–342, Springer International Publishing, Cham (2017).

23. S. K. Piper et al., “A wearable multi-channel fNIRS system for brain imaging in freely moving subjects,” NeuroImage 85(Pt 1), 64–71 (2014). https://doi.org/10.1016/j.neuroimage.2013.06.062

24. D. Chitnis et al., “Functional imaging of the human brain using a modular, fibre-less, high-density diffuse optical tomography system,” Biomed. Opt. Express 7(10), 4275–4288 (2016). https://doi.org/10.1364/BOE.7.004275

25. A. M. Chiarelli et al., “Characterization of a fiber-less, multichannel optical probe for continuous wave functional near-infrared spectroscopy based on silicon photomultipliers detectors: in-vivo assessment of primary sensorimotor response,” Neurophotonics 4(3), 035002 (2017). https://doi.org/10.1117/1.NPh.4.3.035002

26. P. Pinti et al., “Using fiberless, wearable fNIRS to monitor brain activity in real-world cognitive tasks,” J. Visual. Exp. 106, 53336 (2015). https://doi.org/10.3791/53336

27. S. Wray et al., “Characterization of the near infrared absorption spectra of cytochrome aa3 and haemoglobin for the non-invasive monitoring of cerebral oxygenation,” Biochim. Biophys. Acta 933(1), 184–192 (1988). https://doi.org/10.1016/0005-2728(88)90069-2

28. M. S. Hassanpour et al., “Mapping cortical responses to speech using high-density diffuse optical tomography,” NeuroImage 117, 319–326 (2015). https://doi.org/10.1016/j.neuroimage.2015.05.058

29. S. L. Ferradal et al., “Functional imaging of the developing brain at the bedside using diffuse optical tomography,” Cereb. Cortex 26(4), 1558–1568 (2016). https://doi.org/10.1093/cercor/bhu320

Biography

Karla M. Bergonzi received her BS degree in imaging science from Rochester Institute of Technology in 2011 and her PhD in biomedical engineering from Washington University in St. Louis in 2017. Her research focuses on the construction of biomedical optical imaging systems as well as the signal and image processing. She is now working as a postdoctoral researcher at the University of Pennsylvania.

Tracy M. Burns-Yocum received her BA degree in philosophy-neuroscience-psychology from Washington University in St. Louis in 2014. She has worked as a research assistant in Dr. Culver’s lab for the last two years. She is now pursuing her PhD in psychology at Indiana University Bloomington.

Jonathan R. Bumstead is a postdoctoral researcher at Washington University in Saint Louis conducting research in Dr. Joseph Culver’s lab. His work focuses on the development of multiscale two-photon microscopy and wide-field optical intrinsic signal imaging. Before completing his PhD at Washington University, he received his BS degree in physics and his BSE degree in mechanical engineering from the University of Pittsburgh.

Elise M. Buckley is a junior at Ave Maria University in Ave Maria, Florida, pursuing a BA in physics. She has worked as a summer research student in Dr. Culver’s lab at Washington University in St. Louis for the last four years. She has also assisted in research on Microfading at Ave Maria.

Patrick C. Mannion received his BS degrees in biomedical engineering and chemical engineering from Colorado State University in 2018. He worked as a summer research student in Dr. Culver’s lab at Washington University in St. Louis for two years and as a research student at Colorado State University for three years, and he is currently a controls engineer for Barry–Wehmiller Design Group.

Christopher H. Tracy received his BS degree in food, agricultural, and biological engineering from Ohio State University in 2018. He worked as a summer research student in Dr. Culver’s lab at Washington University in St. Louis for the last three years and is now a research technician in Dr. Culver’s lab. While at the Ohio State University, he worked on a capstone project that designed a thermal system to eliminate Varroa mite infestations in beehives.

Eli Mennerick is a sophomore at Yale University in New Haven, Connecticut, where he is pursuing his BS in biology. He has worked as a summer research assistant in Dr. Culver’s lab at Washington University in St. Louis for the last two years. He has also worked in macrobiology, acting as a research assistant for projects on ecology and biodiversity.

Silvina L. Ferradal received her BS degree in electrical engineering from the Universidad Nacional de Rosario, Argentina, and her PhD in biomedical engineering from Washington University in St. Louis, USA. Following a postdoctoral fellow position at the Fetal-Neonatal Neuroimaging and Developmental Science Center at Boston Children’s Hospital, Harvard Medical School, she became a research consultant. She has extensive experience in advanced optical and magnetic resonance imaging techniques to investigate early brain development in neonates and infants.

Hamid Dehghani received his BSc degree in biomedical and bioelectronic engineering from the University of Salford, Salford, United Kingdom, in 1994, his MSc degree in medical physics and clinical engineering, and his PhD in medical imaging from Sheffield Hallam University, Sheffield, United Kingdom, in 1999. Currently, he is a professor of medical imaging at the School of Computer Science, University of Birmingham, United Kingdom. His research interests include development of biophotonics-based methods for clinical and preclinical applications.

Adam T. Eggebrecht is an instructor of radiology at Washington University School of Medicine. His research focuses on developing analytical and optical tools to provide portable and wearable systems for minimally constrained methods to map brain function that extend functional neuroimaging beyond current limitations. These methods allow brain imaging in populations unsuited to traditional methods due to discomfort with current technology or implanted devices that are not able to be studied with MRI.

Joseph P. Culver is a professor of radiology, physics, and biomedical engineering at Washington University in Saint Louis. His lab explores ways of exploiting noninvasive optical measurements for both functional- and molecular-biological imaging. Specifically, his group develops subsurface optical tomography for imaging intrinsic, hemoglobin-sensitive contrasts, and exogenous, molecularly targeted-fluorescent contrasts. His lab also develops fluorescence tomography systems and methods to image the biodistribution of molecularly targeted probes in small animal models of disease.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Received: 2 June 2018; Accepted: 25 July 2018; Published: 17 August 2018
JOURNAL ARTICLE
11 PAGES

