Open Access
Image quality improvement via spatial deconvolution in optical tomography: time-series imaging
Yong Xu, Yaling Pei, Harry L. Graber, Randall Locke Barbour
Abstract
We present the fourth in a series of studies devoted to the issue of improving image quality in diffuse optical tomography (DOT) by using a spatial deconvolution operation that seeks to compensate for the information-blurring property of first-order perturbation algorithms. Our earlier reports consider only static target media. Here we report spatial deconvolution applied to media with time-varying optical properties, as a model of tissue dynamics resulting from varying metabolic demand and modulation of the vascular bed. Issues under study include the influence of deconvolution on the accuracy of the recovered temporal and spatial information. The impact of noise is also explored, and techniques for ameliorating its information-degrading effects are examined. At low noise levels (i.e., ≤5% of the time-varying signal amplitude), spatial deconvolution markedly improves the accuracy of recovered information. Temporal information is more seriously degraded by noise than is spatial information, and the impact of noise increases with the complexity of the time-varying signal. These effects, however, can be significantly reduced using simple noise suppression techniques (e.g., low-pass filtering). Results suggest that the deconvolution scheme should provide considerable enhancement of the quality of spatiotemporal information recovered from dynamic DOT techniques applied to tissue studies.

1.

Introduction

Near-infrared diffuse optical tomography (DOT) has received increasing attention for over a decade,1 in large measure because of its demonstrated ability to image turbid tissues—including human breast,2, 3, 4, 5 brain,6, 7, 8, 9 and joints10, 11 as well as small animals12, 13 in vivo. Compared to conventional medical imaging techniques such as x-ray computed tomography, ultrasound, and magnetic resonance imaging (MRI), DOT can provide physiological information about molecular-level changes in tissue, including contrast information on hemoglobin oxygenation states, water and lipid content, and tissue scattering properties14 (amplitude and power). So far, most DOT research and applications have been geared toward qualitative characterization of hemoglobin content in spatially resolved static images.1 Although many significant results have been reported, one of the most important aspects of physiology, namely, maintenance of homeostasis through the action of dynamic processes, is largely neglected in these works. In contrast, a succession of reports from our group has described a broad range of dynamic physiological phenomena that could be accurately investigated using DOT methods.15, 16, 17, 18, 19 Reported instrumentation18 and numerical methods15, 16, 17, 19 for the collection and analysis of time-series image data also have been shown to have the ability to improve the quality of optical tomographic images, in terms of localization and contrast. We believe that the extension of DOT techniques from static to dynamic imaging is bound to lead to the development of improved techniques for early diagnosis and for treatment monitoring.

The limits of attainable image quality are among the key factors that will ultimately determine the practical utility of DOT (in either the static or dynamic imaging mode). As is the case for all imaging modalities, these limits depend strongly on the inherent stability of DOT to expected experimental uncertainties, and on the computational effort that is required for image recovery. Accordingly, a principal thrust of many of our previous studies has been to characterize the effect of experimental uncertainty on different DOT formulations, and to determine the computational effort required to produce clinically useful results. Thus, for example, we have shown that computational strategies aimed at estimating absolute optical coefficient values are markedly less stable than are estimates of relative changes.16, 17 We also showed that even when the image reconstruction effort is limited to solution of first-order perturbation equations—whose computational burden can be orders of magnitude lower than that for recursive iterative solutions—the image quality obtained from algorithms that analyze relative changes in optical coefficients over time is notably improved over similar estimates based on absolute coefficient values.

While these earlier results have been encouraging, the images obtained nevertheless have had relatively low spatial resolution compared to that achieved with more established imaging techniques. Recently, we described an image correction technique that, when applied to first-order reconstructed DOT images, markedly improves image quality without adding a significant computational burden.20, 21, 22, 23 The technique employs a linear spatial deconvolution step that serves to unmix information that has been mapped from the object to more than one location in the image domain. Notably, the approach taken, which is conceptually similar to the encoding of spatial information used20 in MRI, does not require prior knowledge of the target properties and is applicable to arbitrary media. We have also shown that the technique is robust, in that it can be effectively applied to a large number of different medium geometries and internal compositions, as well as to restricted illumination-detection configuration (e.g., back-reflection only) cases.22, 23

In the previous papers, however, deconvolution was applied only to target media with temporally static spatial distributions of optical coefficients. Here we extend our characterization of the image correction technique to media exhibiting dynamic phenomena similar to the types found in living tissue. It is our belief that examination of dynamic phenomena using time-series DOT will provide for more direct assessment of tissue-vascular coupling and, in particular, the specific mechanisms of autoregulation and autonomic control over the vascular bed. Results obtained here show that when combined with simple noise suppression techniques that enable exploration of low-frequency dynamic states, significant improvements in spatiotemporal accuracy of DOT images can be achieved with high computational efficiency.

2.

Methods

2.1.

Spatial Deconvolution Method

The reasoning that underlies our linear deconvolution strategy, and the mathematical details of its implementation, are given in Refs. 21, 22, 23. Here we briefly describe the method in intuitive terms, by analogy with the calibration of a measurement system, with a focus on the computation and application of deconvolution operators.

Any measuring system is subject to the influence of systematic errors that serve to distort the derived information. The goal of calibration is to eliminate or to reduce the systematic errors. In the case of imaging systems, one form of error is the blurring effect caused by the occurrence of an “information spread function”22 of finite extent. Simply put, this causes information present in the object domain to be mapped to more than one location in the image domain. A standard strategy is to directly measure the spreading and to use this as a calibrating tool to correct for expected distortions.24

Here, we have adopted a conceptually similar approach, but have implemented a strategy that is well suited for arbitrary media and for use with numerical solvers. Consider a known optical coefficient distribution $\mathbf{X}_0(\mathbf{r})$ in the spatial domain: $\mathbf{X}_0(\mathbf{r}) = [x_{01}\; x_{02}\; \cdots\; x_{0N_d}]^T$, where the spatial domain is discretized by an $N_d$-node mesh, $x_{0i}$ is the optical coefficient value on the $i$th node, and superscript $T$ denotes the matrix transpose operation. Using the known distribution $\mathbf{X}_0(\mathbf{r})$ as the input of the imaging system, we obtain the reconstructed optical coefficient distribution $\mathbf{X}_r(\mathbf{r}) = [x_{r1}\; x_{r2}\; \cdots\; x_{rN_d}]^T$, where $x_{ri}$ is the reconstructed optical coefficient on the $i$th node. Thus, the information-spreading properties of the imaging system can be obtained by solving the “calibration” equation $\mathbf{X}_0(\mathbf{r}) = \mathbf{F}\,\mathbf{X}_r(\mathbf{r})$, where $\mathbf{F}$ is an $N_d \times N_d$ matrix, which is called the deconvolution operator or image-correcting filter. As a practical matter, of course, comparison of a single $\mathbf{X}_0(\mathbf{r})$ to the corresponding $\mathbf{X}_r(\mathbf{r})$ does not suffice to uniquely determine $\mathbf{F}$. A large number of $\mathbf{X}_0(\mathbf{r})$-to-$\mathbf{X}_r(\mathbf{r})$ comparisons is needed.

In practice, generation of a deconvolution operator or image-correcting filter comprises four main steps. First, we assign each mesh node a time-dependent absorption and/or scattering coefficient. The functional form used here, as in Refs. 20, 21, 22, 23, is a set of sinusoids with incommensurate frequencies (in particular, $f = \sqrt{2}\,\mathrm{s}^{-1}, \sqrt{3}\,\mathrm{s}^{-1}, \sqrt{5}\,\mathrm{s}^{-1}, \ldots$, i.e., square roots of successive prime numbers), and whose amplitudes (ac) were equal to 8% of their mean (dc) values. These optical parameter functions are sampled at a constant interval $\Delta t$, until a total of $N_t$ spatial distributions are recorded (for the examples reported on here, $\Delta t = 0.005\,\mathrm{s}$ and $N_t = 16{,}384 = 2^{14}$):

$$\mathbf{X}_0^i(\mathbf{r}) = [x_{01}^i\; x_{02}^i\; \cdots\; x_{0N_d}^i]^T, \qquad i = 1, 2, \ldots, N_t.$$
Then, for each of these distributions a forward-problem solution is computed, using a specified fixed set of sources and detectors, such as the configuration sketched in Fig. 1 in Sec. 2(C). Images of the spatial distributions of medium properties at each of the Nt sample time points are reconstructed:
$$\mathbf{X}_r^i(\mathbf{r}) = [x_{r1}^i\; x_{r2}^i\; \cdots\; x_{rN_d}^i]^T, \qquad i = 1, 2, \ldots, N_t.$$
Finally, the original, or true, and reconstructed spatial distributions of medium optical parameters are accumulated in two $N_d \times N_t$ matrices, $\mathbf{Y} = [\mathbf{X}_0^1\; \mathbf{X}_0^2\; \cdots\; \mathbf{X}_0^{N_t}]$ and $\hat{\mathbf{Y}} = [\mathbf{X}_r^1\; \mathbf{X}_r^2\; \cdots\; \mathbf{X}_r^{N_t}]$, and a deconvolution operator is computed by solving the linear system $\mathbf{Y} = \mathbf{F}\hat{\mathbf{Y}}$, or

Eq. 1

$$\begin{bmatrix} x_{01}^1 & x_{01}^2 & \cdots & x_{01}^{N_t} \\ x_{02}^1 & x_{02}^2 & \cdots & x_{02}^{N_t} \\ \vdots & \vdots & \ddots & \vdots \\ x_{0N_d}^1 & x_{0N_d}^2 & \cdots & x_{0N_d}^{N_t} \end{bmatrix} = \begin{bmatrix} f_{11} & f_{12} & \cdots & f_{1N_d} \\ f_{21} & f_{22} & \cdots & f_{2N_d} \\ \vdots & \vdots & \ddots & \vdots \\ f_{N_d 1} & f_{N_d 2} & \cdots & f_{N_d N_d} \end{bmatrix} \begin{bmatrix} x_{r1}^1 & x_{r1}^2 & \cdots & x_{r1}^{N_t} \\ x_{r2}^1 & x_{r2}^2 & \cdots & x_{r2}^{N_t} \\ \vdots & \vdots & \ddots & \vdots \\ x_{rN_d}^1 & x_{rN_d}^2 & \cdots & x_{rN_d}^{N_t} \end{bmatrix},$$
where the $N_d \times N_d$ matrix $\mathbf{F} = [f_{ij}]$ is the deconvolution operator or image-correcting filter, which contains the contribution of each medium node to all the image pixels. Note that the system in Eq. 1 may be ill-conditioned, necessitating the use of a regularization method in order to accurately compute $\mathbf{F}$. [For the examples reported on here, the reconstructed image values in $\hat{\mathbf{Y}}$ were imported into Eq. 1 in a fixed-precision format, and this had a regularizing effect.]
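For concreteness, the following Python sketch shows one way a filter of this kind could be computed once the true and reconstructed distributions have been accumulated. The function name, the explicit ridge term, and the synthetic test at the bottom are our own illustrative assumptions, not the computation actually used in the paper.

    import numpy as np

    def build_deconvolution_operator(Y_true, Y_recon, lam=1e-3):
        """Solve Y_true = F @ Y_recon (Eq. 1) for the Nd x Nd filter F.

        Y_true, Y_recon : (Nd, Nt) arrays of true and reconstructed nodal
        values (Nt sampled distributions on an Nd-node mesh). The small
        Tikhonov term lam stands in for the regularization that the
        ill-conditioned system generally requires.
        """
        A = Y_recon.T                    # (Nt, Nd): each row is one time sample
        B = Y_true.T                     # (Nt, Nd)
        Nd = A.shape[1]
        # Regularized normal equations: F^T = (A^T A + lam*I)^(-1) A^T B
        Ft = np.linalg.solve(A.T @ A + lam * np.eye(Nd), A.T @ B)
        return Ft.T                      # (Nd, Nd) deconvolution operator

    if __name__ == "__main__":
        # Synthetic check with a made-up "information-spreading" system
        rng = np.random.default_rng(0)
        Nd, Nt = 50, 400
        Y_true = rng.normal(size=(Nd, Nt))
        spread = np.eye(Nd) + 0.1 * rng.normal(size=(Nd, Nd))
        Y_recon = np.linalg.solve(spread, Y_true)     # simulated "reconstructions"
        F = build_deconvolution_operator(Y_true, Y_recon)
        print(np.max(np.abs(F @ Y_recon - Y_true)))   # should be near zero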

Fig. 1

3-D FEM mesh, source-detector configuration, and heterogeneous test medium: (a) hemispheric mesh with 982 nodes, 4309 tetrahedral elements, and a diameter of 8 cm, where 25 sources and 29 detectors are marked with small white circles; and (b) the heterogeneous test medium used in demonstrations of the efficacy of deconvolution at improving reconstructed image accuracy. The three projection planes show the positions and shapes of the inclusions.


After a deconvolution operator $\mathbf{F}$ is obtained by following the above sequence of steps for a given numerical mesh and source-detector geometry, then any image, $\mathbf{Z} = [z_{r1}\; z_{r2}\; \cdots\; z_{rN_d}]^T$, subsequently reconstructed—from simulated or experimental or clinical data—by using the same mesh and source-detector geometry, and the same reconstruction algorithm (including regularization) as was used in generating $\hat{\mathbf{Y}}$, can be deconvolved or corrected by the simple matrix multiplication $\mathbf{Z}_c = \mathbf{F}\mathbf{Z}$:

Eq. 2

$$\begin{bmatrix} z_{c1} \\ z_{c2} \\ \vdots \\ z_{cN_d} \end{bmatrix} = \begin{bmatrix} f_{11} & f_{12} & \cdots & f_{1N_d} \\ f_{21} & f_{22} & \cdots & f_{2N_d} \\ \vdots & \vdots & \ddots & \vdots \\ f_{N_d 1} & f_{N_d 2} & \cdots & f_{N_d N_d} \end{bmatrix} \begin{bmatrix} z_{r1} \\ z_{r2} \\ \vdots \\ z_{rN_d} \end{bmatrix}.$$
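Applying the operator is then just the matrix product of Eq. 2. As a usage sketch (again an assumption on our part, reusing the hypothetical helper defined above), a whole reconstructed time series stored column-wise can be corrected in one step:

    import numpy as np

    def correct_image(F, z_recon):
        """Eq. 2: z_c = F @ z_r for a single reconstructed image vector."""
        return F @ np.asarray(z_recon, dtype=float)

    # For an image time series Z of shape (Nd, Nt), a single matrix product
    # corrects every frame at once:
    #     Z_corrected = F @ Z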

2.2.

Solutions of Forward and Inverse Problems

Tomographic data for the simulated tissue models were acquired by using the finite element method to solve the diffusion equation, with a dc source term and Robin boundary conditions, as described in Refs. 16, 17.

The reconstruction algorithm that has been used to generate the results presented below seeks to solve a modified perturbation equation whose form is

Eq. 3

$$\mathbf{W}_r\,\delta\mathbf{x} = \delta\mathbf{R}_r,$$
where δx is the vector of differences between the optical properties (e.g., absorption and scattering coefficients) of a target (measured) and a defined reference medium; Wr is the Jacobian or weight matrix, which specifies the influence that each voxel has on the surface detectors for the selected reference medium; and δRr is proportional to the difference between detector readings obtained from the target in two distinct states (e.g., difference between data collected at two different instants, or the difference between instantaneous and time-averaged data).

As in Refs. 20, 21, 22, 23, here we use the normalized difference method16 to define the right-hand side of Eq. 3. Thus δRr is given by

Eq. 4

$$(\delta R_r)_i = \frac{(R - R_0)_i}{(R_0)_i}\,(R_r)_i,$$
where Rr denotes the computed detector readings corresponding to a selected reference medium. For the filter-generating computations, R and R0 represent the detector readings at a specific time point and the time-averaged intensity, respectively. For the filter-testing computations, R and R0 are the detector readings computed for the heterogeneous target medium and homogeneous reference medium, respectively.
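Equation 4 transcribes directly into code; the sketch below (function and argument names are ours) assumes the readings are supplied as equally shaped arrays over the source-detector channels.

    import numpy as np

    def normalized_difference(R, R0, Rr):
        """Eq. 4: (dR_r)_i = (R - R0)_i / (R0)_i * (Rr)_i.

        R  : detector readings for the target state
        R0 : reference readings (time-averaged, or homogeneous-medium data)
        Rr : computed readings for the selected reference medium
        """
        R, R0, Rr = (np.asarray(a, dtype=float) for a in (R, R0, Rr))
        return (R - R0) / R0 * Rr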

The weight matrix Wr is computed in the manner described in Ref. 24. For each combination of medium geometry and source-detector (S-D) configuration, a single set of weight matrices is used for all inverse problem computations. These are computed for a homogeneous reference medium having the same shape, size, and measurement geometry as the (heterogeneous) target.

Zero-order Tikhonov regularization, or ridge regularization, is used to stabilize the solution to Eq. 3, which formally is given by

Eq. 5

$$\delta\mathbf{x} = \left(\mathbf{W}_r^T\mathbf{W}_r + \lambda\mathbf{I}\right)^{-1}\mathbf{W}_r^T\,\delta\mathbf{R}_r,$$
if Eq. 3 is overdetermined, and by

Eq. 6

$$\delta\mathbf{x} = \mathbf{W}_r^T\left(\mathbf{W}_r\mathbf{W}_r^T + \lambda\mathbf{I}\right)^{-1}\delta\mathbf{R}_r,$$
if Eq. 3 is underdetermined, where λ is the regularization parameter (the numerical value used for all inverse-problem computations was λ = 1.0, except for the results shown in Fig. 7 in Sec. 3).
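The two regularized formulas can be written compactly as follows. This is a minimal sketch assuming dense matrices and the λ = 1.0 default quoted in the text; it is not the paper's actual solver, which used a Levenberg-Marquardt implementation as noted below.

    import numpy as np

    def solve_perturbation(W, dR, lam=1.0):
        """Zero-order Tikhonov (ridge) solution of W dx = dR (Eqs. 5 and 6).

        Chooses the overdetermined form (Eq. 5) when there are at least as
        many measurements as unknowns, and the underdetermined form (Eq. 6)
        otherwise.
        """
        Nf, Nu = W.shape                  # Nf S-D channels, Nu unknowns (2*Nd)
        if Nf >= Nu:                      # Eq. 5
            return np.linalg.solve(W.T @ W + lam * np.eye(Nu), W.T @ dR)
        else:                             # Eq. 6
            return W.T @ np.linalg.solve(W @ W.T + lam * np.eye(Nf), dR)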

Fig. 7

Optimization of the regularization parameter. All images shown here have been deconvolved and denoised with a temporal LPF (threshold frequency 0.15 Hz): (a) λ = 0.1 and (b) λ = 10.0. Noise model [Eq. 11] parameters are K0 = 5% and KW = 10%; inclusion/background absorption contrast is 2.


A Levenberg-Marquardt (LM) algorithm25 is used to compute numerical solutions to Eq. 3. In these computations, the δx solved for includes position-dependent perturbations in both μa and D . No use is made of a priori information regarding the spatial distributions of either coefficient. Thus the dimensions of the quantities in Eq. 3 are Nf×(2Nd) for Wr , where Nf is the overall number of S-D channels, Nf×1 for δRr , and (2Nd)×1 for δx .

2.3.

Static and Dynamic Features of Targets

The test medium geometry and source-detector configuration used for all forward- and inverse-problem computations, both for generating deconvolution operators and for computing detector readings and reconstructing images of the dynamic test media, are shown in Fig. 1. The hemispheric finite element method (FEM) mesh shown in Fig. 1(a) approximates the measurement geometry for DOT mammographic studies. There are 29 detector locations on the mesh [only 14, marked with small white circles on the surface, are visible in Fig. 1(a)], and 25 of these also were used as sources, for a total of 725 S-D channels. Figure 1(b) shows the positions and shapes of three inclusions inside the heterogeneous test medium, in x-y, x-z, and y-z projection planes; this medium is used for testing the performance of the deconvolution strategy. For all computations considered in this paper, the absorption coefficient of the test medium's background is μa = 0.06 cm⁻¹, and the test medium has spatially homogeneous and temporally invariant scattering, with μs = 10 cm⁻¹. The FEM mesh used for all inverse-problem computations contains 4309 tetrahedral elements and 982 nodes. The same coarse mesh is used for the filter-generating forward-problem computations [otherwise, the point-by-point comparisons represented by Eq. 1 would not be possible], while a finer mesh containing 10,305 tetrahedral elements and 2212 nodes is used for the forward-problem computations on the dynamic target media.

To explore dynamic characteristics of time-series images under deconvolution, four time-varying functions (sketched in Fig. 2 ) are assigned to the absorption coefficients of the test medium’s inclusions as follows. Shown in Fig. 2(a) is a sinusoidal time series:

Eq. 7

$$\mu_a(t) = \mu_{a0} + \Delta\mu_a \cos(2\pi f_0 t + \varphi_0);$$
in Fig. 2(b), an amplitude-modulated time series:

Eq. 8

$$\mu_a(t) = \mu_{a0} + \Delta\mu_a\left[1 + 0.5\sin(2\pi f_a t + \varphi_a)\right]\cos(2\pi f_0 t + \varphi_0);$$
in Fig. 2(c), a time-dependent frequency:

Eq. 9

$$\mu_a(t) = \mu_{a0} + \Delta\mu_a \cos\!\left\{2\pi f_0\left[1 - 0.5\sin(2\pi f_m t + \varphi_m)\right]t + \varphi_0\right\};$$
and in Fig. 2(d), a combination of time-varying frequency and amplitude modulation:

Eq. 10

$$\mu_a(t) = \mu_{a0} + \Delta\mu_a\left[1 + 0.5\sin(2\pi f_a t + \varphi_a)\right]\cos\!\left\{2\pi f_0\left[1 - 0.5\sin(2\pi f_m t + \varphi_m)\right]t + \varphi_0\right\}.$$
These four functional forms were chosen as models of the types of tissue dynamics known to result from varying metabolic demand and modulation of the vascular bed.26 Parameter values μa0 = 0.12 cm⁻¹, Δμa = 0.024 cm⁻¹, f0 = 0.1 Hz, fa = 0.03 Hz, fm = 0.03 Hz, φ0 = 0, φa = 0, and φm = 0 were used in calculating the curves in Fig. 2.
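For reference, the four assigned time courses can be evaluated directly from Eqs. 7 to 10; the short numpy sketch below (function names are ours) uses the parameter values just quoted.

    import numpy as np

    # Parameter values used for the curves in Fig. 2
    mu_a0, d_mu_a = 0.12, 0.024        # cm^-1
    f0, fa, fm = 0.1, 0.03, 0.03       # Hz
    phi0 = phia = phim = 0.0

    def mu_a_eq7(t):
        """Sinusoidal time series, Eq. 7."""
        return mu_a0 + d_mu_a * np.cos(2 * np.pi * f0 * t + phi0)

    def mu_a_eq8(t):
        """Amplitude-modulated time series, Eq. 8."""
        am = 1.0 + 0.5 * np.sin(2 * np.pi * fa * t + phia)
        return mu_a0 + d_mu_a * am * np.cos(2 * np.pi * f0 * t + phi0)

    def mu_a_eq9(t):
        """Time-dependent frequency, Eq. 9."""
        fmod = 1.0 - 0.5 * np.sin(2 * np.pi * fm * t + phim)
        return mu_a0 + d_mu_a * np.cos(2 * np.pi * f0 * fmod * t + phi0)

    def mu_a_eq10(t):
        """Combined amplitude and frequency modulation, Eq. 10."""
        am = 1.0 + 0.5 * np.sin(2 * np.pi * fa * t + phia)
        fmod = 1.0 - 0.5 * np.sin(2 * np.pi * fm * t + phim)
        return mu_a0 + d_mu_a * am * np.cos(2 * np.pi * f0 * fmod * t + phi0)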

Fig. 2

Time series assigned to the optical coefficients of the test medium’s inclusions: (a) sinusoidal time series; (b) amplitude-modulated time series; (c) variable-frequency time series; and (d) amplitude-modulated and variable-frequency time series, where the four time-series curves correspond to Eqs. 7 to 10, respectively.


2.4.

Three-Dimensional Detector Noise Model

Gaussian noise was added to simulated detector readings in most of the studies considered in this paper, for the purpose of investigating the combined effects of noise and spatial deconvolution on the accuracy of dynamic information recovered from image time series. The noise-to-signal ratio of our detector noise model can be expressed by27

Eq. 11

$$\sigma_{ij} = \left(\frac{N}{S}\right)_{ij} = K_0 + (K_W - K_0)\left(\frac{d_{ij}}{W}\right)^4,$$
where dij is the distance between the ith source and the jth detector, W is the maximal distance between sources and detectors [i.e., W = max(dij)], K0 is the noise-to-signal ratio for a colocated source and detector, and KW stands for the noise-to-signal ratio when dij = W. The functional form of Eq. 11, and the numerical value of the exponent, are empirically derived, not deduced from theoretical considerations. However, as a noise model it is in good agreement with our usual experimental and clinical experience.18, 28

To quantitatively analyze the effect of noise on spatial and temporal accuracy of reconstructed images, we here define six noise levels:

level 1: K0 = 0.5% and KW = 5%;
level 2: K0 = 1.0% and KW = 10%;
level 3: K0 = 2.0% and KW = 20%;
level 4: K0 = 3.0% and KW = 30%;
level 5: K0 = 4.0% and KW = 40%;
level 6: K0 = 5.0% and KW = 50%.
Figure 3 shows the S-D distance dependence of the noise-to-signal ratios of these six noise levels for the S-D configuration shown in Fig. 1(a). In Sec. 3, the impact of noise level on the spatial and temporal accuracy of images is explored.
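A sketch of how such a noise model might be applied in simulation is given below. Equation 11 fixes only the noise-to-signal ratio; the specific assumption that the noise is zero-mean Gaussian with standard deviation σij times the noise-free reading is our reading of the text, not a prescription from the paper.

    import numpy as np

    def noise_to_signal(d, W, K0, KW):
        """Eq. 11: sigma_ij = K0 + (KW - K0) * (d_ij / W)**4."""
        return K0 + (KW - K0) * (np.asarray(d, dtype=float) / W) ** 4

    def add_detector_noise(R, d, W, K0, KW, rng=None):
        """Add zero-mean Gaussian noise with standard deviation sigma_ij * |R_ij|
        to the noise-free detector readings R (d holds the matching
        source-detector distances)."""
        rng = np.random.default_rng() if rng is None else rng
        sigma = noise_to_signal(d, W, K0, KW)
        return R + sigma * np.abs(R) * rng.standard_normal(np.shape(R))

    # The six noise levels defined above, as (K0, KW) pairs with KW/K0 = 10:
    NOISE_LEVELS = {1: (0.005, 0.05), 2: (0.01, 0.10), 3: (0.02, 0.20),
                    4: (0.03, 0.30), 5: (0.04, 0.40), 6: (0.05, 0.50)}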

Fig. 3

Variation in noise-to-signal (N/S) ratio with distance between source and detector locations. The N/S ratio increases with distance, as described by Eq. 11, in agreement with usual experimental and clinical experience. Curves 1 to 6 correspond to the six noise levels defined in Sec. 2(D), respectively.


In addition to the six levels just enumerated, some other cases that use Eq. 11 as the noise model but have KW/K0 ratios different from 10:1 were used to generate some of the image results that are presented subsequently (e.g., Figs. 6 and 7 in Sec. 3). Levels 1 to 6, however, are used to define a 1-D scale for plots of image characteristics versus noise magnitude (e.g., Figs. 9 and 11 in Sec. 3).

Fig. 6

Effect of temporal and spatial LPF denoising: (a) deconvolved image, (b) deconvolved+temporal low-pass filtered image, and (c) deconvolved+temporal+spatial low-pass filtered image. The noise model [Eq. 11] parameters are K0=5% and KW=10% , absorption contrast of inclusions is 2, and the regularization parameter is λ=1.0 .


Fig. 9

Noise dependence of spatial accuracy for denoised images: ‘+’ and ‘○’ symbols, SC plots for images that have not been deconvolved; ‘*’ and ‘●’ symbols, SC plots for deconvolved images; dashed lines, only a temporal LPF is used for denoising; solid lines, both temporal and spatial LPFs are used for denoising. The absorption contrast is 2.4 in all cases.


Fig. 11

Noise dependence of temporal accuracy for denoised images: ‘+’ and ‘○’ symbols, TC plots for images that have not been deconvolved; ‘*’ and ‘●’ symbols, TC plots for deconvolved images; dashed lines, only a temporal LPF is used for denoising; solid lines, both temporal and spatial LPFs are used for denoising; time-averaged contrast is 2 and the normalized modulation amplitude is 0.2.


The presence of noise in measured data degrades the quality of recovered images. As shown in Sec. 3, simple denoising techniques, such as temporal low-pass filtering,29 spatial low-pass (or long-pass) filtering,30 and optimization of the regularization parameter,31 were used to denoise deconvolved images, yielding an additional improvement in image quality.
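The paper does not spell out the filter implementations. As one plausible realization (an assumption on our part), a temporal low-pass filter with a sharp 0.15-Hz threshold can be applied node by node in the Fourier domain; this is a linear operation in the time domain, consistent with the commutativity observation made in Sec. 4.

    import numpy as np

    def temporal_lowpass(image_series, dt, f_cut=0.15):
        """Zero out temporal frequency components above f_cut (Hz).

        image_series : (Nd, Nt) array, one reconstructed time course per node
        dt           : sampling interval of the image time series, in seconds
        """
        Nt = image_series.shape[-1]
        spectrum = np.fft.rfft(image_series, axis=-1)
        freqs = np.fft.rfftfreq(Nt, d=dt)
        spectrum[..., freqs > f_cut] = 0.0
        return np.fft.irfft(spectrum, n=Nt, axis=-1)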

2.5.

Quantitative Assessments of Spatial and Temporal Accuracy

Quantitative assessment of image quality is an essential aspect of characterizing the practicality of a reconstruction algorithm. Here we use the spatial and temporal correlations between target medium and reconstructed images as the indices of spatial and temporal accuracy, respectively, of recovered images.19 The spatial correlation (SC) is defined as

Eq. 12

$$c(t_0)_{uv} = \frac{1}{N_d - 1}\sum_{i=1}^{N_d}\left(\frac{u_i - \bar{u}}{s_u}\right)\left(\frac{v_i - \bar{v}}{s_v}\right),$$
where $u_i = u(x_i, y_i; t_0)$ are the true values of the contrast parameter in the target medium, $v_i = v(x_i, y_i; t_0)$ are the reconstructed values of the same contrast parameter, $\bar{u}$ and $\bar{v}$ are the spatial mean values of $u$ and $v$, and $s_u$ and $s_v$ are the spatial standard deviations. The summation runs over all $N_d$ mesh nodes. The temporal correlation (TC) is defined as

Eq. 13

$$c(x_0, y_0)_{uv} = \frac{1}{N_t - 1}\sum_{i=1}^{N_t}\left(\frac{u_i - \bar{u}}{s_u}\right)\left(\frac{v_i - \bar{v}}{s_v}\right),$$
where $u_i = u(x_0, y_0; t_i)$ and $v_i = v(x_0, y_0; t_i)$ are contrast parameter values of the target medium and reconstructed image, respectively, and the summation runs over all $N_t$ time points. Here, $\bar{u}$ and $\bar{v}$ are temporal mean values, and $s_u$ and $s_v$ are temporal standard deviations. As already noted, for all simulation studies conducted for this paper, the optical coefficients of the target medium's background region were static, i.e., $s_u = 0$. Consequently, the TC between target and image is mathematically undefined in this region, and all subsequently reported TC values are spatial averages over the inclusion volume only.
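Both indices are ordinary Pearson-type correlations with the 1/(N − 1) normalization of Eqs. 12 and 13; a direct transcription (function names are ours) is:

    import numpy as np

    def spatial_correlation(u_true, v_recon):
        """SC of Eq. 12: correlation over the Nd nodes at one time point
        (u_true and v_recon are length-Nd vectors)."""
        u = np.asarray(u_true, dtype=float)
        v = np.asarray(v_recon, dtype=float)
        Nd = u.size
        uz = (u - u.mean()) / u.std(ddof=1)
        vz = (v - v.mean()) / v.std(ddof=1)
        return np.sum(uz * vz) / (Nd - 1)

    def temporal_correlation(u_series, v_series):
        """TC of Eq. 13: the same correlation taken over the Nt time points
        at one node; undefined where the true time course is constant."""
        u = np.asarray(u_series, dtype=float)
        v = np.asarray(v_series, dtype=float)
        Nt = u.size
        if u.std(ddof=1) == 0.0:
            return np.nan       # e.g., static background nodes
        uz = (u - u.mean()) / u.std(ddof=1)
        vz = (v - v.mean()) / v.std(ddof=1)
        return np.sum(uz * vz) / (Nt - 1)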

2.6.

Note Regarding Presentation of 3-D Graphics

Results presented below include several 3-D reconstructed images (Figs. 4, 5, 6, 7 in Sec. 3) that are rendered as contour-surface plots, with all μa values below a threshold percentage of the maximum recovered μa set to zero. This mode of presentation has the potential to create the suspicion that seemingly positive results are only a consequence of adjusting the threshold until the desired appearance is achieved. That such is not the case here is proved by inspection of the supplemental figures presented in the appendix. There a different mode of presentation [i.e., a sequence of two-dimensional (2-D) cuts through the 3-D image] that does not require specification of a threshold is used. Space limitations do not permit the use of the 2-D slicing technique for all of the 3-D results shown in Figs. 4, 5, 6, 7 in Sec. 3.

Fig. 4

(a) Image reconstructed from noise-free detector readings—uncorrected (top row) and deconvolved (bottom row), where numbers along gray-level scale are μa values—and (b) time-dependent SC of the reconstructed image time series for two periods of the sinusoidally varying μa in the inclusions—solid and dashed curves correspond to uncorrected and deconvolved images, respectively.


Fig. 5

(a) Image reconstructed from level 2 noise-added detector readings: uncorrected (5(a.1)), denoised (temporal low-pass filter) but not deconvolved (5(a.2)), deconvolved but not denoised (5(a.3)), and deconvolved and denoised (5(a.4)), where numbers along color bar are μa values. (b) Time-dependent SC of the reconstructed image time series for two periods of the inclusions’ sinusoidal μa(t) fluctuation: blue, green, red, and black curves correspond to uncorrected, denoised-only, deconvolved-only, and deconvolved+denoised images, respectively.


3.

Results

Qualitative and quantitative assessments of the effectiveness of the linear deconvolution method applied to image time series are presented in this section. In the first example, in which noise-free data were used, comparisons between convolved (uncorrected) and deconvolved images are shown in Fig. 4. A sinusoidal pattern of temporal variation [Eq. 7, Fig. 2(a)], with dynamic feature parameters μa0 = 0.12 cm⁻¹, Δμa = 0.024 cm⁻¹, f0 = 0.1 Hz, and φ0 = 0, was assigned to the absorption coefficients of all three inclusions in the test medium in this example. In Fig. 4(a), we can see that, in agreement with results presented in Ref. 21, image quality at a specific time point is markedly improved by deconvolution. It is further seen in Fig. 4(b) that the time-dependent spatial accuracy of the deconvolved image time series, as quantified by the SC index described in Sec. 2(E), also is significantly larger than that of the uncorrected images. With regard to temporal accuracy, the TC index differs from unity by only a few tenths of a percent, for both the uncorrected and deconvolved image time series, in this noise-free case (see Fig. 11 later in this section).

A logical and important next step is to determine the effect of noise in the detector data on the spatial and temporal accuracy. Figure 5 shows results for the case in which the three inclusions had the same dynamic feature parameters as those used for the noise-free example, and level 2 (see Sec. 2(D)) Gaussian noise was added to detector measurements. In Fig. 5(a) we see that deconvolution (5(a.3)) improves the spatial resolution and localization of inclusions in the image recovered at a specific time point, relative to that in the uncorrected image (5(a.1)); at the same time, as noted in Ref. 21, the noise leads to the appearance of spurious absorption coefficient perturbations, especially in regions of the image lying near the curved hemispheric surface. However, when a simple temporal low-pass filter (LPF) with a cutoff frequency of 0.15 Hz is applied to the deconvolved image (5(a.4)), the noise artifacts are almost completely eliminated. On the other hand, use of the LPF in the absence of deconvolution (5(a.2)) produces some reduction in peripheral noise artifact levels, but no noticeable improvement in either spatial resolution or quantitative accuracy. Figure 5(b) reinforces and extends the preceding result: the SC between medium and image is greater for the deconvolved (red curve) than for the uncorrected image (blue curve) at some time points, and lower at others, but fluctuates about the same average value (0.2) and is never larger than 0.4; following subsequent application of the temporal LPF (black curve), the SC is almost always greater than 0.6. The SC for images that are denoised but not deconvolved (green curve) is only marginally higher than that for the uncorrected images, again showing that spatial deconvolution is a required operation for enhancing the image quality.

For the same computational study that gave the results shown in Fig. 5, a qualitatively different trend was obtained with respect to temporal accuracy, with substantially lower TCs found in the deconvolved than in the uncorrected time series. Denoising with a temporal LPF produces a secondary increase in the TC, but its final value is lower than that in the uncorrected image (see Table 1 , first row). These observations begin to suggest that some trade-off between spatial and temporal accuracy may be inevitable when our deconvolution method is used, but that the degree of reduction in temporal accuracy can be contained within acceptable limits. A plausible mechanism for the effect of noise on the TC, more fully described in Sec. 4, is an amplifying effect of deconvolution on noise. The conjectured effect would result from the action of the deconvolution operator, which is to replace the original reconstructed image value at each FEM mesh node with some linear combination of the values, including any noise that may be present, at all nodes.

Table 1

Temporal correlations for different dynamical features of inclusions (noise: level 2).

Dynamic      Without LPF                        With Temporal LPF
Feature¹     Before Deconv.   After Deconv.     Before Deconv.   After Deconv.
1            0.8231           0.2478            0.9522           0.6729
2            0.8352           0.2680            0.9548           0.6928
3            0.8172           0.2476            0.6966           0.4244
4            0.8332           0.2692            0.7685           0.5200
5            0.8078           0.2726            0.7975           0.5931

¹ 1 = Eq. 7 for all three inclusions; 2 = Eq. 8 for all; 3 = Eq. 9 for all; 4 = Eq. 10 for all; 5 = Eqs. 7, 8, and 9 for one inclusion apiece.

To further characterize the performance of the deconvolution method on noisy image time series, with a view toward minimizing the TC reduction observed in the previous study, we have further investigated the effects of the three elementary denoising techniques named in Sec. 2(D): temporal low-pass filtering, spatial low/long-pass filtering, and optimization of the regularization parameter. A sinusoidal time series expressed by Eq. 7 was assigned to the inclusions' absorption coefficients, with dynamic feature parameters μa0 = 0.06 cm⁻¹, Δμa = 0.06 cm⁻¹, f0 = 0.1 Hz, and φ0 = 0. Prior to image reconstruction, Gaussian noise with noise model [Eq. 11] parameters of K0 = 5% and KW = 10% was added to the detector data. All images shown in Figs. 6 and 7 are for time point t = T (10 s), at which time all inclusions have a μa value twice that of the background.

Figure 6 presents the deconvolved image before [Fig. 6(a)] and after [Fig. 6(b)] denoising with a “pillbox” LPF (Ref. 29) whose threshold frequency is 0.15 Hz (0.15 Hz is the threshold frequency, as well, in all subsequent results involving use of temporal LPFs). Figure 6(c) shows the result of applying a second denoising operation, in this case, a spatial LPF that computes a weighted average of the μa value on a given node and on the surrounding nodes.30 (A sketch of one such nodal averaging filter appears after this paragraph.) In Fig. 7, denoising realized by optimizing the regularization parameter31 λ is demonstrated; both images shown have been spatially deconvolved and denoised with temporal LPFs. Figure 7(a) shows the final result obtained when Eq. 3 is solved with λ = 0.1, and Fig. 7(b) is the corresponding result for λ = 10. The results in Figs. 6 and 7 suggest that the best final result in terms of spatial resolution, localization, and reduction of noise artifacts is produced by using all three of the denoising methods examined here. One trade-off apparent from inspection of the grayscales in these figures is some additional reduction in quantitative accuracy with each additional operation. We next turn our attention to the question of the impact of different types of denoising on the TC and SC of image time series.
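A sketch of the kind of nodal weighted-average spatial filter just described is given below; the specific weighting (half the weight on the node itself, the remainder split evenly among its mesh neighbors) is our illustrative assumption, not the scheme of Ref. 30.

    import numpy as np

    def spatial_lowpass(values, neighbors, w_self=0.5):
        """Spatial low-pass filter as a weighted nodal average.

        values    : length-Nd array of image values on the FEM nodes
        neighbors : list of index arrays; neighbors[i] holds the nodes that
                    share an element with node i
        w_self    : weight given to the node itself; the rest is split
                    evenly among its neighbors
        """
        values = np.asarray(values, dtype=float)
        out = np.empty_like(values)
        for i, nb in enumerate(neighbors):
            nb = np.asarray(nb, dtype=int)
            if nb.size == 0:
                out[i] = values[i]
            else:
                out[i] = w_self * values[i] + (1.0 - w_self) * values[nb].mean()
        return out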

The dependence of SC on inclusion absorption contrast (i.e., μa,incl/μa,bkgr, where μa,incl and μa,bkgr are the inclusion and background μa, respectively; for the results presented here, μa,bkgr = 0.06 cm⁻¹), for images that have been denoised by a temporal LPF, is shown in Fig. 8. Here the solid curves are results obtained for level-2 noise, before and after deconvolution, respectively; the dashed curves are the analogous results for level-3 noise. Each plotted point in the figure is an average of six SC values, computed for images reconstructed from data taken from six successive sinusoidal periods (one time point per cycle). From these curves we clearly see that the final image's SC grows with increasing inclusion contrast, but saturates when the contrast is >2.5. This is in agreement with our previous results23 and with those of other groups.31, 32 These results also unambiguously demonstrate the effectiveness of deconvolution at improving qualitative image accuracy.

Fig. 8

Contrast dependence of spatial accuracy for deconvolved-only (‘+’ and ‘○’ symbols) and for deconvolved+denoised (‘*’ and ‘●’ symbols) images. The solid curves correspond to level 2 noise, and the dashed curves to level 3 noise.


Figure 9 shows the dependence of SC on noise level, for reconstructed images denoised by only temporal or by both temporal and spatial LPFs, for a fixed inclusion contrast of 2.4. The noise levels used here are defined in Sec. 2(D). The curves with ‘*’ and ‘●’ symbols are the results for images that have been deconvolved and denoised, while the curves with ‘+’ and ‘○’ symbols are the results for images that have been denoised but not deconvolved. The dashed curves are results for images denoised with only a temporal LPF, and the solid curves are those for images denoised with both temporal and spatial LPFs. The SCs indicate that spatial accuracy is substantially improved by the deconvolution+filtering operations, and that even at a relatively high noise level (e.g., level 3), high spatial accuracy [e.g., c(t0)uv ≥ 0.6 for noise level 3] is still achieved.

The dependence of TC on normalized modulation amplitude, for deconvolved images denoised by a temporal LPF, is plotted in Fig. 10. Sinusoidal temporal behavior expressed by Eq. 7 was assigned to the μa of the inclusions, with dynamic feature parameters μa0 = 0.12 cm⁻¹, f0 = 0.1 Hz, and φ0 = 0; the definition of normalized modulation amplitude is Δμa/μa0. The solid and dashed curves are plots of TC versus Δμa/μa0 for level 2 and level 3 noise, respectively. These results demonstrate that the temporal accuracy of deconvolved+denoised image time series increases with increasing modulation amplitude, and that a relatively high temporal accuracy [c(x0,y0)uv ≥ 0.6] can be obtained even when Δμa/μa0 is as low as 0.2.

Fig. 10

Amplitude dependence of temporal accuracy for deconvolved+denoised (temporal LPF) images: solid curve, level 2 noise; dashed curve, level 3 noise; time-averaged contrast is 2.


Figure 11 shows the dependence of TC on noise level for reconstructed images denoised by only temporal or by both temporal and spatial LPFs. The same sinusoidal time series as used for Fig. 5 was assigned to the inclusions' absorption coefficients in this case. The four sets of conditions corresponding to the plotted curves (i.e., ±deconvolution, ±spatial LPF) are the same as those considered in Fig. 9, with corresponding curves symbol- and line-style-matched between Figs. 9 and 11. From these TC versus noise-level plots, we find that the deconvolution operation has no effect on temporal accuracy when the detector data are noise-free (noise level 0), but that temporal accuracy degrades rapidly with increasing noise level. However, a relatively high temporal accuracy [c(x0,y0)uv ≥ 0.7] still is seen in the deconvolved images at noise level 3, which typifies experimental noise levels in data collected with our instrumentation.18 In light of the significant improvement in spatial accuracy resulting from deconvolution, at the same noise level (see Fig. 9), the degradation of temporal information seen in Fig. 11 may be deemed an acceptable trade-off. Alternatively, temporal and spatial content can be extracted from a time series at different stages of the reconstruction process.

Finally, comparisons between TCs of image time series for the four different dynamic features assigned to the inclusions [Eqs. 7 to 10] are summarized in Table 1. Here the dynamic features 1 [Eq. 7], 2 [Eq. 8], 3 [Eq. 9], and 4 [Eq. 10] correspond to Figs. 2(a), 2(b), 2(c), and 2(d), respectively. Dynamic feature 5 refers to a three-inclusion medium with a different time-varying function [i.e., Eqs. 7, 8, and 9] assigned to each inclusion. The table gives TC data for the case of level 2 noise. Consistent with earlier observations, application of a temporal LPF ameliorates the loss of TC engendered by spatial deconvolution, but does not restore it all the way to the predeconvolution level. Inspection of the data in Table 1 also reveals that the attainable temporal accuracy falls as the dynamic properties of the medium become increasingly complex.

4.

Discussion and Conclusions

The results presented here constitute an extension of an ongoing effort that, we believe, can have the effect of substantially improving the quality, and hence interpretability, of DOT image data. As is more completely laid out in earlier papers,20, 21, 22, 23 there is an intimate connection between the spatial deconvolution technique that is explored in this paper and our development of and emphasis on dynamic functional imaging with DOT over the past 6 to 7 yr (Refs. 15, 16, 17, 18, 19, 27, 28). A requirement for successful adoption of dynamic DOT is development of methods for rapid recovery of large numbers of images. This necessity for fast imaging was a principal reason for our reliance on first-order reconstruction algorithms based on linear perturbation equations,16, 17 thereby sacrificing the improved spatial resolution and quantitative accuracy that in some cases can be achieved by using computation-intensive iterative, nonlinear methods.25, 33, 34 The discovery that the algorithms we used enabled us to recover dynamic properties of tissue structures and tissue-like media with far more accuracy than that obtained for the optical coefficients at a particular time frame,15 combined with the conviction, born of knowledge of physiology and pathophysiology, that the former type of information is the more clinically valuable, made the trade-off between speed and quality of individual images acceptable.

More recently, we recognized that our rapid time-series imaging capability afforded us a previously unavailable method for studying the action of various image reconstruction algorithms: temporal fluctuations are introduced into the optical coefficients of light-diffusing media; spatial location is encoded by varying the functional form of the fluctuations in a position-dependent manner; application of analysis methods, already available to us,15, 19, 27 to the time series of reconstructed images precisely reveals the manner in which spatial information of the medium is represented in the image domain. Increasing experience with this approach led to a further insight: careful comparison of the medium and images might enable one to derive a mathematical operator that can correct for errors made by the reconstruction algorithm in the assignment of medium spatial information to the image. Adding to the appeal of the proposed spatial deconvolution approach was the recognition that while computation of such an operator might have substantial CPU and memory requirements, these would be completely independent of, and could precede, reconstruction of the image(s) to which it would be applied. The postreconstruction computational burden, in contrast to that of nonlinear reconstruction algorithms, might be as little as a single matrix multiplication. That is, if the deconvolution approach proved successful, it could constitute a computationally efficient method for producing individual-time-frame images of a quality comparable to that obtained by using the nonlinear reconstruction algorithms. That method could, furthermore, have applicability not only to the problem area that is of particular interest to us, but also to essentially any field in which a linear transformation is used to convert sets of observations or measurements into interpretable results.

In several earlier papers we have shown that the deconvolution approach does, in fact, enhance image quality to the extent that we anticipated, in both 2-D (Refs. 21, 23) and 3-D (Ref. 22) instances of DOT. It has been found equally successful in full-view tomographic, partially restricted-view (e.g., 3-D hemispheric medium with sources and detectors distributed about the curved surface, but none on the planar surface), and single-view (e.g., 2-D rectangular media with transmission-only or reflection-only source-detector arrangements, 3-D slab with reflection-only arrangement) test cases. In the examples presented in the cited works, the deconvolved images were remarkably accurate in terms of the identity, number, location, shape, and size of inclusions, and frequently also in terms of the quantitative value of the recovered optical coefficient. At the same time, some of the uncorrected optical coefficient images, especially in the reflection-only cases, bore almost no resemblance to the target medium. However, we recognized the ways in which those same examples were idealized. Two of particular importance are (1) the target media considered before were static, which made it impossible to examine the question of whether and to what degree the spatial corrections are accompanied by degradation of temporal information in the image time series; and (2) the impact of noise in detector data, and ways of ameliorating its effects, were touched on only superficially. For this paper, we have extended the earlier studies by examining these two issues in detail.

The principal conclusions to be drawn from the new results presented here are (1) application of our spatial deconvolution method does lead to a reduction in the accuracy of recovered dynamic-feature information, but the degree of reduction is highly noise-level-dependent (see Fig. 11); (2) noise also degrades the spatial accuracy of optical coefficient images at individual time frames; and (3) elementary spatial and temporal denoising methods can, especially when used in tandem, almost eliminate the second problem and substantially ameliorate the first. While the accuracy of temporal information is never as high after deconvolution as before, except in the case of noise-free data, it remains usefully high even at levels of noise that are typical of what we find in experimental data taken with our dynamic DOT instrumentation.36

This rapid loss of temporal accuracy with increasing noise level is understandable and predictable. Noise in detector data causes artifacts in the reconstructed images and these artifacts are amplified by deconvolution; the amplification occurs because the forward-problem solutions used to generate the deconvolution operator are noise-free, and so all image information, including the artifacts, is treated as if it were real (i.e., noise-free) information and is “corrected” by the deconvolution operator. The mathematical effect of that operator is to replace the reconstructed image value at each FEM mesh node with some linear combination of the values at all nodes, and the sum of noise contributions from all the nodes causes a reduction in the TC between the true and recovered time series. As shown here, direct suppression of noise can be an effective method of reducing the degradation in temporal accuracy. In addition to the simple LPF technique considered here, our plans for future studies include examining the effects of many other well-established denoising techniques, including wavelets37 and von Hann filters.38 Another approach that will be investigated in tandem is to further explore optimization of parameters that already have been shown to affect the performance of the deconvolution operator. These include the number of sampled optical coefficient distributions and the amplitude of sinusoidally time-varying optical coefficients.22 It may be possible to generate deconvolution filters that are less sensitive to noise via optimization of these parameters.

Our emphasis on presenting results for simple sinusoidal temporal fluctuations in the inclusions' μa might strike some as uninteresting, but has a sound physiological basis: an important goal in clinical dynamic DOT studies is to follow and quantify spatiotemporal vasomotor fluctuations; these are low-frequency (<0.2 Hz), narrowband (i.e., approximately sinusoidal) phenomena.39 The technique employed here, of applying a LPF whose threshold frequency is just 0.05 Hz higher than the frequency of the μa perturbation, therefore mimics the manner in which we actually treat experimental data.

We further anticipate that some may wonder whether the effect of denoising via a temporal LPF depends on whether it is applied prior to image reconstruction, between reconstruction and deconvolution, or, as in the examples presented here, after deconvolution. We examined the issue and found that there is no effect; the final image is exactly the same, whether denoising is the first, second, or third operation carried out. In retrospect, this outcome is not surprising, because the LPF implementation we used is linear in the time domain.

It was previously observed22 that the success of the spatial deconvolution approach has an important implication with respect to the origin of the low spatial resolution frequently seen in DOT images reconstructed with linear algorithms. Namely, it is only for large optical-coefficient perturbations, relative to the reference medium, that the nonlinear relationship between medium properties and detector data becomes the primary source of error in the image. More commonly, linear convolution of spatial information introduced by the reconstruction algorithm is the more important factor. We fully expect, however, that the most generally useful, and computationally efficient, reconstruction strategy is to combine the (linear) deconvolution and iterative (nonlinear) strategies, applying them in an alternating manner. We anticipate that this would enable successful reconstruction of images of media with almost any type and magnitude of optical coefficient perturbations, in a time frame acceptable for clinical applications. A hybrid algorithm that is based on the reconstruction methods we have previously described16, 17 would have the additional benefit of their demonstrated robustness to known, not easily eliminated, types of measurement error.

To clarify one final point, note that a blind deconvolution method in DOT was recently studied by Jefferies et al.35 and by Matson.29 Although our scheme and their method are both called “deconvolution,” the meanings of deconvolution are very different in our usage and in theirs. The goal of the blind deconvolution method is to deblur the 2-D image that is produced by subtracting, from the blurred target measurement, a second measurement of the same turbid medium without a target present, while that of our spatial deconvolution is to obtain a complete 2-D or 3-D reconstruction of the target in the turbid medium.

In summary, we investigated the effectiveness of the linear deconvolution method applied to image time series reconstructed by solving a first-order perturbation equation. From the qualitative and quantitative results obtained in this report, we can reach the following conclusions. First, for noise-free or low-noise (K0 = 0.5% and KW = 5%) data, high spatial and temporal accuracy are achieved by the deconvolution method. Second, simple time-series features (e.g., sinusoidal) are easier to recover than complex time-series features (e.g., variable frequency). Third, for noisy data, deconvolution procedures can significantly improve the spatial accuracy of time-series images, but the temporal accuracy is concomitantly degraded. Fourth, denoising techniques can enhance the performance of the deconvolution method. Finally, combined with temporal and spatial LPFs, satisfactory spatial and temporal accuracy (spatial correlations ≥0.6 and temporal correlations ≥0.7) can be obtained by use of the deconvolution method for noisy data at noise levels typical of experimental data (K0 = 2% and KW = 20%).

5.

Appendix

As indicated in Sec. 2(F), here we show an alternative representation of the 3-D reconstructed images shown in Fig. 5(a.3) and Fig. 5(a.4). Each panel of Fig. 12 contains three mutually orthogonal 2-D sections (upper left, lower left, and lower right subfigures) intersecting at a point in the 3-D image, while the upper right subfigure shows the orientations of the 2-D sections and the point of intersection. Figures 12(a) and 12(b) show sections through the deconvolved-only image [compare to Fig. 5(a.3)], and Figs. 12(c) and 12(d) show the matching sections through the deconvolved+denoised image [compare to Fig. 5(a.4)].

Fig. 12

Alternative representation of the 3-D reconstructed images shown in Fig. 5(a.3) and 5(a.4). Sections (a) and (b) through the deconvolved-only image and (c) and (d) through the deconvolved+denoised image. Numbers along color bars are μa values.


There is no thresholding in the graphical presentation mode used in Fig. 12. Each plotted 2-D section contains the entire range of recovered image values within that section. From inspection of these results, it is apparent that the corrected image produced by applying the spatial deconvolution method outlined in this paper shows sharp transitions between the inclusions and the surrounding background medium. Thus the appearance of well-localized, correctly sized inclusions in Figs. 4, 5, 6, 7 is not an artifact of the threshold-level selection process.

Acknowledgments

This work was supported by the National Institutes of Health (NIH) under Grants R21-HL67387, R21-DK63692, R41-CA96102, and R43-NS49734, and by the U.S. Army under Grant No. DAMD017-03-C-0018.

References

1. B. Chance, R. R. Alfano, B. J. Tromberg, M. Tamura, and E. M. Sevick-Muraca, Proc. SPIE 5693 (2005).
2. S. Colak, M. van der Mark, G. 't Hooft, J. Hoogenraad, E. van der Linden, and F. Kuijpers, “Clinical optical tomography and NIR spectroscopy for breast cancer detection,” IEEE J. Sel. Top. Quantum Electron. 5, 1143–1158 (1999). https://doi.org/10.1109/2944.796341
3. V. Ntziachristos, A. Yodh, M. Schnall, and B. Chance, “Concurrent MRI and diffuse optical tomography of breast after indocyanine green enhancement,” Proc. Natl. Acad. Sci. U.S.A. 97, 2767–2772 (2000). https://doi.org/10.1073/pnas.040570597
4. B. W. Pogue, S. P. Poplack, T. O. McBride, W. A. Wells, K. S. Osterman, U. L. Osterberg, and K. D. Paulsen, “Quantitative hemoglobin tomography with diffuse near-infrared spectroscopy: pilot results in the breast,” Radiology 218(1), 261–266 (2001).
5. H. Jiang, Y. Xu, N. Iftimia, J. Eggert, K. Klove, L. Baron, and L. Fajardo, “Three-dimensional optical tomographic imaging of breast in a human subject,” IEEE Trans. Med. Imaging 20, 1334–1340 (2001). https://doi.org/10.1109/42.974928
6. D. A. Boas, G. Strangman, J. P. Culver, R. D. Hoge, G. Jasdzewski, R. A. Poldrack, B. R. Rosen, and J. B. Mandeville, “Can the cerebral metabolic rate of oxygen be estimated with near-infrared spectroscopy?,” Phys. Med. Biol. 48, 2405–2418 (2003).
7. H. Obrig and A. Villringer, “Beyond the visible—imaging the human brain with light,” J. Cereb. Blood Flow Metab. 23, 1–18 (2003). https://doi.org/10.1097/00004647-200301000-00001
8. A. Y. Bluestone, G. Abdoulaev, C. H. Schmitz, R. L. Barbour, and A. H. Hielscher, “Three-dimensional optical tomography of hemodynamics in the human head,” Opt. Express 9, 272–286 (2001).
9. J. C. Hebden, “Advances in optical imaging of the newborn infant brain,” Psychophysiology 40, 501–510 (2003). https://doi.org/10.1111/1469-8986.00052
10. Y. Xu, N. Iftimia, H. Jiang, L. L. Key, and M. B. Bolster, “Three-dimensional diffuse optical tomography of bones and joints,” J. Biomed. Opt. 7, 88–92 (2002). https://doi.org/10.1117/1.1427336
11. A. H. Hielscher, A. D. Klose, A. K. Scheel, M. Backhaus, U. J. Netz, and J. Beuthan, “Assessment of finger joint inflammation by diffuse optical tomography,” Proc. SPIE 5138, 46–54 (2003).
12. A. M. Seigel, J. P. Culver, J. B. Mandeville, and D. Boas, “Temporal comparison of functional brain imaging with diffuse optical tomography and fMRI during rat forepaw stimulation,” Phys. Med. Biol. 48, 1391–1403 (2003). https://doi.org/10.1088/0031-9155/48/10/311
13. A. Y. Bluestone, M. Stewart, J. Lasker, G. Abdoulaev, and A. H. Hielscher, “Three-dimensional optical tomographic brain imaging in small animals, part 1: hypercapnia,” J. Biomed. Opt. 9, 1046–1062 (2004). https://doi.org/10.1117/1.1784471
14. B. W. Pogue, S. Jiang, H. Dehghani, C. Kogel, S. Srinivasan, X. Song, T. D. Tosteson, S. P. Poplack, and K. D. Paulsen, “Characterization of hemoglobin, water, and NIR scattering in breast tissue: analysis of intersubject variability and menstrual cycle changes,” J. Biomed. Opt. 9, 541–552 (2004). https://doi.org/10.1117/1.1691028
15. R. L. Barbour, H. L. Graber, Y. Pei, S. Zhong, and C. H. Schmitz, “Optical tomographic imaging of dynamic features of dense-scattering media,” J. Opt. Soc. Am. A 18, 3018–3036 (2001).
16. Y. Pei, H. L. Graber, and R. L. Barbour, “Influence of systematic errors in reference states on image quality and on stability of derived information for DC optical imaging,” Appl. Opt. 40, 5755–5769 (2001).
17. Y. Pei, H. L. Graber, and R. L. Barbour, “Normalized-constraint algorithm for minimizing inter-parameter crosstalk in dc optical tomography,” Opt. Express 9, 97–109 (2001).
18. C. H. Schmitz, M. Löcker, J. M. Lasker, A. H. Hielscher, and R. L. Barbour, “Instrumentation for fast functional optical tomography,” Rev. Sci. Instrum. 73, 429–439 (2002). https://doi.org/10.1063/1.1427768
19. H. L. Graber, Y. Pei, and R. L. Barbour, “Imaging of spatiotemporal coincident states by dc optical tomography,” IEEE Trans. Med. Imaging 21, 852–866 (2002).
20. H. L. Graber, R. L. Barbour, and Y. Pei, “Quantification and enhancement of image reconstruction accuracy by frequency encoding of spatial information,” in Proc. OSA Biomedical Topical Meetings, OSA Technical Digest, 635–637 (2002).
21. R. L. Barbour, H. L. Graber, Y. Xu, Y. Pei, and R. Aronson, “Strategies for imaging diffusing media,” Transp. Theory Stat. Phys. 33, 361–371 (2004).
22. H. L. Graber, Y. Xu, Y. Pei, and R. L. Barbour, “Spatial deconvolution technique to improve the accuracy of reconstructed three-dimensional diffuse optical tomographic images,” Appl. Opt. 44, 941–953 (2005). https://doi.org/10.1364/AO.44.000941
23. Y. Xu, H. L. Graber, Y. Pei, and R. L. Barbour, “Improved accuracy of reconstructed diffuse optical tomographic images by means of spatial deconvolution: two-dimensional quantitative characterization,” Appl. Opt. 44, 2115–2139 (2005). https://doi.org/10.1364/AO.44.002115
24. D. J. Goodenough, “Tomographic imaging,” in Handbook of Medical Imaging, SPIE Press, Bellingham, WA (2000).
25. K. D. Paulsen and H. Jiang, “Spatially-varying optical property reconstruction using a finite element diffusion equation approximation,” Med. Phys. 22, 691–702 (1995). https://doi.org/10.1118/1.597488
26. T. M. Griffith, “Temporal chaos in the microcirculation,” Cardiovasc. Res. 31, 342–358 (1996). https://doi.org/10.1016/0008-6363(95)00147-6
27. H. L. Graber, Y. Pei, R. L. Barbour, D. K. Johnston, Y. Zheng, and J. E. Mayhew, “Signal source separation and localization in the analysis of dynamic near-infrared optical tomographic time series,” 31–51 (2003).
28. R. L. Barbour, H. L. Graber, C. H. Schmitz, Y. Pei, S. Zhong, S.-L. S. Barbour, S. Blattman, and T. Panetta, “Spatio-temporal imaging of vascular reactivity by optical tomography,” in Proc. Inter-Institute Workshop on In Vivo Optical Imaging at the NIH, 161–166 (1999).
29. C. L. Matson, “Deconvolution-based spatial resolution in optical diffusion tomography,” Appl. Opt. 40, 5791–5801 (2001).
30. H. Jiang, “Frequency-domain fluorescent diffusion tomography: a finite-element-based algorithm and simulations,” Appl. Opt. 37, 5337–5343 (1998).
31. J. P. Culver, V. Ntziachristos, M. J. Holboke, and A. G. Yodh, “Optimization of optode arrangements for diffuse optical tomography: a singular-value analysis,” Opt. Lett. 26, 701–703 (2001).
32. X. Song, B. W. Pogue, S. Jiang, M. M. Doyley, H. Dehghani, T. D. Tosteson, and K. D. Paulsen, “Automated region detection based on the contrast-to-noise ratio in near-infrared tomography,” Appl. Opt. 43, 1053–1062 (2004). https://doi.org/10.1364/AO.43.001053
33. S. R. Arridge, “Optical tomography in medical imaging,” Inverse Probl. 15, R41–R93 (1999). https://doi.org/10.1088/0266-5611/15/2/022
34. A. H. Hielscher, A. D. Klose, and K. M. Hanson, “Gradient-based iterative image reconstruction scheme for time-resolved optical tomography,” IEEE Trans. Med. Imaging 18, 262–271 (1999).
35. S. M. Jefferies, K. J. Schulze, C. L. Matson, K. Stoltenberg, and E. K. Hege, “Blind deconvolution in optical diffusion tomography,” Opt. Express 10, 46–53 (2002).
36. C. H. Schmitz, M. Stewart, M. Farber, H. L. Graber, R. D. Levina, M. B. Levin, Y. Pei, Y. Xu, and R. L. Barbour, “Dynamic studies of small animals with a four-color DOT imager,” Rev. Sci. Instrum. 76, 0904302 (2005).
37. K. G. Oweiss and D. J. Anderson, “Noise reduction in multichannel neural recordings using a new array wavelet denoising algorithm,” Neurocomputing 38–40, 1687–1693 (2001).
38. J. Ripoll, M. Nieto-Vesperinas, and R. Carminati, “Spatial resolution of diffuse photon density waves,” J. Opt. Soc. Am. A 16, 1466–1476 (1999).
39. C. W. Myers, M. A. Cohen, D. L. Eckberg, and J. A. Taylor, “A model for the genesis of arterial pressure Mayer waves from heart rate and sympathetic activity,” Auton. Neurosci. 91, 62–75 (2001).
© 2005 Society of Photo-Optical Instrumentation Engineers (SPIE)
Yong Xu, Yaling Pei, Harry L. Graber, and Randall Locke Barbour "Image quality improvement via spatial deconvolution in optical tomography: time-series imaging," Journal of Biomedical Optics 10(5), 051701 (1 September 2005). https://doi.org/10.1117/1.2103747
Published: 1 September 2005
KEYWORDS: Deconvolution, Image quality, Sensors, Image restoration, Denoising, Reconstruction algorithms, Image filtering