1. Introduction

Fluorescence diffuse optical tomography (fDOT) is a noninvasive imaging technique that enables quantification of tomographic [three-dimensional (3-D)] biodistributions of fluorescent tracers in small animals. It is also called fluorescence molecular tomography in some studies.1–3 The fDOT image reconstruction problem of finding the 3-D biodistribution of a fluorescence marker from acquired data can be mathematically formulated as a linear system of equations

y = W x, (1)

where W is a weight matrix that relates the measurements y to the unknown fluorescence concentration x. fDOT reconstruction is an ill-posed, large-scale problem4,5 that has been solved using a wide variety of reconstruction methods (see a review in Ref. 6).

The algebraic reconstruction technique (ART) is an extensively applied and cost-efficient reconstruction method that yields fast and stable reconstructions in experimental diffuse optical tomography (DOT) and fDOT studies.6–9 It solves the linear system of Eq. (1) by projecting a solution estimate onto the hyperplane defined by each row of the linear system. This computationally efficient approach does not require the whole matrix to be held in memory, thus making it practical for managing large datasets.10 Convergence of ART is fast during the first few iterations, after which it can slow down until it stagnates; in addition, the performance of ART deteriorates as the data noise level and modeling errors increase.11

L1-regularized approaches are well suited for signal recovery and image reconstruction and have been widely applied in image denoising and magnetic resonance imaging reconstruction. They have also been used in DOT and fDOT.12–14 Douiri et al.12 reconstructed simulated DOT data by combining a strategy based on a priori edge information with a diffusion flux analysis of the local structures at each iteration. Freiberger et al.13 introduced an alternating direction minimization method to solve total variation regularization.
This method splits the reconstruction problem for simulated fDOT data into two subproblems: an L2 stage, solved using a Gauss-Newton step,6 and an L1 regularization stage, solved by thresholding (or shrinkage). Correia et al.14 introduced an operator splitting method for solving the inverse problem of image reconstruction in fDOT for simulated, phantom, and ex vivo data with nonlinear anisotropic diffusion and edge priors. Of the studies presented above, only one actually validated the proposed method using experimental data.14

The total variation (TV) functional was first introduced by Rudin, Osher, and Fatemi (ROF)15 to address image denoising problems. It is formulated as

min_u ‖∇u‖₁ such that ‖u − f‖₂² ≤ σ²,

where u is the reconstructed denoised image, f is the noisy image, ‖·‖₁ is the L1-norm, ‖·‖₂ is the L2-norm, and σ is an error tolerance included to account for noisy data. TV filters out noise while preserving edges. It has received considerable attention in image processing because it presents advantages over simpler techniques such as linear smoothing, which reduces noise but also smooths edges.

Iterative procedures have been proposed based on the success of TV regularization techniques for denoising and image reconstruction. These alternate an iterative method (such as simultaneous ART or the expectation-maximization algorithm) with a denoising step in order to minimize the TV and thus obtain enhanced results in computed tomography and positron emission tomography.16–18 Johnston et al.16 and Pan et al.17 used a standard gradient descent method, whereas Sawatzky et al.18 applied a dual approach to minimize the TV functional. Consequently, the choice of technique for solving TV-regularized problems may become crucial: the TV functional is nonlinear, so the computational burden can increase significantly with classic gradient-based methods. The recently published split Bregman (SB) method19 is a simple and efficient algorithm for solving L1-regularized problems that makes it possible to split the minimization of the L1 and L2 functionals.
By applying the SB method to image denoising and compressed sensing in Ref. 19, the authors showed that SB was computationally efficient, given that the SB formulation leads to a problem that can be solved using Gauss-Seidel and Fourier transform methods. SB was recently applied to fluorescence tomography reconstruction.20,21 Abascal et al.20 used SB to solve the TV optimization problem while imposing a non-negativity constraint. The image was updated using a nonlinear Gauss-Newton step6 based on the computation of first and second derivatives of the nonlinear TV functional. Behrooz et al.21 compared L2 regularization methods and ART with TV reconstruction methods based on ROF and SB. The authors implemented a preconditioned conjugate gradient method6 in each iteration of SB, which led to slow convergence in some cases. To validate the method and compare reconstructions, they used a noncontact constant-wave transillumination fluorescence tomography system and concluded that TV regularization has the potential to offer higher resolution and robustness than conventional L2 regularization algorithms and ART.

The objective of our work was to present a new approach, ART-SB, to solve the fDOT inverse problem. Based on a two-step iterative procedure, we combined the computationally efficient reconstruction method ART with a denoising step. The denoising step is based on the SB formulation and is efficiently implemented using Gauss-Seidel and shrinkage operations, without computing first and second derivatives of the TV functional. The ART-SB method has been optimized and studied in depth and, unlike other studies based on shrinkage algorithms,12–14 it has been tested with both simulated and experimental fDOT data.

The organization of this article is as follows. Section 2 introduces the fDOT forward problem, briefly presents the well-known ART and SB denoising methods, and describes the proposed ART-SB method.
This section also describes data acquisition, the experimental setup, and data simulation, and finishes by presenting the tools used to compare ART with ART-SB. Section 3 presents the reconstruction and comparative results for simulated and experimental data. Finally, Sec. 4 presents the discussion and conclusions.

2. Methods

2.1. fDOT Forward Problem

In order to model the forward problem, we take into account that, in highly scattering media where light scattering dominates over absorption, light propagation complies with the diffusion equation22

−∇·[D(r, λ)∇Φ(r, λ)] + μa(r, λ)Φ(r, λ) = q(r, λ),

where D = 1/[3(μa + μs′)] is the diffusion coefficient for a wavelength λ at position r in a domain Ω, μs′ the reduced scattering term, μa the absorption term, Φ the average intensity, and q the source term at position r.

In fDOT, the excitation intensity Φx at excitation wavelength λx and the emission intensity Φm at emission wavelength λm are given by a pair of diffusion equations.23–25 The excitation intensity is emitted by an external source at a location rs, and the emission comes from a fluorescent region characterized by a fluorescence yield F(r), which accounts for its quantum efficiency, its absorption parameter, and its concentration of fluorescence. Assuming that the presence of the fluorophore does not affect the absorption coefficient and that we are working in the steady-state regime, excitation and emission intensities are given by

−∇·[D(r, λx)∇Φx(r)] + μa(r, λx)Φx(r) = q(rs),
−∇·[D(r, λm)∇Φm(r)] + μa(r, λm)Φm(r) = Φx(r)F(r). (4)

The diffusion equations can be solved using Green's function for a homogeneous medium and canonical geometries23–25 or using a finite element method for a heterogeneous medium and general geometries.6 Generalizing, we define a Green function G(r, r′) that solves the heterogeneous problem. Using this function, the emission photon density solving Eq. (4) is given by

Φm(r) = ∫Ω G(r, r′) Φx(r′) F(r′) dr′.

In our work, the solutions to Eq. (4) are found with a Galerkin finite element approach using TOAST, the finite elements toolbox for DOT,26,27 which was adapted for fDOT.
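The coupled structure of Eq. (4) — solve the excitation problem, then feed the excitation field, weighted by the fluorescence yield, into the emission problem as its source — can be sketched with a 1-D finite-difference toy model. This is a simplified stand-in for the Galerkin finite element solve; the grid, optical coefficients, and boundary treatment below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# 1-D grid and homogeneous optical properties (illustrative values)
n, h = 101, 0.1                  # number of nodes, node spacing [cm]
mua, mus = 0.05, 10.0            # absorption, reduced scattering [1/cm]
D = 1.0 / (3.0 * (mua + mus))    # diffusion coefficient

# Discretized diffusion operator -D d2/dx2 + mua, zero Dirichlet ends
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = -D / h**2
    A[i, i] = 2.0 * D / h**2 + mua
A[0, 0] = A[-1, -1] = 1.0        # boundary rows: phi = 0

# Excitation: point source placed a few nodes inside the boundary
q_x = np.zeros(n)
q_x[3] = 1.0 / h
phi_x = np.linalg.solve(A, q_x)

# Emission: excitation field times the fluorescence yield acts as source
F = np.zeros(n)
F[45:55] = 0.01                  # fluorescent inclusion in mid-domain
phi_m = np.linalg.solve(A, phi_x * F)
```

The same two-solve pattern (excitation solve, then emission solve driven by Φx·F) carries over to the 3-D finite element setting.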
The normalized data component,28 defined as the quotient between the fluorescence measurement and the excitation measurement for each source-detector pair, is applied to the data as follows:

y = Φm(rs, rd) / Φx(rs, rd).

To compute the matrix of the linear system, W, we first differentiate the emission photon density with respect to the fluorescence yield and discretize the integral as a sum over all finite elements. The variation of emission photon density at element j is given by the product of Φx(rj; rs), the excitation photon density at rj induced by a source at rs, and G(rd, rj), the adjoint field at rj given by a source located at the detector position rd, taking into account the reciprocity of the Green function. The adjoint field solves the emission diffusion equation with a source placed at the detector position.

The element of the Jacobian matrix relates each measurement i (where i denotes a source-detector pair) to the concentration of fluorophore at element j and can be written as

W_ij = Φx(rj; rs) G(rd, rj) ΔVj / Φx(rs, rd).

Combining the elements of the matrix W, the fDOT linear system can be expressed as Eq. (1).

2.2. Inverse Problem

We applied two inversion methods: ART and ART-SB.

2.2.1. Reconstruction method: ART

Equation (1) is solved using ART,6–9,11 which modifies the reconstructed image by projecting from one hyperplane to another, each defined by a row of Eq. (1), as

x^(k+1) = x^k + α (y_i − ⟨w_i, x^k⟩) / ‖w_i‖² · w_i,

where x^k is the k'th estimate of the image x, y_i the i'th measurement, w_i the i'th row vector of the weight matrix, and α the relaxation parameter that adjusts the projection step at each iteration.

As for data ordering, the selection of an appropriate access order (such as a randomized access order) has been shown to speed up the iterative reconstruction algorithm and generate a better image.8 Thus, we chose randomized ART in our work.

2.2.2. Two-step reconstruction method: ART-SB

ART-SB is implemented using a two-step iteration: an ART update of the current estimate, followed by a TV denoising of that estimate [Eq. (13)].
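The randomized ART update of Sec. 2.2.1 — relaxed row-by-row hyperplane projections in random order — can be sketched as follows. This is a toy dense system for illustration (variable names are ours); in a real fDOT solver the rows of W would be streamed rather than held in memory:

```python
import numpy as np

def art_sweep(W, y, x, relax=1.0, rng=None):
    """One randomized ART (Kaczmarz) sweep: project the current estimate x
    onto the hyperplane of each row of W x = y, in random row order."""
    rng = np.random.default_rng() if rng is None else rng
    for i in rng.permutation(W.shape[0]):
        w = W[i]
        # Relaxed projection onto {x : <w, x> = y_i}
        x = x + relax * (y[i] - w @ x) / (w @ w) * w
    return x

# Toy consistent system with a known solution
rng = np.random.default_rng(0)
W = rng.standard_normal((40, 20))
x_true = rng.random(20)
y = W @ x_true

x = np.zeros(20)
for _ in range(300):
    x = art_sweep(W, y, x, relax=1.0, rng=rng)
```

For a consistent system the iterates converge to a solution; the relaxation parameter trades convergence speed against noise amplification, as discussed later in Sec. 3.1.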
The SB method19 that solves Eq. (13) is based on splitting the problem into two subproblems that are easier to solve. To this end, our original unconstrained problem of Eq. (13),

min_u ‖∇u‖₁ + (μ/2)‖u − f‖₂², (13)

where μ is the denoising parameter, is transformed into an equivalent constrained problem

min_{u,d} ‖d‖₁ + (μ/2)‖u − f‖₂² such that d = ∇u. (15)

The constraint condition of Eq. (15) is enforced by applying the Bregman iteration,19,29 which adds a quadratic penalty with weight λ and an auxiliary Bregman variable b updated as b^(k+1) = b^k + ∇u^(k+1) − d^(k+1). The L1 and L2 components of this functional can now be split and efficiently solved by SB,19 which iteratively minimizes with respect to u and d separately, as follows:

u^(k+1) = argmin_u (μ/2)‖u − f‖₂² + (λ/2)‖d^k − ∇u − b^k‖₂²,
d^(k+1) = argmin_d ‖d‖₁ + (λ/2)‖d − ∇u^(k+1) − b^k‖₂². (17)

Note that SB decouples the L1 from the L2 portion of the problem, thus making the u subproblem differentiable. In order to solve the u subproblem in a cost-efficient manner, we used the Gauss-Seidel method, as proposed in Ref. 19. Furthermore, since there is no coupling between the elements of d, we can use shrinkage operators to compute the optimal values of its components dx and dy separately,

d^(k+1) = shrink(∇u^(k+1) + b^k, 1/λ), with shrink(v, t) = (v/|v|) max(|v| − t, 0). (19)

Further details of shrinkage operators can be found in Refs. 19 and 30. A summary of the ART-SB algorithm can be found in Table 1.

Table 1. ART-SB algorithm.

2.3. Experimental and Simulated Data

2.3.1. Experimental data

A 10-mm-thick slab-shaped phantom was built using a resin base with added titanium dioxide and India ink to provide the desired reduced scattering and absorption coefficients.31 A 5-mm-diameter cylindrical hole was drilled and filled with a fluid that matched the optical properties of the resin,32 mixed with 1 μM Alexa Fluor 700 (Invitrogen, Carlsbad, California). The fDOT fluorescence and excitation data were acquired with a noncontact parallel-plate fDOT scanner (Fig.
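The denoising step — an alternation between a differentiable u subproblem and componentwise shrinkage, tied together by the Bregman update — can be sketched compactly. This is illustrative, not the paper's code: we assume periodic-boundary forward differences and relax the u subproblem with a few Jacobi-style sweeps in place of the Gauss-Seidel sweeps used in Ref. 19; parameter values are arbitrary:

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sb_tv_denoise(f, mu=10.0, lam=20.0, n_outer=40, n_inner=2):
    """Anisotropic TV denoising via split Bregman:
    min_u ||Dx u||_1 + ||Dy u||_1 + (mu/2)||u - f||_2^2."""
    Dx  = lambda u: np.roll(u, -1, 0) - u   # forward difference, axis 0
    Dy  = lambda u: np.roll(u, -1, 1) - u   # forward difference, axis 1
    DxT = lambda v: np.roll(v, 1, 0) - v    # adjoint of Dx (periodic)
    DyT = lambda v: np.roll(v, 1, 1) - v
    u = f.copy()
    dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
    for _ in range(n_outer):
        # u subproblem: (mu + lam * Laplacian) u = mu*f + lam*divergence term,
        # relaxed with Jacobi-style sweeps (Ref. 19 uses Gauss-Seidel)
        for _ in range(n_inner):
            nbrs = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                    + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            rhs = DxT(dx - bx) + DyT(dy - by)
            u = (mu * f + lam * (nbrs + rhs)) / (mu + 4.0 * lam)
        # d subproblem: componentwise shrinkage with threshold 1/lam
        gx, gy = Dx(u), Dy(u)
        dx, dy = shrink(gx + bx, 1.0 / lam), shrink(gy + by, 1.0 / lam)
        # Bregman update enforcing d = grad u
        bx, by = bx + gx - dx, by + gy - dy
    return u
```

Applied to a noisy piecewise-constant image, this reduces the noise while preserving the edges, which is the behavior ART-SB exploits after each ART update.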
1)33,34 comprising a constant-intensity laser diode (675 nm), a beam deflector with two galvanometer-driven mirrors that directs the light emerging from the laser onto the desired points (sources) of the sample, a charge-coupled device camera, a motorized filter wheel, and two matched 10-nm filters to capture the excitation wavelength (675 nm) or the fluorescence wavelength (720 nm). All the components are placed inside a light-shielded box, as depicted in Fig. 1. The acquisition process was controlled using in-house software. In this work, a grid of source and detector positions covering the sample surface was used. The weight matrix was calculated as described in Sec. 2.1.

2.3.2. Simulated data

In addition, an equivalent phantom was simulated using a fine finite element mesh (145,000 nodes). We used the TOAST toolbox26,27 adapted for fDOT (introduced in Sec. 2.1) to simulate excitation and fluorescent photon densities and to construct the weight matrix. Sources were modelled as isotropic point sources (located at a depth below the surface) using Dirichlet boundary conditions. This setting resembles a collimated laser, as described in Ref. 27. Measurements were modelled by a Gaussian kernel centered at the detector location and computed as a linear operator acting on the photon density at the boundary of the domain, yielding the measured excitation and emission photon densities at each detector position. Afterwards, we calculated the normalized data component and finally built the linear system matrix as described in Sec. 2.1. The weight matrix was computed on a coarser finite element mesh (55,000 nodes) and mapped onto a uniform voxel mesh. The number of sources, the number of detectors, and the surface covered by them were equal to those used with the experimental data. The simulation was perturbed with different levels of additive Gaussian noise (1, 3, 5, and 10%).
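The noise perturbation of the simulated measurements can be sketched as follows. The exact normalization of the percentage (e.g., relative to the mean signal or to each individual measurement) is not preserved in the text, so the scaling below is an assumption:

```python
import numpy as np

def add_gaussian_noise(y, percent, rng):
    """Additive zero-mean Gaussian noise whose standard deviation is the
    given percentage of the mean measurement amplitude (assumed scaling)."""
    sigma = (percent / 100.0) * np.mean(np.abs(y))
    return y + rng.normal(0.0, sigma, size=y.shape)

rng = np.random.default_rng(0)
y = np.full(10000, 2.0)  # toy noise-free measurement vector
noisy = {p: add_gaussian_noise(y, p, rng) for p in (1, 3, 5, 10)}
```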
The target, x_t, corresponding to the physical slab-geometry phantom with a cylindrical region filled with fluorophore, was modelled using the same finite element mesh used for the simulated data and subsequently mapped onto the same uniform voxel mesh.

2.4. Comparison of Methods: ART Versus ART-SB

A preliminary study was made of the effect of the choice of ART-SB algorithm parameters. Both the acquired and the simulated data were reconstructed for a range of relaxation parameters. Then, for each relaxation parameter value, we reconstructed the dataset using a range of weighting parameters. The stop criterion for ART and ART-SB was a change lower than 0.1% from the previous iteration in the relative solution error norm. Once the effect of these parameters was assessed, ART and ART-SB reconstructions were compared.

2.4.1. Simulated data

ART and ART-SB were compared in terms of convergence, signal-to-noise ratio (SNR), and image profiles. Convergence was assessed by plotting the relative solution error norm against the number of iterations. The relative solution error norm (with respect to the target) was calculated as

‖x − x_t‖₂ / ‖x_t‖₂,

where x_t is the target solution modelled as explained in Sec. 2.3.2. Horizontal profiles were drawn at the center of the image. In order to compare ART with ART-SB profiles in terms of contrast, both profiles were normalized by the average of the highest voxel values in the corresponding reconstructions within a region of interest around the fluorescent target. The SNR of each reconstruction was also computed.

3. Results

Slices of the ART and ART-SB reconstructions of simulated and experimental data are shown in Figs. 2 and 3.
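The figures of merit used in these comparisons (Sec. 2.4.1) can be sketched as follows. The relative error follows the definition above; the paper's exact SNR formula was lost in extraction, so the definition shown (mean signal in a region of interest over the standard deviation of the background) is only one common choice, included as an assumption:

```python
import numpy as np

def relative_error(x, x_target):
    """Relative solution error norm with respect to the target."""
    return np.linalg.norm(x - x_target) / np.linalg.norm(x_target)

def snr_db(recon, roi, background):
    """SNR in dB as mean ROI signal over background standard deviation
    (one common definition; the paper's exact formula is not preserved)."""
    return 20.0 * np.log10(recon[roi].mean() / recon[background].std())
```

As the reconstruction approaches the target, `relative_error` falls toward zero, which is how the convergence curves of Sec. 3 are read.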
3.1. ART Reconstruction

ART-based zero-noise reconstructions of simulated data within a range of relaxation parameters converged to approximately the same relative solution error norm, with a different number of iterations (from 22 to 435), thus demonstrating that ART is robust in terms of solution error norm for diverse relaxation parameters. ART with a low relaxation parameter approximates a weighted least-squares solution, leading to over-smoothed images. On the contrary, high relaxation parameters lead to high-resolution images with noise and artifacts.

3.2. Two-Step Reconstruction (ART-SB)

Figure 4 shows the minimum solution error norm achieved with ART-SB reconstructions of simulated data for different values of the weighting parameters of the SB step, the denoising parameter μ and the splitting parameter λ. The relative solution error norm achieved by ART is represented by a horizontal dashed line. Selection of the parameters for the ART-SB method was based on the relationship between μ and λ taken from Ref. 19, who found that it resulted in good convergence. This relationship between μ and λ also provided good results in our setting. Once μ was fixed, it was found that there was a value of λ above which the relative solution error norm stagnates (Fig. 4). After splitting the problem in Eq. (17), we can see that the choice of λ affects both subproblems, whereas the choice of μ affects how much the image is regularized (u subproblem). Besides, in the d subproblem from Eq. (17), the solution is obtained by shrinking the vector magnitude of its argument [Eq. (19)]; this effect is more dramatic when λ is small. Thus, once μ is set, lower values of λ lead to smoother reconstructions. We performed this study for each acquisition of data and each relaxation parameter value in order to choose optimum values. This increases the robustness of the method with regard to the selection of the regularization parameter.

3.3. Comparison of Methods: ART Versus ART-SB

Given the nature of the relaxation parameters of ART (see Sec.
3.1), we compared ART with ART-SB using two high relaxation parameter values. Thus, ART was used to fit the data while SB filtered the noise in the reconstructed image. The faster convergence of ART-SB compared with ART can be observed in a plot of the relative solution error norm against iteration number for simulated data (Fig. 5). Note that the mean CPU time for performing the SB denoising for each ART iteration (Intel Core 2 Quad CPU, 2.40 GHz, 4 GB of RAM, Windows Vista) was 0.021 s. Therefore, taking into account that around 120 to 160 iterations were needed for the examples in Fig. 5, less than 4 s is necessary for SB denoising. In terms of SNR, ART-SB led to a higher SNR than ART with both simulated and real data (Fig. 6). The Y-profiles of ART and ART-SB reconstructions for simulated and experimental data are shown in Fig. 7. The profiles obtained for a simulated case with ART-SB (left side of Fig. 7) are closer to the real ones than those obtained with ART. Moreover, for the experimental case (right side of Fig. 7), the peak-to-valley ratio doubled (ART-SB: 19.326; ART: 9.0427).

4. Discussion and Conclusions

In this article, we propose a novel iterative algorithm, ART-SB, which alternates a cost-efficient reconstruction method (ART) with a denoising step based on the minimization of TV using SB, which is also solved in a cost-efficient way. Like other state-of-the-art shrinkage methodologies,12–14,16–18 ART-SB provides a solution for L1-regularized problems, minimizing TV by means of the SB method introduced in Ref. 19. In contrast to Refs. 20 and 21, we used the SB denoising formulation, which is solved efficiently, without computing first and second derivatives of the TV functional.
SB denoising solved using Gauss-Seidel and shrinkage, as was the case in this article, has a relatively small memory footprint compared with second-order methods that require explicit representations of the Hessian matrix.19 The authors of Ref. 19 also showed that this way of solving SB denoising improves the speed of convergence compared with a gradient descent algorithm based on the dual formulation of the ROF functional. Thus, ART-SB is a practical method for solving large dataset problems, because ART does not need to hold the system matrix in memory and our implementation of SB denoising does not require explicit representations of the Hessian matrix.

ART-SB and ART were compared in terms of convergence, SNR, and quality of image profiles for simulated data, and in terms of SNR and image profiles for experimental data. The comparison revealed that ART-SB enhanced the quality of reconstructions, with lower noise and faster convergence than ART. Convergence of ART (Fig. 5) is fast during the first few iterations, after which it stagnates, in agreement with Ref. 11. ART-SB significantly improved localization and sharpened edges. ART, on the other hand, led to blurred reconstructions accompanied by a loss of resolution (Figs. 2 and 3). Furthermore, ART deteriorates with increased noise level.11 However, we found that ART-SB was very robust with respect to the data noise level in the reconstructed slices (Figs. 2 and 3), SNR (Fig. 6), and Y-profiles (Fig. 7). These findings agree with the conclusions of the two-step reconstruction algorithms for computed tomography and positron emission tomography cited above.16–18 Although SB denoising applied to each 2-D slice of each 3-D ART reconstruction iteration provided a significant improvement in terms of localization and image quality, we did not test whether an implementation of 3-D SB denoising could lead to even better results.
Furthermore, we stress that even if the determination of the optimal denoising parameter μ did not seem critical for the study carried out in Sec. 3.2, only the splitting parameter λ has an effect on the shrinkage operator. Since the constraint is evaluated at each iteration, the number of iterations required for convergence depends on the parameter values (further details can be found in Ref. 19). With the relationship between μ and λ fixed, the parameter study allowed us to understand that there is a value of λ above which the shrinkage operator is more effective. As we point out in Sec. 3.2, low values of λ intensify the shrinkage operator effect [Eq. (19)], which leads to smooth reconstructions and thus increases the number of iterations needed before convergence is reached. Consequently, even if the SB iteration converges to the solution for any positive value of the denoising parameter, given the nature of the Bregman iteration,19 its value affects the convergence speed of the algorithm.

In conclusion, we showed that the combination of a cost-efficient linear iterative technique (ART) with a denoising method (anisotropic SB) significantly improves the reconstruction of fDOT-simulated data in terms of relative solution error norm and image quality, and the reconstruction of fDOT experimental data in terms of image quality. In addition, ART-SB appears to be an efficient methodology for handling the large datasets required for fDOT experiments and could be of interest to researchers working with optical tomography.

Acknowledgments

This work was supported by the Spanish Ministry of Science and Innovation (FPI program, TEC2008-06715), EU-FP7 project FMTXCT-201792, the Community of Madrid (ARTEMIS S2009/DPI-1802), European Regional Development Funds (FEDER), and CDTI under the CENIT program (AMIT Project, CEN-20101014).

References

1. A. Martín et al.,
"Imaging changes in lymphoid organs in vivo after brain ischemia with three-dimensional fluorescence molecular tomography in transgenic mice expressing green fluorescent protein in T lymphocytes," Mol. Imag. 7(4), 157–167 (2008).
2. V. Ntziachristos et al., "In vivo tomographic imaging of near-infrared fluorescent probes," Mol. Imaging 1(2), 82–88 (2002). http://dx.doi.org/10.1162/153535002320162732
3. V. Ntziachristos et al., "Looking and listening to light: the evolution of whole-body photonic imaging," Nat. Biotech. 23(3), 313–320 (2005). http://dx.doi.org/10.1038/nbt1074
4. S. Arridge, "Optical tomography in medical imaging," Inverse Probl. 15, R41–R93 (1999). http://dx.doi.org/10.1088/0266-5611/15/2/022
5. J. Hadamard, "Sur les problèmes aux dérivées partielles et leur signification physique," Princeton Univ. Bull. 13, 49–52 (1902).
6. S. Arridge and J. Schotland, "Optical tomography: forward and inverse problems," Inverse Probl. 25(12), 123010 (2009). http://dx.doi.org/10.1088/0266-5611/25/12/123010
7. A. D. Zacharopoulos et al., "A matrix-free algorithm for multiple wavelength fluorescence tomography," Opt. Express 17(21), 3042–3051 (2009). http://dx.doi.org/10.1364/OE.17.003042
8. X. Intes et al., "Projection access order in algebraic reconstruction technique for diffuse optical tomography," Phys. Med. Biol. 47(1), N1–N10 (2002). http://dx.doi.org/10.1088/0031-9155/47/1/401
9. V. Ntziachristos et al., "Concurrent MRI and diffuse optical tomography of breast after indocyanine green enhancement," Proc. Natl. Acad. Sci. U. S. A. 97(6), 2767–2772 (2000).
10. H. Jiang, Y. Xu, and N. Iftimia, "Experimental three-dimensional optical image reconstruction of heterogeneous turbid media from continuous-wave data," Opt. Express 7(4), 204–209 (2000). http://dx.doi.org/10.1364/OE.7.000204
11. P. C. Hansen, Discrete Inverse Problems: Insights and Algorithms, SIAM, Philadelphia (2010).
12. A. Douiri et al., "Anisotropic diffusion regularization methods for diffuse optical tomography using edge prior information," Meas. Sci. Technol. 18(1), 87–95 (2007). http://dx.doi.org/10.1088/0957-0233/18/1/011
13. M. Freiberger, C. Clason, and H. Scharfetter, "Total variation regularization for nonlinear fluorescence tomography with an augmented Lagrangian splitting approach," Appl. Opt. 49(19), 3741–3747 (2010). http://dx.doi.org/10.1364/AO.49.003741
14. T. Correia et al., "Split operator method for fluorescence diffuse optical tomography using anisotropic diffusion regularization with prior anatomical information," Biomed. Opt. Express 2(9), 2632–2648 (2011). http://dx.doi.org/10.1364/BOE.2.002632
15. L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Phys. D 60(1), 259–268 (1992). http://dx.doi.org/10.1016/0167-2789(92)90242-F
16. M. Johnston, G. A. Johnson, and C. T. Badea, "GPU-based iterative reconstruction with total variation minimization for micro-CT," Proc. SPIE 7622, 762238 (2010). http://dx.doi.org/10.1117/12.844368
17. Y. Pan et al., "TV-regularized iterative image reconstruction on a mobile C-ARM CT," Proc. SPIE 7622, 76222L (2010). http://dx.doi.org/10.1117/12.844398
18. A. Sawatzky et al., "Accurate EM-TV algorithm in PET with low SNR," in IEEE Nuclear Science Symp. Conf. Record, 5133–5137 (2008).
19. T. Goldstein and S. Osher, "The split Bregman method for L1 regularized problems," SIAM J. Imag. Sci. 2(2), 323–343 (2009). http://dx.doi.org/10.1137/080725891
20. J. F. P.-J. Abascal et al., "Fluorescence diffuse optical tomography using the split Bregman method," Med. Phys. 38, 6275–6284 (2011). http://dx.doi.org/10.1118/1.3656063
21. A. Behrooz et al., "Total variation regularization for 3D reconstruction in fluorescence tomography: experimental phantom studies," Appl. Opt. 51(34), 8216–8227 (2012). http://dx.doi.org/10.1364/AO.51.008216
22. A. Ishimaru, Wave Propagation and Scattering in Random Media, Academic Press, New York (1978).
23. E. E. Graves et al., "A submillimeter resolution fluorescence molecular imaging system for small animal imaging," Med. Phys. 30(5), 901–912 (2003). http://dx.doi.org/10.1118/1.1568977
24. D. Hyde et al., "FMT-CT imaging of amyloid-beta plaques in a murine Alzheimer's disease model," Neuroimage 44(4), 1304–1311 (2009). http://dx.doi.org/10.1016/j.neuroimage.2008.10.038
25. V. Ntziachristos and R. Weissleder, "Experimental three-dimensional fluorescence reconstruction of diffuse media by use of a normalized Born approximation," Opt. Lett. 26(12), 893–895 (2001). http://dx.doi.org/10.1364/OL.26.000893
26. M. Schweiger, Application of the Finite Element Method in Infrared Image Reconstruction of Scattering Media, University College London, United Kingdom (1994).
27. M. Schweiger et al., "The finite element method for the propagation of light in scattering media: boundary and source conditions," Med. Phys. 22(11), 1779–1792 (1995). http://dx.doi.org/10.1118/1.597634
28. M. Born and E. Wolf, Principles of Optics, Cambridge University Press, Cambridge, United Kingdom (1999).
29. W. Yin et al., "Bregman iterative algorithms for l1-minimization with applications to compressed sensing," SIAM J. Imag. Sci. 1(1), 143–168 (2008). http://dx.doi.org/10.1137/070703983
30. S. Setzer, "Split Bregman algorithm, Douglas-Rachford splitting and frame shrinkage," in Second Int. Conf. on Scale Space and Variational Methods in Computer Vision, 464–476 (2009).
31. D. A. Boas, Diffusion Photon Probes of Structural and Dynamical Properties of Turbid Media: Theory and Biomedical Applications, University of Pennsylvania, Pennsylvania (1996).
32. R. Cubeddu et al., "A solid tissue phantom for photon migration studies," Phys. Med. Biol. 42(10), 1971–1979 (1997). http://dx.doi.org/10.1088/0031-9155/42/10/011
33. J. Chamorro-Servent et al., "Feasibility of U-curve method to select the regularization parameter for fluorescence diffuse optical tomography in phantom and small animal studies," Opt. Express 19(12), 11490–11506 (2011). http://dx.doi.org/10.1364/OE.19.011490
34. J. Chamorro-Servent et al., "FDOT setting optimization and reconstruction using singular value analysis with automatic thresholding," in IEEE Nuclear Science Symp. Medical Imaging Conf., 2827–2829 (2009).