Open Access
6 October 2015
Nonlinear approach to difference imaging in diffuse optical tomography
Meghdoot Mozumder, Tanja Tarvainen, Aku Seppänen, Ilkka Nissilä, Simon R. Arridge, Ville Kolehmainen
Abstract
Difference imaging aims at recovery of the change in the optical properties of a body based on measurements before and after the change. Conventionally, the image reconstruction is based on using difference of the measurements and a linear approximation of the observation model. One of the main benefits of the linearized difference reconstruction is that the approach has a good tolerance to modeling errors, which cancel out partially in the subtraction of the measurements. However, a drawback of the approach is that the difference images are usually only qualitative in nature and their spatial resolution can be weak because they rely on the global linearization of the nonlinear observation model. To overcome the limitations of the linear approach, we investigate a nonlinear approach for difference imaging where the images of the optical parameters before and after the change are reconstructed simultaneously based on the two datasets. We tested the feasibility of the method with simulations and experimental data from a phantom and studied how the approach tolerates modeling errors like domain truncation, optode coupling errors, and domain shape errors.

1.

Introduction

Diffuse optical tomography (DOT) is a noninvasive technique for imaging biological tissues, with applications in imaging of the human brain,1–3 breast,4,5 and small animals.6,7 In brain imaging, DOT has been used for functional brain activation studies,1,3,8–10 imaging hemorrhages at birth,2 and imaging stroke.7,11 DOT is portable, which makes it a potential imaging method for bedside monitoring of newborn infants or adults in intensive care.8 Recently, it has also been suggested that, using high-density imaging arrays, DOT could achieve spatial resolution comparable to functional magnetic resonance imaging.12,13 DOT also has applications in cancer tumor diagnosis in humans and small animals.4–6

Absolute imaging in DOT uses a single set of measurements to reconstruct the spatially distributed absorption and scattering coefficients. However, absolute imaging is very sensitive to modeling errors, which can be caused, e.g., by an inaccurately known object shape or by unknown optode coupling coefficients or optode positions. A variety of techniques have been developed to provide tolerance toward such modeling errors. Some of the techniques rely on explicit calculation of the coupling coefficients from experiments14,15 or on computational methods.16,17 For estimating the domain shape, several registration methods are available that typically use a finite set of measured surface points to fit a generic head model in the form of an anatomical atlas.18 However, interpolating the object shape using a few measured points does not guarantee obtaining the exact surface of the patient; hence, the process might still retain modeling errors. Also, techniques using data from other imaging modalities such as computed tomography (CT)19 or magnetic resonance imaging (MRI)20,21 to obtain the domain shape and optode positions have been proposed. Such data might not always be available. The Bayesian approximation error approach22 is an alternative computational technique in which the statistics of such model-based errors are precomputed using prior probability distributions of the unknowns and the uncertain nuisance parameters. These statistics are then used in the image reconstruction process to compensate for the modeling errors.23–28

Nevertheless, the most popular technique for in vivo DOT imaging has been difference imaging, where the objective is to reconstruct the change in the optical properties using measurements before and after the change. Conventionally, the image reconstruction is carried out using the difference of the measurements and a linearized approximation of the observation model.3,8–10,14,29–32 In the case of imaging brain activation, the reference measurement from the state before the change is typically obtained at the “rest stage” of the brain.8,9 In some cases, the reference measurement can also be obtained from a homogeneous tissue-mimicking phantom.32 In Refs. 29–31 and 33, the difference imaging problem is stated as reconstructing moment-by-moment relative differences of the parameters using the mean of the time series of measured data as the reference data. One of the main benefits of the linearized difference imaging approach is that it has a good tolerance to (invariant) modeling errors, such as inaccurately known source and detector locations and coupling, as well as an inaccurately known body shape. When reconstructing images using differences in data resulting from a change in the optical properties, several modeling errors that are invariant between the measurements cancel out (partially) in the subtraction of the measurement before the change from the measurement after the change.14 The performance of linear difference imaging in the presence of a mismodeled background was studied in Refs. 25 and 31, and in imaging objects of different sizes and target optical properties in Ref. 29. The method was extended for extracting dynamic information from time series data in Refs. 33 and 34. A nonlinear (postprocessing) update method based on spatial deconvolution of the difference images was implemented in Refs. 34–39 to improve on the linearized solutions.
A drawback of the linear reconstruction approach is that the difference images are usually only qualitative in nature and their spatial resolution can be weak because they rely on the global linearization of the nonlinear observation model. The performance of the linear reconstruction also depends on the linearization point, which ideally should equal the initial state; in practice, the initial state is always unknown. According to simulations in Ref. 40, inaccurately known background optical properties together with linear difference imaging can lead to inaccurate contrast in the reconstructed images, and in some cases it is possible that the method even fails to detect and localize the changes.

To overcome the limitations of the linear reconstruction approach, we apply a new nonlinear approach for difference imaging in DOT that was initially developed for estimating the conductivity distribution in electrical impedance tomography in Refs. 41 and 42. In this approach, the optical parameters after the change are parameterized as a linear combination of the initial state and the change. The approach is based on the regularized nonlinear least-squares framework. Instead of using the difference of the data before and after the change, the measurements before and after the change are concatenated into a single measurement vector, and the objective is to estimate the unknown initial state and the change based on these data. This approach naturally allows the spatial characteristics of the background optical parameters and of the change in the optical parameters to be modeled independently by separate regularization functionals. The approach also allows the restriction of the optical parameter changes to a region of interest (ROI) inside the domain in cases where the change is known to occur in a certain subvolume of the body. We test the feasibility of the method with two-dimensional (2-D) simulations using frequency domain data from a simplified head geometry and with experimental frequency domain data from a cylindrical phantom using three-dimensional (3-D) models.

Since good tolerance for modeling errors such as domain truncation, inaccurately known optode coupling coefficients, and inaccurately known domain shape are some of the main benefits of linear difference imaging, we also study how the new nonlinear approach tolerates the same modeling errors.

The remainder of the paper is organized as follows. In Sec. 2, a brief review of the light transport modeling in DOT is given. Next, the absolute and difference imaging approaches in DOT and the reconstruction algorithms are explained. In Sec. 3, we describe the methods used in data simulation and constructing the prior models. Then, we present the results in a 2-D geometry with and without modeling errors. Next, we describe our experimental setup and present the reconstruction results with the experimental data. Finally, the conclusions are given in Sec. 4.

2.

Forward Model of Diffuse Optical Tomography

Let Ω ⊂ ℝⁿ, n = 2, 3, denote the object domain and ∂Ω the domain boundary. In a diffusive medium, the commonly used light transport model for DOT is the diffusion approximation to the radiative transport equation.43 In this paper, the frequency domain version of the diffusion approximation is used,

Eq. (1)

(−∇·κ(r)∇ + μa(r) + jω/c)Φ(r) = 0,  r ∈ Ω,

Eq. (2)

Φ(r) + (1/(2γ))κ(r)α ∇Φ(r)·k̂ = { Qp/γ,  r ∈ sp;  0,  r ∈ ∂Ω∖sp },
where Φ(r) = Φ is the fluence, μa(r) = μa is the absorption coefficient, and κ(r) = κ is the diffusion coefficient. The diffusion coefficient is given by κ(r) = 1/{n[μa(r) + μs′(r)]}, where μs′(r) = μs′ is the (reduced) scattering coefficient. Furthermore, j is the imaginary unit, ω is the angular modulation frequency of the input signal, and c is the speed of light in the medium. The parameter Qp is the strength of the light source at location sp ⊂ ∂Ω, p = 1, …, Ns, operating at angular frequency ω. The parameter γ is a dimension-dependent constant (γ = 1/π when Ω ⊂ ℝ², γ = 1/4 when Ω ⊂ ℝ³) and α is a parameter governing the internal reflection at the boundary ∂Ω. The measurable quantity, the exitance Γ(r) at detector q under illumination from source p, is given by

Eq. (3)

Γpq(r) = −κ∇Φp(r)·k̂ = (2γ/α)Φp(r),  r ∈ dq,
where k̂ is the outward normal to the boundary at point r and dq ⊂ ∂Ω, q = 1, …, Nd, are the detector locations.

The numerical approximation of the forward model (1)–(3) is based on a finite-element (FE) approximation. In the FE approximation, the domain Ω is divided into Ne nonoverlapping elements joined at Nn vertex nodes. The photon density in the finite dimensional basis is given by

Eq. (4)

Φh(r) = Σ_{k=1}^{Nn} φk ψk(r),
where ψk are the nodal basis functions of the FE mesh and φk are the photon densities at the nodes of the FE mesh. Furthermore, we write finite dimensional (piecewise constant) approximations for μa(r) and μs′(r)

Eq. (5)

μa(r) = Σ_{l=1}^{N} μa,l χl(r),

Eq. (6)

μs′(r) = Σ_{l=1}^{N} μs,l′ χl(r),
where the χl denote the characteristic functions of the disjoint image pixels.

The measurement data for frequency domain DOT typically contains the measured log amplitude and phase for all source-detector pairs

Eq. (7)

y = (Re log(Γ), Im log(Γ))ᵀ ∈ ℝ^(2NsNd),
where y is the data vector. The FE-based solution of Eqs. (1)–(3) is denoted by A(x), where x = (μa, μs′)ᵀ ∈ ℝ^(2N) is the vector of discretized optical coefficients. The observation model is written as

Eq. (8)

y=A(x)+e,
where e ∈ ℝ^(2NsNd) models the random noise in the measurements.
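As an illustration of the stacking in Eq. (7), the following Python sketch (a hypothetical helper, not code from the paper) forms the real-valued data vector from the complex exitance values:

```python
import numpy as np

def measurement_vector(Gamma):
    """Stack log amplitude and phase of the complex exitance values
    Gamma (one entry per source-detector pair) into the real data
    vector y of Eq. (7): y = (Re log(Gamma), Im log(Gamma))^T."""
    log_G = np.log(Gamma)  # complex log: real part = log amplitude, imag part = phase
    return np.concatenate([log_G.real, log_G.imag])

# toy check: amplitude 2, phase pi/4 for a single source-detector pair
Gamma = np.array([2.0 * np.exp(1j * np.pi / 4)])
y = measurement_vector(Gamma)
```

For Ns sources and Nd detectors, the resulting vector has length 2NsNd, matching Eq. (7).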

2.1.

Absolute Imaging

In absolute imaging, the optical coefficients x are reconstructed using a single set of measurements y during which the target is assumed to be nonvarying.

Assuming that the additive measurement noise is independent of the unknowns and distributed as zero-mean Gaussian e ∼ N(0, Γe),44 the estimate amounts to the minimization problem

Eq. (9)

x̂ = arg min_{x>0} {‖Le[y − A(x)]‖² + f(x)},
where Le is the Cholesky factor such that LeᵀLe = Γe⁻¹, and f(x) is the regularization functional, which should be constructed based on the prior information on the unknowns.

Usual choices for the regularization functional include standard Tikhonov regularization f(x) = α‖x‖², smoothness regularization f(x) = α‖Lx x‖², where Lx is a (possibly spatially and directionally weighted) differential operator,3,45,46 total variation (TV)47–50 regularization f(x) = α‖∇x‖₁, and so on.

We note that the estimate (9) can be interpreted in the Bayesian inversion framework as the maximum a posteriori estimate from a posterior density model, which is based on the observation model (8) and a prior model for the unknowns.22,23,51

2.2.

Difference Imaging

Consider two DOT measurement realizations y1 and y2 obtained from the body at times t1 and t2 with optical coefficients x1 and x2, respectively. The observation models corresponding to the two DOT measurement realizations can be written as in Eq. (8)

Eq. (10)

y1=A(x1)+e1,

Eq. (11)

y2=A(x2)+e2,
where ei ∼ N(0, Γei), i = 1, 2. The aim in difference imaging is to reconstruct the change in the optical parameters δx = x2 − x1 based on the measurements y1 and y2.

2.2.1.

Conventional linear approach to difference imaging

Conventionally, the image reconstruction in difference imaging is carried out as follows. Equations (10) and (11) are approximated by the first-order Taylor approximations as

Eq. (12)

yi ≈ A(x0) + J(xi − x0) + ei,  i = 1, 2,
where x0 is the linearization point and J = ∂A/∂x(x0) is the Jacobian matrix evaluated at x0. Using the linearization and subtracting y1 from y2 gives the linear observation model

Eq. (13)

δy ≈ Jδx + δe,
where δy = y2 − y1, δx = x2 − x1, and δe = e2 − e1.

Given the model (13), the change in optical coefficients δx can be estimated as

Eq. (14)

δx̂ = arg min_{δx} {‖Lδe(δy − Jδx)‖² + fδx(δx)},
where fδx(δx) is the regularization functional. The weighting matrix Lδe is defined by LδeᵀLδe = Γδe⁻¹, where Γδe = Γe1 + Γe2.

The regularization functional fδx(δx) is often chosen to be of the quadratic form fδx(δx) = ‖Lδx δx‖², where Lδx is the regularization matrix. In such a case, the problem (14) is linear and has the closed form solution13,52

Eq. (15)

δx̂ = (JᵀΓδe⁻¹J + Γδx⁻¹)⁻¹ JᵀΓδe⁻¹ δy.

In this paper, we refer to Eq. (15) as the conventional difference imaging estimate.

The main benefit of difference imaging is that at least part of the modeling errors cancels out when considering the difference data δy. Hence, the estimates are often to some extent tolerant of modeling errors. A drawback of the approach is that the difference images are usually only qualitative in nature and their spatial resolution can be weak because they rely on the global linearization of the nonlinear observation model (8). Moreover, the estimates depend on the selection of the linearization point x0. Typically, x0 is selected as a homogeneous (spatially constant) estimate of the initial state x1. This choice can lead to errors in the reconstructions if the initial optical coefficients are not accurately known.25,31
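The closed-form estimate of Eq. (15) can be sketched in Python on a toy problem; the Jacobian, covariances, and dimensions below are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy dimensions: m measurements, n unknowns (hypothetical sizes)
m, n = 40, 20
J = rng.standard_normal((m, n))           # Jacobian at the linearization point
dx_true = np.zeros(n); dx_true[5] = 1.0   # true change, localized in one pixel
noise_cov = 1e-4 * np.eye(m)              # Gamma_de = Gamma_e1 + Gamma_e2
prior_cov = np.eye(n)                     # Gamma_dx

# simulate noisy difference data dy = J dx + de
dy = J @ dx_true + rng.multivariate_normal(np.zeros(m), noise_cov)

# Eq. (15): dx_hat = (J^T G_de^-1 J + G_dx^-1)^-1 J^T G_de^-1 dy
iGe = np.linalg.inv(noise_cov)
iGx = np.linalg.inv(prior_cov)
dx_hat = np.linalg.solve(J.T @ iGe @ J + iGx, J.T @ iGe @ dy)
```

With low noise and a well-conditioned Jacobian the estimate recovers the localized change; in DOT the Jacobian is severely ill-conditioned, so the prior term Γδx⁻¹ carries much more weight.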

2.2.2.

Nonlinear approach to difference imaging

In this section, we formulate the reconstruction of the change of the optical coefficients in the nonlinear regularized least squares framework. However, instead of reconstructing both states x1 and x2 using Eq. (9) separately for the datasets yi, i = 1, 2, and then computing the difference δx = x2 − x1, we use an approach where δx is reconstructed together with x1 by using both datasets y1 and y2 simultaneously.41,42 This approach allows us to model prior information in cases where, e.g., the spatial characteristics of the initial state x1 and the change δx are different (e.g., smooth x1 and sparse δx). This approach also allows in a straightforward way the restriction of the change δx into a ROI.

Let us assume that the change in the optical coefficients δx = x2 − x1 is known to occur in ΩROI ⊆ Ω, and denote the change of the optical coefficients within ΩROI by δxROI. Then, δx = MδxROI, where M is an extension mapping M : ΩROI → Ω such that

Eq. (16)

(MδxROI)(r) = { δxROI(r),  r ∈ ΩROI;  0,  r ∈ Ω∖ΩROI }.

Obviously, if no ROI constraint is used, we set ΩROI=Ω. The optical coefficients after the change x2 can now be represented as a linear combination of the initial state and the change as

Eq. (17)

x2=x1+MδxROI.
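The extension mapping of Eqs. (16) and (17) amounts to a simple zero-padding operator; a minimal sketch, with a hypothetical pixel grid and ROI (not the meshes used in the paper):

```python
import numpy as np

def extension_matrix(N, roi_idx):
    """Extension mapping M of Eq. (16): embeds a change defined on the
    ROI pixels (indices roi_idx) into the full N-pixel domain, with
    zeros outside the ROI. (Illustrative helper, not code from the paper.)"""
    M = np.zeros((N, len(roi_idx)))
    M[roi_idx, np.arange(len(roi_idx))] = 1.0
    return M

N = 6
roi = [2, 3]                       # hypothetical ROI pixel indices
M = extension_matrix(N, roi)
x1 = np.full(N, 0.01)              # initial absorption, mm^-1 (toy value)
dx_roi = np.array([0.005, 0.008])  # change inside the ROI
x2 = x1 + M @ dx_roi               # Eq. (17)
```

In practice M would be stored as a sparse matrix; the dense version above only illustrates the parametrization x2 = x1 + MδxROI.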

Inserting Eq. (17) into Eq. (11) and concatenating the measurement vectors y1 and y2 and the corresponding models in Eqs. (10) and (11) into block vectors, leads to an observation model41

Eq. (18)

(y1, y2)ᵀ = (A(x1), A(x1 + MδxROI))ᵀ + (e1, e2)ᵀ, where we denote ỹ = (y1, y2)ᵀ, Ã(x̃) = (A(x1), A(x1 + MδxROI))ᵀ, and ẽ = (e1, e2)ᵀ, i.e.,

Eq. (19)

y˜=A˜(x˜)+e˜,
where

Eq. (20)

x̃ = (x1, δxROI)ᵀ.

Given the model in Eq. (19), the initial state x1 and the change δx can be simultaneously estimated as

Eq. (21)

x̃̂ = arg min_{x1>0} {‖Lẽ[ỹ − Ã(x̃)]‖² + f(x̃)}.

Here, Lẽ ∈ ℝ^(4NsNd×4NsNd) is the Cholesky factor such that LẽᵀLẽ = Γẽ⁻¹, where

Γẽ = [Γe1, 0_(2NsNd×2NsNd); 0_(2NsNd×2NsNd), Γe2],
and 0_(2NsNd×2NsNd) ∈ ℝ^(2NsNd×2NsNd) is an all-zero matrix. f(x̃) is the joint regularization functional of x̃ = (x1, δxROI)ᵀ, which allows for separate models for x1 and δx as

Eq. (22)

f(x˜)=f1(x1)+f2(δx).

The estimate in Eq. (21) can be computed iteratively using for example a Gauss–Newton (GN) algorithm53 as

Eq. (23)

x̃̂_(i+1) = x̃̂_i + ŝ_i [J̃iᵀΓẽ⁻¹J̃i + H(x̃̂_i)]⁻¹ {J̃iᵀΓẽ⁻¹[ỹ − Ã(x̃̂_i)] − G(x̃̂_i)},
where ŝ_i is the step length, and G(x̃̂_i) and H(x̃̂_i) are the gradient and Hessian of the regularization functional f(x̃), respectively. The Jacobian matrix J̃i = ∂Ã/∂x̃ is of the form
J̃i(x̃̂) = [ J(x̂1,i), 0_(2NsNd×NROI);  J(x̂1,i + Mδx̂ROI,i), J(x̂1,i + Mδx̂ROI,i)M ],
where J(x) is the Jacobian matrix of the forward mapping A(x), 0_(2NsNd×NROI) ∈ ℝ^(2NsNd×NROI) is an all-zero matrix, and NROI is the dimension of the vector δxROI.
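The block structure of J̃ can be checked with a small Python sketch; the toy linear "forward model" B below is an assumption used only to verify the block layout, not the DOT model:

```python
import numpy as np

def block_jacobian(J_fun, x1, M, dx_roi):
    """Block Jacobian of the stacked model, following the structure
    given in the text: the first data set depends only on x1, the
    second on both x1 and dx_roi (through x2 = x1 + M dx_roi).
    J_fun(x) returns the Jacobian of the single-measurement forward
    map A at x. (Illustrative sketch.)"""
    J1 = J_fun(x1)
    J2 = J_fun(x1 + M @ dx_roi)
    top = np.hstack([J1, np.zeros((J1.shape[0], M.shape[1]))])
    bottom = np.hstack([J2, J2 @ M])
    return np.vstack([top, bottom])

# toy linear forward model A(x) = B x, so the Jacobian is the constant B
B = np.array([[1.0, 2.0], [0.5, 1.5]])
M = np.array([[1.0], [0.0]])   # change restricted to "pixel" 0
Jt = block_jacobian(lambda x: B, np.zeros(2), M, np.zeros(1))
```

The zero block in the first row expresses that y1 carries no information about δxROI, while the chain rule produces the factor M in the second row.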

3.

Results

The feasibility of the method was tested with 2-D simulations and with 3-D experimental measurement data.

3.1.

Estimates

The following reconstruction approaches were considered:

Nonlinear: nonlinear reconstruction of x̃ = (x1, δxROI)ᵀ with the proposed (nonlinear) difference imaging method

Eq. (24)

x̃̂ = arg min_{x1>0} {‖Lẽ[ỹ − Ã(x̃)]‖² + f1(x1) + f2(δxROI)}.

Linear: conventional (linear) difference reconstruction by solving

Eq. (25)

δx̂ = arg min_{δx} {‖Lδe(δy − Jδx)‖² + f(δx)}.

3.2.

Regularization Functionals

For modeling x1 in the nonlinear difference imaging approach Eq. (24), we used a quadratic Gaussian smoothness regularization of the form

Eq. (26)

f1(x1) = ‖Lx1(x1 − x1,*)‖².

In the construction of the Gaussian smoothness regularization functional Eq. (26), the absorption and scattering coefficients μa,1 and μs,1′ were modeled as mutually independent Gaussian random fields. In the construction of the prior covariances, the random field x1 was considered in the form

x1=x1,in+x1,bg,
where x1,in is a spatially inhomogeneous parameter with zero mean, x1,in ∼ N(0, Γin,x1), and x1,bg is a spatially constant (background) parameter with nonzero mean. For the latter, we can write x1,bg = qI, where I is a vector of ones and q is a scalar random variable with Gaussian distribution q ∼ N(c, σ²bg,x1). In the construction of Γin,x1, the approximate correlation length was adjusted to match the expected size of the inhomogeneities, and the marginal variances of x1,in were tuned based on the expected variation of the optical properties in the initial state. See Refs. 22, 23, and 26 for further details. Modeling x1,in and x1,bg as mutually independent, we obtain
x1,* = cI,  Γx1 = Γin,x1 + σ²bg,x1 IIᵀ,
where the Cholesky factor Lx1 in Eq. (26) satisfies Lx1ᵀLx1 = Γx1⁻¹. This particular Gaussian smoothness regularization has been previously used in absolute imaging in Refs. 23–25, 27, 28, and 48 and in difference imaging in Ref. 54. In this paper, the correlation length was chosen as 8 mm. The standard deviations of the background and inhomogeneities are given in Table 1.

Table 1

Parameters of Gaussian smoothness regularization. The first two rows show standard deviations of the background σbg,δx and the inhomogeneities σin,δx chosen for modeling δx. The next two rows show σbg,x1 and σin,x1 chosen for modeling the initial state x1. Here, x1,* is selected as a homogeneous (spatially constant) estimate of the initial state x1.

Regularization parameters (2-D and 3-D):
σbg,δx = 0.1 × x1,*
σin,δx = 0.2 × x1,*
σbg,x1 = 0.4 × x1,*
σin,x1 = 0.8 × x1,*
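The covariance construction Γx1 = Γin,x1 + σ²bg,x1 IIᵀ can be sketched as follows. The squared-exponential covariance for the inhomogeneous part is one common choice (the paper does not specify the exact form), and the 1-D grid and the small diagonal jitter added for numerical stability are illustrative assumptions:

```python
import numpy as np

# 1-D pixel grid (mm); hypothetical stand-in for the 2-D/3-D mesh
r = np.linspace(0.0, 50.0, 26)
ell = 8.0                                     # correlation length, 8 mm as in the paper
sigma_in = 0.8 * 0.01                         # Table 1 scaling for x1, with x1,* = 0.01
sigma_bg = 0.4 * 0.01
jitter = 1e-8                                 # small diagonal term for numerical stability

# squared-exponential covariance for the inhomogeneous part
D = np.abs(r[:, None] - r[None, :])
Gamma_in = sigma_in**2 * np.exp(-D**2 / (2 * ell**2)) + jitter * np.eye(len(r))

# add the spatially constant background part: Gamma_x1 = Gamma_in + sigma_bg^2 * I I^T
ones = np.ones_like(r)
Gamma_x1 = Gamma_in + sigma_bg**2 * np.outer(ones, ones)

# regularization matrix: L_x1^T L_x1 = Gamma_x1^{-1}
L_x1 = np.linalg.cholesky(np.linalg.inv(Gamma_x1)).T
```

The rank-one term σ²bg IIᵀ makes large spatially constant shifts of x1 cheap for the prior, which is what lets modeling errors be absorbed into the background level.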

For modeling δxROI (in all the following test cases except for case 2 in Sec. 3.3.2), we used a sparsity promoting TV functional f2(δxROI)=αTV(δxROI), where

Eq. (27)

TV(δxROI) = Σ_{k=1}^{N} |Ωk| √(‖∇(δxROI)|Ωk‖² + β),
is a differentiable approximation of the isotropic TV functional,55 and ∇(δxROI)|Ωk is the gradient of δxROI in element Ωk. β > 0 is a small parameter that ensures TV(δxROI) is differentiable, and α is the regularization parameter. In this paper, α and β were manually selected. For systematic approaches to regularization parameter selection see, e.g., Refs. 56 and 57. The values of α and β used in our reconstructions are listed in Table 2.

Table 2

Regularization parameters of the TV regularization for δx. The first row shows α and β values of μa and μs′ used in the 2-D reconstructions. The second row shows the regularization parameters for the 3-D reconstructions.

        α(δμa)   α(δμs′)   β(δμa)   β(δμs′)
2-D     30,000   30        5×10⁻⁵   5×10⁻⁴
3-D     500      0.5       2×10⁻⁶   1×10⁻⁴
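A 1-D sketch of the smoothed TV functional in Eq. (27) (hypothetical helper; the paper applies the functional element-wise on 2-D/3-D meshes, here |Ωk| reduces to the grid spacing h):

```python
import numpy as np

def smoothed_tv(dx, h=1.0, beta=1e-5):
    """Differentiable TV approximation of Eq. (27) on a uniform 1-D
    grid: sum over cells of h * sqrt(grad^2 + beta), with the gradient
    taken by forward differences. (1-D illustrative sketch.)"""
    grad = np.diff(dx) / h
    return np.sum(h * np.sqrt(grad**2 + beta))

# a sharp unit step: for beta -> 0 its TV tends to the jump height, 1
step = np.array([0.0, 0.0, 1.0, 1.0])
tv_step = smoothed_tv(step)
```

Because TV penalizes the total jump rather than the squared gradient, it favors piecewise constant (sparse-gradient) changes, which motivates its use for δxROI.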

For modeling of δx in the conventional linear difference imaging Eq. (25), we used the same quadratic Gaussian smoothness regularization as before,

Eq. (28)

f(δx) = ‖Lδx δx‖²,
except that in this case, while modeling the random field δx, the spatially constant part had zero mean, δxbg = qI with q ∼ N(0, σ²bg,δx). The standard deviations of the background and inhomogeneities are given in Table 1.

Note that the standard deviations chosen for x1 are larger than those for δx, since x1 contains absolute values of the optical parameters, unlike δx. Also, since x1 is the “unchanging” parameter between the observations y1 and y2 in the model (18), the presence of the same modeling errors in both observations y1 and y2 should mainly affect the estimate of x1 (not δx). Thus, to allow for large variations in x1 in the presence of modeling errors, we chose larger standard deviations in the prior model of x1.

3.3.

Two-Dimensional Target and Simulations

The measurements y1 and y2 were simulated using the 2-D simulation target states x1 and x2 shown in Fig. 1. The shape of the domain was extracted from a segmented adult brain CT scan. The diameter along the sagittal plane was scaled to 100 mm (approximately the size of a newborn baby head). The state x1 had three outer layers (mimicking the skin, skull, and cerebrospinal fluid) and two overlapping circular inclusions in the brain area. State x2 was the same as x1, except that one additional inclusion in μa and one in μs′ were added in the dorsal (back) part of the brain. The optical properties of the states x1 and x2 are listed in Table 3. The measurement setup consisted of 16 sources and 16 detectors modeled as 1-mm-wide surface patches located on the boundary ∂Ω. Random measurement noise ei, i = 1, 2, drawn from zero-mean Gaussian distributions π(ei) = N(0, Γei), where the standard deviations were specified as 1% of the simulated noise-free measurement data, was added to the simulated measurement data. The means ei,* = 0 and covariances Γei, i = 1, 2, were assumed known.

Fig. 1

Two-dimensional target optical properties: (a) absorption and (b) scattering. First column: reference state x1, second column: state x2, third column: change δx.

JBO_20_10_105001_f001.png

Table 3

Optical properties of the target head.

                          μa (mm⁻¹)   μs′ (mm⁻¹)
Layer 1                   0.01        1
Layer 2                   0.015       1.19
Layer 3                   0.004       0.6
Inclusion 1               0.004       0.4
Inclusion 2               0.015       1.5
Inclusion 3 (only in x2)  0.018       1
Inclusion 4 (only in x2)  0.01        1.8
Background                0.01        1

3.3.1.

Using different optode arrangements on the boundary

We investigated situations where the optodes (16 sources, 16 detectors) were arranged at (a) equiangular intervals around the boundary of the 2-D target and at (b) equiangular intervals only in the dorsal part of the boundary of the 2-D target. In this case, we used a quadratic smoothness promoting functional f1(x1) = ‖Lx1(x1 − x1,*)‖² for modeling x1 and a sparsity promoting TV functional f2(δxROI) = αTV(δxROI) for modeling δxROI in the estimate (24). We used a quadratic smoothness promoting functional f(δx) = ‖Lδx δx‖² for modeling δx in the estimate (25).

Figures 2(a) and 2(b) show the estimated optical coefficients from a simulation setup where the optodes were placed at equiangular intervals around the whole boundary. Panel (a) shows the estimates of x1 and δxROI using nonlinear difference imaging, Eq. (24). The ROI in the computations was selected as the dorsal hemisphere of the brain domain. The computation time (tCPU) of the nonlinear estimate was tCPU = 24.39 s. The reconstructions converged after three GN iterations. Panel (b) shows the corresponding conventional linear difference imaging estimate of δx, Eq. (25). The computation time of the linear estimate was tCPU = 1.31 s. The computation times of all the remaining 2-D nonlinear and linear estimates were of the same magnitude as the values reported here. Figure 2(c) shows the estimates x1 and δxROI, Eq. (24), where the optodes were placed at the dorsal (upper) half of the boundary. Figure 2(d) shows the corresponding estimate δx using linear difference imaging, Eq. (25). The errors in the reconstructions,

Error(x̂) = ‖x̂ − x0‖₂ / ‖x0‖₂ × 100%,
where x̂ is the estimated δμa or δμs′ and x0 = x2 − x1 is the simulated (true) target change, are listed in Table 4.
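The error measure can be sketched as (illustrative helper, with toy vectors):

```python
import numpy as np

def rel_error(x_hat, x0):
    """Relative reconstruction error in %, as defined in the text:
    ||x_hat - x0||_2 / ||x0||_2 * 100."""
    return 100.0 * np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)

err = rel_error(np.array([1.1, 0.9]), np.array([1.0, 1.0]))
```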

Fig. 2

Difference imaging using different optode arrangements on the boundary. (a) and (b) Estimated optical coefficients with nonlinear and linear difference imaging, respectively, with the optodes placed at equiangular intervals around the boundary. (c) and (d) Estimated optical coefficients with optodes placed at equiangular intervals only at the dorsal part of the boundary. In each row, the first column is μa,1, the second column is δμa, the third column is μs,1, and the fourth column is δμs. In the δx images, the black lines indicate the domain boundaries and the green circles indicate the position of the inclusions. The blank (white) areas in nonlinear estimates of (δμa,δμs) are regions outside region of interest (ROI).

JBO_20_10_105001_f002.png

Table 4

Reconstruction errors.

Section  Setup                                Method     Error(δμa)  Error(δμs′)
3.3.1    Optodes around                       Nonlinear  81          69
3.3.1    Optodes around                       Linear     87          58
3.3.1    Optodes dorsal                       Nonlinear  62          57
3.3.1    Optodes dorsal                       Linear     92          71
3.3.2    No ROI                               Nonlinear  79          70
3.3.2    Quadratic regularization for δxROI   Nonlinear  58          52
3.3.2    No ROI                               Linear     92          71
3.3.2    ROI                                  Linear     85          63
3.3.3    Domain truncation                    Nonlinear  73          67
3.3.3    Domain truncation                    Linear     98          57
3.3.3    Incorrect coupling                   Nonlinear  70          50
3.3.3    Incorrect coupling                   Linear     90          71

As can be seen from Fig. 2 and Table 4, nonlinear difference imaging shows better localization and recovery of the change compared to the conventional linear reconstruction for both optode arrangements. The nonlinear estimates of the change are not as spatially spread as the linear estimates. The effect is especially evident when only the upper part of the boundary is covered by sources and detectors. Since only the partial sensor coverage would be usually available in practical head imaging, the remaining 2-D simulations using the head domain were carried out using sources and detectors only at the dorsal part of the boundary.

3.3.2.

Using different region of interest constraints and regularizations

The purpose of this test case is to demonstrate that the improvement of the proposed approach over the conventional linearized reconstruction is not only due to (1) the ROI constraint and (2) the TV regularization for δx. The results are shown in Fig. 3. Figure 3(a) shows the nonlinear difference reconstruction, Eq. (24), without any ROI constraint, i.e., ΩROI = Ω. Figure 3(b) shows the nonlinear difference reconstruction, Eq. (24), using a quadratic smoothness promoting functional f(δxROI) = ‖LδxROI δxROI‖² for modeling δxROI. Figure 3(c) shows the conventional linear difference reconstruction, Eq. (25), and Fig. 3(d) shows the conventional linear difference reconstruction, Eq. (25), with the ROI constraint

δx̂ROI = arg min_{δxROI} {‖Lδe(δy − JMδxROI)‖² + fδxROI(δxROI)}.

Fig. 3

(a) Estimated optical coefficients using nonlinear difference imaging with no ROI constraint. (b) Estimated optical coefficients with nonlinear difference imaging using quadratic smoothness regularization for modeling δxROI. (c) Estimated change in optical coefficients using standard difference imaging. (d) Estimates obtained using the ROI constraint in standard difference imaging.

JBO_20_10_105001_f003.png

We can see that the results with nonlinear difference imaging [Figs. 3(a) and 3(b)] are quite similar to those obtained when using the ROI constraint and TV regularization [Fig. 2(c)]. Also, the linear difference imaging estimates do not improve much when the ROI constraint is added. This result shows that the improvement of the reconstruction is not only due to the choice of regularization or the use of the ROI constraint alone; it is in significant part due to the specific parametrization and formulation of the nonlinear difference imaging problem.

3.3.3.

Tolerance toward modeling errors

We tested with simulations the tolerance of the nonlinear and linear difference imaging toward modeling errors. Reconstructions in the presence of (a) domain truncation, (b) unknown optode coupling, and (c) unknown object shape were considered. The reconstructions are shown in Fig. 4. The errors in the reconstructions are listed in Table 4.

Fig. 4

Tolerance toward modeling errors. (a) and (b) Estimates of optical coefficients using the nonlinear difference imaging and the linear difference imaging in the presence of domain truncation. (c) and (d) Estimates in the presence of coupling errors. (e) and (f) Estimates in the presence of domain shape error.

JBO_20_10_105001_f004.png

Domain truncation: the first and second rows in Fig. 4 show estimates x1, δxROI and δx, Eqs. (24) and (25) in the presence of domain truncation, i.e., using a truncated domain as the model domain. The errors in the reconstructions are listed in Table 4.

Unknown optode coupling: the third and fourth rows in Fig. 4 show the estimates x1, δxROI, and δx, Eqs. (24) and (25), in the case of an inaccurately known optode coupling. The errors in the reconstructions are listed in Table 4. The coupling error was simulated as follows. Let ζ = (s, δ, d, η)ᵀ ∈ ℝ⁶⁴ represent the amplitude and phase coupling coefficients of the 16 sources and 16 detectors. Let us define a vector-valued mapping g(ζ) ∈ ℂ²⁵⁶ such that

gk(ζ) := ŝp d̂q = sp dq exp[j(δp + ηq)],  k = (q − 1)16 + p,
where p and q are the source and detector indexes, respectively. The coupling error ϵ(ζ) is given by27
ϵ(ζ) = (Re log[g(ζ)], Im log[g(ζ)])ᵀ.

The data were corrupted with the coupling error as yi,ϵ(ζ) = yi + ϵ(ζ), i = 1, 2. In this case, the source and detector amplitude and phase coefficients were drawn from uniform distributions sp, dq ∼ U(0.9, 1) and δp, ηq ∼ U(0, π/360). The data used in the estimate (24) were ỹ = (y1,ϵ(ζ), y2,ϵ(ζ))ᵀ, and for the estimate (25), δy = y2,ϵ(ζ) − y1,ϵ(ζ).
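The coupling-error simulation can be sketched as follows; the ordering of the source-detector pairs (p varying fastest, matching k = (q − 1)16 + p) is an assumption about the index convention:

```python
import numpy as np

rng = np.random.default_rng(1)
Ns, Nd = 16, 16

# per-optode amplitude and phase coupling coefficients, drawn as in the text
s = rng.uniform(0.9, 1.0, Ns)            # source amplitude couplings
d = rng.uniform(0.9, 1.0, Nd)            # detector amplitude couplings
delta = rng.uniform(0, np.pi / 360, Ns)  # source phase couplings
eta = rng.uniform(0, np.pi / 360, Nd)    # detector phase couplings

# g_k = s_p d_q exp[j(delta_p + eta_q)] for every source-detector pair;
# rows index q (detectors), columns index p (sources), so C-order ravel
# gives k = (q - 1) * 16 + p
g = (s[None, :] * d[:, None] * np.exp(1j * (delta[None, :] + eta[:, None]))).ravel()

# coupling error added to the stacked log-amplitude/phase data vector
eps = np.concatenate([np.log(g).real, np.log(g).imag])
```

The resulting ϵ(ζ) ∈ ℝ⁵¹² matches the dimension of the 2-D simulated data vectors (2NsNd = 512).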

Unknown object shape: the fifth and sixth rows in Fig. 4 show the estimates x1, δxROI, and δx, Eqs. (24) and (25), in the case of using an incorrect model domain; i.e., in this case, an incorrectly shaped domain (obtained from another segmented adult CT scan) was used as the model domain. The reconstruction errors for this case are not listed in Table 4, since the deformation map of the true optical properties from the measurement domain to the model domain is not known.

From the estimates shown in Fig. 4, we can observe that in most cases, the linear difference reconstruction, Eq. (25), is indicative of the location of the change, although the spatial resolution is weak. In the estimates obtained using the nonlinear difference imaging, Eq. (24), the reconstructed initial state x1 is heavily affected by the modeling errors; however, the estimates δx are relatively free from artifacts. The reason the modeling errors do not significantly affect the estimates δx lies in the parametrization of x2 as a linear combination of the initial state x1 and the change δx. The variable x1 is invariant between the models y1 = A(x1) + e1 and y2 = A(x1 + MδxROI) + e2. Consequently, when the proposed parametrization is used, the errors caused by the (invariant) modeling errors propagate mainly to the estimate of x1, which consists of the parameters that are common to the models of both observations y1 and y2.

3.4.

Reconstructions in Three Dimensions with Experimental Data

3.4.1.

Experimental details

The experiment was carried out with the frequency domain DOT instrument at Aalto University, Helsinki.58 The measurement domains were cylinders with a radius of 35 mm and a height of 110 mm (see Fig. 5). The cylindrical phantoms corresponding to states x1 and x2 are illustrated in Fig. 5. The background optical properties were approximately μa,bg = 0.01 mm⁻¹ and μs,bg′ = 1 mm⁻¹ at a wavelength of 800 nm for both phantoms. The cylindrical inclusions in x2, both of which have a diameter and height of 9.5 mm, are located such that the central planes of the inclusions coincide with the central xy-plane of the cylinder domain. The optical properties of inclusion 1 are approximately μa,inc.1 = 0.02 mm⁻¹, μs,inc.1′ = 1 mm⁻¹ (i.e., purely absorption contrast), and the properties of inclusion 2 are μa,inc.2 = 0.01 mm⁻¹, μs,inc.2′ = 2 mm⁻¹ (i.e., purely scattering contrast). Absolute imaging reconstructions using the phantom with inclusions are presented in Refs. 53, 58, and 59.

Fig. 5

(a) Photograph of the diffuse optical tomography experiment using one of the phantoms. (b) Cylindrical phantoms x1 and x2 with the positions of sources (red circles) and detectors (blue crosses). x1 is a homogeneous reference phantom with optical coefficients approximately μa,bg = 0.01 mm^-1 and μs,bg = 1 mm^-1 at wavelength 800 nm. x2 has the same background optical coefficients as x1 and two additional inclusions with optical coefficients approximately μa,inc.1 = 0.02 mm^-1, μs,inc.1 = 1 mm^-1 (i.e., purely absorption contrast) for inclusion 1 and μa,inc.2 = 0.01 mm^-1, μs,inc.2 = 2 mm^-1 (i.e., purely scatter contrast) for inclusion 2. The gray patches on the cylinders show the ROI.

JBO_20_10_105001_f005.png

The source and detector configuration in the experiment consisted of 16 sources and 15 detectors arranged in interleaved order on two rings located 6 mm above and below the central xy-plane of the cylinder domain. The locations of the sources and detectors are shown in red and blue, respectively, in Fig. 5. The measurements were carried out at 785 nm with an optical power of 8 mW and modulation frequency ω/(2π) = 100 MHz. The log amplitude and phase shift of the transmitted light were recorded, and the nearest measurement data from each source position were removed, leading to real-valued measurement vectors yi ∈ R^360, i = 1, 2. We employed an error model

ei ∼ N(0, Γei), i = 1, 2,
where the square roots of the diagonal elements (standard deviations) of the error covariances Γei were specified as 1% of the absolute values of yi, i = 1, 2, implying that the standard deviations of the measurement errors are assumed to be 1% of the measured absolute values of the log amplitude and phase.
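As a minimal sketch, this noise model can be assembled as follows (the function and variable names, and the toy data vector, are our own illustration; only the 1% relative standard deviation comes from the text):

```python
import numpy as np

def error_model(y, rel_std=0.01):
    """Diagonal noise covariance with std = 1% of the measured absolute values,
    and the corresponding whitening matrix L_e = Gamma_e^(-1/2)."""
    std = rel_std * np.abs(y)
    Gamma_e = np.diag(std**2)       # error covariance
    L_e = np.diag(1.0 / std)        # Cholesky-type weighting used in the data fit
    return Gamma_e, L_e

# toy measurement vector (log amplitude and phase entries)
y1 = np.array([2.3, -1.7, 0.9, 3.1])
Gamma_e1, L_e1 = error_model(y1)
```

Multiplying the residuals by L_e whitens the data, so that a plain Euclidean norm of the weighted residual corresponds to the assumed Gaussian noise statistics.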

3.4.2.

Data calibration and initialization of the estimation

Raw instrument data cannot be used as a direct equivalent of data simulated from a model. The measured phase and amplitude were calibrated using the procedure described in Ref. 58, accounting for differences in the lengths and coupling efficiencies of the different source and detector channels, as well as for the effects of detector gain adjustment during the data collection. Finally, to calibrate the forward model to the measurement setup, the initial estimates for the optical properties of the model were assumed to be homogeneous, and the measured data were used to fit global values of absorption and scattering in the model as well as global coupling factors for phase and amplitude between model and measurement. The coupling is intrinsically unknown a priori; therefore, the measured data are corrected by the coupling coefficients obtained from this optimization process. In a sense, the instrument data are calibrated to match the absolute values of the simulations from the forward model. However, to avoid the unrealistic calibration setup of having access to data from the homogeneous target, the four-parameter calibration was carried out using the data y2, corresponding to the nonhomogeneous state after the change. Following the initial estimation procedure in Ref. 59, the coupling between the modeled and measured log amplitude and phase was modeled by additive constants (the log of the source strength and a phase shift). In other words, we assumed that the coupling factors are constant for all source and detector fibers. Thus, the initialization step consisted of a four-parameter fit of global background parameters μa,0 and μs,0 as well as a global additive shift η of the log amplitude data and a global additive shift φ of the phase data

{μa,0, μs,0, η, φ} = arg min_{μa,0, μs,0, η, φ} { ‖Le2[(y2 + Δy) − A(μa,0, μs,0)]‖² },
where Δy = (η, φ)^T. The initialization problem was solved by a GN optimization method with an explicit line search.53 Once the initialization was completed, the measurement data were transformed for the nonlinear estimation Eq. (24) by the recovered global offsets as ỹ = (y1 + Δy, y2 + Δy)^T, and the initial parameter values for the nonlinear estimation were set to the estimated values μa,0 and μs,0.
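The four-parameter initialization fit can be sketched with a small Gauss-Newton iteration on a toy surrogate forward model (everything below is a hypothetical stand-in: the function `forward`, its coefficients, and the synthetic data are our own, the real A(μa,0, μs,0) is the DOT forward model, and the noise weighting and line search are omitted for brevity):

```python
import numpy as np

g = np.linspace(1.0, 2.0, 5)   # toy source-detector geometry factors

def forward(mu_a, mu_s):
    # Hypothetical surrogate returning stacked [log amplitude; phase] channels
    log_amp = -(10.0 * mu_a + 0.5 * mu_s) * g
    phase = (2.0 * mu_a + 0.3 * mu_s) * g
    return np.concatenate([log_amp, phase])

def residual(p, y_meas):
    mu_a, mu_s, eta, phi = p
    dy = np.concatenate([np.full(g.size, eta), np.full(g.size, phi)])
    return (y_meas + dy) - forward(mu_a, mu_s)   # (y2 + Delta_y) - A(mu_a, mu_s)

def gauss_newton(p, y_meas, n_iter=20, h=1e-7):
    for _ in range(n_iter):
        r = residual(p, y_meas)
        # finite-difference Jacobian of the residual w.r.t. the four parameters
        J = np.column_stack(
            [(residual(p + h * e, y_meas) - r) / h for e in np.eye(4)])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
    return p

# Synthetic data: true (mu_a, mu_s) = (0.01, 1.0), offsets (eta, phi) = (0.3, -0.1)
y_meas = forward(0.01, 1.0) - np.concatenate(
    [np.full(g.size, 0.3), np.full(g.size, -0.1)])
p0 = np.array([0.02, 0.5, 0.0, 0.0])
mu_a0, mu_s0, eta, phi = gauss_newton(p0, y_meas)
```

The fit simultaneously recovers the global background parameters and the additive offsets; the data are then corrected by the recovered offsets before the nonlinear estimation, exactly as described above.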

3.4.3.

Results

Figure 6 shows 2-D slices of the 3-D reconstructions obtained using nonlinear difference imaging Eq. (24). The ROI in this case was selected as a central part of the cylinder of height 22 mm (see Fig. 5). Figures 6(a) and 6(b) are the absorption images and Figs. 6(c) and 6(d) are the scattering images. The computation time of the nonlinear estimate was tCPU = 3247.68 s. Figure 7 shows the corresponding 3-D reconstructions with linear difference imaging Eq. (25). The computation time of the linear estimate was tCPU = 87.95 s.

Fig. 6

Estimates using nonlinear difference imaging: (a), (b) μa and (c), (d) μs. (a) and (c) Horizontal slices of the phantom along the z-axis at z = 22, 55, 99 mm. (b) Vertical slices along the x-axis at x = 14, 35, 63 mm. (d) Vertical slices along the y-axis at y = 14, 35, 63 mm. The blank (white) areas are regions outside the ROI.

JBO_20_10_105001_f006.png

Fig. 7

Estimates using linear difference imaging: (a), (b) μa and (c), (d) μs. (a) and (c) Horizontal slices of the phantom (height 110 mm, diameter 70 mm) along the z-axis at z = 22, 55, 99 mm. (b) Vertical slices along the x-axis at x = 14, 35, 63 mm. (d) Vertical slices along the y-axis at y = 14, 35, 63 mm.

JBO_20_10_105001_f007.png

From Figs. 6 and 7, one can see that the proposed approach of nonlinear difference imaging shows better recovery of the inclusions compared to the conventional linear difference imaging. Also, there is no "ringing" effect in the nonlinear reconstruction, whereas severe ringing artifacts are present in the linear reconstruction. Such artifacts could result in a masking effect when more target inclusions are present. These results are in agreement with the 2-D simulation results.

In the 3-D case, the CPU time needed for the reconstruction with the nonlinear approach was approximately 36 times the time needed for the linear reconstruction. This can potentially be problematic when processing long time series of data, especially if the reconstructions are needed online. However, in many cases the requirement of better reconstruction accuracy may outweigh the disadvantage of longer CPU time, especially if it is sufficient to obtain the results off-line. It should also be noted that for both the linear and the nonlinear approach, the reported CPU times are based on a nonoptimized MATLAB® implementation using the TOAST toolbox. Where needed, the computations can be made faster by optimizing the implementation.

4.

Conclusions

We applied a new approach to difference imaging in DOT. In the approach, the optical coefficients after a change are parameterized as a linear combination of the (unknown) initial state and the change in optical properties. The DOT measurements taken before and after the change are concatenated into a single measurement vector, and the inverse problem is stated as finding the initial optical coefficients and the change given the combined data. This model allows the use of separate spatial models for the initial state and the change in optical coefficients. Furthermore, it allows the use of an ROI constraint for the change in optical coefficients in a straightforward way. The approach was tested with 2-D simulations using different optode placement settings and in the presence of modeling errors arising from domain truncation, unknown optode coupling, and unknown domain shape. The conventional linearized difference imaging was used as a reference approach. The approach was also tested with experimental phantom data. The results show that the proposed approach produces better reconstructions than standard linear difference imaging and that it is robust with respect to modeling errors to at least a similar extent as the conventional linear reconstruction approach. We believe that the proposed approach can improve the accuracy of difference imaging compared to the linearization-based approach, which is the current standard in difference imaging.

Acknowledgments

This work was supported by the Academy of Finland (projects 136220, 272803, 269282, and 250215 Finnish Centre of Excellence in Inverse Problems Research) and Finnish Doctoral Programme in Computational Sciences. The authors would also like to acknowledge Dong Liu for useful discussions.

References

1. J. P. Culver et al., "Volumetric diffuse optical tomography of brain activity," Opt. Lett. 28(21), 2061–2063 (2003). http://dx.doi.org/10.1364/OL.28.002061
2. J. C. Hebden et al., "Three-dimensional optical tomography of the premature infant brain," Phys. Med. Biol. 47(23), 4155–4166 (2002). http://dx.doi.org/10.1088/0031-9155/47/23/303
3. T. Näsi et al., "Effect of task-related extracerebral circulation on diffuse optical tomography: experimental data and simulations on the forehead," Biomed. Opt. Express 4(3), 412–426 (2013). http://dx.doi.org/10.1364/BOE.4.000412
4. B. J. Tromberg et al., "Assessing the future of diffuse optical imaging technologies for breast cancer management," Med. Phys. 35(6), 2443 (2008). http://dx.doi.org/10.1118/1.2919078
5. D. Leff et al., "Diffuse optical imaging of the healthy and diseased breast: a systematic review," Breast Cancer Res. Treat. 108(1), 9–22 (2008). http://dx.doi.org/10.1007/s10549-007-9582-z
6. G. Gulsen et al., "Combined diffuse optical tomography (DOT) and MRI system for cancer imaging in small animals," Technol. Cancer Res. Treat. 5(4), 351–363 (2006). http://dx.doi.org/10.1177/153303460600500407
7. J. P. Culver et al., "Diffuse optical tomography of cerebral blood flow, oxygenation and metabolism in rat during focal ischemia," J. Cereb. Blood Flow Metab. 23(8), 911–924 (2003). http://dx.doi.org/10.1097/01.WCB.0000076703.71231.BB
8. S. R. Hintz et al., "Bedside functional imaging of the premature infant brain during passive motor activation," J. Perinat. Med. 29(4), 335–343 (2001). http://dx.doi.org/10.1515/JPM.2001.048
9. A. Gibson et al., "Three-dimensional whole-head optical tomography of passive motor evoked responses in the neonate," NeuroImage 30(2), 521–528 (2006). http://dx.doi.org/10.1016/j.neuroimage.2005.08.059
10. J. C. Hebden et al., "Imaging changes in blood volume and oxygenation in the newborn infant brain using three-dimensional optical tomography," Phys. Med. Biol. 49(7), 1117 (2004). http://dx.doi.org/10.1088/0031-9155/49/7/003
11. T. Zhang et al., "Pre-seizure state identified by diffuse optical tomography," Sci. Rep. 4, 3798 (2014).
12. A. T. Eggebrecht et al., "A quantitative spatial comparison of high-density diffuse optical tomography and fMRI cortical mapping," NeuroImage 61(4), 1120–1128 (2012). http://dx.doi.org/10.1016/j.neuroimage.2012.01.124
13. Y. Zhan et al., "Image quality analysis of high-density diffuse optical tomography incorporating a subject-specific head model," Front. Neuroenerg. 4(6) (2012). http://dx.doi.org/10.3389/fnene.2012.00006
14. E. M. C. Hillman et al., "Calibration techniques and datatype extraction for time-resolved optical tomography," Rev. Sci. Instrum. 71(9), 3415 (2000). http://dx.doi.org/10.1063/1.1287748
15. C. H. Schmitz et al., "Instrumentation and calibration protocol for imaging dynamic features in dense-scattering media by optical tomography," Appl. Opt. 39(34), 6466–6486 (2000). http://dx.doi.org/10.1364/AO.39.006466
16. D. Boas, T. Gaudette and S. Arridge, "Simultaneous imaging and optode calibration with diffuse optical tomography," Opt. Express 8(5), 263–270 (2001). http://dx.doi.org/10.1364/OE.8.000263
17. M. Schweiger et al., "Image reconstruction in optical tomography in the presence of coupling errors," Appl. Opt. 46(14), 2743–2756 (2007). http://dx.doi.org/10.1364/AO.46.002743
18. X. Wu et al., "Quantitative evaluation of atlas-based high-density diffuse optical tomography for imaging of the human visual cortex," Biomed. Opt. Express 5(11), 3882–3900 (2014). http://dx.doi.org/10.1364/BOE.5.003882
19. A. Gibson et al., "Optical tomography of a realistic neonatal head phantom," Appl. Opt. 42(16), 3109–3116 (2003). http://dx.doi.org/10.1364/AO.42.003109
20. A. H. Barnett et al., "Robust inference of baseline optical properties of the human head with three-dimensional segmentation from magnetic resonance imaging," Appl. Opt. 42(16), 3095–3108 (2003). http://dx.doi.org/10.1364/AO.42.003095
21. B. W. Pogue and K. D. Paulsen, "High-resolution near-infrared tomographic imaging simulations of the rat cranium by use of a priori magnetic resonance imaging structural information," Opt. Lett. 23(21), 1716–1718 (1998). http://dx.doi.org/10.1364/OL.23.001716
22. J. Kaipio and E. Somersalo, Statistical and Computational Inverse Problems, Springer, New York (2005).
23. S. R. Arridge et al., "Approximation errors and model reduction with an application in optical diffusion tomography," Inverse Probl. 22(1), 175–195 (2006). http://dx.doi.org/10.1088/0266-5611/22/1/010
24. T. Tarvainen et al., "An approximation error approach for compensating for modelling errors between the radiative transfer equation and the diffusion approximation in diffuse optical tomography," Inverse Probl. 26(1), 015005 (2010). http://dx.doi.org/10.1088/0266-5611/26/1/015005
25. T. Tarvainen et al., "Corrections to linear methods for diffuse optical tomography using approximation error modelling," Biomed. Opt. Express 1(1), 209–222 (2010). http://dx.doi.org/10.1364/BOE.1.000209
26. V. Kolehmainen et al., "Marginalization of uninteresting distributed parameters in inverse problems–application to diffuse optical tomography," Int. J. Uncertainty Quantif. 1(1), 1–17 (2011). http://dx.doi.org/10.1615/Int.J.UncertaintyQuantification.v1.i1.10
27. M. Mozumder et al., "Compensation of optode sensitivity and position errors in diffuse optical tomography using the approximation error approach," Biomed. Opt. Express 4(10), 2015–2031 (2013). http://dx.doi.org/10.1364/BOE.4.002015
28. M. Mozumder et al., "Compensation of modeling errors due to unknown domain boundary in diffuse optical tomography," J. Opt. Soc. Am. A 31(8), 1847–1855 (2014). http://dx.doi.org/10.1364/JOSAA.31.001847
29. Y. Pei, F. B. Lin and R. Barbour, "Modeling of sensitivity and resolution to an included object in homogeneous scattering media and in MRI-derived breast maps," Opt. Express 5(10), 203–219 (1999). http://dx.doi.org/10.1364/OE.5.000203
30. A. Bluestone et al., "Three-dimensional optical tomography of hemodynamics in the human head," Opt. Express 9(6), 272–286 (2001). http://dx.doi.org/10.1364/OE.9.000272
31. Y. Pei, H. L. Graber and R. L. Barbour, "Influence of systematic errors in reference states on image quality and on stability of derived information for DC optical imaging," Appl. Opt. 40(31), 5755–5769 (2001). http://dx.doi.org/10.1364/AO.40.005755
32. T. Austin et al., "Three dimensional optical imaging of blood volume and oxygenation in the neonatal brain," NeuroImage 31(4), 1426–1433 (2006). http://dx.doi.org/10.1016/j.neuroimage.2006.02.038
33. R. L. Barbour et al., "Optical tomographic imaging of dynamic features of dense-scattering media," J. Opt. Soc. Am. A 18(12), 3018–3036 (2001). http://dx.doi.org/10.1364/JOSAA.18.003018
34. Y. Xu, H. L. Graber and R. L. Barbour, "Image correction algorithm for functional three-dimensional diffuse optical tomography brain imaging," Appl. Opt. 46(10), 1693–1704 (2007). http://dx.doi.org/10.1364/AO.46.001693
35. R. L. Barbour et al., "Strategies for imaging diffusing media," Transp. Theory Stat. Phys. 33(3–4), 361–371 (2004). http://dx.doi.org/10.1081/TT-200051950
36. H. L. Graber et al., "Spatial deconvolution technique to improve the accuracy of reconstructed three-dimensional diffuse optical tomographic images," Appl. Opt. 44(6), 941–953 (2005). http://dx.doi.org/10.1364/AO.44.000941
37. Y. Xu et al., "Improved accuracy of reconstructed diffuse optical tomographic images by means of spatial deconvolution: two-dimensional quantitative characterization," Appl. Opt. 44(11), 2115–2139 (2005). http://dx.doi.org/10.1364/AO.44.002115
38. Y. Xu et al., "Image quality improvement via spatial deconvolution in optical tomography: time-series imaging," J. Biomed. Opt. 10(5), 051701 (2005). http://dx.doi.org/10.1117/1.2103747
39. H. L. Graber, Y. Xu and R. L. Barbour, "Image correction scheme applied to functional diffuse optical tomography scattering images," Appl. Opt. 46(10), 1705–1716 (2007). http://dx.doi.org/10.1364/AO.46.001705
40. J. Heiskala, P. Hiltunen and I. Nissilä, "Significance of background optical properties, time-resolved information and optode arrangement in diffuse optical imaging of term neonates," Phys. Med. Biol. 54(3), 535 (2009). http://dx.doi.org/10.1088/0031-9155/54/3/005
41. D. Liu et al., "A nonlinear approach to difference imaging in EIT; assessment of the robustness in the presence of modelling errors," Inverse Probl. 31(3), 035012 (2015). http://dx.doi.org/10.1088/0266-5611/31/3/035012
42. D. Liu et al., "Estimation of conductivity changes in a region of interest with electrical impedance tomography," Inverse Probl. Imaging 9(1), 211–229 (2015). http://dx.doi.org/10.3934/ipi.2015.9.211
43. A. Ishimaru, Wave Propagation and Scattering in Random Media, Academic, New York (1978).
44. S. Arridge, "Optical tomography in medical imaging," Inverse Probl. 15(2), R41–R93 (1999). http://dx.doi.org/10.1088/0266-5611/15/2/022
45. A. Douiri et al., "Anisotropic diffusion regularization methods for diffuse optical tomography using edge prior information," Meas. Sci. Technol. 18(1), 87 (2007). http://dx.doi.org/10.1088/0957-0233/18/1/011
46. P. K. Yalavarthy et al., "Structural information within regularization matrices improves near infrared diffuse optical tomography," Opt. Express 15(13), 8043–8058 (2007). http://dx.doi.org/10.1364/OE.15.008043
47. K. D. Paulsen and H. Jiang, "Enhanced frequency-domain optical image reconstruction in tissues through total-variation minimization," Appl. Opt. 35(19), 3447–3458 (1996). http://dx.doi.org/10.1364/AO.35.003447
48. V. Kolehmainen, "Novel approaches to image reconstruction in diffusion tomography," University of Kuopio, Kuopio, Finland (2001).
49. N. Cao, A. Nehorai and M. Jacobs, "Image reconstruction for diffuse optical tomography using sparsity regularization and expectation-maximization algorithm," Opt. Express 15(21), 13695–13708 (2007). http://dx.doi.org/10.1364/OE.15.013695
50. J. F. P.-J. Abascal et al., "Fluorescence diffuse optical tomography using the split Bregman method," Med. Phys. 38(11), 6275–6284 (2011). http://dx.doi.org/10.1118/1.3656063
51. J. C. Ye et al., "Optical diffusion tomography by iterative-coordinate-descent optimization in a Bayesian framework," J. Opt. Soc. Am. A 16(10), 2400–2412 (1999). http://dx.doi.org/10.1364/JOSAA.16.002400
52. D. A. Boas, A. M. Dale and M. A. Franceschini, "Diffuse optical imaging of brain activation: approaches to optimizing image sensitivity, resolution, and accuracy," NeuroImage 23(Suppl 1), S275–S288 (2004). http://dx.doi.org/10.1016/j.neuroimage.2004.07.011
53. M. Schweiger, S. R. Arridge and I. Nissilä, "Gauss-Newton method for image reconstruction in diffuse optical tomography," Phys. Med. Biol. 50(10), 2365–2386 (2005). http://dx.doi.org/10.1088/0031-9155/50/10/013
54. J. Heiskala et al., "Approximation error method can reduce artifacts due to scalp blood flow in optical brain activation imaging," J. Biomed. Opt. 17(9), 096012 (2012). http://dx.doi.org/10.1117/1.JBO.17.9.096012
55. L. I. Rudin, S. Osher and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Phys. D Nonlinear Phenom. 60(1–4), 259–268 (1992). http://dx.doi.org/10.1016/0167-2789(92)90242-F
56. Y. Mamatjan et al., "An experimental clinical evaluation of EIT imaging with l-1 data and image norms," Physiol. Meas. 34(9), 1027 (2013). http://dx.doi.org/10.1088/0967-3334/34/9/1027
57. A. S. K. Karhunen et al., "Electrical resistance tomography for assessment of cracks in concrete," ACI Mater. J. 107(5), 523–531 (2010).
58. I. Nissilä et al., "Instrumentation and calibration methods for the multichannel measurement of phase and amplitude in optical tomography," Rev. Sci. Instrum. 76(4), 044302 (2005). http://dx.doi.org/10.1063/1.1884193
59. V. Kolehmainen et al., "Approximation errors and model reduction in three-dimensional diffuse optical tomography," J. Opt. Soc. Am. A 26(10), 2257–2268 (2009). http://dx.doi.org/10.1364/JOSAA.26.002257

Biography

Meghdoot Mozumder is a doctoral student at the University of Eastern Finland. He received his MSc degree in physics from the Indian Institute of Technology, Kanpur, in 2011. His current research interests include optical imaging and inverse problems.

Tanja Tarvainen is an academy research fellow at the University of Eastern Finland. She received her PhD in 2006 from the University of Kuopio. She is the author of more than 30 journal papers. Her current research interests include diffuse optical tomography, quantitative photoacoustic tomography, and Bayesian inverse problems.

Aku Seppänen is an academy research fellow at the University of Eastern Finland. He received his PhD degree in 2006 from the University of Kuopio. He has authored 27 journal papers and two book chapters. His research focuses on inverse problems, with applications to imaging, nondestructive testing, and remote sensing.

Ilkka Nissilä is an academy research fellow at Aalto University. He received his DSc degree from Helsinki University of Technology in 2004. His main research interests include the development of diffuse optical tomography in combination with other modalities for the neuroimaging of children.

Simon R. Arridge is a professor of image processing at the Department of Computer Science and the director for the Centre for Inverse Problems at University College London. He received his PhD in 1990 from University College London. He is an author on more than 150 journal papers. His research interests include inverse problems methods, image reconstruction, regularization, modeling, and numerical methods.

Ville Kolehmainen is an associate professor at the University of Eastern Finland. He received his PhD in 2001 from the University of Kuopio. He is the author of more than 50 journal papers and has written four book chapters. His current research interests include computational inverse problems, especially in tomography imaging.

© 2015 Society of Photo-Optical Instrumentation Engineers (SPIE)
Meghdoot Mozumder, Tanja Tarvainen, Aku Seppänen, Ilkka Nissilä, Simon R. Arridge, and Ville Kolehmainen "Nonlinear approach to difference imaging in diffuse optical tomography," Journal of Biomedical Optics 20(10), 105001 (6 October 2015). https://doi.org/10.1117/1.JBO.20.10.105001