Accelerated image reconstruction in fluorescence molecular tomography using a nonuniform updating scheme with momentum and ordered subsets methods
Dianwen Zhu, Changqing Li
Abstract
Fluorescence molecular tomography (FMT) is a significant preclinical imaging modality that has been actively studied in the past two decades. It remains a challenging task to obtain fast and accurate reconstruction of the fluorescent probe distribution in small animals due to the large computational burden and the ill-posed nature of the inverse problem. We have recently studied a nonuniform multiplicative updating algorithm combined with the ordered subsets (OS) method for fast convergence. However, increasing the number of OS leads to greater approximation errors, and the speed gain from a larger number of OS is limited. We propose to further enhance the convergence speed by incorporating a first-order momentum method that uses previous iterations to achieve the optimal convergence rate. Using numerical simulations and a cubic phantom experiment, we have systematically compared the effects of the momentum technique, the OS method, and the nonuniform updating scheme in accelerating the FMT reconstruction. We found that the proposed combined method can produce a high-quality image using an order of magnitude less time.

1. Introduction

Fluorescence molecular tomography (FMT) has been an important tool for preclinical imaging in the past two decades and has attracted considerable research interest.1,2 In FMT, fluorescent agents (e.g., fluorophores) are injected into the object, such as a small animal. Upon illumination by near-infrared laser beams on the object surface, the fluorophores become excited and start emitting fluorescence photons, which propagate out of the object surface and are captured by detectors such as charge-coupled device (CCD) cameras. Due to the high-scattering effects of photons in tissues, the FMT system matrix is usually ill-conditioned and the reconstruction problem ill-posed. Immense efforts have been devoted to experimentally alleviating the ill-posedness, such as collecting more measurement data by using dense noncontact CCD cameras, employing multispectral wavelengths for both excitation and emission,3–5 and using different illumination patterns.6–8 These indeed improved the image resolution. However, they led to the processing of large amounts of three-dimensional (3-D) data, which requires tremendous computational time. From the theoretical aspect, a plethora of nonlinear regularization methods has also been applied,9–17 which again have caused additional complexity in computations. The non-negativity constraint in FMT reconstruction is another challenge. This constraint has typically been handled by a backtracking line search strategy,14 which is time consuming for large-scale measurement data.

To reduce the computational burden, many ideas from other fields have been introduced to FMT. Fourier transform,18 wavelet transform methods,19,20 discrete cosine transform,21 and principal component analysis methods22 have been adopted to compress the data and reduce its dimension. The ordered subsets (OS) method,23 originally introduced to speed up emission/transmission tomography by breaking down the large system matrix into smaller blocks, has recently been successfully adopted into FMT by employing the majorization-minimization (MM) framework.14,17,24,25 In contrast with other methods that need a slow line search to handle the non-negativity constraint, the MM framework has the advantage of reducing the optimization into separable one-dimensional problems, so the non-negativity constraint can be enforced straightforwardly in a parallel way.26 In particular, the nonuniform multiplicative MM algorithm with OS acceleration (NUMOS) that we proposed recently has been found to significantly improve the FMT image reconstruction speed and quality.25 Nevertheless, the MM algorithm is a first-order method and typically requires many iterations to converge.

Another direction in reducing the computational cost is to improve the convergence rate of the FMT reconstruction algorithms. Compared with first-order methods, second-order methods such as Newton-type algorithms have faster convergence; however, the computation of the Hessian is very demanding. Recently, a class of momentum acceleration techniques from Nesterov has been found to achieve the optimal convergence rate for a class of gradient descent-based methods whose computational complexities remain at the linear level.27,28 In FMT, Han et al.11 recently applied the fast iterative shrinkage thresholding algorithm (FISTA),29 which is based on the 1983 version of Nesterov’s technique.27 Very recently, momentum methods have been combined with OS techniques for even faster convergence in computed tomography (CT).30

In this paper, we propose to include the 2005 version of Nesterov’s momentum method28 in our NUMOS algorithm for faster and more stable convergence, since the 1983 version is not as stable.30 The concept and some preliminary results of this paper have appeared earlier in a conference paper.31 Note that our work differs from the aforementioned work in CT in two ways: first, the regularization they used is an edge-preserving Huber-type function that promotes local smoothness, whereas our model for FMT uses the L1 norm to promote sparsity; second, their surrogate functions are either a spatially uniform type or a nonuniform type that depends on a good estimate of the true solution, which is readily available in CT through the filtered back-projection reconstruction,30 but not in FMT.

The rest of the paper is organized as follows. In Sec. 2, we first introduce the background of FMT forward modeling, regularized reconstruction, the MM framework, the OS technique, and Nesterov’s momentum method. Then we propose our fast NUMOS (fNUMOS) algorithm. In Sec. 3, we present both numerical simulation and phantom experiment results and a comparison of fNUMOS with other methods. In Sec. 4, we conclude the paper with a summary and some discussion.

2. Methods

2.1. Forward Modeling and Regularized Fluorescence Molecular Tomography Reconstruction

For FMT in the continuous-wave domain, photon transfer is modeled by the following coupled diffusion equations, along with Robin-type (mixed) boundary conditions:

Eq. (1)

$$\begin{cases}
-\nabla\cdot[D_{ex}(r)\nabla\Phi_{ex}(r)] + \mu_{a,ex}(r)\Phi_{ex}(r) = \Delta_s(r)\\
\mathbf{n}\cdot[D_{ex}(r)\nabla\Phi_{ex}(r)] + \alpha_{ex}\Phi_{ex}(r) = 0\\
-\nabla\cdot[D_{em}(r)\nabla\Phi_{em}(r)] + \mu_{a,em}(r)\Phi_{em}(r) = \Phi_{ex}(r)\,x(r)\\
\mathbf{n}\cdot[D_{em}(r)\nabla\Phi_{em}(r)] + \alpha_{em}\Phi_{em}(r) = 0,
\end{cases}$$
where $\nabla$ denotes the gradient operator, $D_{ex}(r)=\{3[\mu'_{s,ex}(r)+\mu_{a,ex}(r)]\}^{-1}$ and $D_{em}(r)=\{3[\mu'_{s,em}(r)+\mu_{a,em}(r)]\}^{-1}$, with $\mu_{a,ex}(r)$ and $\mu_{a,em}(r)$ being the absorption coefficients and $\mu'_{s,ex}(r)$ and $\mu'_{s,em}(r)$ being the reduced scattering coefficients at the excitation and emission wavelengths, $\Phi_{ex}(r)$ and $\Phi_{em}(r)$ are the photon densities, $r$ is the location vector, $\Delta_s$ is determined by the $s$'th illumination pattern [for example, point sources $\delta_s(r-r_s)$], $x(r)$ is the product of the unknown fluorescence dye concentration and the quantum yield at each node to be reconstructed, $\mathbf{n}$ is the outward unit normal vector of the boundary, and $\alpha_{ex}$ and $\alpha_{em}$ are the Robin boundary coefficients.

The above equations can be discretized by the finite element method (FEM), leading to linearized equations:4,32

Eq. (2)

$$[K_{ex}][\Phi_{ex}] = [\delta_s(r-r_s)], \qquad [K_{em}][\Phi_{em}] = [\Phi_{ex}][x],$$
where Kex and Kem are the stiffness matrices. Then we obtain

Eq. (3)

$$[\Phi_{em}] = [K_{em}^{-1}][K_{ex}^{-1}][x],$$
which, upon the removal of the equations corresponding to the unmeasurable nodes, becomes

Eq. (4)

$$Ax = b,$$
where $A=(a_{ij})\in\mathbb{R}^{N_m\times N_n}$ with $a_{ij}>0$ is the system matrix, obtained by taking the tensor product of the sensitivity matrix $[K_{em}^{-1}]$ and the excitation matrix $[K_{ex}^{-1}]$, $x=(x_j)\in\mathbb{R}^{N_n\times 1}$ is the fluorophore distribution to be reconstructed, $b=(b_i)\in\mathbb{R}^{N_m\times 1}$ is the measurement vector, and $N_m$ and $N_n$ are the total numbers of measurements and FEM mesh nodes, respectively.

A typical solution of Eq. (4) is obtained by minimizing the following regularized squared data-measurement misfit under the non-negativity constraint:

Eq. (5)

$$\hat{x} = \underset{x,\,x\geq 0}{\arg\min}\;\Psi(x) := \frac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1,$$
where $\lambda$ is the L1 regularization parameter and $\|x\|_1 = \sum_{j=1}^{N_n} x_j$ represents the L1 regularization when $x\geq 0$. Notice that there are other popular choices of the regularization function $R(x)$, including the $L_p$ (semi-) norm: $R(x)=\|x\|_p^p$, $p\geq 0$.
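As a concrete illustration, the following minimal MATLAB sketch sets up a toy instance of Eq. (5) and evaluates its objective; the problem sizes, noise level, and regularization value are hypothetical placeholders and are not taken from the paper.

```matlab
% Toy instance of the L1-regularized, non-negative FMT problem of Eq. (5).
% All sizes and values below are illustrative placeholders.
rng(0);
Nm = 200; Nn = 500;                        % numbers of measurements and mesh nodes
A  = rand(Nm, Nn);                         % stand-in for the positive system matrix
x_true = zeros(Nn, 1); x_true(40:45) = 1;  % sparse "true" fluorophore distribution
b  = A * x_true + 0.01 * randn(Nm, 1);     % noisy synthetic measurements
lambda = 1e-3;                             % illustrative regularization parameter

% For x >= 0 the L1 penalty reduces to sum(x), as noted in the text.
Psi = @(x) 0.5 * norm(A * x - b)^2 + lambda * sum(x);
x0  = rand(Nn, 1);                         % random non-negative starting point
fprintf('Objective at the random start: %.4e\n', Psi(x0));
```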

2.2. Majorization-Minimization Algorithm

The MM algorithm, also known as the optimization transfer algorithm, is a general framework for solving minimization problems in which an approximation of the objective function, often referred to as the majorization or surrogate function, is minimized at every step. The approximate solution obtained with the MM algorithm converges to the true solution of Eq. (5) because the problem is convex.33 The MM algorithm has known advantages in optimization problems, including avoiding matrix inversions, linearizing an optimization problem, dealing gracefully with inequalities, and so on.33 Many powerful algorithms can be understood from the MM point of view, especially gradient-based methods such as the iterative shrinkage thresholding algorithm,34 the iteratively reweighted L1 algorithm,35 and iteratively reweighted least squares.36

The definition of a surrogate function $\Psi_{sur}(x)$ for the minimization problem includes the following three requirements:

Eq. (6)

$$\begin{cases}
\Psi_{sur}(x) \geq \Psi(x), & \text{for all } x;\\
\Psi_{sur}(x^n) = \Psi(x^n), & \text{at some point } x^n;\\
\nabla\Psi_{sur}(x^n) = \nabla\Psi(x^n), & \text{at point } x^n.
\end{cases}$$
It is generally believed that in the MM framework, there is no optimal way of choosing the surrogate functions.25 In the popular separable quadratic surrogates (SQS) algorithm,37,38 the surrogate function $\Psi_k(x;x^k)$ was chosen based on a Jensen inequality for least squares, which was first proposed by de Pierro.39 Adding in the L1 regularization, we have

Eq. (7)

$$\begin{aligned}
\Psi(x) &= \frac{1}{2}\|b-Ax\|_2^2 + \lambda\|x\|_1\\
&\leq \Psi_k(x;x^k) := \frac{1}{2}\sum_{i=1}^{N_m}\sum_{j=1}^{N_n}\beta_{ij}\left[b_i-(Ax^k)_i-\frac{a_{ij}}{\beta_{ij}}(x_j-x_j^k)\right]^2+\lambda\sum_{j=1}^{N_n}x_j\\
&= \Psi(x^k)-\sum_{j=1}^{N_n}\left\{\sum_{i=1}^{N_m}\left[b_i-(Ax^k)_i\right]a_{ij}-\lambda\right\}(x_j-x_j^k)+\frac{1}{2}\sum_{j=1}^{N_n}\left(\sum_{i=1}^{N_m}\frac{a_{ij}^2}{\beta_{ij}}\right)(x_j-x_j^k)^2\\
&= \Psi(x^k)+\nabla\Psi(x^k)\cdot(x-x^k)+\frac{1}{2}\|x-x^k\|_{D^k}^2,
\end{aligned}$$
where $\beta_{ij}>0$ with $\sum_{j=1}^{N_n}\beta_{ij}=1$, and $D^k=\mathrm{diag}(d_j^k)$ with $d_j^k=\sum_{i=1}^{N_m} a_{ij}^2/\beta_{ij}$. The above surrogate function can be minimized by choosing

Eq. (8)

$$x_j^{k+1} = \left\{x_j^k - \left[\nabla\Psi(x^k)\right]_j / d_j^k\right\}_+,$$
for each component $x_j$, where $\{\cdot\}_+ = \max(0,\cdot)$, representing the positive part of any function.17 This is a gradient descent-type algorithm with a different step size $d_j^k$ for each component $x_j$.

2.3. Nonuniform Weighting Parameter $\beta_{ij}$

The choice of the weighting parameter $\beta_{ij}$ for the Jensen inequality is crucial in determining how well the surrogate approximates the original objective function and how fast the corresponding iterative algorithm [Eq. (8)] converges. Traditionally, the SQS weights were chosen in a uniform additive way, which can be precomputed and needs no iterative updates:14,17,37,38

Eq. (9)

$$\beta_{ij}^A = \frac{a_{ij}}{\sum_{l=1}^{N_n} a_{il}}.$$
The corresponding step size is

Eq. (10)

$$d_j = \left(A^t A \mathbf{1}_{N_n}\right)_j,$$
where $\mathbf{1}_{N_n}$ is the $N_n$-dimensional vector with all entries equal to 1, and the update equation [Eq. (8)] becomes

Eq. (11)

$$x_{j,L_1}^{A,k+1} = \left[x_j^k + \frac{(A^t b)_j - (A^t A x^k)_j - \lambda_1}{(A^t A \mathbf{1}_{N_n})_j}\right]_+.$$
Note that the uniform additive type of $\beta_{ij}$ has a more general form as follows:25

Eq. (12)

$$\beta_{ij}^{gA} = \frac{a_{ij}^q}{\sum_{k=1}^{N_n} a_{ik}^q}, \quad q>0.$$
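For reference, a minimal MATLAB sketch of the uniform additive update of Eq. (11) is given below; the function name, the random starting point, and the fixed iteration count are our own choices and are not taken from the authors' code.

```matlab
function x = sqs_uniform_l1(A, b, lambda1, Nmax)
% Uniform additive SQS update of Eq. (11); a sketch under the stated assumptions.
% A: system matrix with positive entries, b: measurements, lambda1: L1 parameter.
    [~, Nn] = size(A);
    d   = A' * (A * ones(Nn, 1));          % precomputed step sizes (A^t A 1)_j of Eq. (10)
    Atb = A' * b;                          % constant term A^t b
    x   = rand(Nn, 1);                     % random non-negative starting point
    for k = 1:Nmax
        g = Atb - A' * (A * x) - lambda1;  % negative gradient of the L1-regularized objective
        x = max(x + g ./ d, 0);            % Eq. (11): uniform SQS step, clipped at zero
    end
end
```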

Uniform weighting, however, ignores the different updating needs at different positions, which could hinder the convergence of the iterations to the true solution.40 A nonuniform weighting strategy has recently been reported to improve image quality in CT.40,41 In our previous work, we proposed to solve the L1-regularized least squares optimization problem [Eq. (5)] by using the following nonuniform multiplicative type of $\beta_{ij}$:26,38,42

Eq. (13)

$$\beta_{ij}^M = \frac{a_{ij} x_j^k}{(Ax^k)_i}, \quad\text{where}\quad (Ax^k)_i = \sum_{l=1}^{N_n} a_{il} x_l^k.$$
In comparison with the uniform additive $\beta_{ij}^A$, which only needs to be calculated once, this nonuniform $\beta_{ij}$ needs to be updated at each iteration. However, we noticed that with the nonuniform type $\beta_{ij}^M$, we have26

Eq. (14)

$$d_j^k = \frac{(A^t A x^k)_j}{x_j^k},$$
and the associated iterative update equation [Eq. (8)] becomes

Eq. (15)

$$x_{j,L_1}^{k+1} = \frac{x_j^k\,\left[(A^t b)_j - \lambda_1\right]_+}{(A^t A x^k)_j},$$
which is much simpler and requires fewer calculations than the additive form of Eq. (9). Equation (15) also naturally promotes non-negativity and sparsity, since once $x_j^k = 0$, $x_j$ remains 0.
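A minimal MATLAB sketch of the nonuniform multiplicative update of Eq. (15) is shown below; the function name and the small epsilon guarding the division are ours and are not part of the published algorithm.

```matlab
function x = mm_nonuniform_l1(A, b, lambda1, Nmax)
% Nonuniform multiplicative update of Eq. (15); a sketch under the stated assumptions.
    [~, Nn] = size(A);
    B = max(A' * b - lambda1, 0);              % numerator [(A^t b)_j - lambda1]_+, fixed over iterations
    x = rand(Nn, 1);                           % random starting point in (0,1)
    for k = 1:Nmax
        x = x .* B ./ (A' * (A * x) + eps);    % Eq. (15); components that reach zero stay zero
    end
end
```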

2.4. Ordered Subsets Acceleration Technique

Due to the large scale of 3-D data, tomographic imaging requires a lot of computational time. The OS technique was proposed in 1994 for emission/transmission tomography to evenly break down large matrices into smaller blocks, so that a speed-up of the convergence by a factor proportional to $n_{OS}$, the number of subsets, is possible.23,38 In FMT, the OS technique has recently been applied successfully, with the sensitivity matrix divided into OS.14,26 In particular, combining the nonuniform update with this OS technique, we have proposed NUMOS (see Algorithm 1) for FMT, which provides significant speed enhancement over the uniform methods,14,17 while maintaining high-quality reconstruction results.26

Algorithm 1

NUMOS.

Initialization: x^0 = x_0 · 1_n, B = (A^t b − λ/n_OS · 1_{N_n})_+
for k = 1 to N_max/n_OS do
  Divide A and B into n_OS submatrices, {A_i}_{i=1}^{n_OS}, {B_i}_{i=1}^{n_OS}, based on a random partition of the N_d detectors
  for i = 1 to n_OS do
    l = (k − 1) · n_OS + i
    x^{l+1} = B_i .* x^l ./ (A_i^t A_i x^l)
  end for
end for
where 0 < x_0 < 1 is randomly picked, N_max is the number of iterations, n_OS is the number of subsets, and .* and ./ denote the entry-wise multiplication and division, respectively.
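To make the pseudocode above concrete, here is a minimal MATLAB sketch of NUMOS under our reading that each subset uses its own numerator B_i = (A_i^t b_i − λ/n_OS · 1)_+ after the random partition of the detectors; the function name and the epsilon guard are ours rather than the authors' implementation.

```matlab
function x = numos(A, b, lambda1, nOS, Nmax)
% NUMOS (Algorithm 1) sketch: nonuniform multiplicative updates over ordered subsets.
% Rows of A correspond to measurements (detectors) and are partitioned at random.
    [Nm, Nn] = size(A);
    x = rand(Nn, 1);                                  % random start with 0 < x0 < 1
    for k = 1:floor(Nmax / nOS)
        perm   = randperm(Nm);                        % random partition of the detectors
        bounds = round(linspace(0, Nm, nOS + 1));
        for i = 1:nOS
            rows = perm(bounds(i) + 1 : bounds(i + 1));
            Ai = A(rows, :);  bi = b(rows);
            Bi = max(Ai' * bi - lambda1 / nOS, 0);    % subset numerator (our reading of B_i)
            x  = Bi .* x ./ (Ai' * (Ai * x) + eps);   % nonuniform multiplicative OS update
        end
    end
end
```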

However, there are a few known issues with the OS technique. One is that the selection of OS can be difficult when the geometry is complicated. In the study of a mouse-shaped numerical phantom, we balanced our selection of subsets by randomly generating them at each iteration, which slowed down the acceleration from the OS technique.26 Another issue with the OS technique is that, like other block-iterative methods, increasing the number of OS leads to larger approximation errors, causing the iterative updates to stall at a limit cycle before approaching the minimum.43 Different approaches have been proposed to address this limit-cycle issue, but again at the expense of slowing down the overall convergence.44–46

2.5. Nesterov’s Momentum Acceleration Techniques

Another approach to reducing the computational burden is to design improved algorithms with faster convergence rates. Recently, a series of momentum techniques by Nesterov27,28 has emerged, which use previous iterations to obtain optimal convergence rates for first-order optimization methods. In particular, based on the 1983 version of Nesterov’s technique, the FISTA algorithm29 was developed; its application in FMT will be analyzed in Sec. 4. We chose to focus on the 2005 version of Nesterov’s technique28 (see Algorithm 2) in this paper, since it provides more stability while requiring little extra calculation compared with the 1983 version.27,30 This 2005 technique has attracted considerable attention recently.30,47 In particular, Kim et al.30 considered the combination of this technique with the OS method in CT.

Algorithm 2

Nesterov’s algorithm (2005).

Initialization: x^0 = v^0 = z^0, t_0 = 1
for m = 1 to N_max do
  t_m = (m + 1)/2
  x^m = [z^{m−1} − (1/L) ∇Ψ(z^{m−1})]_+
  v^m = [z^0 − (1/L) Σ_{l=1}^{m} t_{l−1} ∇Ψ(z^{l−1})]_+
  z^m = (1 − t_m/Σ_{l=0}^{m} t_l) x^m + (t_m/Σ_{l=0}^{m} t_l) v^m
end for
where L is the Lipschitz constant for the surrogate function of Ψ(z) at z^{m−1}.
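A minimal MATLAB sketch of Algorithm 2 applied to Eq. (5) is given below, assuming the L1 term contributes a constant gradient λ1 on the non-negative orthant [as in Eq. (11)] and that the Lipschitz constant L is supplied by the caller; the function and variable names are ours.

```matlab
function x = nesterov2005(A, b, lambda1, L, Nmax)
% Nesterov's 2005 momentum method (Algorithm 2) with non-negativity projection.
    [~, Nn] = size(A);
    grad = @(z) A' * (A * z - b) + lambda1;  % gradient of Eq. (5) for z >= 0
    z = rand(Nn, 1);  z0 = z;                % x^0 = v^0 = z^0
    gsum  = zeros(Nn, 1);                    % running sum of t_{l-1} * grad(z^{l-1})
    tprev = 1;                               % t_0 = 1
    tsum  = 1;                               % running sum of t_l, starting with t_0
    for m = 1:Nmax
        tm   = (m + 1) / 2;
        g    = grad(z);
        x    = max(z - g / L, 0);            % x^m: projected gradient step from z^{m-1}
        gsum = gsum + tprev * g;             % accumulate with weight t_{m-1}
        v    = max(z0 - gsum / L, 0);        % v^m: projected aggregated step from z^0
        tsum = tsum + tm;
        z    = (1 - tm / tsum) * x + (tm / tsum) * v;   % z^m: convex combination
        tprev = tm;
    end
end
```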

2.6. Proposed Nonuniform Multiplicative Updating Scheme Accelerated by Ordered Subsets and Momentum Method

Nesterov’s techniques are based on the gradient method, which, from the MM algorithm point of view, is equivalent to using the following surrogate function for $\Psi(x)$:29

Eq. (16)

$$\Psi_k(x;x^k) = \Psi(x^k) + \nabla\Psi(x^k)\cdot(x-x^k) + \frac{L}{2}\|x-x^k\|^2,$$
where $L$ is the Lipschitz constant, typically chosen to be the largest eigenvalue of $A^tA$, the computation of which is very challenging for large-scale problems.
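When L has to be estimated, one standard option is power iteration on A^t A, which avoids forming A^t A explicitly; the sketch below is a generic illustration (the paper does not state which eigensolver was used), and the function name, tolerance, and iteration cap are our own choices.

```matlab
function L = estimate_lipschitz(A, maxIter, tol)
% Power-iteration estimate of the largest eigenvalue of A'*A (a generic sketch).
    v = randn(size(A, 2), 1);
    v = v / norm(v);
    L = 0;
    for it = 1:maxIter
        w    = A' * (A * v);                 % apply A'*A without forming it
        Lnew = norm(w);                      % eigenvalue estimate since ||v|| = 1
        v    = w / Lnew;
        if abs(Lnew - L) <= tol * Lnew       % relative change small enough: stop
            L = Lnew;
            return;
        end
        L = Lnew;
    end
end
```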

The above choice of surrogate function [Eq. (16)] essentially assigns every component $x_j$ the same iterative step size, which does not take into account the different updating needs between target and background locations.40 In this paper, we propose to employ the nonuniform update to satisfy these different updating needs and to combine it with the OS method and the 2005 version of Nesterov’s momentum technique for much faster convergence. We refer to this new algorithm as fNUMOS (see Algorithm 3). Note that for fNUMOS, $x_{PM}^m$ is a new term we introduce so that the nonuniform technique can be combined with Nesterov’s momentum technique. In addition, we followed the method of Tseng48 to choose the weighting parameters $t_m$, which leads to faster convergence than the original $t_m$ used by Nesterov in Algorithm 2.

Algorithm 3

fNUMOS.

Initialization: x^0 = x_0 · 1_n, z^0 = x^0, t_0 = 1, B = A^t b − λ/n_OS · 1_{N_n}
for k = 1 to N_max/n_OS do
  Divide A and B into n_OS submatrices, {A_i}_{i=1}^{n_OS}, {B_i}_{i=1}^{n_OS}, based on a random partition of the N_d detectors
  for i = 1 to n_OS do
    m = (k − 1) · n_OS + i
    t_m = [1 + sqrt(1 + 4(t_{m−1})^2)]/2
    x_PM^m = B_i .* z^{m−1} ./ (A_i^t A_i z^{m−1})
    x^m = (x_PM^m)_+
    v^m = [z^0 + Σ_{l=1}^{m} t_{l−1} (x_PM^l − z^{l−1})]_+
    z^m = (1 − t_m/Σ_{l=0}^{m} t_l) x^m + (t_m/Σ_{l=0}^{m} t_l) v^m
  end for
end for
where 0 < x_0 < 1 is randomly picked, N_max is the number of iterations, n_OS is the number of subsets, and .* and ./ denote the entry-wise multiplication and division, respectively.
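The following minimal MATLAB sketch of fNUMOS combines the pieces above; it uses the same per-subset reading of B_i as the NUMOS sketch, and the sign in the momentum accumulation for v^m follows the analogy with Algorithm 2, both of which are our assumptions rather than statements from the paper.

```matlab
function x = fnumos(A, b, lambda1, nOS, Nmax)
% fNUMOS (Algorithm 3) sketch: nonuniform multiplicative updates with ordered
% subsets and Nesterov-type momentum. Names and the epsilon guard are ours.
    [Nm, Nn] = size(A);
    z = rand(Nn, 1);  z0 = z;  x = z;            % x^0 = z^0, randomly picked in (0,1)
    vacc  = zeros(Nn, 1);                        % accumulates t_{m-1} * (x_PM^m - z^{m-1})
    tprev = 1;                                   % t_0 = 1
    tsum  = 1;                                   % running sum of t_l, starting with t_0
    for k = 1:floor(Nmax / nOS)
        perm   = randperm(Nm);                   % random partition of the detectors
        bounds = round(linspace(0, Nm, nOS + 1));
        for i = 1:nOS
            rows = perm(bounds(i) + 1 : bounds(i + 1));
            Ai = A(rows, :);  bi = b(rows);
            Bi = Ai' * bi - lambda1 / nOS;                   % subset numerator (our reading)
            tm   = (1 + sqrt(1 + 4 * tprev^2)) / 2;          % Tseng-type momentum weight
            xPM  = Bi .* z ./ (Ai' * (Ai * z) + eps);        % preconditioned multiplicative step
            x    = max(xPM, 0);                              % x^m
            vacc = vacc + tprev * (xPM - z);                 % momentum accumulation
            v    = max(z0 + vacc, 0);                        % v^m (sign assumed per Algorithm 2)
            tsum = tsum + tm;
            z    = (1 - tm / tsum) * x + (tm / tsum) * v;    % z^m
            tprev = tm;
        end
    end
end
```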

2.7. Selection of Regularization Parameters and Image Quality Metrics

For each type of regularization, we identified the best image that could be reconstructed by searching through a range of values for the regularization parameter within $[0, \max(A^t b)]$. Eleven different criteria in optical tomography were compared in Correia et al.,49 and the L-curve method was found to be optimal for finding the best Tikhonov regularization parameter in image deblurring problems. In our study, we employed two criteria: the volume ratio (VR),50 which is defined as the ratio of the reconstructed target volume to the true target volume and is related to the sparsity of the reconstructed targets, and the Dice similarity coefficient (Dice),51 which measures the location accuracy of the reconstructed objects. These criteria are sufficient to evaluate the sizes and positions of the targets. We also calculated the mean squared error (MSE) for the simulation study, where the ground truth was available, which measures the difference between the reconstructed and true fluorophore concentrations. To assess image quality, we also computed the contrast-to-noise ratio (CNR),52 which measures how well the reconstructed target can be distinguished from the background. The definitions of these metrics are as follows:

Eq. (17)

$$\mathrm{VR} = \frac{|\mathrm{rROI}|}{|\mathrm{ROI}|},$$

Eq. (18)

$$\mathrm{Dice} = \frac{2\,|\mathrm{rROI}\cap\mathrm{ROI}|}{|\mathrm{rROI}|+|\mathrm{ROI}|},$$

Eq. (19)

$$\mathrm{MSE} = \frac{1}{N}\sum_{j=1}^{N}\left(x_j - x_{0j}\right)^2,$$

Eq. (20)

$$\mathrm{CNR} = \frac{\mathrm{Mean}(x_{\mathrm{ROI}}) - \mathrm{Mean}(x_{\mathrm{ROB}})}{\sqrt{\omega_{\mathrm{ROI}}\,\mathrm{Var}(x_{\mathrm{ROI}}) + (1-\omega_{\mathrm{ROI}})\,\mathrm{Var}(x_{\mathrm{ROB}})}},$$
where $x$ and $x_0$ are the reconstructed and true fluorophore concentrations, respectively, rROI is the reconstructed region of interest, defined as the voxels whose concentrations are higher than 50% of the maximum reconstructed concentration, ROI is the true region of interest (the true target locations), ROB is the true background region, $\omega_{\mathrm{ROI}} = |\mathrm{ROI}|/(|\mathrm{ROI}|+|\mathrm{ROB}|)$, and $|\cdot|$ denotes the number of elements. Generally, the closer the VR and Dice are to 1, the smaller the MSE, and the larger the CNR, the better the reconstruction. In this paper, we especially focus on the VR and Dice values since they measure the sparsity and positional accuracy of the targets.
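For completeness, a minimal MATLAB sketch of these metrics is given below, assuming x and x0 are nodal concentration vectors and roi/rob are logical masks of the true target and background nodes; the function and variable names are ours.

```matlab
function [VR, Dice, MSE, CNR] = image_metrics(x, x0, roi, rob)
% Image quality metrics of Eqs. (17)-(20); a sketch under the stated assumptions.
    rroi = x > 0.5 * max(x);                                 % reconstructed ROI: > 50% of max
    VR   = nnz(rroi) / nnz(roi);                             % Eq. (17): volume ratio
    Dice = 2 * nnz(rroi & roi) / (nnz(rroi) + nnz(roi));     % Eq. (18): Dice coefficient
    MSE  = mean((x - x0).^2);                                % Eq. (19): mean squared error
    w    = nnz(roi) / (nnz(roi) + nnz(rob));                 % weight omega_ROI
    CNR  = (mean(x(roi)) - mean(x(rob))) / ...
           sqrt(w * var(x(roi)) + (1 - w) * var(x(rob)));    % Eq. (20): contrast-to-noise ratio
end
```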

3. Results

3.1. Numerical Simulation

To validate our algorithm for FMT in small animals, we simulated a mouse by first obtaining the surface mesh of the Digimouse,53 then using Tetgen54 to regenerate a uniform internal mesh with a total of 32,332 nodes and 161,439 tetrahedral elements. We then simulated two capillary tubes at the center of the mouse trunk, each having a diameter of 2 mm and a length of 20 mm. For simplicity, we assigned the fluorophore concentration to be 1 for all the nodes inside the two tubes and 0 outside. We selected 60 internal nodes as laser source points, uniformly distributed on five rings around the trunk, and we set all the 4020 surface nodes that cover the trunk to be detectors. The simulated tissue optical properties were $\mu_a = 0.007\ \mathrm{mm}^{-1}$ and $\mu_s' = 0.72\ \mathrm{mm}^{-1}$ at both the excitation wavelength (650 nm) and the emission wavelength (700 nm). White Gaussian noise was added to the simulated measurement data, so that the signal-to-noise ratio (SNR) of the measurement data was 1.

For the reconstruction, we employed the L1 regularization since the targets are sparse. We started the iterations from the same randomly picked uniform initial point $x^0$ and reconstructed the fluorophore distribution using five different algorithms: (a) uniform update, (b) nonuniform update, (c) NUMOS with $n_{OS}=24$, (d) fNUMOS with $n_{OS}=1$, and (e) fNUMOS with $n_{OS}=24$, so that we could see how each part of the proposed algorithm contributed toward faster convergence while maintaining a high-quality result. For the uniform update, we chose the regularization parameter $\lambda_1 = 6.5\times10^{-4}$, and for the nonuniform update, we set the regularization parameter to be $\lambda_1 = 1\times10^{-3}$ as before.17,26 The maximum number of iterations was set to be 2000.

In Fig. 1(a) and Table 1, we can see that even with 2000 iterations, the uniform method is still far from obtaining a close-to-truth result: the reconstructed targets are very large (VR = 6.84 and Dice = 0.20), with very low image intensities, relatively low CNR, and large MSE. We see a clear boost in image quality in Fig. 1(b) when we use the nonuniform update, i.e., NUMOS with $n_{OS}=1$. From Table 1, we see that NUMOS1 used only 1310 iterations to produce a much better result, with VR about 7× smaller, Dice about 3× higher, CNR 2× higher, and MSE about 2× smaller. When the nonuniform update was combined with $n_{OS}=24$ or the Nesterov technique, less computational time was needed to obtain an image of similar quality. In addition, fNUMOS1 was around 3× faster than NUMOS24, using approximately 35 and 107 s, respectively. This implies that the Nesterov momentum acceleration technique is more effective than the OS technique. Finally, when we combine both the OS and Nesterov techniques with the nonuniform update, only 5 iterations and 10 s were needed to reach a high-quality image, as shown in Fig. 1(e), which implies a speedup of about 10× over the NUMOS algorithm we previously studied.26 From the profile plots in Fig. 2, we can also verify that the quality of the reconstructed image from the uniform method is poor, while the images from the NUMOS and fNUMOS methods are much better.

Fig. 1

(Selected) Coronal slices of the reconstructed simulated mouse image from bottom to top, using (a) the uniform method, Uniform1, at 2000 iterations, (b) NUMOS with $n_{OS}=1$ at 1310 iterations, (c) NUMOS with $n_{OS}=24$ at 53 iterations, (d) fNUMOS with $n_{OS}=1$ at 121 iterations, and (e) fNUMOS with $n_{OS}=24$ at 5 iterations. The truth image is shown in (f). Profiles along the white dotted lines are shown in Fig. 2.


Table 1

Comparison of VR, Dice, CNR, MSE, Time (s), and number of iterations for the FMT reconstruction of the simulated mouse using different algorithms.

Method      VR     Dice   CNR     MSE        Time (s)   Iterations
Uniform     6.84   0.20   4.83    2.94E-3    563.46     2000
NUMOS1      1.01   0.58   9.94    1.74E-3    491.28     1310
NUMOS24     1.01   0.58   9.81    1.80E-3    106.97     53
fNUMOS1     1.02   0.59   9.54    1.69E-3    35.13      121
fNUMOS24    1.01   0.59   10.27   1.70E-3    10.02      5

Fig. 2

Profiles of the reconstructed FMT images along white lines in Fig. 1 for the numerical simulation case.


3.2. Phantom Experiment

Next, we used a set of data from cubic phantom experiments to validate the acceleration effect of our proposed fNUMOS method for FMT. The cubic phantom had dimensions of 32 mm × 32 mm × 29 mm and was composed of 1% intralipid, 2% agar, and water in the background. We inserted two capillary tubes with length 12 mm and diameter 1 mm as targets, in which both 6.5 mm DiD (D307, Invitrogen Corporation) fluorescence dye solution and uniformly distributed [18F]fluoro-2-deoxy-D-glucose (FDG) at an activity level of 100 μCi were injected. The cubic surface was extracted first and then the FEM mesh was generated, consisting of 8690 nodes and 47,581 tetrahedral elements.5,55 For the optical imaging, a laser at a wavelength of 650 nm scanned the front surface of the phantom at 20 illumination nodes. Measurements were collected at 1057 detector nodes by using a conical mirror system and a CCD camera.4,17 The fluorescence emission was filtered at 700 nm. The tissue optical properties were $\mu_a = 0.0022\ \mathrm{mm}^{-1}$ and $\mu_s' = 1.10\ \mathrm{mm}^{-1}$ at both the 650- and 700-nm wavelengths. Details of the simultaneous positron emission tomography (PET) imaging can be found in Li et al.4,5,55 We thresholded the PET images at 20% of the maximum FDG concentration to identify the positions of the capillary tubes.

For the regularized reconstruction, we also compared the results from the five different algorithms (as illustrated in Fig. 3), and we empirically chose the L1 regularization parameter $\lambda_1$ to be $9\times10^{-3}$ for the uniform methods and $5\times10^{-3}$ for the nonuniform algorithms. We set the maximum number of iterations to be $N_{max}=768$ for the nonuniform algorithms, whereas for the uniform cases, we allowed up to 5000 iterations for better results. We computed the VR, Dice, and CNR for the reconstructed images. The true locations of the targets were obtained from the PET image that was acquired simultaneously. Note that the associated intensity information of the true image was not available, so we could not compute the MSE here. From Table 2, we see that when we chose $n_{OS}=24$, fNUMOS reached a result with a VR of 1.02, Dice of 0.42, and CNR of 8.64 after four iterations, taking only 0.63 s. To obtain an image of similar quality with the Nesterov momentum technique only, fNUMOS with $n_{OS}=1$ took 84 iterations and 1.34 s; with the OS acceleration alone, NUMOS with $n_{OS}=24$ took 32 iterations and 4.85 s; without either of those two acceleration techniques, NUMOS with $n_{OS}=1$ took 640 iterations and 10.42 s. So for the phantom experiment, fNUMOS can be about 8× faster than the NUMOS we proposed before.26 Similar to the simulation results, we can also verify from the profile plots in Fig. 4 that the quality of the reconstructed image from the uniform method is poor, while those from the NUMOS and fNUMOS methods are much better.

Fig. 3

Each slice corresponds to a coronal section of the reconstructed cubic phantom image from bottom to top, using (a) the uniform method, Uniform1, at 5000 iterations, (b) NUMOS with $n_{OS}=1$ at 640 iterations, (c) NUMOS with $n_{OS}=24$ at 32 iterations, (d) fNUMOS with $n_{OS}=1$ at 84 iterations, and (e) fNUMOS with $n_{OS}=24$ at 4 iterations. The truth image is shown in (f). Profiles along the white dotted lines are shown in Fig. 4.


Table 2

Comparison of VR, Dice, CNR, Time (s), and number of iterations for the FMT reconstruction of cubic phantom using different algorithms.

Method      VR     Dice   CNR    Time (s)   Iterations
Uniform     2.50   0.33   6.62   78.25      5000
NUMOS1      1.13   0.41   7.93   10.42      640
NUMOS24     1.02   0.39   8.20   4.85       32
fNUMOS1     1.06   0.41   8.08   1.34       84
fNUMOS24    1.02   0.42   8.64   0.63       4

Fig. 4

Normalized profiles of the reconstructed FMT images along white dotted lines in Fig. 3 for the phantom experiment.


4. Discussion and Conclusion

In this paper, we proposed fNUMOS, which aims to solve FMT with L1 regularization at a very fast speed while maintaining the reconstruction accuracy, by incorporating Nesterov’s momentum technique into a nonuniform updating scheme accelerated by the OS method (with a relatively small number of OS). Our iterative updating scheme for all reconstruction methods described in Sec. 2 was based on a separable MM framework, which allowed for highly vectorized code that was automatically parallelized by MATLAB 2013b. We conducted our computations on a dual Intel Xeon E5-2680 v2 CPU workstation with 20 cores and 128 GB of memory. Using both numerical simulation and phantom data, we compared fNUMOS with other algorithms. We found that fNUMOS performed best in the sense that the reconstructed targets were more localized [smaller VR, higher Dice, and smaller MSE (simulation case only)] than those from the other methods. fNUMOS was found to be robust to high noise levels (i.e., SNR = 1) and had a significantly faster convergence speed. In our numerical simulation study, we found a significant speed gain from the momentum technique of about 10 times in comparison with the NUMOS algorithm we studied before.26 For the data of the cubic phantom experiment, which has a smaller system matrix, the speed gain was about eight times. This implies that fNUMOS has great potential for obtaining higher spatial resolution when a finer mesh is used.

For a thorough comparison of how the image quality metrics VR and Dice change with iteration time for the uniform and nonuniform algorithms with or without the acceleration techniques of momentum or ordered subsets, we evaluated all eight scenarios in two groups: (1) Uniform1, f+Uniform1, NUMOS1, and fNUMOS1, and (2) Uniform24, f+Uniform24, NUMOS24, and fNUMOS24. Results for the simulated mouse data are shown in Figs. 5 and 6 for $n_{OS}=1$ and $n_{OS}=24$, respectively. In both figures, we clearly see that the nonuniform update, NUMOS1, performs much better than the uniform method, Uniform1 (blue/star versus magenta/sphere curves), and it is even better than the uniform method accelerated by the momentum technique, f+Uniform1 (blue/star versus green/diamond curves). Across Figs. 5 and 6, we can reaffirm that the momentum technique is faster than the OS technique in accelerating the convergence (red/plus curve in Fig. 5 versus blue/star curve in Fig. 6). Finally, when both acceleration techniques are combined with the nonuniform update, the proposed fNUMOS24 achieves a high-quality reconstruction in the shortest amount of time. For the phantom experimental data, the results of the comparison are plotted in Figs. 7 and 8. We see similar trends as in the simulated case, and fNUMOS24 again performs the best, producing a high-quality image within a second.

Fig. 5

Comparison of uniform and nonuniform updates with or without the acceleration of Nesterov’s momentum (abbreviated as “f”), when the number of ordered subsets was fixed to be 1 for the simulated mouse.


Fig. 6

Comparison of uniform and nonuniform updates with or without the acceleration of Nesterov’s momentum, when the number of ordered subsets was fixed to be 24 for the simulated mouse.


Fig. 7

Comparison of uniform and nonuniform updates with or without the acceleration of Nesterov’s momentum, when the number of ordered subsets was fixed to be 1 for the cubic phantom experiment.


Fig. 8

Comparison of uniform and nonuniform updates with or without the acceleration of Nesterov’s momentum, when the number of ordered subsets was fixed to be 24 for the cubic phantom experiment.


For the popular FISTA algorithm that was adopted in FMT,11 we already compared it with the NUMOS1 algorithm proposed in our previous work, using data from phantom experiments.26 We found that, even excluding the computational time for the largest eigenvalue of the system matrix needed for the $L$ in Eq. (16), FISTA was still slower than NUMOS, and the final reconstruction results were not as good either.26 For the numerically simulated mouse with a large system matrix, we were able to compute the largest eigenvalue on our workstation, which alone cost about 30 min. We also explored the backtracking version of FISTA29 to estimate $L$, and we found that it tended to reach a large $L$ value that slowed down the convergence. As we pointed out in Sec. 2.6, the slower convergence and poorer image quality are due to the fact that a single $L$ essentially assigns a uniform updating step size to every location; thus, it is not surprising that FISTA was not as efficient as our nonuniform methods, including the earlier NUMOS and the newly proposed fNUMOS.

We noticed that the OS method ($n_{OS}=24$) had limited capability in increasing the speed of the nonuniform-type algorithms: around four times for the numerically simulated mouse and around two times for the cubic phantom. That limit in speed increase is mainly due to the choice of OS being random at every iteration, which greatly increased the total cost. We also see that for experimental data of smaller size, the overhead cost from randomization takes a larger portion of the total computational time, so the speedup is not as significant as for the larger simulated data. It would be of great interest to identify an optimal deterministic way of selecting the OS, so that we may potentially obtain a speed increase closer to the ideal $n_{OS}$ times.

On the choice of the best L1 regularization parameter, we swept through a range of values within $[0, \max(A^t b)]$. We realized that the L1 regularization parameter can actually have a range over which images of similar quality (in terms of VR, Dice, and CNR) can be reconstructed, although the corresponding number of iterations needed will vary accordingly. Due to the significant reduction in computational time, we may further investigate the optimal way to determine an appropriate regularization parameter.

Regarding the stopping criteria for our iterative algorithms, we found that since the convergence speed of the different algorithms varies, the stopping criteria need to be selected differently. In addition, because our OS are chosen in a randomized fashion, the errors between consecutive iterations also possess randomness, so no simple stopping criterion exists. We plan to further investigate the appropriate stopping criterion for each algorithm in the future, especially once we can identify a deterministic selection of the OS.

In summary, we have investigated in this paper the accelerating effects of a Nesterov-type momentum technique on the nonuniform updating scheme with or without the OS method. We have obtained high-quality images using around 10× less time with the proposed method than with some of the existing cutting-edge methods. Our next step will be applying the proposed method to in vivo experiments.

Acknowledgments

We would like to acknowledge the support from the University of California, Merced, Startup Fund.

References

1. 

S. R. Cherry, “In vivo molecular and genomic imaging: new challenges for imaging physics,” Phys. Med. Biol., 49 (3), R13 (2004). http://dx.doi.org/10.1088/0031-9155/49/3/R01 PHMBA7 0031-9155 Google Scholar

2. 

V. Ntziachristos et al., “Looking and listening to light: the evolution of whole-body photonic imaging,” Nat. Biotechnol., 23 (3), 313 –320 (2005). http://dx.doi.org/10.1038/nbt1074 NABIF9 1087-0156 Google Scholar

3. 

S. Ahn et al., “Fast iterative image reconstruction methods for fully 3d multispectral bioluminescence tomography,” Phys. Med. Biol., 53 (14), 3921 (2008). http://dx.doi.org/10.1088/0031-9155/53/14/013 PHMBA7 0031-9155 Google Scholar

4. 

C. Li et al., “A three-dimensional multispectral fluorescence optical tomography imaging system for small animals based on a conical mirror design,” Opt. Express, 17 (9), 7571 –7585 (2009). http://dx.doi.org/10.1364/OE.17.007571 OPEXFF 1094-4087 Google Scholar

5. 

C. Li et al., “Simultaneous pet and multispectral 3-dimensional fluorescence optical tomography imaging system,” J. Nucl. Med., 52 (8), 1268 –1275 (2011). http://dx.doi.org/10.2967/jnumed.110.082859 JNMEAQ 0161-5505 Google Scholar

6. 

V. Lukic, V. A. Markel and J. C. Schotland, “Optical tomography with structured illumination,” Opt. Lett., 34 (7), 983 –985 (2009). http://dx.doi.org/10.1364/OL.34.000983 OPLEDP 0146-9592 Google Scholar

7. 

J. Dutta et al., “Illumination pattern optimization for fluorescence tomography: theory and simulation studies,” Phys. Med. Biol., 55 (10), 2961 (2010). http://dx.doi.org/10.1088/0031-9155/55/10/011 PHMBA7 0031-9155 Google Scholar

8. 

N. Ducros et al., “Fluorescence molecular tomography of an animal model using structured light rotating view acquisition,” J. Biomed. Opt., 18 (2), 020503 (2013). http://dx.doi.org/10.1117/1.JBO.18.2.020503 JBOPFO 1083-3668 Google Scholar

9. 

S. R. Arridge, M. Schweiger, “Inverse methods for optical tomography,” Information Processing in Medical Imaging, 259 –277 Springer, Berlin Heidelberg (1993). Google Scholar

10. 

D. Han et al., “A fast reconstruction algorithm for fluorescence molecular tomography with sparsity regularization,” Opt. Express, 18 (8), 8630 –8646 (2010). http://dx.doi.org/10.1364/OE.18.008630 OPEXFF 1094-4087 Google Scholar

11. 

D. Han et al., “A fast reconstruction method for fluorescence molecular tomography based on improved iterated shrinkage,” Proc. SPIE, 7965 79651C (2011). http://dx.doi.org/10.1117/12.878051 PSISDG 0277-786X Google Scholar

12. 

J. C. Baritaux, K. Hassler and M. Unser, “An efficient numerical method for general lp regularization in fluorescence molecular tomography,” IEEE Trans. Med. Imaging, 29 (4), 1075 –1087 (2010). http://dx.doi.org/10.1109/TMI.2010.2042814 ITMID4 0278-0062 Google Scholar

13. 

A. Behrooz et al., “Total variation regularization for 3d reconstruction in fluorescence tomography: experimental phantom studies,” Appl. Opt., 51 (34), 8216 –8227 (2012). http://dx.doi.org/10.1364/AO.51.008216 APOPAI 0003-6935 Google Scholar

14. 

J. Dutta et al., “Joint l1 and total variation regularization for fluorescence molecular tomography,” Phys. Med. Biol., 57 (6), 1459 (2012). http://dx.doi.org/10.1088/0031-9155/57/6/1459 PHMBA7 0031-9155 Google Scholar

15. 

J. Shi et al., “Efficient l1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient,” Opt. Lett., 38 (18), 3696 –3699 (2013). http://dx.doi.org/10.1364/OL.38.003696 OPLEDP 0146-9592 Google Scholar

16. 

J. Shi et al., “Enhanced spatial resolution in fluorescence molecular tomography using restarted l1-regularized nonlinear conjugate gradient algorithm,” J. Biomed. Opt., 19 (4), 046018 (2014). http://dx.doi.org/10.1117/1.JBO.19.4.046018 JBOPFO 1083-3668 Google Scholar

17. 

D. Zhu and C. Li, “Nonconvex regularizations in fluorescence molecular tomography for sparsity enhancement,” Phys. Med. Biol., 59 (12), 2901 –2912 (2014). http://dx.doi.org/10.1088/0031-9155/59/12/2901 PHMBA7 0031-9155 Google Scholar

18. 

J. Ripoll, “Hybrid Fourier-real space method for diffuse optical tomography,” Opt. Lett., 35 (5), 688 –690 (2010). http://dx.doi.org/10.1364/OL.35.000688 OPLEDP 0146-9592 Google Scholar

19. 

N. Ducros et al., “Full-wavelet approach for fluorescence diffuse optical tomography with structured illumination,” Opt. Lett., 35 (21), 3676 –3678 (2010). http://dx.doi.org/10.1364/OL.35.003676 OPLEDP 0146-9592 Google Scholar

20. 

T. Correia et al., “Wavelet-based data and solution compression for efficient image reconstruction in fluorescence diffuse optical tomography,” J. Biomed. Opt., 18 (8), 086008 (2013). http://dx.doi.org/10.1117/1.JBO.18.8.086008 JBOPFO 1083-3668 Google Scholar

21. 

J. Shi et al., “Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization,” J. Biomed. Opt., 20 (5), 055004 (2015). http://dx.doi.org/10.1117/1.JBO.20.5.055004 JBOPFO 1083-3668 Google Scholar

22. 

X. Cao et al., “Accelerated image reconstruction in fluorescence molecular tomography using dimension reduction,” Biomed. Opt. Express, 4 (1), 1 –14 (2013). http://dx.doi.org/10.1364/BOE.4.000001 BOEICL 2156-7085 Google Scholar

23. 

H. M. Hudson and R. S. Larkin, “Accelerated image reconstruction using ordered subsets of projection data,” IEEE Trans. Med. Imaging, 13 (4), 601 –609 (1994). http://dx.doi.org/10.1109/42.363108 ITMID4 0278-0062 Google Scholar

24. 

D. Zhu et al., “Comparison of regularization methods in fluorescence molecular tomography,” Photonics, 1 (2), 95 –109 (2014). http://dx.doi.org/10.3390/photonics1020095 Google Scholar

25. 

D. Zhu and C. Li, “Nonuniform update for sparse target recovery in fluorescence molecular tomography accelerated by ordered subsets,” Biomed. Opt. Express, 5 (12), 4249 –4259 (2014). http://dx.doi.org/10.1364/BOE.5.004249 BOEICL 2156-7085 Google Scholar

26. 

K. Lange, “The MM algorithm,” Optimization, 185 –219 Springer(2013). Google Scholar

27. 

Y. Nesterov, “A method of solving a convex programming problem with convergence rate o (1/k2),” Sov. Math. Dokl., 27 (2), 372 –376 (1983). http://dx.doi.org/10.1007/s10107-004-0552-5 0197-6788 Google Scholar

28. 

Y. Nesterov, “Smooth minimization of non-smooth functions,” Math. Program., 103 (1), 127 –152 (2005). http://dx.doi.org/10.1007/s10107-004-0552-5 MHPGA4 1436-4646 Google Scholar

29. 

A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci., 2 (1), 183 –202 (2009). http://dx.doi.org/10.1137/080716542 Google Scholar

30. 

D. Kim et al., “Combining ordered subsets and momentum for accelerated x-ray CT image reconstruction,” IEEE Trans. Med. Imaging, 34 (1), 167 –178 (2015). http://dx.doi.org/10.1109/TMI.2014.2350962 ITMID4 0278-0062 Google Scholar

31. 

D. Zhu and C. Li, “Accelerating spatially non-uniform update for sparse target recovery in fluorescence molecular tomography by ordered subsets and momentum methods,” Proc. SPIE, 9319 93190U (2015). http://dx.doi.org/10.1117/12.2076789 PSISDG 0277-786X Google Scholar

32. 

F. Fedele, J. Laible and M. Eppstein, “Coupled complex adjoint sensitivities for frequency-domain fluorescence tomography: theory and vectorized implementation,” J. Comput. Phys., 187 (2), 597 –619 (2003). http://dx.doi.org/10.1016/S0021-9991(03)00150-5 JCTPAH 0021-9991 Google Scholar

33. 

D. R. Hunter and K. Lange, “A tutorial on MM algorithms,” Am. Stat., 58 (1), 30 –37 (2004). http://dx.doi.org/10.1198/0003130042836 ASTAAJ 0003-1305 Google Scholar

34. 

I. Daubechies, M. Defrise and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Commun. Pure Appl. Math., 57 (11), 1413 –1457 (2004). http://dx.doi.org/10.1002/cpa.20042 CPMAMV 0010-3640 Google Scholar

35. 

E. J. Candes, M. B. Wakin and S. P. Boyd, “Enhancing sparsity by reweighted l1 minimization,” J. Fourier Anal. Appl., 14 (5–6), 877 –905 (2008). http://dx.doi.org/10.1007/s00041-008-9045-x Google Scholar

36. 

I. Daubechies et al., “Iteratively reweighted least squares minimization for sparse recovery,” Commun. Pure Appl. Math., 63 (1), 1 –38 (2010). http://dx.doi.org/10.1002/cpa.20303 CPMAMV 0010-3640 Google Scholar

37. 

J. A. Fessler et al., “Grouped-coordinate ascent algorithms for penalized-likelihood transmission image reconstruction,” IEEE Trans. Med. Imaging, 16 (2), 166 –175 (1997). http://dx.doi.org/10.1109/42.563662 ITMID4 0278-0062 Google Scholar

38. 

H. Erdogan and J. A. Fessler, “Ordered subsets algorithms for transmission tomography,” Phys. Med. Biol., 44 (11), 2835 (1999). http://dx.doi.org/10.1088/0031-9155/44/11/311 PHMBA7 0031-9155 Google Scholar

39. 

A. R. de Pierro, “A modified expectation maximization algorithm for penalized likelihood estimation in emission tomography,” IEEE Trans. Med. Imaging, 14 (1), 132 (1995). http://dx.doi.org/10.1109/42.370409 Google Scholar

40. 

D. Kim et al., “Accelerating ordered subsets image reconstruction for x-ray CT using spatially nonuniform optimization transfer,” IEEE Trans. Med. Imaging, 32 (11), 1965 –1978 (2013). http://dx.doi.org/10.1109/TMI.2013.2266898 ITMID4 0278-0062 Google Scholar

41. 

Z. Yu et al., “Fast model-based x-ray CT reconstruction using spatially nonhomogeneous ICD optimization,” IEEE Trans. Image Process., 20 (1), 161 –175 (2011). http://dx.doi.org/10.1109/TIP.2010.2058811 IIPRE4 1057-7149 Google Scholar

42. 

A. R. de Pierro, “On the relation between the ISRA and the EM algorithm for positron emission tomography,” IEEE Trans. Med. Imaging, 12 (2), 328 –333 (1993). http://dx.doi.org/10.1109/42.232263 ITMID4 0278-0062 Google Scholar

43. 

Z.-Q. Luo, “On the convergence of the lms algorithm with adaptive learning rate for linear feedforward networks,” Neural Comput., 3 (2), 226 –245 (1991). http://dx.doi.org/10.1162/neco.1991.3.2.226 NEUCEB 0899-7667 Google Scholar

44. 

J. Browne and A. R. de Pierro, “A row-action alternative to the EM algorithm for maximizing likelihood in emission tomography,” IEEE Trans. Med. Imaging, 15 (5), 687 –699 (1996). http://dx.doi.org/10.1109/42.538946 ITMID4 0278-0062 Google Scholar

45. 

S. Ahn et al., “Globally convergent image reconstruction for emission tomography using relaxed ordered subsets algorithms,” IEEE Trans. Med. Imaging, 22 (5), 613 –626 (2003). http://dx.doi.org/10.1109/TMI.2003.812251 ITMID4 0278-0062 Google Scholar

46. 

S. Ahn et al., “Convergent incremental optimization transfer algorithms: application to tomography,” IEEE Trans. Med. Imaging, 25 (3), 283 –296 (2006). http://dx.doi.org/10.1109/TMI.2005.862740 ITMID4 0278-0062 Google Scholar

47. 

S. Becker, J. Bobin and E. J. Candès, “Nesta: a fast and accurate first-order method for sparse recovery,” SIAM J. Imaging Sci., 4 (1), 1 –39 (2011). http://dx.doi.org/10.1137/090756855 Google Scholar

48. 

P. Tseng, “On accelerated proximal gradient methods for convex-concave optimization,” SIAM J. Optim., (2008). Google Scholar

49. 

T. Correia et al., “Selection of regularization parameter for optical topography,” J. Biomed. Opt., 14 (3), 034044 (2009). http://dx.doi.org/10.1117/1.3156839 JBOPFO 1083-3668 Google Scholar

50. 

F. Tian, G. Alexandrakis and H. Liu, “Optimization of probe geometry for diffuse optical brain imaging based on measurement density and distribution,” Appl. Opt., 48 (13), 2496 –2504 (2009). http://dx.doi.org/10.1364/AO.48.002496 APOPAI 0003-6935 Google Scholar

51. 

L. R. Dice, “Measures of the amount of ecologic association between species,” Ecology, 26 (3), 297 –302 (1945). http://dx.doi.org/10.2307/1932409 ECGYAQ 0094-6621 Google Scholar

52. 

X. Song et al., “Automated region detection based on the contrast-to-noise ratio in near-infrared tomography,” Appl. Opt., 43 (5), 1053 –1062 (2004). http://dx.doi.org/10.1364/AO.43.001053 APOPAI 0003-6935 Google Scholar

53. 

B. Dogdas et al., “Digimouse: a 3d whole body mouse atlas from ct and cryosection data,” Phys. Med. Biol., 52 (3), 577 (2007). http://dx.doi.org/10.1088/0031-9155/52/3/003 PHMBA7 0031-9155 Google Scholar

54. 

H. Si, “TetGen, a delaunay-based quality tetrahedral mesh generator,” ACM Trans. Math. Softw., 41 (2), 11 (2015). http://dx.doi.org/10.1145/2629697 Google Scholar

55. 

C. Li et al., “Three-dimensional fluorescence optical tomography in small-animal imaging using simultaneous positron-emission-tomography priors,” Opt. Lett., 34 (19), 2933 –2935 (2009). http://dx.doi.org/10.1364/OL.34.002933 OPLEDP 0146-9592 Google Scholar

Biographies for the authors are not available.

© 2016 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2016/$25.00 © 2016 SPIE
Dianwen Zhu and Changqing Li "Accelerated image reconstruction in fluorescence molecular tomography using a nonuniform updating scheme with momentum and ordered subsets methods," Journal of Biomedical Optics 21(1), 016004 (13 January 2016). https://doi.org/10.1117/1.JBO.21.1.016004
Published: 13 January 2016