## 1. Introduction

Optical tomography has drawn significant attention in recent years due to its operational simplicity and the rich contrast it offers, especially when employing targeted fluorochromes. Fluorescence molecular tomography (FMT) in particular has been shown to be capable of resolving highly versatile cellular and subcellular contrast in whole animals^{1, 2} *in vivo* and noninvasively. There have been significant technological developments in FMT methods, especially associated with $360\text{-}\mathrm{deg}$ projection free-space techniques that avoid the use of matching fluids,^{3, 4, 5, 6} the use of charge-coupled device (CCD) cameras for high spatial sampling of data fields,^{7} and the development of fast tomographic algorithms to impart quantitative 3-D imaging.^{8, 9, 10, 11, 12} In addition, the use of early photons has further shown imaging improvements over constant-intensity illumination data. These developments essentially bring out the full potential of stand-alone diffuse optical tomography methods.

The use of image priors has also been considered for further improving the performance of optical tomography reconstruction over stand-alone systems.^{13, 14, 15, 16, 17} A common approach is the utilization of anatomical information for the construction of a more accurate solution to the forward problem, or for the regularization of the ill-posed inverse problem, resulting in improved image fidelity and resolution. To capitalize on the improvements offered by image priors, there has been recent interest in the development of hybrid imaging systems.^{18, 19, 20, 21, 22, 23, 24} Our group has recently developed a fully integrated FMT x-ray computed tomography (XCT) scanner, where all optical and CT components are mounted on a common gantry.^{25} This modality provides accurately registered CT data that can be used to improve FMT image quality. A requirement that consequently arose is the segmentation of the CT data to identify different organs or structures in the imaged tissue. This is important for three main reasons. First, the identification of different structures and their corresponding interfaces allows the generation of more accurate numerical meshes for the optical tomography problem. Second, it allows for the assignment of optical properties, based on knowledge of the optical properties of the organ or structure segmented, since there is no direct relation between x-ray CT image intensity and optical attenuation. Finally, the resolved structures can be used to guide the inversion scheme utilized, as further explained in the methods.

We therefore considered an automatic segmentation scheme for streamlining the FMT-XCT inversion. Several approaches have been suggested in the past for automated segmentation of medical CT images.^{26, 27, 28, 29, 30} However, the segmentation and subsequent utilization of the results in the FMT-XCT code required different image processing approaches compared to published methods for medical CT data. The differences can be attributed to the use of $\mu \mathrm{CT}$ data, i.e., data of varying noise levels and reduced image contrast between organs compared to clinical CT data.

In addition, the work here considers an automatic scheme for integrating the segmented data into the FMT inversion. Particular attention has been given to computational efficiency in order to reach fast inversion times. For this reason, we employed low-dimensional spaces and adaptive parameter definitions that can be handled with minimal computing requirements in terms of memory and CPU time.

In the following, we introduce the framework developed, examined for segmentation in the torso, as it relates to the study of lung disease. We present the segmentation tools employed, their performance with experimental mouse images, and the consequent integration of the results into a finite-element method (FEM)-based FMT inversion code.

## 2. Automatic Detection of Anatomical Structures

Automatic detection of specific structures has been of great interest in medical imaging. Different approaches have been developed over the last few decades for image segmentation. Typically, the solutions presented work optimally for a particular set of problems and cannot be generalized to arbitrary segmentation tasks. We consider segmentation of three major structures in the mouse torso, i.e., skeletal tissue, lung, and heart. The image data were acquired using a commercial micro-CT^{25} with a tube voltage of $80\phantom{\rule{0.3em}{0ex}}\mathrm{kV}$ and a tube current of $450\phantom{\rule{0.3em}{0ex}}\mu \mathrm{A}$. The selection of the torso was driven by elevated interest in studying lung cancer and inflammatory lung diseases such as asthma and COPD in association with pharmacological studies. We found that each tissue required different image processing steps for optimal segmentation, as described in the following.

### 2.1. Bone Segmentation

Since bone structures exhibit high contrast in CT images, they can be easily identified by conventional thresholding, which conveniently is also a fast operation. To assign a threshold automatically, we examined the histogram of the intensities of the CT volume data. When dealing with a normalized scale like the Hounsfield scale, it is straightforward to select a threshold that divides the image into bone and background. In this work, however, we make no assumption about the CT data scaling, so that the method works seamlessly with different CT acquisition parameters and datasets, since small-animal imaging data are less standardized than clinical data. Analysis of many histograms of our CT data showed no significant features representing the intensity of bone tissue, such as local maxima or minima, that could be traced; the intensity of bone is usually widely distributed throughout the histogram. Therefore we approximated our threshold ${T}_{b}$ by finding other distinct intensities and assuming a linear relationship between those intensities and the threshold we need. Mathematically, this can be described by

## Eq. 1

$${T}_{b}={I}_{2}+w\cdot ({I}_{2}-{I}_{1}),$$

with ${I}_{1}$ and ${I}_{2}$ being the reference intensity points and $w$ being a factor weighting the distance between those points. When considering a typical histogram of a mouse CT, two distinct peaks can be noticed that can serve as reference points (Fig. 1). The highest peak is found at the left side of the histogram and corresponds to voxels of very low density, in this case primarily the air in the field of view surrounding the animal. A second significant peak, corresponding to water, can also be easily identified as a second maximum in the histogram. Soft tissue contains high amounts of water, and the area around this peak essentially indicates voxels corresponding to soft tissue. These two peaks can be employed for approximating the optimal threshold in Eq. 1. To determine the scaling factor $w$ in Eq. 1, we considered the Hounsfield scale, since it is a common standard for CT images. In the Hounsfield scale, air has an intensity of $-1000$ Hounsfield units (HU), water has an intensity of $0\phantom{\rule{0.3em}{0ex}}\mathrm{HU}$, and bone structures start at $400\phantom{\rule{0.3em}{0ex}}\mathrm{HU}$, i.e., 0.4 of the water-air difference. Thus in the Hounsfield scale, the threshold for bone structures ${T}_{b}$ at $400\phantom{\rule{0.3em}{0ex}}\mathrm{HU}$ can be determined by rewriting Eq. 1 as

## Eq. 2

$${T}_{b}={I}_{\text{water}}+0.4\cdot ({I}_{\text{water}}-{I}_{\text{air}}),$$

with ${I}_{\text{air}}$ and ${I}_{\text{water}}$ being the intensities of air and water in HU. This equation was utilized to compute the threshold in CT volume data with arbitrary units. The respective peaks representing the intensities of air and water can be determined by simple maxima detection in the corresponding histogram. Using this threshold, the CT images were converted to binary form for subsequent processing, as described in the following.
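As an illustration, the automatic threshold estimation of Eqs. 1, 2 can be sketched as follows; note that the histogram bin count (`n_bins`) and the peak-separation guard (`min_separation`) are illustrative parameters, not values from the original implementation:

```python
import numpy as np

def bone_threshold(volume, n_bins=512, w=0.4, min_separation=50):
    """Estimate the bone threshold of Eq. 2 from a CT volume in arbitrary
    units. The air peak is taken as the global histogram maximum; the
    water (soft tissue) peak as the largest maximum away from it."""
    counts, edges = np.histogram(volume, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    i_air_bin = int(np.argmax(counts))            # global maximum: air
    # suppress bins around the air peak before searching for the water peak
    masked = counts.copy()
    lo = max(0, i_air_bin - min_separation)
    hi = min(n_bins, i_air_bin + min_separation)
    masked[lo:hi] = 0
    i_water_bin = int(np.argmax(masked))          # second maximum: water
    i_air, i_water = centers[i_air_bin], centers[i_water_bin]
    return i_water + w * (i_water - i_air)        # Eq. 2

def segment_bone(volume):
    """Binary bone mask obtained by simple thresholding."""
    return volume >= bone_threshold(volume)
```

On the Hounsfield example from the text (air at $-1000$, water at 0), the returned threshold is approximately 400 HU, as expected.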

### 2.2. Ribcage Detection

For segmentation of the lung and heart, we first considered the identification of orientation points to serve as initial points for subsequent segmentation steps. In this role, the ribcage serves as an easily identifiable structure that accurately delineates a large part of the outer surfaces of the lung and heart. To identify the ribcage, we analyzed the result of the bone segmentation by computing a histogram of the number of segmented bone voxels per axial slice. We treated the histogram as a signal ${h}_{a}\left(t\right)$, where $t$ is the slice number. Within this signal, the ribcage creates a distinct harmonic frequency [Fig. 2]. To detect this periodicity, we opted for a Gabor filter, essentially a Gaussian function multiplied by a cosine, i.e.,

## Eq. 3

$${g}_{\sigma ,\lambda}\left(t\right)=\mathrm{exp}\left(-\frac{{t}^{2}}{2{\sigma}^{2}}\right)\cdot \mathrm{cos}\left(2\pi \frac{t}{\lambda}\right).$$

This approach is similar to template matching, where the Gabor function describes the periodic oscillation of the ribcage. Since the frequency of the Gabor filter needs to match the frequency the ribcage produces in ${h}_{a}\left(t\right)$, specific values for $\sigma$ and $\lambda$ had to be defined. To determine those values, we analyzed the frequency produced by the ribcage in three training datasets and adjusted $\sigma$ and $\lambda$ so that the Gabor filter fit this frequency. When performing ribcage detection on unknown test data, we used the differences in voxel spacing and voxel size between the training and test datasets to compute a scaling factor for $\sigma$ and $\lambda$. Thus the procedure is independent of the image scaling. Inherent in this procedure is the assumption that the size of the imaged mice does not vary significantly, and that the ribs have a distinct separation from each other. To apply the filter, we convolved the histogram signal ${h}_{a}\left(t\right)$ with the Gabor filter ${g}_{\sigma ,\lambda}\left(t\right)$. The result of the convolution ${h}_{a}\left(t\right)*{g}_{\sigma ,\lambda}\left(t\right)$ is a filter response that usually had its global maximum near the axial center of the ribcage [Fig. 2]. Furthermore, we used just the local maxima of the filter response to interpolate a new function [also Fig. 2]. In this function, the next minima to the left and right sides of the global maximum (the center of the ribcage) were selected as landmark points that defined a bounding box in the axial direction around the ribcage. Those landmarks do not necessarily mark specific anatomical points, but usually they appear near the top and bottom ends of the sternum.
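The filtering step alone (Eq. 3 plus the convolution and maximum search) can be sketched as below; the kernel support of about three standard deviations is an illustrative choice, and the landmark interpolation step is omitted:

```python
import numpy as np

def gabor_kernel(sigma, lam, half_width=None):
    """Discrete Gabor filter of Eq. 3: a Gaussian envelope times a cosine."""
    if half_width is None:
        half_width = int(3 * sigma)   # illustrative support choice
    t = np.arange(-half_width, half_width + 1, dtype=float)
    return np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * t / lam)

def ribcage_center(h_a, sigma, lam):
    """Return the axial slice where the filter response h_a * g peaks,
    together with the full response."""
    response = np.convolve(h_a, gabor_kernel(sigma, lam), mode="same")
    return int(np.argmax(response)), response
```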

To define the bounding box also in the sagittal and coronal directions, we computed the histograms ${h}_{s}\left(t\right)$ and ${h}_{c}\left(t\right)$, indicating the number of segmented bone voxels in these directions. Note that we only used the slices between the axial landmark points to compute those histograms. In the histograms, we searched for the global maxima and the first slices to their left and right where ${h}_{s}\left(t\right)$ and ${h}_{c}\left(t\right)$, respectively, equal zero, that is, where the ribcage ends. We thereby confined our bounding box to the ribcage and excluded artifacts outside the mouse that sometimes occur. If this detection scheme experiences difficulties due to noise and artifacts, it is possible instead to search for the areas where ${h}_{s}\left(t\right)$ and ${h}_{c}\left(t\right)$ fall below a predetermined value greater than zero, to obtain a bounding box tight around the ribcage. Overall, ribcage determination is an essential step for the further detection of anatomic structures, as described in the following.
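The per-direction bone histograms and the search for the slices delimiting the ribcage might be sketched as follows; the (z, y, x) axis ordering is an assumption:

```python
import numpy as np

def bone_histograms(bone_mask):
    """Per-slice counts of segmented bone voxels along each axis
    (h_a, h_s, h_c of Sec. 2.2), for a boolean volume indexed (z, y, x)."""
    h_a = bone_mask.sum(axis=(1, 2))   # axial
    h_s = bone_mask.sum(axis=(0, 2))   # sagittal (assumed axis order)
    h_c = bone_mask.sum(axis=(0, 1))   # coronal
    return h_a, h_s, h_c

def extent_around_maximum(h):
    """Walk outward from the global maximum of h until the counts reach
    zero; the last nonzero slices delimit the ribcage in that direction."""
    peak = int(np.argmax(h))
    left = peak
    while left > 0 and h[left - 1] > 0:
        left -= 1
    right = peak
    while right < len(h) - 1 and h[right + 1] > 0:
        right += 1
    return left, right
```

Replacing the `> 0` tests with `> threshold` gives the noise-robust variant mentioned in the text.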

### 2.3. Lung Segmentation

To enable lung segmentation, we utilized a seed growing algorithm within the confines of the detected ribcage. For this purpose, possible seed points inside the lungs had to be found automatically. The whole respiratory system can be naturally recognized in XCT images by means of its low density and the corresponding high contrast to surrounding tissue. However, image intensity and contrast alone did not suffice for accurate detection. This was because the bounding box of the seed growing algorithm was usually still wide enough to occasionally contain regions of air outside the mouse, or parts of the digestive system that also showed low intensity and contrast. Other challenges involved the blurring of borders due to possible motion artifacts. To refine the region of interest and achieve a correct segmentation result, we created a spherical region of interest (SROI) inside the bounding box. The SROI was initialized as a sphere with a radius of zero at the center of the bounding box [see Fig. 3], and was allowed to grow until the sphere radius measured 90% of the distance between the center of the sphere and the boundary of the mouse. In Fig. 3, a CT image is displayed with the corresponding bounding box (solid blue line) and the SROI (bright circle).

To find seed points inside the SROI, we computed an intensity histogram from all voxels inside the SROI [Fig. 3]. Here, the voxels with the lowest intensity ${I}_{\text{low}}$ mark the dark bronchial tubes, and the high peak ${I}_{\text{peak}}$ marks soft tissue. We took these easy-to-detect points as references to compute an interval $[{I}_{1},{I}_{2}]$, where

## Eq. 4

$${I}_{1}={I}_{\text{low}}+{w}_{1}\cdot ({I}_{\text{peak}}-{I}_{\text{low}})$$

and

## Eq. 5

$${I}_{2}={I}_{\text{low}}+{w}_{2}\cdot ({I}_{\text{peak}}-{I}_{\text{low}}),$$

which represents the intensity of voxels that consequently belong to lung tissue. The weights ${w}_{1}$ and ${w}_{2}$, i.e., the parameters of Eqs. 4, 5, were roughly determined empirically using about five datasets. All voxels of the SROI with an intensity within this interval were considered possible seed points for the seed growing algorithm. In Fig. 3, these seed point candidates are marked in green.

For the seed growing algorithm itself, a mean intensity ${\overline{I}}_{a}$ was defined using the chosen seed point $s$ and its neighboring voxels. Also, a confidence interval was defined by

## Eq. 6

$$[{\overline{I}}_{a}-m\cdot \sigma ,\phantom{\rule{0.3em}{0ex}}{\overline{I}}_{a}+m\cdot \sigma ],$$

with $\sigma$ representing the intensity standard deviation of the seed point and its neighboring voxels, and $m$ serving as a multiplier to manually control the width of the interval. The algorithm iteratively searched for all voxels with intensities within the confidence interval that were connected to the seed point or to an already segmented voxel. To avoid oversegmentation, we chose $m$ to be very small. To compensate for the resulting undersegmentation, we used multiple, randomly chosen seed points, computing multiple segmentations and combining them. Since the algorithm is sensitive to noise, we smoothed the result using a Gaussian filter, thereby interpolating small gaps and holes. Finally, we rebinarized the image using a threshold filter.

Because we also wanted to ensure that both the right and left lobes of the lung are segmented, we chose an equal number of seed points from each. We distinguished between the right and left lobes by simply dividing our bounding box in the middle.
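The confidence-interval seed growing could be sketched as below; the 6-connectivity and the 3×3×3 neighborhood used for the seed statistics are illustrative assumptions:

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, m=1.0):
    """Confidence-interval region growing (a sketch of Sec. 2.3).
    The interval [mean - m*sigma, mean + m*sigma] follows Eq. 6."""
    z, y, x = seed
    # seed statistics from a 3x3x3 neighborhood (illustrative choice)
    nb = volume[max(z-1, 0):z+2, max(y-1, 0):y+2, max(x-1, 0):x+2]
    mean, std = float(nb.mean()), float(nb.std())
    lo, hi = mean - m * std, mean + m * std
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]   # 6-connectivity
    while queue:
        cz, cy, cx = queue.popleft()
        for dz, dy, dx in offsets:
            n = (cz + dz, cy + dy, cx + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
               and not mask[n] and lo <= volume[n] <= hi:
                mask[n] = True
                queue.append(n)
    return mask
```

Running this from several random seed points and combining the masks with a logical OR yields the multi-seed scheme described above.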

### 2.4. Heart Position Approximation and Segmentation

The last procedure of our framework is the approximation of the heart position and its segmentation. For this purpose, we propose the use of a shape model of the heart generated from manually segmented training data. In this work, we used one manually segmented volume dataset to generate this model. The model is a closed mesh consisting of a number of vertices connected through edges. To yield a rough initial position and scaling factor for the model, we used the bounding box from the ribcage detection as a reference. Using the training dataset, we examined the heart position relative to the borders of the bounding box by considering the box as a normalized cube with a side length of 1. We then initialized the heart model at the same relative position in the bounding boxes of other CT volume images. A scaling factor controlling the size of the heart model was approximated using the sizes of the bounding boxes around the ribcages of the training and test datasets as references. Scaling factors were computed for all three directions and averaged to obtain the main scaling factor. This averaging results in more robust scaling, since our experiments showed that the bounding box sizes varied enough to otherwise produce unusual heart shapes.

This operation generally placed the heart model close to its supposed position. However, due to the rough initial position and scale approximation, there remained regions where the heart model overlapped the existing segmentations of the lung and bone structures. To adjust the model position, we searched for all overlapping voxels and created for each one a unit vector pointing to the center of gravity of the heart model. Thus a vector field was created. The field represents forces that push the model away from overlapping sections. After the heart model was translated by the vector field, the procedure was repeated iteratively. A decreasing weighting factor thereby ensures the convergence of the procedure. The iterative process was stopped when either no more voxels with segmentation overlap were detected, or the translational improvement fell below a specified threshold. The latter usually occurs when there is a balance between forces from opposite sides, i.e., lung and ribcage/sternum, which means that the heart model is too large to fit. We then scaled the heart model down to 95% of its size and restarted the iterative position adjustment, until finally no regions with overlapping segmentations remained.
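A sketch of this iterative adjustment is given below, assuming a hypothetical `find_overlaps` callable that returns the coordinates of model points overlapping the lung/bone segmentations; `decay`, `tol`, `max_iter`, and `shrink` are illustrative parameters, not values from the original implementation:

```python
import numpy as np

def push_translation(overlap_pts, center):
    """Mean of the unit vectors pointing from each overlapping voxel
    toward the model's center of gravity (the force field of Sec. 2.4)."""
    v = center[None, :] - overlap_pts
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return v.mean(axis=0)

def fit_heart_model(vertices, find_overlaps, decay=0.9, tol=1e-3,
                    max_iter=200, shrink=0.95):
    """Translate the model by the force field with a decreasing weight;
    if the forces balance while overlaps remain, shrink to 95% and retry."""
    verts = vertices.copy()
    while True:
        weight = 1.0
        for _ in range(max_iter):
            overlaps = find_overlaps(verts)
            if len(overlaps) == 0:
                return verts                      # no overlap: done
            center = verts.mean(axis=0)
            t = weight * push_translation(np.asarray(overlaps, float), center)
            if np.linalg.norm(t) < tol:           # balanced forces: model too large
                break
            verts = verts + t
            weight *= decay                       # ensures convergence
        center = verts.mean(axis=0)
        verts = center + shrink * (verts - center)   # scale down to 95%
```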

We note that this algorithm does not provide a segmentation of the heart that fully accommodates wide shape variations. Since the heart model is static, it cannot fully fit the actual image data. Nevertheless, it can still be used as an approximation of a segmentation result and as a new initial position for further, more advanced segmentation algorithms, such as active contour models, that have yet to be implemented.

### 2.5. Validation of Segmentation Results

As a reference for the evaluation of segmentation results, we used a gold-standard manual segmentation revised by an expert specialized in mouse anatomy. We segmented the whole skeleton, both lobes of the lung, and the heart of a CT volume image with a size of $267\times 242\times 452$ voxels on a $64\text{-bit}$ PC with a quad-core CPU $\left(2.67\phantom{\rule{0.3em}{0ex}}\mathrm{GHz}\right)$ and $4\phantom{\rule{0.3em}{0ex}}\mathrm{GB}$ of RAM. Note that the results of the bone segmentation are always identical across runs. The lung segmentation algorithm, on the other hand, randomly picks only a few of many possible seed points, thus producing varying results. Since the heart position approximation depends on the lung segmentation result, its results vary too. To account for this, we performed the segmentation process several times and report the mean performance.

As the main criterion for the evaluation, we used the Dice coefficient $0\le s\le 1$ with

## Eq. 7

$$s=\frac{2|X\cap Y|}{|X|+|Y|},$$

which measures the similarity of two sets $X$ and $Y$, i.e., the manually and automatically segmented data volumes. Other criteria were the false rejection rate (FRR) and the false acceptance rate (FAR). The Dice coefficient is a general measure of segmentation accuracy, while the FRR and FAR additionally show whether segmentation errors are due to under- or oversegmentation. The FRR measures the fraction of voxels of the manually segmented data that were not segmented by the automatic framework (undersegmentation), while the FAR measures the fraction of voxels of the automatically segmented data that do not belong to the respective tissue (oversegmentation).
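All three criteria can be computed directly from boolean masks; a minimal sketch:

```python
import numpy as np

def dice(auto, manual):
    """Dice coefficient of Eq. 7 for two boolean masks."""
    inter = np.logical_and(auto, manual).sum()
    return 2.0 * inter / (auto.sum() + manual.sum())

def frr(auto, manual):
    """Fraction of manually segmented voxels that were missed
    (undersegmentation)."""
    return np.logical_and(~auto, manual).sum() / manual.sum()

def far(auto, manual):
    """Fraction of automatically segmented voxels outside the reference
    (oversegmentation)."""
    return np.logical_and(auto, ~manual).sum() / auto.sum()
```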

## 3. Fluorescence Molecular Tomography Reconstruction

For fluorescence tomography, the propagation of photons in the tissue was modeled by using the diffusion approximation to the radiative transport equation,

## Eq. 10

$$\nabla \cdot D\left(r\right)\nabla {U}_{x}\left(r\right)-{\mu}_{a}\left(r\right){U}_{x}\left(r\right)=-q\left(r\right),$$

## Eq. 11

$$\nabla \cdot D\left(r\right)\nabla {U}_{m}\left(r\right)-{\mu}_{a}\left(r\right){U}_{m}\left(r\right)=-n\left(r\right){U}_{x}\left(r\right),$$

where $D$ and ${\mu}_{a}$ are the spatially varying diffusion and absorption coefficients, $q$ is the excitation source, $n\propto c$ is a function proportional to the fluorochrome concentration $c$, and ${U}_{x}$ and ${U}_{m}$ describe the photon density at the excitation and emission wavelengths. If $D$ and ${\mu}_{a}$ are known, the Green's functions $G(r,{r}^{\prime})$ of the diffusion operator provide a solution, leading to

## Eq. 12

$${U}_{m}\left(r\right)=-{\int}_{{r}^{\prime}\in V}G(r,{r}^{\prime})n\left({r}^{\prime}\right){U}_{x}\left({r}^{\prime}\right)d{r}^{\prime}.$$

Equation 12 can be inverted by standard methods to yield the concentration measure $n$ for each voxel ${r}^{\prime}$ of the volume data $V$. Successful inversion requires knowledge of the photon density ${U}_{x}$, which we modeled by using the same Green's functions as for ${U}_{m}$. Green's function computations were based on a finite element solution of the diffusion equation.^{33} The finite element mesh was created from the CT volume data, where the surface of the mouse itself defines the boundary of the mesh. After segmentation, average optical properties representative of the tissue type at each node were assigned to that node.

Equation 12 can be transformed into the linear system $Wx=y$ through discretization. Here, $W$ contains the contribution of the integral over $G$, $x$ is the discretized vector of the concentration values $n$, and $y$ is the vector of measurements. This system is usually ill-conditioned, and thus a stable solution can be found by minimization of a regularized residual,

## Eq. 13

$${x}_{\mathrm{rec}}=\mathrm{arg}\phantom{\rule{0.2em}{0ex}}\underset{x}{\mathrm{min}}\phantom{\rule{0.2em}{0ex}}{\Vert Wx-y\Vert}^{2}+\lambda {\Vert \mathbf{L}x\Vert}^{2}.$$

The anatomical priors from the segmentation procedure were integrated into the regularization term by using Laplace regularization, as proposed in Refs. 13, 15. Here, the matrix $\mathbf{L}$ is defined by

## Eq. 14

$$\mathbf{L}=\left[\begin{array}{cccc}{l}_{1,1}& {l}_{2,1}& \dots & {l}_{w,1}\\ {l}_{1,2}& {l}_{2,2}& \ddots & \vdots \\ \vdots & \ddots & \ddots & {l}_{w,w-1}\\ {l}_{1,w}& \dots & {l}_{w-1,w}& {l}_{w,w}\end{array}\right],$$

## Eq. 15

$${l}_{i,j}=\left\{\begin{array}{ll}1& \text{if}\phantom{\rule{0.3em}{0ex}}i=j,\\ -\frac{1}{{w}_{s}}& \text{if}\phantom{\rule{0.3em}{0ex}}\text{voxels}\phantom{\rule{0.3em}{0ex}}i,j\phantom{\rule{0.3em}{0ex}}\text{are}\phantom{\rule{0.3em}{0ex}}\text{part}\phantom{\rule{0.3em}{0ex}}\text{of}\phantom{\rule{0.3em}{0ex}}\text{the}\phantom{\rule{0.3em}{0ex}}\text{same}\phantom{\rule{0.3em}{0ex}}\text{region}\phantom{\rule{0.3em}{0ex}}s,\\ 0& \text{otherwise.}\end{array}\right.$$

The Laplace prior employed here smooths estimated fluorochrome distributions within a region, while it allows for strong differences across region boundaries. For comparison with reconstructions without anatomical *a-priori* knowledge, we also used the common Tikhonov regularization, with $\mathbf{L}=\mathbf{I}$, which does not include structural priors.
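A minimal sketch of building $\mathbf{L}$ from a per-voxel region labeling and solving the regularized problem via the normal equations is given below; taking ${w}_{s}$ as the region size and the value of the regularization weight are illustrative assumptions:

```python
import numpy as np

def laplace_matrix(labels):
    """Regularization matrix of Eqs. 14-15 for a vector of per-voxel
    region labels: 1 on the diagonal, -1/w_s between voxels of the same
    region s (w_s taken here as the region size, an assumption)."""
    n = len(labels)
    L = np.eye(n)
    for s in np.unique(labels):
        idx = np.flatnonzero(labels == s)
        w_s = len(idx)
        for i in idx:
            for j in idx:
                if i != j:
                    L[i, j] = -1.0 / w_s
    return L

def reconstruct(W, y, labels, lam=1e-2):
    """Minimize ||Wx - y||^2 + lam * ||Lx||^2 (Eq. 13) via the normal
    equations (W^T W + lam * L^T L) x = W^T y."""
    L = laplace_matrix(labels)
    A = W.T @ W + lam * (L.T @ L)
    return np.linalg.solve(A, W.T @ y)
```

Setting `L = np.eye(n)` instead reproduces the Tikhonov case without structural priors.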

## 4. Results

### 4.1. Segmentation

Figure 4 shows the empirical results of the bone segmentation. Notice that very thin bone structures like the shoulder blades exhibit holes; in our CT images, these structures show lower intensities than bone usually does, due to blurring artifacts. Overall, the results yielded a Dice coefficient of 0.8721, an FRR of 0.1062, and an FAR of 0.1485, the latter showing that these operations resulted in a slight oversegmentation. When we visually evaluated the result, we recognized that nearly all segmentation errors occurred along the borders, mainly due to blurring artifacts at the borders between different tissues. Thus we considered these errors to be within normal uncertainty bounds. The segmentation of the bones took $3.3\phantom{\rule{0.3em}{0ex}}\mathrm{sec}$, and the recognition of the ribcage took $1.2\phantom{\rule{0.3em}{0ex}}\mathrm{sec}$, which is very fast for data volumes of this size.

For the lung segmentation, we analyzed 30 segmentations of our reference image data. We found that five seed points per lung lobe were usually enough to achieve an accurate and robust segmentation result while remaining time efficient. The results are displayed in Fig. 5. The framework achieved a mean Dice coefficient of 0.766 with a variance of 0.007. Nonsegmented voxels (FRR 0.3096) had the greatest influence on this result, while oversegmentation was much smaller (FAR 0.1091). Falsely accepted voxels were usually part of the bronchial tubes outside the lung. The falsely rejected voxels were mostly voxels with considerably higher intensity, where the lung tissue showed pathologies. The speed of the lung segmentation varies, since the number of iterations of the seed growing algorithm depends on the initial seed point. Usually the segmentation was done in less than $30\phantom{\rule{0.3em}{0ex}}\mathrm{sec}$, including the search for appropriate seed points.

The heart segmentation was also performed 30 times. Figure 6 shows the adapted heart model inside the ribcage. The mean Dice coefficient was 0.7647, with a variance of only 0.0004. Considering that we only used a static model built from a single training dataset, we consider this a very good result. The error was due mainly to the rather high FRR of 0.3378, while only 0.0936 of the segmented voxels were falsely accepted. The time needed for the approximation of the heart also depends on the initial position. On our test data, it took less than $45\phantom{\rule{0.3em}{0ex}}\mathrm{sec}$ to adjust the heart model.

### 4.2. Reconstruction

Figure 7 shows the results of utilizing XCT anatomical information as priors in an FMT inversion scheme. The FMT images are overlaid on the corresponding CT slice. To simplify matters, only one slice out of the reconstructed volume is presented for each approach. For the evaluation of the reconstruction improvement using anatomical priors, we simulated a situation of inflamed lungs [Fig. 7], modeled after previous studies of lung inflammation,^{2, 34} and used three different reconstruction procedures, i.e., 1. no regularization, 2. inversion using Tikhonov regularization, and 3. Laplace regularization. The segmentation of the *in-vivo* CT imaging data was done using our framework; no simulated segmentation was used. We note that the first two approaches do not utilize the segmented image priors, and that no noise was added in the simulation.

Figure 7 also shows the inversion obtained without regularization; in this case the inversion generates significant artifacts, especially at the borders, leading to a highly inaccurate reconstruction. The Tikhonov-regularized inversion shows strong blurring of the fluorescent signal; the intensity of the signal is also too low, and a prominent spot can be recognized in one lobe of the lung although the intensity should be homogeneous. Finally, the Laplace-regularized reconstruction gives the best results, due to the priors: the fluorescence intensity was reconstructed accurately, it is distributed homogeneously in the lung, and only small blurring artifacts occur along the borders.

## 5. Discussion

We have introduced an automatic segmentation scheme for bones, lungs, and the heart for streamlining FMT-XCT inversion. The framework combines several segmentation and signal processing methods in a fully automatic manner. Another advantage of the framework is its speed: the segmentation, even for very large volume data, was done in less than $2\phantom{\rule{0.3em}{0ex}}\mathrm{min}$. This renders the approach well suited for seamless integration into the FMT reconstruction of our hybrid FMT/XCT imaging system. We validated the quality of the segmentation against a gold-standard manual segmentation.

However, the framework does not yet exploit its full potential. Most of the parameters of the algorithms were chosen by educated guess and were only roughly adapted by examining the measured segmentation quality. We think that optimizing these parameters could improve the segmentation results even further. Most notably, three parts of the framework would, in our opinion, benefit from a closer analysis of the parameter values: 1. the computation of the threshold for bone segmentation, where the parameter $w$ [Eq. 1] could be adapted to achieve better bone segmentation; 2. the detection of seed points for lung segmentation, where the interval used to detect those points could be adapted to yield seed points better suited for the subsequent region growing; and 3. the parameters $\sigma$ and $m$ of the seed growing itself, which heavily influence the algorithm and whose optimal values are unknown. It should also be considered that the segmentation of the lung and heart depends on the correct detection of the ribcage, whose robustness could not yet be evaluated; this essential part of the framework should be investigated and improved further. Also, the accuracy of the heart segmentation could be improved significantly. Here, the static, nondeformable model proves to be a disadvantage, since it cannot fully adapt to shape variations. Nevertheless, the approach can be used to initialize more complex segmentation methods, such as deformable models that use flexible meshes to overcome this handicap.

We have also shown how the gained anatomical information can be used as *a-priori* knowledge for the reconstruction of FMT images, and demonstrated in simulations that this increases FMT image quality considerably. Further studies have to be conducted to confirm this behavior for real FMT measurements in *in-vivo* experiments; from our experience, we expect notable FMT image quality improvements in those studies as well. A further conclusion is that more structures should be segmented for even more accurate FMT reconstruction results. It also has to be discussed how accurate the segmentations need to be, and whether more time-consuming and complex segmentation algorithms are actually necessary and practical, because there will always be a tradeoff between speed and accuracy.

## Acknowledgments

We acknowledge the help of our laboratory technicians Claudia Mayerhofer and Christoph Drebinger, who cared for our animals and provided much help with our experiments. We also thank Harry Höllig, Peter Hamm, and Thomas Jetzfellner for many fruitful discussions and their valuable outside perspectives, which helped us see some problems from other angles. This work was partly funded by the European Commission FP7 FMT-XCT project.

## References

1. “Optical tomography in medical imaging,” Inverse Probl., 15(2), R41–R93 (1999). https://doi.org/10.1088/0266-5611/15/2/022
2. “Looking and listening to light: the evolution of whole-body photonic imaging,” Nat. Biotechnol., 23(3), 313–320 (2005). https://doi.org/10.1038/nbt1074
3. “Free-space fluorescence molecular tomography utilizing 360-deg geometry projections,” Opt. Lett., 32(4), 382–384 (2007). https://doi.org/10.1364/OL.32.000382
4. “Free-space propagation of diffuse light: theory and experiments,” Phys. Rev. Lett., 91(10), 103901 (2003). https://doi.org/10.1103/PhysRevLett.91.103901
5. “Noncontact optical tomography of turbid media,” Opt. Lett., 28(18), 1701–1703 (2003). https://doi.org/10.1364/OL.28.001701
6. “Experimental fluorescence tomography of tissues with noncontact measurements,” IEEE Trans. Med. Imaging, 23(4), 492–500 (2004). https://doi.org/10.1109/TMI.2004.825633
7. “Comparison of noncontact and fiber-based fluorescence-mediated tomography,” Opt. Lett., 31(6), 769–771 (2006). https://doi.org/10.1364/OL.31.000769
8. “Symmetries, inversion formulas, and image reconstruction for optical tomography,” Phys. Rev. E, 70(5), 056616 (2004). https://doi.org/10.1103/PhysRevE.70.056616
9. “Multiple projection optical diffusion tomography with plane wave illumination,” Phys. Med. Biol., 50(10), 2351–2364 (2005). https://doi.org/10.1088/0031-9155/50/10/012
10. “On the convergence of the Born series in optical tomography with diffuse light,” Inverse Probl., 23(4), 1445–1465 (2007). https://doi.org/10.1088/0266-5611/23/4/006
11. “Reconstructing absorption and diffusion shape profiles in optical tomography by a level set technique,” Opt. Lett., 31(4), 471–473 (2006). https://doi.org/10.1364/OL.31.000471
12. “Data subset algorithm for computationally efficient reconstruction of 3-D spectral imaging in diffuse optical tomography,” Opt. Express, 14(12), 5394–5410 (2006). https://doi.org/10.1364/OE.14.005394
13. “Image-guided diffuse optical fluorescence tomography implemented with Laplacian-type regularization,” Opt. Express, 15(7), 4066–4082 (2007). https://doi.org/10.1364/OE.15.004066
14. “Combined diffuse optical tomography (DOT) and MRI system for cancer imaging in small animals,” Technol. Cancer Res. Treat., 5(4), 351–363 (2006).
15. “Diffuse optical tomography with a priori anatomical information,” Phys. Med. Biol., 50(12), 2837–2858 (2005). https://doi.org/10.1088/0031-9155/50/12/008
16. “Optimal linear inverse solution with multiple priors in diffuse optical tomography,” Appl. Opt., 44(10), 1948–1956 (2007). https://doi.org/10.1364/AO.44.001948
17. “Structural information within regularization matrices improves near infrared diffuse optical tomography,” Opt. Express, 15(13), 8043–8058 (2007). https://doi.org/10.1364/OE.15.008043
18. “Image-guided optical spectroscopy provides molecular-specific information in vivo: MRI-guided spectroscopy of breast cancer hemoglobin, water, and scatterer size,” Opt. Lett., 32(8), 933–935 (2007). https://doi.org/10.1364/OL.32.000933
19. Rev. Sci. Instrum., 79(6), 064302 (2008). https://doi.org/10.1063/1.2919131

*Magnetic resonance-coupled fluorescence tomography scanner for molecular imaging of tissue***,” Neuroimage, 44 (4), 1304 –1311 (2009). https://doi.org/10.1016/j.neuroimage.2008.10.038 1053-8119 Google Scholar**

*Hybrid FMT-CT imaging of amyloid-beta plaques in a murine Alzheimer’s disease model***,” Phys. Med. Biol., 52 (18), 5569 –5585 (2007). https://doi.org/10.1088/0031-9155/52/18/007 0031-9155 Google Scholar**

*Fluorescence diffuse optical tomography with functional and anatomical a priori information: feasibility study***,” Rev. Sci. Instrum., 69 (12), 4221 –4233 (1998). https://doi.org/10.1063/1.1149235 0034-6748 Google Scholar**

*Time-correlated single photon counting imager for simultaneous magnetic resonance and near-infrared mammography***,” P. Natl. Acad. Sci. USA, 97 (6), 2767 –2772 (2000). https://doi.org/10.1073/pnas.040570597 Google Scholar**

*Concurrent MRI and diffuse optical tomography of breast after indocyanine green enhancement***,” Adv. Exp. Med. Biol., 530 215 –224 (2003). 0065-2598 Google Scholar**

*Hemoglobin imaging with hybrid magnetic resonance and near-infrared diffuse tomography***,” IEEE Trans. Med. Imaging, 29 (2), 365 –373 (2010). https://doi.org/10.1109/TMI.2009.2035310 0278-0062 Google Scholar**

*Hybrid system for simultaneous fluorescence and x-ray computed tomography***,” Radiology, 201 (2), 359 –364 (1996). 0033-8419 Google Scholar**

*Automatic liver segmentation technique for three-dimensional visualisation of CT data***,” IEEE Trans. Med. Imaging, 20 (6), 490 –498 (2001). https://doi.org/10.1109/42.929615 0278-0062 Google Scholar**

*Automatic lung segmentation for accurate quantitation of volumetric x-ray CT images***,” 415 –418 (2000). Google Scholar**

*A fast approach to lung segmentation in x-ray CT images***,” 1081 –1086 (2006). Google Scholar**

*Automatic single-organ segmentation in computed tomography images***,” Comput. Med. Imaging Graph., 24 (1), 1 –9 (2000). https://doi.org/10.1016/S0895-6111(99)00039-7 0895-6111 Google Scholar**

*Automatic segmentation of the colon for virtual colonoscopy***,” Opt. Lett., 26 (12), 893 –895 (2001). https://doi.org/10.1364/OL.26.000893 0146-9592 Google Scholar**

*Experimental three-dimensional fluorescence reconstruction of diffuse media by use of a normalized Born approximation***,” IEEE Trans. Med. Imaging, 24 (10), 1377 –1386 (2005). https://doi.org/10.1109/TMI.2005.857213 0278-0062 Google Scholar**

*Accuracy of fluorescent tomography in the presence of heterogeneities: study of the normalized born ratio***,” ACM Trans. Math. Softw., 50 24.21 –24 (2007). https://doi.org/10.1145/1268776.1268779 0098-3500 Google Scholar**

*deal.II—a general-purpose object-oriented finite element library***,” J. Appl. Physiol., 104 (3), 795 –802 (2008). https://doi.org/10.1152/japplphysiol.00959.2007 8750-7587 Google Scholar**

*Visualization of pulmonary inflammation using noninvasive fluorescence molecular imaging*