1 May 2010 Fast automatic segmentation of anatomical structures in x-ray computed tomography images to improve fluorescence molecular tomography reconstruction
Abstract
The recent development of hybrid imaging scanners that integrate fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) allows the utilization of x-ray information as image priors for improving optical tomography reconstruction. To fully capitalize on this capacity, we consider a framework for the automatic and fast detection of different anatomic structures in murine XCT images. To accurately differentiate between different structures such as bone, lung, and heart, a combination of image processing steps including thresholding, seed growing, and signal detection are found to offer optimal segmentation performance. The algorithm and its utilization in an inverse FMT scheme that uses priors is demonstrated on mouse images.

1.

Introduction

Optical tomography has drawn significant attention in recent years due to its operational simplicity and the rich contrast offered, especially when employing targeted fluorochromes. Fluorescence molecular tomography (FMT) in particular has been shown to be capable of resolving highly versatile cellular and subcellular contrast in whole animals1, 2 in vivo and noninvasively. There have been significant technological developments in FMT methods, especially associated with 360-deg projection free-space techniques that avoid the use of matching fluids,3, 4, 5, 6 the use of charge-coupled device (CCD) cameras for high spatial sampling of data fields,7 and the development of fast tomographic algorithms that impart quantitative 3-D imaging.8, 9, 10, 11, 12 In addition, the use of early photons has been shown to further improve imaging over constant-intensity illumination data. These developments essentially bring out the full potential of stand-alone diffuse optical tomography methods.

The use of image priors has also been considered for further improving the performance of optical tomography reconstruction over stand-alone systems.13, 14, 15, 16, 17 A common approach is the utilization of anatomical information for the construction of a more accurate solution to the forward problem, or for the regularization of the ill-posed inverse problem, resulting in improved image fidelity and resolution. To capitalize on the improvements offered by the use of image priors, there has been recent interest in the development of hybrid imaging systems.18, 19, 20, 21, 22, 23, 24 Our group has recently developed a fully integrated FMT x-ray computed tomography (XCT) scanner, where all optical and CT components are mounted on a common gantry.25 This modality provides accurately registered CT data that can be used to improve FMT image quality. A requirement that consequently arose is the segmentation of the CT data to identify different organs or structures in the imaged tissue. This is important for three main reasons. First, the identification of different structures and their corresponding interfaces allows the generation of more accurate numerical meshes for the optical tomography problem. Second, it allows for the assignment of optical properties, based on knowledge of the optical properties of the segmented organ or structure, since there is no direct relation between x-ray CT intensity and optical attenuation. Finally, the resolved structures can be used to guide the inversion scheme, as further explained in the methods.

We therefore considered an automatic segmentation scheme for streamlining the FMT-XCT inversion. Several approaches have been suggested in the past for automated segmentation of medical CT images.26, 27, 28, 29, 30 However, the segmentation and subsequent utilization of the results in the FMT-XCT code required different image processing approaches compared to published methods for medical CT data. The differences can be attributed to the use of μCT data, i.e., data of varying noise levels and reduced image contrast between organs compared to clinical CT data.

In addition, the work here considers an automatic scheme for integrating the segmented data into the FMT inversion. Particular attention has been given to computational efficiency, in order to reach fast inversion times; for this reason, we employed low dimensional spaces and adaptive parameter definitions that can be handled with minimal computing requirements in terms of memory and CPU time.

In the following, we introduce the framework developed, examined for segmentation in the torso, as it relates to the study of lung disease. We present the segmentation tools employed, their performance with experimental mouse images, and the consequent integration of the results into a finite-element method (FEM)-based FMT inversion code.

2.

Automatic Detection of Anatomical Structures

Automatic detection of specific structures has been of great interest in medical imaging fields, and different approaches have been developed over the last few decades for image segmentation. Typically, the solutions presented work optimally for a particular set of problems and cannot be generalized to every segmentation task. We consider segmentation of three major structures in the mouse torso, i.e., skeletal tissue, lung, and heart. The image data were acquired using a commercial micro-CT25 with a tube voltage of 80 kV and a tube current of 450 μA. The selection of the torso was driven by an elevated interest in studying lung cancer and lung inflammatory diseases, such as asthma and chronic obstructive pulmonary disease (COPD), associated with pharmacological studies. We found that each tissue required different image processing steps for optimal segmentation, as described in the following.

2.1.

Bone Segmentation

Since bone structures exhibit high contrast in CT images, they can be easily identified with a conventional application of a threshold, which conveniently is also a fast operation. To automatically assign a threshold, we examined the histogram of the intensities of the CT volume data. When dealing with a normalized scale like the Hounsfield scale, it is straightforward to select a certain threshold that divides the image into bone and background. However, in this work we make no assumption on the CT data scaling, so that the method can work seamlessly with different CT acquisition parameters and datasets, since in small animal imaging there is less standardization of the acquired data compared to clinical imaging. The analysis of many histograms of our CT data showed that there are no significant features representing the intensity of bone tissue, like a local maximum or minimum, that could be traced; the intensity of bone is usually widely distributed throughout the histogram. Therefore, we approximated our threshold Tb by finding other distinct intensities, and assumed a linear relationship between those intensities and the threshold we need. Mathematically, this can be described by

Eq. 1

Tb = I1 + w(|I1 − I2|),
with I1 and I2 being the reference intensity points and w being a factor for weighting the distance between those points.

When considering a typical histogram of a mouse CT, two distinct peaks can be noticed that can be used as reference points (Fig. 1). The highest peak is found at the left side of the histogram and corresponds to voxels of very low density, in this case primarily the voxels corresponding to the air in the field of view surrounding the animal. A second significant peak, corresponding to water, can also be easily identified as a second maximum in the histogram. Soft tissue contains high amounts of water, and the area around that peak essentially indicates voxels corresponding to soft tissue. These two peaks can be employed for approximating the optimal threshold in Eq. 1. To determine the scaling factor w in Eq. 1, we considered the Hounsfield scale, since it is a common standard for CT images. In the Hounsfield scale, air has an intensity of −1000 Hounsfield units (HU), water has an intensity of 0 HU, and bone structures start at about 400 HU, i.e., at 0.4 of the water-air difference. Thus in the Hounsfield scale, the threshold for bone structures Tb at 400 HU can be determined by rewriting Eq. 1 as

Eq. 2

Tb = Iwater + 0.4(Iwater − Iair),
with Iair and Iwater being the intensities of air and water. This equation was utilized to compute the threshold in CT volume data with arbitrary units. The respective peaks representing the intensities of air and water can be determined by simple maxima detection in the corresponding histogram. Using this threshold, the CT images were converted to binary for subsequent processing, as described in the following.
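As a minimal sketch of this thresholding step, the peak positions and Eq. 2 can be computed directly from the volume histogram. Note that the number of bins and the heuristic used to separate the water peak from the air peak are our illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def bone_threshold(volume, bins=512, w=0.4):
    """Estimate Tb = Iwater + w * (Iwater - Iair) from the volume histogram.

    The air peak is taken as the global histogram maximum (the low-density
    background surrounding the animal); the water/soft-tissue peak as the
    largest maximum sufficiently far to its right. Both `bins` and the
    peak-separation offset are illustrative choices.
    """
    counts, edges = np.histogram(np.asarray(volume).ravel(), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    i_air = int(np.argmax(counts))        # highest peak: air
    start = i_air + bins // 10            # skip past the air peak
    i_water = start + int(np.argmax(counts[start:]))
    return centers[i_water] + w * (centers[i_water] - centers[i_air])
```

With intensities roughly on the Hounsfield scale (air near −1000, water near 0), this returns a threshold close to the 400 HU value discussed above.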

Fig. 1

From the CT volume data, a histogram of the intensities was computed. In this example, the two distinct peaks (arrows) represent air and soft tissue, which consists mostly of water.


2.2.

Ribcage Detection

For segmentation of the lung and heart, we first considered the identification of orientation points to serve as initial points for subsequent segmentation steps. In this role, the ribcage serves as an easily identifiable structure that accurately delineates a large part of the outer surfaces of the lung and heart. To identify the ribcage, we analyzed the result of the bone segmentation by computing a histogram of the number of segmented bone voxels per axial slice. We treated the histogram as a signal ha(t), where t is the slice number. Within this signal, the ribcage creates a distinct harmonic oscillation [Fig. 2]. To detect this periodicity, we opted for a Gabor filter, essentially a Gaussian function multiplied by a cosine, i.e.,

Eq. 3

gσ,λ(t) = exp(−t² / 2σ²) cos(2πt / λ),
where the parameters σ and λ define the width and frequency of the filter.

Fig. 2

(a) The histogram function ha(t) displays the number of segmented bone voxels per slice. Note the regular oscillation produced by the ribcage between the dotted lines, which mark the beginning and the end of the sternum. (b) The response of the Gabor filter has its highest peak close to the center of the ribcage. The solid line connecting all peaks is the interpolation of the maxima of the filter response.


This approach is similar to template matching, where the Gabor function describes the periodic oscillation of the ribcage. Since the frequency of the Gabor filter needs to match the frequency the ribcage produces in ha(t), specific values for σ and λ had to be defined. To determine those values, we analyzed the frequency produced by the ribcage in three training datasets and adjusted σ and λ so that the Gabor filter fit this frequency. When performing the ribcage detection on unknown test data, we used the differences in voxel spacing and voxel size between the training and test datasets to compute a scaling factor for σ and λ; thus the procedure is independent of scaled image data. Inherent in this procedure is the assumption that the size of the imaged mice does not vary significantly and that the ribs have a distinct separation from each other. To apply the filter, we convolved the histogram signal ha(t) with the Gabor filter gσ,λ(t). The result of the convolution, ha(t) ∗ gσ,λ(t), is a filter response that usually had its global maximum near the axial center of the ribcage [Fig. 2]. Furthermore, we used just the local maxima of the filter response to interpolate a new function [also Fig. 2]. In this function, the nearest minima to the left and right of the global maximum (the center of the ribcage) were selected as landmark points that defined a bounding box in the axial direction around the ribcage. Those landmarks do not necessarily mark specific anatomical points, but usually they appear near the top and bottom endings of the sternum.
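The filter-and-locate step above can be sketched as follows. The filter support (three standard deviations) and the use of a "same"-length convolution are our assumptions about one reasonable realization, not the exact implementation:

```python
import numpy as np

def gabor(sigma, lam):
    """Gabor filter of Eq. 3: a Gaussian envelope times a cosine."""
    t = np.arange(-int(3 * sigma), int(3 * sigma) + 1, dtype=float)
    return np.exp(-t**2 / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * t / lam)

def ribcage_center(ha, sigma, lam):
    """Convolve the per-slice bone-voxel count ha(t) with the Gabor filter
    and return the slice index of the strongest response, i.e. an estimate
    of the axial center of the ribcage."""
    response = np.convolve(np.asarray(ha, dtype=float),
                           gabor(sigma, lam), mode="same")
    return int(np.argmax(response))
```

On a signal containing a periodic burst whose period matches λ, the response peaks inside the burst, mirroring how the rib oscillation is located in ha(t).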

To define the bounding box also in the sagittal and coronal directions, we computed the histograms hs(t) and hc(t) indicating the number of segmented bone voxels in these directions. Note that we only used the slices between the axial landmark points to compute those histograms. In the histograms, we searched for the global maxima and the first slices to their left and right where hs(t) and hc(t), respectively, equal zero, that is, where the ribcage ends. We thus confined our bounding box to the ribcage and excluded artifacts outside the mouse that sometimes occur. If this detection scheme experiences difficulties due to noise and artifacts, it is possible to instead search for the slices where hs(t) and hc(t) become smaller than a predetermined value greater than zero, to obtain a bounding box tight around the ribcage. Overall, ribcage determination is an essential step for the further detection of anatomic structures, as described in the following.

2.3.

Lung Segmentation

To enable lung segmentation, we utilized a seed growing algorithm within the confines of the detected ribcage. For this purpose, possible seed points inside the lungs had to be found automatically. The whole respiratory system can be naturally recognized in XCT images by means of its low density and the correspondingly high contrast to surrounding tissue. However, image intensity and contrast alone did not suffice for accurate detection, because the bounding box of the seed growing algorithm was usually still wide enough to occasionally contain regions of air outside the mouse, or parts of the digestive system that also showed low intensity and contrast. Other challenges involved the blurring of borders due to possible motion artifacts. To refine the region of interest and achieve a correct segmentation result, we created a spherical region of interest (SROI) inside the bounding box. The SROI was initialized as a sphere of radius zero at the center of the bounding box [see Fig. 3] and was allowed to grow until its radius measured 90% of the distance between the center of the sphere and the boundary of the mouse. In Fig. 3, a CT image is displayed with the corresponding bounding box (solid blue line) and the SROI (bright circle).

Fig. 3

(a) The image shows a CT slice including the SROI (brighter spherical region), the computed bounding box (solid line), and possible seed points (white spots). The dotted line marks the middle of the bounding box, dividing right and left lobes of the lung. (b) The graph shows the intensity distribution within the region of interest. The peak represents water in the soft tissue.


To find seed points inside the SROI, we computed an intensity histogram from all voxels inside the SROI [Fig. 3]. Here, the voxels with the lowest intensity Ilow mark the dark bronchial tubes, and the high peak Ipeak marks soft tissue. We took these easy-to-detect points as references to compute an interval [I1, I2], where

Eq. 4

I1 = (2/3) Ilow + (1/3) Ipeak,
and

Eq. 5

I2 = I1 + 0.5 (Ipeak − I1),
which together delimit the intensity range of voxels that consequently belong to lung tissue. The parameters of Eqs. 4, 5 were roughly determined empirically using about five datasets. All voxels of the SROI whose intensity fell within this interval were considered possible seed points for the seed growing algorithm; these candidate seed points are marked in Fig. 3.
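The interval of Eqs. 4 and 5 and the resulting candidate-seed selection can be sketched as follows; the SROI mask is assumed to be given, and the function names are illustrative:

```python
import numpy as np

def seed_interval(I_low, I_peak):
    """Candidate-seed intensity interval from Eqs. 4 and 5."""
    I1 = (2.0 / 3.0) * I_low + (1.0 / 3.0) * I_peak
    I2 = I1 + 0.5 * (I_peak - I1)
    return I1, I2

def candidate_seeds(volume, sroi_mask, I_low, I_peak):
    """Coordinates of voxels inside the SROI whose intensity lies in [I1, I2]."""
    I1, I2 = seed_interval(I_low, I_peak)
    return np.argwhere(sroi_mask & (volume >= I1) & (volume <= I2))
```

For example, with Ilow = −900 and Ipeak = 0 the interval evaluates to [−600, −300], well below soft tissue but above the darkest airways.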

For the seed growing algorithm itself, a mean intensity Īa was defined using the chosen seed point s and its neighboring voxels. A confidence interval was also defined by

Eq. 6

[Īa − mσ, Īa + mσ],
with σ representing the standard deviation of the intensities of the seed point and its neighboring voxels, and m serving as a multiplier to manually control the width of the interval. The algorithm iteratively searched for all voxels whose intensities lay within the confidence interval and that were connected to the seed point or to an already segmented voxel. To avoid oversegmentation, we chose m to be very small. To compensate for the resulting undersegmentation, we used multiple, randomly chosen seed points, thereby computing multiple segmentations and combining them. Since the algorithm is sensitive to noise, we smoothed the result using a Gaussian filter, thereby interpolating small gaps and holes. Finally, we rebinarized the image using a threshold filter.
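A 2-D sketch of the confidence-interval seed growing follows; the choice of a 3 × 3 patch for the statistics and of 4-connectivity for growth are simplifying assumptions, and the subsequent Gaussian smoothing and rebinarization steps are omitted:

```python
import numpy as np
from collections import deque

def grow_region(img, seed, m=1.5):
    """Grow a region from `seed` over 4-connected pixels whose intensity lies
    in [mean - m*std, mean + m*std], the statistics taken from the seed's
    3x3 neighborhood (Eq. 6)."""
    y, x = seed
    patch = img[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
    lo, hi = patch.mean() - m * patch.std(), patch.mean() + m * patch.std()
    seg = np.zeros(img.shape, dtype=bool)
    seg[seed] = True
    queue = deque([seed])
    while queue:
        cy, cx = queue.popleft()
        for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not seg[ny, nx] and lo <= img[ny, nx] <= hi):
                seg[ny, nx] = True
                queue.append((ny, nx))
    return seg
```

Combining several such runs from different random seeds, as described above, counteracts the undersegmentation caused by a small m.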

Because we also wanted to ensure that both the right and left lobes of the lung are segmented, we chose an equal number of seed points from both. We distinguished between the right and left lobes by simply dividing our bounding box in the middle.

2.4.

Heart Position Approximation and Segmentation

The last procedure of our framework is the approximation of the heart position and its segmentation. For this purpose, we propose the use of a shape model of the heart generated from manually segmented training data. In this work, we used one manually segmented volume dataset to generate this model. The model is a closed mesh consisting of a number of vertices connected through edges. To obtain a rough initial position and scaling factor for the model, we used the bounding box from the ribcage detection as a reference. Using the training dataset, we examined the heart position relative to the borders of the bounding box by considering the box as a normalized cube with a side length of 1. We then initialized the heart model at the same relative position in the bounding boxes of other CT volume images. A scaling factor controlling the size of the heart model was approximated using the sizes of the bounding boxes around the ribcages of the training and test datasets as references. Scaling factors were computed for all three directions and averaged to obtain the main scaling factor. This averaging results in a more robust scaling, since our experiments showed that the bounding box sizes varied enough that direction-wise scaling produced unusual heart shapes.

This operation generally placed the heart model close to its supposed position. However, it also produced regions where the heart model overlapped the other segmentations of the lung and bone structures, because of the rough initial position and scale approximation. To adjust the model position, we searched for all overlapping voxels and created for each one a unit vector pointing to the center of gravity of the heart model. Thus a vector field was created, representing forces that push the model away from overlapping sections. After the heart model was translated by the vector field, the procedure was repeated iteratively; a decreasing weighting factor thereby ensures the convergence of the procedure. The iterative process was stopped when either no more voxels with segmentation overlap were detected, or the translational improvement fell beneath a specified threshold. The latter usually occurs when there is a balance between forces from opposite sides, i.e., lung and ribcage/sternum, which means that the heart model is too large to fit. We then scaled the heart model down to 95% of its size and restarted the iterative position adjustment until finally no regions with overlapping segmentations remained.
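The iterative push-away step can be sketched as follows. Here `overlap_points` is a hypothetical callable returning the coordinates of currently overlapping voxels for a given model center; in the full algorithm these would come from re-intersecting the translated mesh with the lung and bone segmentations, and the model would also be rescaled when the translation stalls:

```python
import numpy as np

def adjust_position(center, overlap_points, step=1.0, decay=0.8, max_iter=50):
    """Translate a model center away from overlapping voxels.

    Each overlapping voxel contributes a unit vector pointing toward the
    model center; their mean translates the model, with a decaying step
    size (the "decreasing weighting factor") to guarantee convergence.
    """
    c = np.asarray(center, dtype=float)
    for _ in range(max_iter):
        pts = overlap_points(c)
        if len(pts) == 0:           # no overlap left: done
            break
        vecs = c - pts              # from each overlap voxel toward the center
        norms = np.linalg.norm(vecs, axis=1, keepdims=True)
        force = (vecs / np.maximum(norms, 1e-9)).sum(axis=0)
        shift = step * force / len(pts)
        if np.linalg.norm(shift) < 1e-3:   # stalled: model too large to fit
            break
        c += shift
        step *= decay
    return c
```

A toy obstacle at the origin pushes a nearby center outward until the overlap test clears.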

We note that this algorithm does not provide a segmentation of the heart that fully incorporates wide shape variations. Since the heart model is static, it cannot fully fit the actual image data. Nevertheless, it still can be used as an approximation of a segmentation result and as a new initial position for further, more advanced segmentation algorithms like active contour models that have yet to be implemented.

2.5.

Validation of Segmentation Results

As a reference for the evaluation of segmentation results, we used gold-standard manual segmentation revised by an expert specialized in mouse anatomy. We segmented the whole skeleton, both lobes of the lung, and the heart of a CT volume image with a size of 267 × 242 × 452 voxels on a 64-bit PC with a quad-core CPU (2.67 GHz) and 4 GB of RAM. Note that the results of the bone segmentation are deterministic. The lung segmentation algorithm, on the other hand, randomly picks only a few of many possible seed points, thus producing different results on each run. Since the heart position approximation depends on the lung segmentation result, those results vary too. To account for this, we performed the segmentation process several times and report the mean performance.

As a main criterion for the evaluation, we used the Dice coefficient s, with 0 ≤ s ≤ 1, defined by

Eq. 7

s = 2|X ∩ Y| / (|X| + |Y|),
which measures the similarity of two sets X and Y , i.e., the manually and automatically segmented data volumes. Other criteria were the false rejection rate (FRR) and the false acceptance rate (FAR)

Eq. 8

FRR = (|X| − |X ∩ Y|) / |X| = 1 − |X ∩ Y| / |X|,

Eq. 9

FAR = (|Y| − |X ∩ Y|) / |Y| = 1 − |X ∩ Y| / |Y|.
The Dice coefficient is a general measure of segmentation accuracy, while the FRR and FAR additionally show whether segmentation errors are due to over- or undersegmentation. The FRR measures the fraction of voxels of the manually segmented data that were not segmented by the automatic framework (undersegmentation), while the FAR measures the fraction of automatically segmented voxels that do not belong to the respective tissue (oversegmentation).
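The three criteria of Eqs. 7, 8, 9 can be computed directly from a pair of binary masks:

```python
import numpy as np

def dice_frr_far(X, Y):
    """Dice coefficient, false rejection rate, and false acceptance rate of
    two binary masks: X is the manual gold standard, Y the automatic result."""
    X, Y = np.asarray(X, dtype=bool), np.asarray(Y, dtype=bool)
    inter = np.logical_and(X, Y).sum()
    dice = 2.0 * inter / (X.sum() + Y.sum())
    frr = 1.0 - inter / X.sum()     # gold-standard voxels missed
    far = 1.0 - inter / Y.sum()     # segmented voxels outside the tissue
    return dice, frr, far
```

For two 6-voxel sets overlapping in 4 voxels, this yields s = 8/12 ≈ 0.667 and FRR = FAR = 1/3.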

3.

Fluorescence Molecular Tomography Reconstruction

For fluorescence tomography, the propagation of photons in the tissue was modeled by using the diffusion approximation to the radiative transport equation

Eq. 10

[−∇ · D(r)∇ + μa(r)] Um(r) = n(r) Ux(r),
where D and μa are the spatially varying diffusion and absorption coefficients, n(r) is a function proportional to the fluorochrome concentration c, and Ux and Um describe the photon densities at the excitation and emission wavelengths. If D and μa are known, Green's functions G(r,r′) can be defined by

Eq. 11

[−∇ · D(r)∇ + μa(r)] G(r,r′) = δ(r − r′),
leading to

Eq. 12

Um(r) = ∫r′∈V G(r,r′) n(r′) Ux(r′) dr′.
In addition, to eliminate the influence of varying source intensities and detector sensitivities and to correct for heterogeneous optical coefficients, we used the normalized ratio between fluorescence and transmittance, Um/Ux, as presented in Refs. 31, 32.

Equation 12 can be inverted by standard methods to yield the concentration measure n for each voxel r′ of the volume V. Successful inversion requires knowledge of the photon density Ux, which we modeled using the same Green's functions as for Um. Green's function computations were based on a finite element solution of the diffusion equation.33 The finite element mesh was created based on the CT volume data, where the surface of the mouse itself defines the boundary of the mesh. After segmentation, average optical properties representative of the tissue type at each node were assigned to that node.

Equation 12 can be transformed into the linear system Wx = y through discretization. Here, W contains the contributions of the integral over G, x is the discretized vector of the concentration values n, and y is the vector of measurements. This system is usually ill-conditioned; a stable solution can thus be found by minimization of a regularized residual

Eq. 13

‖Wx − y‖² + λ‖Lx‖² → min.
The anatomical priors from the segmentation procedure were integrated in the regularization term by using Laplace regularization as proposed in Refs. 13, 15. Here, matrix L is defined by

Eq. 14

L = [ l1,1  l1,2  ⋯  l1,w ]
    [ l2,1  l2,2  ⋯  l2,w ]
    [  ⋮     ⋮    ⋱   ⋮   ]
    [ lw,1  lw,2  ⋯  lw,w ],
where w is the number of voxels in the CT data volume and the entries li,j are given by

Eq. 15

li,j =  1        if i = j,
       −1/ws     if voxels i and j belong to the same region s,
        0        otherwise,
with ws being the number of voxels in region s . The regions are defined by the segmentations, thus utilizing spatial information in the reconstruction.

The Laplace prior employed here smooths the estimated fluorochrome distribution within a region while allowing strong differences across region boundaries. For comparison to reconstructions without anatomical a-priori knowledge, we also used the common Tikhonov regularization, with L = Id, which does not include structural priors.
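A small numerical sketch of this regularized inversion follows, using dense matrices and a direct normal-equations solve purely for illustration; the actual FMT system is far larger and would be handled with sparse or iterative solvers:

```python
import numpy as np

def laplace_prior(labels):
    """Regularization matrix of Eqs. 14-15 from per-voxel region labels:
    diagonal entries 1, entries -1/ws between distinct voxels of the same
    region s with ws voxels, and 0 elsewhere (dense, small examples only)."""
    labels = np.asarray(labels)
    L = np.eye(labels.size)
    for s in np.unique(labels):
        idx = np.flatnonzero(labels == s)
        for i in idx:
            for j in idx:
                if i != j:
                    L[i, j] = -1.0 / idx.size
    return L

def regularized_inverse(W, y, L, lam):
    """Minimize ||Wx - y||^2 + lam*||Lx||^2 via the normal equations
    (W^T W + lam * L^T L) x = W^T y."""
    A = W.T @ W + lam * (L.T @ L)
    return np.linalg.solve(A, W.T @ y)
```

Passing L = Id recovers the Tikhonov case used for comparison, while a label vector from the segmentation yields the anatomically informed Laplace prior.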

4.

Results

4.1.

Segmentation

Figure 4 shows the empiric results of the bone segmentation. Notice that very thin bone structures like the shoulder blades exhibit holes; in our CT images, these structures show lower intensities than bone usually does, due to blurring artifacts. Overall, the results yielded a Dice coefficient of 0.8721, an FRR of 0.1062, and an FAR of 0.1485, which shows that these operations resulted in slight oversegmentation. When we visually evaluated the result, we recognized that nearly all segmentation errors occurred along the borders, mainly due to blurring artifacts at the borders between different tissues. Thus we considered these errors to be within normal uncertainty bounds. The segmentation of the bones took 3.3 sec, and the recognition of the ribcage took 1.2 sec, which is very fast for data volumes of this size.

Fig. 4

The result of typical bone segmentation: (a) the original CT slice, (b) the corresponding slice of the segmented data, and (c) surface model of the skeleton computed from the segmentation result.


For the lung segmentation, we analyzed 30 segmentations of our reference image data. We found that five seed points per lobe of the lung were usually enough to achieve an accurate and robust segmentation result while still being time efficient. The results are displayed in Fig. 5. The framework achieved a mean Dice coefficient of 0.766 with a variance of 0.007. Nonsegmented voxels (FRR 0.3096) had the greatest influence on this result, while oversegmentation was much smaller (FAR 0.1091). Falsely accepted voxels were usually part of the bronchial tubes outside the lung. The falsely rejected voxels mostly had a considerably higher intensity, where the lung tissue showed pathologies. The speed of the lung segmentation varies, since the number of iterations of the seed growing algorithm depends on the initial seed points. Usually the segmentation finished in less than 30 sec, including the search for appropriate seed points.

Fig. 5

Result of the lung segmentation: (a) the original CT slice, (b) the corresponding slice of the segmented data, and (c) surface model of the skeleton and the lung computed from the segmentation results.


The heart segmentation was also performed 30 times. Figure 6 shows the adapted heart model inside the ribcage. The mean Dice coefficient was 0.7647, with a variance of only 0.0004. Considering that we used only a static model built from a single training dataset, we consider this a very good result. Most of the error was due to the rather high FRR of 0.3378, while only 0.0936 of the segmented voxels were falsely accepted. The time needed for the approximation of the heart also depends on the initial position; on our test data, it took less than 45 sec to adjust the heart model.

Fig. 6

Result of the heart segmentation. The image shows surface models of all three segmented anatomical structures.


4.2.

Reconstruction

Figure 7 shows the results of the utilization of XCT anatomical information as priors in an FMT inversion scheme. The FMT images are laid over the corresponding CT slice. To simplify matters, only one slice out of a reconstructed volume is presented for each approach. For the evaluation of the reconstruction improvement using anatomical priors, we simulated a situation of inflamed lungs [Fig. 7(a)], modeled after previous studies of lung inflammation,2, 34 and used three different reconstruction procedures: 1. no regularization, 2. inversion using Tikhonov regularization, and 3. Laplace regularization. Note that segmentation of the in-vivo CT imaging data was done using our framework; no simulated segmentation was used. The first two approaches do not utilize the segmented information as image priors, and no noise was added in the simulation.

Fig. 7

(a) Simulated fluorescence signal in the lung. (b) Result without regularization. (c) Tikhonov regularization. (d) Laplace regularization showing the best imaging performance in this case.


Figure 7(b) shows the inversion obtained without regularization. In this case the inversion generates significant artifacts, especially at the borders, leading to a highly inaccurate reconstruction. Figure 7(c) depicts strong blurring of the fluorescent signal. The intensity of the signal is also too low, and a prominent spot can be recognized in one lobe of the lung, although the intensity should be homogeneous. Finally, Fig. 7(d) shows the best reconstruction result, owing to the priors. The fluorescence intensity was reconstructed accurately; it is distributed homogeneously in the lung, and only small blurring artifacts occur along the borders.

5.

Discussion

We have introduced an automatic segmentation scheme for bones, lungs, and the heart for streamlining FMT-XCT inversion. The framework utilizes several segmentation and signal processing methods in an automatic manner. Another advantage of the framework is its speed: the segmentation, even of very large volume data, finished in less than 2 min. This renders the approach very useful for seamless integration into the FMT reconstruction of our hybrid FMT-XCT imaging system. We verified the quality of the segmentation against a gold-standard manual segmentation.

However, the framework does not yet exploit its full potential. Most of the parameters of the algorithms were chosen by educated guess and were roughly adapted by examining the measured segmentation quality. We think that optimizing these parameters could improve the segmentation results further. Most notably, there are three parts of the framework that would, in our opinion, benefit from a closer analysis of the parameter values: 1. the computation of a threshold for bone segmentation, where the parameter w [Eq. 1] could be adapted to achieve better bone segmentation; 2. the detection of seed points for lung segmentation, where the interval used to detect those points could be adapted to yield seed points that are more suitable for the subsequent region growing; and 3. the parameters σ and m for the seed growing itself, which heavily influence the algorithm and whose optimal values are not yet known. It should also be considered that the segmentation of the lung and heart depends on the correct detection of the ribcage, and so far the robustness of this step has not been evaluated; thus this essential part of the framework should be investigated and improved further. Also, the accuracy of the heart segmentation could be improved significantly. Here, the static, nondeformable model proves to be a disadvantage, since it cannot fully adapt to shape variations. Nevertheless, the approach can be used to initialize more complex segmentation methods, such as deformable models that use flexible meshes to overcome this handicap.

We have also shown how the gained anatomical information can be used as a-priori knowledge for the reconstruction of FMT images, and demonstrated in simulations that this increases FMT image quality considerably. Further studies have to be conducted to confirm this behavior for real FMT measurements in in-vivo experiments; from our experience, we expect notable FMT image quality improvements in those studies as well. A further conclusion is that focus should be put on the segmentation of more structures for even more accurate FMT reconstruction results. It also remains to be discussed how accurate the segmentations need to be, and whether more time-consuming and complex segmentation algorithms are actually necessary and practical, because there will always be a tradeoff between speed and accuracy.

Acknowledgments

We acknowledge the help of our laboratory technicians Claudia Mayerhofer and Christoph Drebinger, who cared for our animals and provided much help on our experiments. We also thank Harry Höllig, Peter Hamm, and Thomas Jetzfellner for many fruitful discussions and their precious exterior views to our work that helped us to see some problems from other perspectives. This work was partly funded by the European Commission FP7 FMT-XCT project.

References

1. S. R. Arridge, “Optical tomography in medical imaging,” Inverse Probl. 15(2), R41–R93 (1999). https://doi.org/10.1088/0266-5611/15/2/022

2. V. Ntziachristos, J. Ripoll, L. H. V. Wang, and R. Weissleder, “Looking and listening to light: the evolution of whole-body photonic imaging,” Nat. Biotechnol. 23(3), 313–320 (2005). https://doi.org/10.1038/nbt1074

3. N. Deliolanis, T. Lasser, D. Hyde, A. Soubret, J. Ripoll, and V. Ntziachristos, “Free-space fluorescence molecular tomography utilizing 360-deg geometry projections,” Opt. Lett. 32(4), 382–384 (2007). https://doi.org/10.1364/OL.32.000382

4. J. Ripoll, R. B. Schulz, and V. Ntziachristos, “Free-space propagation of diffuse light: theory and experiments,” Phys. Rev. Lett. 91(10), 103901 (2003). https://doi.org/10.1103/PhysRevLett.91.103901

5. R. B. Schulz, J. Ripoll, and V. Ntziachristos, “Noncontact optical tomography of turbid media,” Opt. Lett. 28(18), 1701–1703 (2003). https://doi.org/10.1364/OL.28.001701

6. R. B. Schulz, J. Ripoll, and V. Ntziachristos, “Experimental fluorescence tomography of tissues with noncontact measurements,” IEEE Trans. Med. Imaging 23(4), 492–500 (2004). https://doi.org/10.1109/TMI.2004.825633

7. R. B. Schulz, J. Peter, W. Semmler, C. D’Andrea, G. Valentini, and R. Cubeddu, “Comparison of noncontact and fiber-based fluorescence-mediated tomography,” Opt. Lett. 31(6), 769–771 (2006). https://doi.org/10.1364/OL.31.000769

8. V. A. Markel and J. C. Schotland, “Symmetries, inversion formulas, and image reconstruction for optical tomography,” Phys. Rev. E 70(5), 056616 (2004). https://doi.org/10.1103/PhysRevE.70.056616

9. V. A. Markel and J. C. Schotland, “Multiple projection optical diffusion tomography with plane wave illumination,” Phys. Med. Biol. 50(10), 2351–2364 (2005). https://doi.org/10.1088/0031-9155/50/10/012

10. V. A. Markel and J. C. Schotland, “On the convergence of the Born series in optical tomography with diffuse light,” Inverse Probl. 23(4), 1445–1465 (2007). https://doi.org/10.1088/0266-5611/23/4/006

11. M. Schweiger, S. R. Arridge, O. Dorn, A. Zacharopoulos, and V. Kolehmainen, “Reconstructing absorption and diffusion shape profiles in optical tomography by a level set technique,” Opt. Lett. 31(4), 471–473 (2006). https://doi.org/10.1364/OL.31.000471

12. S. Srinivasan, B. W. Pogue, H. Dehghani, F. Leblond, and X. Intes, “Data subset algorithm for computationally efficient reconstruction of 3-D spectral imaging in diffuse optical tomography,” Opt. Express 14(12), 5394–5410 (2006). https://doi.org/10.1364/OE.14.005394

13. S. C. Davis, H. Dehghani, J. Wang, S. Jiang, B. W. Pogue, and K. D. Paulsen, “Image-guided diffuse optical fluorescence tomography implemented with Laplacian-type regularization,” Opt. Express 15(7), 4066–4082 (2007). https://doi.org/10.1364/OE.15.004066

14. G. Gulsen, O. Birgul, M. B. Unlu, R. Shafiiha, and O. Nalcioglu, “Combined diffuse optical tomography (DOT) and MRI system for cancer imaging in small animals,” Technol. Cancer Res. Treat. 5(4), 351–363 (2006).

15. M. Guven, B. Yazici, X. Intes, and B. Chance, “Diffuse optical tomography with a priori anatomical information,” Phys. Med. Biol. 50(12), 2837–2858 (2005). https://doi.org/10.1088/0031-9155/50/12/008

16. A. Li, G. Boverman, Y. H. Zhang, D. Brooks, E. L. Miller, M. E. Kilmer, Q. Zhang, E. M. C. Hillman, and D. A. Boas, “Optimal linear inverse solution with multiple priors in diffuse optical tomography,” Appl. Opt. 44(10), 1948–1956 (2005). https://doi.org/10.1364/AO.44.001948

17. P. K. Yalavarthy, B. W. Pogue, H. Dehghani, C. M. Carpenter, S. D. Jiang, and K. D. Paulsen, “Structural information within regularization matrices improves near infrared diffuse optical tomography,” Opt. Express 15(13), 8043–8058 (2007). https://doi.org/10.1364/OE.15.008043

18. C. M. Carpenter, B. W. Pogue, S. D. Jiang, H. Dehghani, X. Wang, K. D. Paulsen, W. A. Wells, J. Forero, C. Kogel, J. B. Weaver, and S. P. Poplack, “Image-guided optical spectroscopy provides molecular-specific information in vivo: MRI-guided spectroscopy of breast cancer hemoglobin, water, and scatterer size,” Opt. Lett. 32(8), 933–935 (2007). https://doi.org/10.1364/OL.32.000933

19. S. C. Davis, B. W. Pogue, R. Springett, C. Leussler, P. Mazurkewitz, S. B. Tuttle, S. L. Gibbs-Strauss, S. S. Jiang, H. Dehghani, and K. D. Paulsen, “Magnetic resonance-coupled fluorescence tomography scanner for molecular imaging of tissue,” Rev. Sci. Instrum. 79(6), 064302 (2008). https://doi.org/10.1063/1.2919131

20. D. Hyde, R. de Kleine, S. A. MacLaurin, E. Miller, D. H. Brooks, T. Krucker, and V. Ntziachristos, “Hybrid FMT-CT imaging of amyloid-beta plaques in a murine Alzheimer’s disease model,” Neuroimage 44(4), 1304–1311 (2009). https://doi.org/10.1016/j.neuroimage.2008.10.038

21. Y. Lin, H. Gao, O. Nalcioglu, and G. Gulsen, “Fluorescence diffuse optical tomography with functional and anatomical a priori information: feasibility study,” Phys. Med. Biol. 52(18), 5569–5585 (2007). https://doi.org/10.1088/0031-9155/52/18/007

22. V. Ntziachristos, X. H. Ma, and B. Chance, “Time-correlated single photon counting imager for simultaneous magnetic resonance and near-infrared mammography,” Rev. Sci. Instrum. 69(12), 4221–4233 (1998). https://doi.org/10.1063/1.1149235

23. V. Ntziachristos, A. G. Yodh, M. Schnall, and B. Chance, “Concurrent MRI and diffuse optical tomography of breast after indocyanine green enhancement,” Proc. Natl. Acad. Sci. U.S.A. 97(6), 2767–2772 (2000). https://doi.org/10.1073/pnas.040570597

24. B. W. Pogue, H. Q. Zhu, C. Nwaigwe, T. O. McBride, U. L. Osterberg, K. D. Paulsen, and J. F. Dunn, “Hemoglobin imaging with hybrid magnetic resonance and near-infrared diffuse tomography,” Adv. Exp. Med. Biol. 530, 215–224 (2003).

25. R. B. Schulz, A. Ale, A. Sarantopoulos, M. Freyer, E. Soehngen, M. Zientkowska, and V. Ntziachristos, “Hybrid system for simultaneous fluorescence and x-ray computed tomography,” IEEE Trans. Med. Imaging 29(2), 365–373 (2010). https://doi.org/10.1109/TMI.2009.2035310

26. L. M. Gao, D. G. Heath, B. S. Kuszyk, and E. K. Fishman, “Automatic liver segmentation technique for three-dimensional visualisation of CT data,” Radiology 201(2), 359–364 (1996).

27. S. Y. Hu, E. A. Hoffman, and J. M. Reinhardt, “Automatic lung segmentation for accurate quantitation of volumetric x-ray CT images,” IEEE Trans. Med. Imaging 20(6), 490–498 (2001). https://doi.org/10.1109/42.929615

28. J. S. Silva, A. Silva, and S. B. Sousa, “A fast approach to lung segmentation in x-ray CT images,” 415–418 (2000).

29. R. Susomboom, D. Raicu, J. Furst, and D. Channin, “Automatic single-organ segmentation in computed tomography images,” 1081–1086 (2006).

30. C. L. Wyatt, Y. Ge, and D. J. Vining, “Automatic segmentation of the colon for virtual colonoscopy,” Comput. Med. Imaging Graph. 24(1), 1–9 (2000). https://doi.org/10.1016/S0895-6111(99)00039-7

31. V. Ntziachristos and R. Weissleder, “Experimental three-dimensional fluorescence reconstruction of diffuse media by use of a normalized Born approximation,” Opt. Lett. 26(12), 893–895 (2001). https://doi.org/10.1364/OL.26.000893

32. A. Soubret, J. Ripoll, and V. Ntziachristos, “Accuracy of fluorescent tomography in the presence of heterogeneities: study of the normalized Born ratio,” IEEE Trans. Med. Imaging 24(10), 1377–1386 (2005). https://doi.org/10.1109/TMI.2005.857213

33. W. Bangerth, R. Hartmann, and G. Kanschat, “deal.II—a general-purpose object-oriented finite element library,” ACM Trans. Math. Softw. 33(4), 24 (2007). https://doi.org/10.1145/1268776.1268779

34. J. Haller, D. Hyde, N. Deliolanis, R. de Kleine, M. Niedre, and V. Ntziachristos, “Visualization of pulmonary inflammation using noninvasive fluorescence molecular imaging,” J. Appl. Physiol. 104(3), 795–802 (2008). https://doi.org/10.1152/japplphysiol.00959.2007
©(2010) Society of Photo-Optical Instrumentation Engineers (SPIE)
Marcus Freyer, Angelique Ale, Ralf B. Schulz, Marta Zientkowska, Vasilis Ntziachristos, and Karl-Hans Englmeier "Fast automatic segmentation of anatomical structures in x-ray computed tomography images to improve fluorescence molecular tomography reconstruction," Journal of Biomedical Optics 15(3), 036006 (1 May 2010). https://doi.org/10.1117/1.3431101