Low-cost three-dimensional imaging system combining fluorescence and ultrasound
Baoqiang Li, Maxime Abran, Carl Matteau-Pelletier, Leonie Rouleau, Frédéric Lesage, Eric Rheaume, Jean-Claude Tardif, Tina Lam, Rishi Sharma, Ashok Kakkar
Abstract
In this paper, we present a dual-modality imaging system combining three-dimensional (3D) continuous-wave transillumination fluorescence tomography with 3D ultrasound (US) imaging. We validated the system with two phantoms, one containing fluorescent inclusions (Cy5.5) at different depths and the other a semicylindrical phantom of varying thickness. Using raster scanning, the combined fluorescence/US system collected the boundary fluorescent emission in the X-Y plane and recovered the 3D surface and the positions of the inclusions from the US signals. The US images were segmented to provide soft priors for the fluorescence image reconstruction. Phantom results demonstrated that, with priors derived from the US images, the quality of the fluorescence reconstruction was significantly improved. As a further evaluation, we show pilot in vivo results obtained with an Apo-E mouse to assess the feasibility and performance of this system in animal studies. Limitations and the potential for use in atherosclerosis studies are then discussed.

1. Introduction

Diffuse optical fluorescence tomography has gradually been adopted in biological research and the pharmaceutical industry, as it has the potential to lift topographic fluorescence techniques to a quantitative method for imaging molecular and cellular activity using specific fluorescent agents.1, 2, 3 However, while multiple demonstrations of image reconstruction have been published, quantification of fluorescence signals in three dimensions remains a challenge. Instrumentation for fluorescence imaging comes in different flavors, with camera-based broad-beam imaging being the most common configuration.1 Pogue et al. found that a raster-scanned point-sampling system had advantages over a broad-beam CCD camera system for accurate quantification of fluorescence signals.4 Epi-illumination, which illuminates the object and collects the emission on the same side, is severely limited with respect to quantification when probing objects deep in tissue (a few millimeters), due to light absorption and scattering.1 It is also subject to nonspecific signal contamination, such as autofluorescence originating from the surface of small animals. Recent work demonstrated that a camera-based epi-illumination system could resolve reflected green fluorescent protein (GFP)-like fluorescence signals from depths of up to 10 mm in a phantom with optical properties μa = 0.1 mm−1 and μs = 1 mm−1 (Ref. 5). However, the image quality deteriorated severely as the depth increased, and the absorption coefficient used was not consistent with in vivo values measured at the wavelength employed in that study. In transmission mode, by contrast, a collimated laser beam with large energy deposition, while still under the safety limits, can traverse several centimeters of tissue.1, 2, 6, 7, 8, 9, 10, 11, 12, 13

Besides imaging geometries, recent improvements in diffuse fluorescence imaging were made by incorporating structural information into the model-based reconstructions.14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28 For example, Fang et al. used a composition-based image segmentation method to incorporate x-ray structural priors into diffuse optical tomography (DOT) for breast imaging.14 Kepshire et al. reported a study combining x-ray micro-CT with fluorescence imaging and assessed its performance using protoporphyrin IX phantoms.24 The benefits of x-ray priors to fluorescence imaging were also demonstrated in several other studies.25, 26, 27 Within the optical modality, Tan and Jiang employed DOT, which can be readily integrated into a fluorescence imager, to provide functional priors for fluorescence reconstruction.17 Structural priors from MRI have also been investigated for guiding DOT or fluorescence reconstructions.20, 22, 28 Hence, structural information measured with a variety of imaging modalities can provide prior information that is incorporated through a regularization method during image reconstruction.19, 21, 23 The outcomes of these works confirmed that prior anatomical information benefits fluorescence image reconstruction. However, these techniques require instruments with large cost and infrastructure demands. Moreover, they either necessitate custom integration of optical imaging into the MRI/CT imaging chambers, usually leading to lower optical sampling, or require a multimodal "animal bed," leading to serial instead of simultaneous imaging. Methods and systems that integrate anatomical information while keeping the lower cost advantages of fluorescence imaging would thus be beneficial.

In this context, a few studies have shown the feasibility of employing ultrasound (US) as a complement to fluorescence imaging. Snyder et al. employed fluorescence imaging and two-dimensional (2D) US imaging to assess tumor size in mice.29 They used both imaging modalities separately and confirmed co-registered tumor detections, but did not combine the information. Zhu et al. used two orthogonal US slices to estimate tumor diameter and center.30 The estimated size was then employed to segment the tissue into lesion and background regions, providing a priori knowledge for the diffuse optical reconstruction. Zhu et al. also imaged a phantom with both US and optical absorption to investigate the improvement in image reconstruction in reflection geometry; however, recovering small targets in this configuration can be a challenge because of the ∼4 mm spacing between the elements of the US detector array, as higher lateral resolution may be needed, especially to recover a nonregular object surface.31 In addition to guiding fluorescence imaging, other studies demonstrated that US images help in estimating optical properties. For example, it was demonstrated that geometrical constraints derived from US signals can improve the computation of the optical properties in DOT.32 A recent study recovered lesion tissue values by imaging protoporphyrin IX production in skin tumors and demonstrated that fluorescence emission can be better quantified when using priors obtained by segmenting the US image into tissue layers.33

Distinct from the above studies,29, 30, 31, 32, 33 we built a low-cost system combining fluorescence tomography with US imaging in an attempt to obtain three-dimensional (3D) images from both modalities. Instead of combining the two modalities in reflection,31 our fluorescence configuration is in transillumination, thereby exploiting the documented quantification benefits of this geometry. Raster-scanned 3D imaging was achieved in both modalities under the control of two motors, providing a simple and low-cost system design using a single US transducer and a single fluorescence detector. To evaluate the performance of this simple system, we conducted phantom and animal studies. We segmented the 3D US images into background and fluorescence emission regions to provide an accurate structural prior for the fluorescence reconstruction. Fluorescence tomography with US priors was facilitated by the co-registered scans. US imaging could also help an investigator interpret the functional images.

We characterized the system with phantoms in order to determine whether US can provide informative priors for fluorescence tomography. We also evaluated the feasibility and potential of the system for animal studies using an Apo-E mouse.34 Our results show that while US images are difficult to segment and provide limited structural information, their benefit to fluorescence reconstruction is still significant. As a result, this low-cost (less than $9k) multimodal fluorescence/US system may provide an interesting avenue toward quantitative molecular imaging.

2. Methodology

2.1. System Design

A schematic of the system is shown in Fig. 1. A laser diode (658 nm, HL6512MG, Thorlabs) was used to generate a collimated laser beam illuminating the object from the bottom. The laser light was further filtered by an optical bandpass filter, D650/20 (Chroma Technology). On the opposite side, the emitted photons were detected with an optical fiber that guided the light toward a set of optical filters (Chroma Technology) mounted in a filter wheel (FW103/M, Thorlabs), thus enabling multispectral detection. The filtered photons were collected by a photomultiplier tube (H5783-20, Hamamatsu). To eliminate residual ambient light, the laser diode was modulated with a square wave at 1 kHz (software adjustable) and the signal was demodulated on detection. For US recordings, the system employed a single-element transducer (5 MHz, ø0.5 in., F = 10 cm, Olympus). The electronics were built to support transducers with frequencies between 2.25 and 30 MHz. The laser and transducer were scanned over the region of interest (ROI) point by point in the X-Y direction using a translation stage controlled by two actuators (L12-100-100-12-I, Firgelli Technologies) in 1 mm steps (positional accuracy: ± 0.3 mm). A home-made electronic circuit controlled the laser diode, drove the transducer, controlled the two actuators, and sampled and preprocessed the optical and ultrasonic signals. The acquired datasets were then sent to a computer through a USB link for post-processing. In addition, a monochrome CMOS camera (DCC1545M, Thorlabs) was used to capture a snapshot from which the scan area was selected. By correlating the pixel indices of the snapshot with the positions of both actuators, the ROI for each scan was calibrated. For each point, fluorescence signals were sampled at 200 kHz with an integration time for demodulation of typically 200 ms (software adjustable). For US imaging, each point was sampled at 125 MHz and typically averaged 1000 times (software adjustable). In order to couple the ultrasonic pulse-echoes in the experiments, the object was placed under a water bath, with a plastic membrane separating the water from the object, so that both fluorescence and US imaging could be conducted.
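To make the modulation scheme concrete, the sketch below shows a minimal software lock-in demodulation of a PMT signal sampled at 200 kHz against the 1 kHz reference over a 200 ms window. This is our own illustration in Python/NumPy, not the authors' acquisition firmware; the function name and the quadrature mixing approach are assumptions.

```python
import numpy as np

def lockin_demodulate(pmt_signal, fs=200e3, f_ref=1e3, t_int=0.2):
    """Recover the amplitude of a square-wave-modulated PMT signal.

    pmt_signal : samples acquired at fs (Hz) over one integration window.
    fs         : acquisition frequency (200 kHz in the text).
    f_ref      : modulation frequency of the laser diode (1 kHz in the text).
    t_int      : integration time used for demodulation (200 ms in the text).
    Mixing with in-phase/quadrature references at the modulation fundamental
    and averaging rejects un-modulated (ambient) light and slow drifts.
    """
    n = int(fs * t_int)
    x = np.asarray(pmt_signal[:n], dtype=float)
    t = np.arange(n) / fs
    i = 2.0 * np.mean(x * np.cos(2 * np.pi * f_ref * t))  # in-phase component
    q = 2.0 * np.mean(x * np.sin(2 * np.pi * f_ref * t))  # quadrature component
    return np.hypot(i, q)  # amplitude of the fundamental, insensitive to phase
```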

Fig. 1

Schematic of this dual-modality imaging system.


2.2. Reconstruction

A coupled diffusion model was used to simulate fluorescence propagation in diffusive media.35 The propagation of the excitation light is modeled by Eq. 1 and the transport of the emitted fluorescence by Eq. 2.

Eq. 1

\[
\nabla \cdot [D_x (r)\nabla \phi _x (r,\omega)] - \left[ \mu _{ax} (r) + \frac{j\omega }{c} \right]\phi _x (r,\omega) = - \delta (r - r_{sk}),
\]

Eq. 2

\[
\nabla \cdot [D_m (r)\nabla \phi _m (r,\omega)] - \left[ \mu _{am} (r) + \frac{j\omega }{c} \right]\phi _m (r,\omega) = - \phi _x (r,\omega)\,\eta \mu _{fl} (r)\,\frac{1 - j\omega \tau (r)}{1 + [\omega \tau (r)]^2},
\]
where the subscripts x and m refer to the excitation and emission wavelengths λx and λm, φ is the photon flux (W/m2), D is the diffusion coefficient, and μa is the absorption coefficient. The quantum efficiency, absorption coefficient, and lifetime of the fluorophore are represented by η, μfl, and τ, respectively, and c is the velocity of light in the medium.35

We employed the software package NIRFAST to model photon propagation with a finite element model (FEM) for the forward problem and to perform the reconstructions.36 The inverse problem was solved with the following Tikhonov minimization function:23

Eq. 3

\[
\sigma ^2 = \left\{ \sum_{i = 1}^{\mathrm{NM}} \big(\Phi _i^{\mathrm{Meas}} - \Phi _i^{C} \big)^2 + \lambda \sum_{j = 1}^{\mathrm{NN}} (\chi _j - \chi _0)^2 \right\},
\]
where the measured and simulated boundary fluence are represented by ΦMeas and ΦC, respectively, NM is the total number of measurements, NN is the number of FEM nodes, λ is the Tikhonov regularization parameter, χ0 is the initial guess of the fluorescence parameter (ημfl in our case), and χj is the parameter to be updated.23

Using Eq. 3 and applying a Levenberg–Marquardt procedure, the update step is performed by:23

Eq. 4

\[
\Delta \chi = [J^T J + \lambda I]^{-1} J^T (\Phi ^{\mathrm{Meas}} - \Phi ^{C}),
\]
with Δχ = χj − χ0. J is the Jacobian matrix, which relates the simulated boundary data to the fluorescence parameter, and I is the identity matrix.23
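As a concrete illustration of Eq. 4, the following Python/NumPy sketch applies the regularized update a few times to a synthetic linear problem. The stand-in Jacobian, fixed λ, and iteration count are illustrative assumptions and do not reproduce NIRFAST's implementation.

```python
import numpy as np

def tikhonov_update(J, phi_meas, phi_calc, lam):
    """One update step of Eq. 4: (J^T J + lam*I)^-1 J^T (phi_meas - phi_calc)."""
    JtJ = J.T @ J
    rhs = J.T @ (phi_meas - phi_calc)
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), rhs)

# Toy example with a synthetic linear problem (NM measurements, NN nodes).
rng = np.random.default_rng(0)
NM, NN = 120, 400
J = rng.normal(size=(NM, NN))            # stand-in Jacobian (sensitivity) matrix
chi_true = np.zeros(NN)
chi_true[150:160] = 1e-3                 # a small "fluorescent" region
phi_meas = J @ chi_true                  # noiseless synthetic boundary data
chi = np.zeros(NN)                       # initial guess chi_0
for _ in range(5):                       # a few damped Gauss-Newton steps
    chi = chi + tikhonov_update(J, phi_meas, J @ chi, lam=1e-2)
```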

2.3. Phantoms

To validate the system, we employed two phantoms having different geometries and optical properties. As shown in Fig. 2a, the first one had a rectangular parallelepiped shape with dimensions of 100 mm × 30 mm × 20 mm (provided by ART Inc.). To model heterogeneous absorption, it included four inclusions with different optical properties, denoted Diff 1-4 (see Table 1 for the optical properties). As illustrated in Figs. 2b and 2c, two holes were drilled along the y direction and fluorescent tubes, denoted Fluo 1 and 2, were inserted in them. The second phantom [Fig. 2d] was homogeneous, with a semicylindrical geometry of 19-mm radius and 105-mm length. It was used to assess performance in nonregular geometries. A fluorescent tube was inserted in the phantom perpendicular to the curved surface to model nonuniform fluorophore depths. Detailed design information on the two phantoms is provided in Table 1.

Fig. 2

(a) Dimensions of the rectangular phantom, phantom 1. (b) and (c) Schematic depictions showing the four heterogeneities (denoted Diff 1-4) and the two holes for inserting the fluorescent tubes (denoted Fluo 1-2). (d) Dimensions of the semicylindrical phantom, phantom 2.


Table 1

Optical properties for both phantoms. Phantom 1 and 2 represent the rectangular phantom and semicylindrical phantom, respectively.

Inclusion | Center position X / Y / Z (mm) | Dimension (mm) | μa (mm−1) | μ′s (mm−1)
Phantom 1
  Bulk | — | 100 × 30 × 20 | 0.02 | 1.0
  Diff 1 | 13 / 15 / 7 | ø18 × 6 | 0.005 | 0.5
  Diff 2 | 39 / 15 / 15 | ø18 × 6 | 0.04 | 2.0
  Diff 3 | 79 / 15 / 7 | ø18 × 6 | 0.01 | ≈0
  Diff 4 | — | 100 × 30 × 1.5 | 0.01 | 2.0
  Fluo 1 | 14 / 15 / 13 | ø5 × 30 | — | —
  Fluo 2 | 32 / 15 / 9 | ø5 × 30 | — | —
Phantom 2
  Bulk | — | 100 × 38 × 19 | 0.01 | 1.0
  Fluo | 66 / 19 / 8 | ø5 × 35 | — | —

For the experimental data, the fluorochrome used was Cy5.5, with an absorption peak at 675 nm and an emission peak at 694 nm.

3. Results

3.1. Sensitivity Tests

We characterized the sensitivity of the fluorescence imaging subsystem using phantom 2. Fluorescent tubes were inserted with varying concentrations of Cy5.5: 1000, 100, 10, 1, and 0 nM. As indicated by the line in Fig. 2d and illustrated in Fig. 3, we scanned a 30 mm line across the phantom covering part of the fluorescent tube. The detection fiber was approximately 1 cm above the center of the cylindrical hole.

Fig. 3

Illustration of the measurement position in the sensitivity test. The arrows on the left represent the laser diode and the detection fiber, respectively.


The experimental parameters were as follows: 1 s integration time per scanned point, 200 kHz acquisition frequency, and 10 mW laser power. We collected the emitted fluorescence with a 710/20 bandpass filter (Chroma Technology). As shown in Fig. 4a, the fluorescence imaging subsystem was sensitive enough to detect 1 nM of Cy5.5 in this phantom. In Fig. 4b, the fitted logarithmic peak amplitudes for the different concentrations (1, 10, 100, 1000 nM) are plotted. The linearity curve shows that the amplitudes are approximately linear over nearly three decades.
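The linearity over nearly three decades can be checked with a simple log-log fit, as in the sketch below (Python/NumPy); the peak amplitudes used here are hypothetical placeholders rather than the measured values of Fig. 4.

```python
import numpy as np

# Hypothetical peak amplitudes (a.u.) for 1, 10, 100, and 1000 nM Cy5.5;
# placeholders only, not the measured values plotted in Fig. 4.
conc_nM = np.array([1.0, 10.0, 100.0, 1000.0])
peak = np.array([9.5e-4, 1.1e-2, 9.8e-2, 1.0])

# Fit log10(peak) against log10(concentration); a slope near 1 over these
# three decades indicates an approximately linear response.
slope, intercept = np.polyfit(np.log10(conc_nM), np.log10(peak), 1)
print(f"log-log slope = {slope:.2f}")
```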

Fig. 4

(a) Normalized values for the different concentrations as a function of scan position. The results show that the system was able to detect 1 nM Cy5.5 in the phantom; (b) the curve shows the fitted logarithmic peak values as a function of concentration. The triangular markers denote the normalized amplitudes for the different concentrations.


3.2. Phantom Tests

We employed the two phantoms described above to assess the impact of using the US priors on image reconstruction. The experimental parameters for fluorescence imaging were: 200 ms exposure time per point, 200 kHz acquisition frequency, and 1 mm scan steps in the X-Y direction. For phantom 1, the laser power for the absorption/fluorescence measurements was 20/50 mW, whereas for phantom 2 it was 10/20 mW. We collected the emitted fluorescence with a long-pass filter, HQ670LP (Chroma Technology), in both cases. The fiber was about 2 mm above the top surface of the phantoms. The source and detector were scanned together as a pair during each fluorescence scan. An absorption scan was also acquired for Born normalization of the fluorescence measurements, to eliminate experimental factors. The varying distance between the fiber and the surface of phantom 2 was partly corrected by this normalization (for intensity), but the expanded detection area when the fiber-phantom distance increased caused some imprecision in the reconstruction. The detection area, with a fiber NA of 0.37, was ∼1.1 mm2 when the fiber was ∼2 mm from the surface of phantom 2, but expanded to ∼4.2 mm2 on the edges (∼4 mm distance). For the US imaging, we used the transducer mentioned above to scan the same ROI simultaneously, with the same scan steps as the fluorescence subsystem. In the experiments, the transducer surface was approximately 4 cm above the top surface of the phantoms, and 1000 pulse-echo acquisitions were averaged at each scanned point.
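To illustrate how the raster of single-element pulse-echo records becomes a 3D US volume, the sketch below converts one averaged A-scan sampled at 125 MHz into an envelope-versus-depth profile. The Hilbert-envelope step and the assumed speed of sound in water are our assumptions, not processing details reported in this work.

```python
import numpy as np
from scipy.signal import hilbert

def ascan_to_depth_profile(rf_line, fs=125e6, c=1480.0):
    """Convert one averaged pulse-echo RF record into (depth_mm, envelope).

    fs : US sampling rate in Hz (125 MHz in the text).
    c  : assumed speed of sound in water (m/s); round-trip depth z = c*t/2.
    """
    rf = np.asarray(rf_line, dtype=float)
    t = np.arange(rf.size) / fs
    depth_mm = 1e3 * c * t / 2.0
    envelope = np.abs(hilbert(rf))      # echo envelope via the analytic signal
    return depth_mm, envelope

# Stacking the envelopes over the 1 mm x 1 mm raster grid yields the 3D US
# volume, e.g. volume[ix, iy, :] = ascan_to_depth_profile(rf[ix, iy])[1].
```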

Figure 5 shows the Born-normalized37 transmission ratios overlaid on the pictures taken with the camera. As shown in Fig. 5a, a 25 mm × 45 mm area was scanned on phantom 1. In Fig. 5b, a 27 mm × 37 mm area was scanned on phantom 2. In order to couple the ultrasonic pulse-echoes while performing simultaneous optical imaging, imaging was performed in water. The phantoms were placed in a container and separated from the water by plastic membranes. US gel was applied to the phantom surface and the plastic membrane was then overlaid so as to remove air bubbles. We injected Cy5.5 at a concentration of 1000 nM into transparent plastic tubes in both cases. As shown in Fig. 5c, the cylindrical tube had varying external diameters of 4.7, 3, and 2.4 mm. The thickness of the wall was 0.6 mm (not shown). We inserted 30 mm of the tube into phantom 1 and 34 mm of the tube into phantom 2, respectively. For phantom 1, we used two identical tubes with fluorochrome at the same concentration (1000 nM) but located at different depths. As illustrated in Figs. 5a and 5b, the fluorescence signals decreased from right to left, in accordance with the decreasing diameter of the tube.
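A minimal sketch of the Born normalization used here (Ref. 37) is given below: each fluorescence measurement is divided by the excitation (absorption-scan) measurement at the same scan position, cancelling experimental factors common to both scans. The epsilon guard is an illustrative detail of our own.

```python
import numpy as np

def born_normalize(fluo_scan, excitation_scan, eps=1e-12):
    """Normalized Born ratio (Ref. 37): fluorescence / excitation transmission
    taken point by point over the raster scan. The ratio cancels source power,
    detector gain, and coupling factors common to both measurements."""
    fluo = np.asarray(fluo_scan, dtype=float)
    excit = np.asarray(excitation_scan, dtype=float)
    return fluo / np.maximum(excit, eps)   # eps guards against division by zero
```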

Fig. 5

(a) The normalized fluorescence intensity of phantom 1. (b) The normalized fluorescence signal of phantom 2. (c) The dimensions of the plastic tube.


For the 3D image reconstruction of these two phantoms:

1. A mesh of 1 mm resolution was built for each phantom. Nodes in each mesh were assigned homogeneous optical properties (μa and μ′s) using the bulk values of Table 1. To stay closer to realistic in vivo situations, we did not model the heterogeneities in phantom 1, because the Born normalization was expected to eliminate their effect.

2. Although the surface contours of the two phantoms were recovered by the US subsystem, for simplicity we built meshes having rectangular parallelepiped shapes for both phantoms.

3. For the reconstruction, we scaled the experimental Born-normalized ratios by the simulated excitation amplitudes and then used them as input to the forward model above (details of the reconstruction equations and processes can be found elsewhere36).

4. For US image segmentation, we simply used an intensity threshold to identify the inclusions in the US images, which were thereafter segmented into a binary image. The segmentation was performed slice by slice. Prior to segmentation, we multiplied the US images by a weight matrix that reduced boundary artefacts. We then selected pixels from this corrected US image with a single thresholding procedure, generating a binary mask; the prior was defined from this mask by applying a Gaussian filter to increase the size of the selected region (a minimal sketch of this step is given after Eq. 5). Since US detects interfaces, each tube appeared as a single line in the phantoms [e.g., the two short bright lines in Fig. 6a], so the prior for the inclusion regions did not have a circular shape in the X-Z plane but had the correct width in the Y direction. Across slices, this procedure led to a consistent prior size in the volume. To account for water, the top surface of phantom 2 was identified from the US signals, and the optical properties of the region above it were set to very low absorption.

5. The US structural priors thus identified were implemented as a soft prior, partially accounting for segmentation errors. Equation 5 was used to update the optical properties when using prior information; the regularization matrix L now encodes the spatial prior information for image reconstruction. Details of this approach may be found elsewhere.23

6. The fluorescence field (ημfl) was reconstructed with and without the prior information for comparison.

Eq. 5

\[
\Delta \chi = [J^T J + \lambda L^T L]^{-1} J^T (\Phi ^{\mathrm{Meas}} - \Phi ^{C}).
\]
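The threshold-and-smooth prior construction described in step 4 above might look like the following Python/SciPy sketch; the boundary weight matrix, relative threshold, and Gaussian width are illustrative assumptions rather than the exact parameters used in this work.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def us_slice_to_prior(us_slice, boundary_weight, rel_threshold=0.5, sigma=1.0):
    """Build a binary prior mask from one US image slice (X-Z section).

    us_slice        : 2D array of echo amplitudes.
    boundary_weight : 2D array down-weighting boundary artefacts.
    The weighted image is thresholded once, then the mask is enlarged with a
    Gaussian filter, mirroring the threshold-and-smooth step described above.
    """
    weighted = us_slice * boundary_weight
    mask = weighted > rel_threshold * weighted.max()
    soft = gaussian_filter(mask.astype(float), sigma=sigma)
    return (soft > 0.1).astype(int)       # 1 = inclusion region, 0 = background
```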
In Fig. 6, the US images and the fluorescence reconstruction of phantom 1 are shown. The coordinates and dimensions of each image slice are shown in Fig. 5a. In Figs. 6a, 6d, and 6g, the US image sections at different planes (x = 20, y = 14, and y = 32) are shown. In the second column, Figs. 6b, 6e, and 6h, the reconstructed fluorescence images with prior information are shown for the different slices. Accordingly, in the third column, Figs. 6c, 6f, and 6i, the reconstructed fluorescence images without any prior information are shown for the same slices. As shown in Fig. 6a, the width of the two tubes recovered by US is approximately 10 mm, which is about two times larger than the real value. This can be explained by the use of a transducer with a 0.5 in. diameter (about 2.7 times wider than the tubes) whose focal point was not well targeted on the inclusions. As shown in Figs. 6j and 6k, the fluorescence intensity along each dashed line, normalized by the maximum intensity in Figs. 6e and 6h, respectively, decreases from right to left. This is in agreement with the fluorescence map shown in Fig. 5a and the varying dimension of the tube shown in Fig. 5c.

Fig. 6

Representative images of the acquisition using phantom 1. The US images (a), (d), (g), the fluorescence reconstruction images (ημfl in mm−1) with priors (b), (e), (h), and without priors (c), (f), (i) are shown for image slices at x = 20 (a)-(c), y = 14 (d)-(f), and y = 32 (g)-(i), respectively. Intensity plots (j) and (k), along the dashed lines in (e) and (h), respectively, are also shown.


Figure 7 shows the fluorescence images overlaid on the US images. It confirms that the locations of the fluorescent inclusions may be accurately reconstructed and benefit from the co-registered US priors.

Fig. 7

(a) Overlaid image at x = 20. (b) Overlaid image at y = 14. (c) Overlaid image at y = 32.


In Fig. 8, the US images and the fluorescence reconstruction of phantom 2 are shown. The coordinates and dimensions of each image section are given in Fig. 5b. In the first column, Figs. 8a and 8d, the US image sections at different slices (x = 12, y = 18) are shown, respectively. In the second column, Figs. 8b and 8e, the reconstructed fluorescence images at slices x = 12 and y = 18 with prior information are shown. Accordingly, in the third column, Figs. 8c and 8f, the reconstructed fluorescence images at slices x = 12 and y = 18 without any prior information are shown. As shown in Fig. 8g, the fluorescence intensity along the dashed line, normalized by the maximum intensity in Fig. 8e, decreases from right to left. This is in agreement with the results found for phantom 1.

Fig. 8

Representative images using phantom 2. The US images (a) and (d), the fluorescence reconstruction images (ημfl in mm−1) with priors (b) and (e), and without priors (c) and (f) are shown for image slices at x = 12 (a)–(c) and at y = 18 (d)–(f). Intensity plot along the (g) dashed line in (e) is also shown.


Figure 9 provides the fluorescence images overlaid on the US images confirming that the use of US priors improves fluorescence image reconstruction.

Fig. 9

(a) Overlaid image at x = 12. (b) Overlaid image at y = 18.


3.3. In Vivo Results

We further tested our system in an in vivo environment. As shown in Fig. 10a, a 23-week-old Apo-E mouse fed a high-cholesterol diet was imaged 20 h after intravenous administration of a molecular probe. We employed an Alexa-647-based probe to detect VCAM-related monocyte recruitment activity, which has been reported to be a valuable biomarker and an early signal involved in atherosclerotic plaque formation and in the inflammation process.38, 39, 40, 41, 42 VCAM is expected to be expressed in the aorta, heart valves, and heart. However, the 1 mm resolution of the acoustic scan was not precise enough to delineate the structure of the aorta. For our proof-of-concept study, we therefore reconstructed the fluorescence emission from the heart area.43

Fig. 10

(a) The Born-normalization ratio overlaid with the picture. (b) Illustration of the animal manipulation.


To couple the ultrasonic pulse-echoes, we performed the US and fluorescence imaging in warm water. As shown in Fig. 10b, the mouse was fitted in a water container having a hole connected to a tube used to deliver the anesthetic gas. A transparent plastic membrane was used in a similar fashion to the phantom experiments, with US gel used to couple the membrane to the body. The entire scan, including one absorption scan, one fluorescence scan, and a simultaneous US scan, was performed in under 45 min in vivo. The ethics committees of the Montreal Heart Institute and École Polytechnique de Montréal approved all animal manipulations.

Figure 10a shows the Born-normalized transmission ratios overlaid on the picture taken with the camera. As shown by the yellow outline in Fig. 10a, a 31 mm × 41 mm area was scanned on the mouse. The experimental parameters for fluorescence were: 200 ms exposure time per point; 200 kHz acquisition frequency; 1 mm scan steps in the X-Y direction; and a laser power for the absorption/fluorescence measurements of 30/50 mW, respectively. We collected the emitted fluorescence with a long-pass filter, HQ670LP (Chroma Technology). For US imaging, we used the transducer mentioned above to scan the same ROI with the same scan steps as the fluorescence subsystem. In this experiment, the transducer surface was approximately 1.5 cm above the top surface of the mouse, and 1000 pulse-echo acquisitions were averaged at each scanned point. For both fluorescence and US imaging, we imaged from the belly side of the mouse, which is closest to the heart.

For the 3D fluorescence image reconstruction of the in vivo data:

1. A volume based on the scanned area was reconstructed.

2. A mesh of 1 mm resolution was built. The optical properties of the mesh were assigned as μa = 0.02 mm−1 and μ′s = 1 mm−1 for the background, and μa = 0.2 mm−1 and μ′s = 1 mm−1 for the heart.

3. Although the surface contour of the mouse was recovered by the US subsystem, for simplicity we built a mesh having a rectangular parallelepiped shape.

4. For the reconstruction, we used the dataset acquired over the area denoted by the smaller square in Fig. 10a, which covers the fluorescence emitted from the heart of the mouse. We scaled the experimental Born-normalized ratios by the simulated excitation amplitudes and then used them as input to the forward model above.

5. We manually segmented the US image into a binary image (0: background, 1: heart) slice by slice. The heart area is illustrated by the dashed outline in Fig. 11a. The body surface over the reconstructed heart region was relatively flat, so the surface profile was not taken into account in this reconstruction.

6. The US prior constrained the reconstruction as a soft prior.

7. The fluorescence field (ημfl) was reconstructed with and without the prior information for comparison.

Fig. 11

Representative images of Slice 3-1 of the mouse: (a) the US image shows the heart of the mouse; (b) the fluorescence reconstruction image (ημfl in mm−1) with priors and (c) without.


In Fig. 11, a representative 2D fluorescence reconstruction image in the X-Z section and the corresponding US image slice of the mouse are shown. The coordinates and dimensions of the image slice (y = 15, 25 mm in the x direction) are denoted by the dashed line (Slice 3-1) in Fig. 10a. In Fig. 11a, the 2D US image slice shows the heart of the mouse. However, the outlines of the heart and the aorta in this US image are not very clear. This can be explained by three factors: 1. the transducer, having a fixed focal length, was not well focused on the region of interest; 2. the 1 mm resolution of the motor motion was not sufficient for US imaging, especially of such a small object; and 3. the heart is located partly under the rib cage, which posed a challenge for this application. In Figs. 11b and 11c, the reconstructed fluorescence images with and without the prior are shown, respectively. The improvement of the fluorescence image with the prior over the one without demonstrates that US imaging may benefit fluorescence imaging even in an in vivo environment. In Fig. 12, the overlaid fluorescence/US image shows that the location of the fluorescence may be accurately reconstructed and benefits from the co-registered US priors.

Fig. 12

The overlaid image of Slice 3-1.


3.4. Analysis of the Results

The phantom results demonstrate that the US subsystem is able to recover the boundary and the inclusions of the phantoms, which provides a strategy for deriving structural priors for fluorescence reconstruction. Furthermore, the US priors significantly improved the fluorescence reconstruction quality. To quantify the results, we computed the contrast-to-noise ratio (CNR) to evaluate the performance of the reconstruction with priors. Here, we define CNR = (SA − SB)/σ, where SA and SB are the mean intensities of the ROI and the background, respectively, and σ is the standard deviation of the background. Table 2 summarizes the results as CNR1 and CNR2, the CNRs of the reconstructions with and without priors, respectively, showing that the use of priors resulted in CNRs 4 to 20 times higher than those without. This advantage is further confirmed by our in vivo experiment, in which the image reconstructed with the US prior had a CNR 4.79 times higher than the one without.
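For reference, the CNR defined above can be computed as in the following short sketch; the binary ROI mask is a hypothetical input rather than the segmentation actually used here.

```python
import numpy as np

def contrast_to_noise_ratio(image, roi_mask):
    """CNR = (S_A - S_B) / sigma, where S_A is the mean over the ROI and
    S_B, sigma are the mean and standard deviation over the background."""
    roi_mask = roi_mask.astype(bool)
    s_a = image[roi_mask].mean()
    s_b = image[~roi_mask].mean()
    sigma = image[~roi_mask].std()
    return (s_a - s_b) / sigma
```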

Table 2

CNR of the reconstructed images. CNR1 and CNR2 represent the CNR of the reconstructions with and without priors, respectively.

Image section | Coordinate | CNR1 / CNR2
Phantom 1, Slice 1-1 | x = 20 | 5.78 / 1.31
Phantom 1, Slice 1-2 | y = 14 | 4.33 / 0.28
Phantom 1, Slice 1-3 | y = 32 | 5.18 / 0.63
Phantom 2, Slice 2-1 | x = 12 | 7.09 / 1.41
Phantom 2, Slice 2-2 | y = 18 | 3.29 / 0.16
Mouse, Slice 3-1 | y = 15 | 3.26 / 0.68

To evaluate quantification with the phantoms, we compared the normalized maximum values of ημfl in the images of Figs. 6e, 6h, and 8e, denoted A, B, and C in Fig. 13, respectively. Since the same fluorophore concentration was used in both phantoms and at all depths, the reconstructed value of ημfl should be the same. Within phantom 1, where the tubes lay at different depths, the error was found to be small (∼4%). When comparing the two phantoms (different optical properties and geometry), the error was ∼14%. This could be explained by phantom 2 having smaller μa and μ′s, and by the expanded detection area, which caused inaccuracy in the reconstruction.

Fig. 13

Quantification with the two phantoms by comparing the normalized maximum value of ημfl in each fluorescence image slice.


4. Discussion

In this paper, we have presented a combined fluorescence/US imaging system. The fluorescence tomography subsystem was used to recover the 3D fluorescence emission; the US subsystem was used to detect the 3D interfaces of both the surface and the inclusions of the object (e.g., the fluorescent tubes or the heart of the mouse), providing a structural image and imposing constraints on the fluorescence reconstruction. The performance of this system was quantified using two phantoms having different shapes, compositions, and dimensions. The phantom results showed that the fluorescence reconstruction image quality could be significantly improved using the US structural priors. Also, the US images could help to interpret the reconstructed functional images at different sections. As a proof-of-concept study, we further tested the system by imaging VCAM activity in a model of atherosclerosis. The in vivo results indicate that this system has the potential to be applied to in vivo molecular imaging studies.

Compared with previous studies, our system achieves 3D imaging for both the fluorescence and the US modality. Three-dimensional US imaging is expected to provide richer structural prior information than a 2D US detector array,31 and the raster-scanned 3D US sampling available in this system enabled the delineation of structural priors by segmenting the US image rather than estimating the inclusion size from two orthogonal image slices.30 We thus expect our system not to be limited to inclusions with regular shapes. As evidence, the phantom results reconstructed the shapes of fluorescent tubes having a decreasing diameter, and the in vivo results indicated that this system can record anatomical and functional images in small animals. The scanning configuration proposed here is automatically co-registered, which further facilitates dual-modality analyses.

Furthermore, in comparison to reflection mode, which is limited in detection depth in diffusive media,5 fluorescence imaging in transmission mode has better sensitivity and detection depth. Illumination with a collimated laser beam is also expected to be less affected by nonspecific signal contamination than a broad-beam system.1 Combining all the advantages mentioned above, the work presented in this paper exhibits a promising strategy for exploring anatomical and functional information simultaneously at very low cost (less than $9k).

The simplicity of this system comes with the following main drawbacks. 1. The limited view obtained by scanning a single source-detector pair yields less information than a camera-based system would. 2. Raster-scanned point-source imaging means longer acquisition times compared with a wide-illumination camera-based configuration. With the experimental parameters mentioned above, 1196, 1064, and 1344 measurement points were collected for phantom 1, phantom 2, and the Apo-E mouse, respectively. For the in vivo experiment, the acquisition, including an absorption scan, a fluorescence scan, and an ultrasonic scan, was finished within 45 min; however, this time may vary depending on the dimensions of the field of view and the scan steps. 3. US imaging has limitations for this application. In particular, US images are difficult to segment, which poses a challenge when trying to gather a precise atlas for the whole body of a small animal. These difficulties were present in our experiments while imaging over the heart of the mouse, since the heart, located partly under the rib cage, appeared blurred in the US image in some sections. 4. US segmentation may in some situations lead to wrong priors because of these issues. The soft prior used here was, however, shown in other studies21 to be more immune to prior uncertainty.

This dual-modality approach might be further improved by simple modifications while preserving the low-cost concept. 1. We employed a single transillumination channel to collect the fluorescent photons. Adding a detection channel on the source side to collect the reflected emission could further enhance the precision of the fluorescence quantification. 2. The method used to couple the ultrasonic pulse-echoes added difficulties in conducting the experiments because of the necessity of using a membrane. When manipulating animals, overlaying the plastic membrane on the animal to separate it from the water can be a drawback. A potential solution is to detect the ultrasonic signals from the bottom of the object; in this way, the ultrasonic transducer would still be scanned in water, with the object located above the water. 3. The scanning could be further optimized: a translation stage to adjust the focus of the ultrasonic transducer may improve the longitudinal resolution; stepping motors with better resolution and higher velocity could be employed to increase the horizontal resolution of the US image and speed up the scanning; finally, a portable projector could be used in conjunction with the camera to measure the profile of the object quickly. In this proof-of-concept work we used a simple threshold to implement the spatial priors, but improved algorithms could be developed for US image processing and segmentation.

5. Conclusion

Although US imaging provides limited structural information compared to MRI or x-ray CT, the benefits to fluorescence reconstruction are still significant. Of note, the multispectral capability of this system has not yet been fully exploited; it is therefore expected that the reconstruction quality may be further improved by adding multispectral measurements to the image reconstruction.12 Finally, the co-registration of the two imaging modalities may facilitate the interpretation of the images by investigators. Future work includes optimizing both the hardware and the algorithms of this system, and applying the molecular imaging offered by the proposed system to cardiovascular disease studies in small animals.

Acknowledgments

The authors would like to thank Professor Frederic Leblond for assistance with NIRFAST. We also thank Nicolas Ouakli for making the semicylindrical phantom. This work was funded by an NSERC Discovery grant to F. Lesage. B. Li is funded by the China Scholarship Council (CSC).

References

1. 

F. Leblond, S. C. Davis, P. A. Valdés, and B. W. Pogue, “Pre-clinical whole-body fluorescence imaging: Review of instruments, methods and applications,” J. Photochem. Photobiol., B, 98 (1), 77 –94 (2010). https://doi.org/10.1016/j.jphotobiol.2009.11.007 Google Scholar

2. 

A. P. Gibson, J. C. Hebden, and S. R. Arridge, “Recent advances in diffuse optical imaging,” Phys. Med. Biol., 50 (4), R1 –R43 (2005). https://doi.org/10.1088/0031-9155/50/4/R01 Google Scholar

3. 

A. Soubret, J. Ripoll, and V. Ntziachristos, "Accuracy of fluorescent tomography in the presence of heterogeneities: Study of the normalized Born ratio," IEEE Trans. Med. Imaging, 24 (10), 1377 –1386 (2005). https://doi.org/10.1109/TMI.2005.857213 Google Scholar

4. 

B. W. Pogue, S. L. Gibbs, B. Chen, and M. Savellano, “Fluorescence imaging in vivo: raster scanned point-source imaging provides more accurate quantification than broad beam geometries,” Technol. Cancer Res. Treat., 3 (1), 15 –21 (2004). Google Scholar

5. 

S. Bjorn, V. Ntziachristos, and R. Schulz, “Mesoscopic epifluorescence tomography: Reconstruction of superficial and deep fluorescence in highly-scattering media,” Opt. Express, 18 (8), 8422 –8429 (2010). https://doi.org/10.1364/OE.18.008422 Google Scholar

6. 

X. Montet, J. Figueiredo, H. Alencar, V. Ntziachristos, U. Mahmood, and R. Weissleder, "Tomographic fluorescence imaging of tumor vascular volume in mice," Radiology, 242 (3), 751 –758 (2007). https://doi.org/10.1148/radiol.2423052065 Google Scholar

7. 

V. Ntziachristos and R. Weissleder, “Charge-coupled-device based scanner for tomography of fluorescent near-infrared probes in turbid media,” Med. Phys., 29 (5), 803 –809 (2002). https://doi.org/10.1118/1.1470209 Google Scholar

8. 

S. Patwardhan, S. Bloch, S. Achilefu, and J. Culver, “Time-dependent whole-body fluorescence tomography of probe bio-distributions in mice,” Opt. Express, 13 (7), 2564 –2577 (2005). https://doi.org/10.1364/OPEX.13.002564 Google Scholar

9. 

S. Leavesley, Y. Jiang, V. Patsekin, B. Rajwa, and J. P. Robinson, “An excitation wavelength–scanning spectral imaging system for preclinical imaging,” Rev. Sci. Instrum., 79 (2), 023707 (2008). https://doi.org/10.1063/1.2885043 Google Scholar

10. 

E. E. Graves, J. Ripoll, R. Weissleder, and V. Ntziachristos, “A submillimeter resolution fluorescence molecular imaging system for small animal imaging,” Med. Phys., 30 (5), 901 –911 (2003). https://doi.org/10.1118/1.1568977 Google Scholar

11. 

C. D’Andrea, L. Spinelli, D. Comelli, G. Valentini, and R. Cubeddu, “Localization and quantification of fluorescent inclusions embedded in a turbid medium,” Phys. Med. Biol., 50 (10), 2313 –2327 (2005). https://doi.org/10.1088/0031-9155/50/10/009 Google Scholar

12. 

G. Zavattini, S. Vecchi, G. Mitchell, U. Weisser, R. M. Leahy, B. J. Pichler, D. J. Smith, and S. R. Cherry, “A hyperspectral fluorescence system for 3D in vivo optical imaging,” Phys. Med. Biol., 51 (8), 2029 –2043 (2006). https://doi.org/10.1088/0031-9155/51/8/005 Google Scholar

13. 

A. Kumar, S. Raymond, A. Dunn, B. Bacskai, and D. Boas, “A Time Domain Fluorescence Tomography System for Small Animal Imaging,” IEEE Trans. Med. Imag., 27 (8), 1152 –1163 (2008). https://doi.org/10.1109/TMI.2008.918341 Google Scholar

14. 

Q. Fang, R. H. Moore, D. B. Kopans, and D. A. Boas, “Compositional-prior-guided image reconstruction algorithm for multi-modality imaging,” Biomed. Opt. Express, 1 (1), 223 –235 (2010). https://doi.org/10.1364/BOE.1.000223 Google Scholar

15. 

X. Intes, C. Maloux, M. Guven, B. Yazici, and B. Chance, “Diffuse optical tomography with physiological and spatial a priori constraints,” Phys. Med. Biol., 49 (12), N155 –N163 (2004). https://doi.org/10.1088/0031-9155/49/12/N01 Google Scholar

16. 

Y. Lin, H. Gao, O. Nalcioglu, and G. Gulsen, “Fluorescence diffuse optical tomography with functional and anatomical a priori information: feasibility study,” Phys. Med. Biol., 52 (18), 5569 –5585 (2007). https://doi.org/10.1088/0031-9155/52/18/007 Google Scholar

17. 

Y. Tan and H. Jiang, “Diffuse optical tomography guided quantitative fluorescence molecular tomography,” Appl. Opt., 47 (12), 2011 –2016 (2008). https://doi.org/10.1364/AO.47.002011 Google Scholar

18. 

M. Guven, B. Yazici, X. Intes, and B. Chance, “Diffuse optical tomography with a priori anatomical information,” Phys. Med. Biol., 50 (12), 2837 –2858 (2005). https://doi.org/10.1088/0031-9155/50/12/008 Google Scholar

19. 

S. C. Davis, H. Dehghani, J. Wang, S. Jiang, B. W. Pogue, and K. D. Paulsen, “Image-guided diffuse optical fluorescence tomography implemented with Laplacian-type regularization,” Opt. Express, 15 (7), 4066 –4082 (2007). https://doi.org/10.1364/OE.15.004066 Google Scholar

20. 

B. Brooksby, H. Dehghani, B. Pogue, and K. Paulsen, “Near-infrared (NIR) tomography breast image reconstruction with a priori structural information from MRI: algorithm development for reconstructing heterogeneities,” IEEE J. Sel. Top. Quantum Electron., 9 (2), 199 –209 (2003). https://doi.org/10.1109/JSTQE.2003.813304 Google Scholar

21. 

P. K. Yalavarthy, B. W. Pogue, H. Dehghani, C. M. Carpenter, S. Jiang, and K. D. Paulsen, “Structural information within regularization matrices improves near infrared diffuse optical tomography,” Opt. Express, 15 (13), 8043 –8058 (2007). https://doi.org/10.1364/OE.15.008043 Google Scholar

22. 

S. C. Davis, B. W. Pogue, R. Springett, C. Leussler, P. Mazurkewitz, S. B. Tuttle, S. L. Gibbs-Strauss, S. S. Jiang, H. Dehghani, and K. D. Paulsen, “Magnetic resonance–coupled fluorescence tomography scanner for molecular imaging of tissue,” Rev. Sci. Instrum., 79 (6), 064302 (2008). https://doi.org/10.1063/1.2919131 Google Scholar

23. 

P. K. Yalavarthy, B. W. Pogue, H. Dehghani, and K. D. Paulsen, “Weight-matrix structured regularization provides optimal generalized least-squares estimate in diffuse optical tomography,” Med. Phys., 34 (6), 2085 –2098 (2007). https://doi.org/10.1118/1.2733803 Google Scholar

24. 

D. Kepshire, N. Mincu, M. Hutchins, J. Gruber, H. Dehghani, J. Hypnarowski, F. Leblond, M. Khayat, and B. W. Pogue, “A microcomputed tomography guided fluorescence tomography system for small animal molecular imaging,” Rev. Sci. Instrum., 80 (4), 043701 (2009). https://doi.org/10.1063/1.3109903 Google Scholar

25. 

A. Ale, R. B. Schulz, A. Sarantopoulos, and V. Ntziachristos, “Imaging performance of a hybrid x-ray computed tomography-fluorescence molecular tomography system using priors,” Med. Phys., 37 (5), 1976 –1986 (2010). https://doi.org/10.1118/1.3368603 Google Scholar

26. 

Y. Lin, W. C. Barber, J. S. Iwanczyk, W. Roeck, O. Nalcioglu, and G. Gulsen, “Quantitative fluorescence tomography using a combined tri-modality FT/DOT/XCT system,” Opt. Express, 18 (8), 7835 –7850 (2010). https://doi.org/10.1364/OE.18.007835 Google Scholar

27. 

R. B. Schulz, A. Ale, A. Sarantopoulos, M. Freyer, R. Söhngen, M. Zientkowska, and V. Ntziachristos, “Hybrid fluorescence tomography/x-ray tomography improves reconstruction quality,” Proc. SPIE, 7370 73700H (2009). https://doi.org/10.1117/12.831714 Google Scholar

28. 

Y. Lin, M. T. Ghijsen, H. Gao, N. Liu, O. Nalcioglu, and G. Gulsen, “A photo-multiplier tube-based hybrid MRI and frequency domain fluorescence tomography system for small animal imaging,” Phys. Med. Biol., 56 (15), 4731 –4747 (2011). https://doi.org/10.1088/0031-9155/56/15/007 Google Scholar

29. 

C. Snyder, S. Kaushal, Y. Kono, H. Tran Cao, R. Hoffman, and M. Bouvet, “Complementarity of ultrasound and fluorescence imaging in an orthotopic mouse model of pancreatic cancer,” BMC Cancer, 9 (1), 106 (2009). https://doi.org/10.1186/1471-2407-9-106 Google Scholar

30. 

Q. Zhu, M. Huang, N.-G. Chen, K. Zarfos, B. Jagjivan, M. Kane, P. Hedge, and H. S. Kurtzman, “Ultrasound-guided optical tomographic imaging of malignant and benign breast lesions: Initial clinical results of 19 cases,” Neoplasia, 5 (5), 379 –388 (2003). Google Scholar

31. 

Q. Zhu, T. Durduran, V. Ntziachristos, M. Holboke, and A. G. Yodh, “Imager that combines near-infrared diffusive light and ultrasound,” Opt. Lett., 24 (15), 1050 –1052 (1999). https://doi.org/10.1364/OL.24.001050 Google Scholar

32. 

M. J. Holboke, B. J. Tromberg, X. Li, N. Shah, J. Fishkin, D. Kidney, J. Butler, B. Chance, and A. G. Yodh, “Three-dimensional diffuse optical mammography with ultrasound localization in a human subject,” J. Biomed. Opt., 5 (2), 237 –247 (2000). https://doi.org/10.1117/1.429992 Google Scholar

33. 

J. D. Gruber, A. Paliwal, V. Krishnaswamy, H. Ghadyani, M. Jermyn, J. A. O’Hara, S. C. Davis, J. S. Kerley-Hamilton, N. W. Shworak, E. V. Maytin, T. Hasan, and B. W. Pogue, “System development for high frequency ultrasound-guided fluorescence quantification of skin layers,” J. Biomed. Opt., 15 (2), 026028 (2010). https://doi.org/10.1117/1.3374040 Google Scholar

34. 

Y. Nakashima, A. Plump, E. Raines, J. Breslow, and R. Ross, “ApoE-deficient mice develop lesions of all phases of atherosclerosis throughout the arterial tree,” Arterioscler., Thromb., Vasc. Biol., 14 133 –140 (1994). https://doi.org/10.1161/01.ATV.14.1.133 Google Scholar

35. 

A. B. Milstein, J. J. Stott, S. Oh, D. A. Boas, R. P. Millane, C. A. Bouman, and K. J. Webb, “Fluorescence optical diffusion tomography using multiple-frequency data,” J. Opt. Soc. Am. A, 21 (6), 1035 –1049 (2004). https://doi.org/10.1364/JOSAA.21.001035 Google Scholar

36. 

H. Dehghani, M. E. Eames, P. K. Yalavarthy, S. C. Davis, S. Srinivasan, C. M. Carpenter, B. W. Pogue, and K. D. Paulsen, “Near infrared optical tomography using NIRFAST: Algorithm for numerical model and image reconstruction,” Commun. Numer. Methods Eng., 25 (6), 711 –732 (2009). https://doi.org/10.1002/cnm.1162 Google Scholar

37. 

V. Ntziachristos and R. Weissleder, “Experimental three-dimensional fluorescence reconstruction of diffuse media by use of a normalized Born approximation,” Opt. Lett., 26 (12), 893 –895 (2001). https://doi.org/10.1364/OL.26.000893 Google Scholar

38. 

F. A. Jaffer, P. Libby, and R. Weissleder, “Molecular imaging of cardiovascular disease,” Circulation, 116 (9), 1052 –1061 (2007). https://doi.org/10.1161/CIRCULATIONAHA.106.647164 Google Scholar

39. 

J. Sanz and Z. A. Fayad, “Imaging of atherosclerotic cardiovascular disease,” Nature, 451 (7181), 953 –957 (2008). https://doi.org/10.1038/nature06803 Google Scholar

40. 

D. J. Rader and A. Daugherty, “Translating molecular discoveries into new therapies for atherosclerosis,” Nature, 451 (7181), 904 –913 (2008). https://doi.org/10.1038/nature06796 Google Scholar

41. 

F. A. Jaffer, P. Libby, and R. Weissleder, “Optical and multimodality molecular imaging. Insights into atherosclerosis,” Arterioscler., Thromb., Vasc. Biol., 29 1017 –1024 (2009). https://doi.org/10.1161/ATVBAHA.108.165530 Google Scholar

42. 

U. Prahl, P. Holdfeldt, G. Bergström, B. Fagerberg, J. Hulthe, and T. Gustavsson, “Percentage white: A new feature for ultrasound classification of plaque echogenicity in carotid artery atherosclerosis,” Ultrasound Med. Biol., 36 (2), 218 –226 (2010). https://doi.org/10.1016/j.ultrasmedbio.2009.10.002 Google Scholar

43. 

J. Tardif, F. Lesage, F. Harel, P. Romeo, and J. Pressacco, “Imaging biomarkers in atherosclerosis trials,” Circulation: Cardiovascular Imaging, 4 (3), 319 –333 (2011). https://doi.org/10.1161/CIRCIMAGING.110.962001 Google Scholar
© 2011 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2011/16(12)/126010/10/$25.00
Baoqiang Li, Maxime Abran, Carl Matteau-Pelletier, Leonie Rouleau, Frédéric Lesage, Eric Rheaume, Jean-Claude Tardif, Tina Lam, Rishi Sharma, and Ashok Kakkar "Low-cost three-dimensional imaging system combining fluorescence and ultrasound," Journal of Biomedical Optics 16(12), 126010 (1 December 2011). https://doi.org/10.1117/1.3662455
Published: 1 December 2011
KEYWORDS: Luminescence, Imaging systems, Image segmentation, 3D image processing, Heart, In vivo imaging, Optical properties