Open Access
3 November 2014

In vivo near-infrared fluorescence three-dimensional positioning system with binocular stereovision
Bofan Song, Wei Jin, Ying Wang, Qinhan Jin, Ying Mu
Abstract
Fluorescence is a powerful tool for in-vivo imaging of living animals. Traditional in-vivo fluorescence imaging equipment is based on single-view two-dimensional imaging and cannot meet the need for accurate positioning in modern scientific research. A near-infrared in-vivo fluorescence imaging system is demonstrated that can detect signals from deep sources and perform three-dimensional positioning. A three-dimensional coordinates computing (TDCP) method, including a preprocessing algorithm, is presented based on binocular stereovision theory to address the diffusive nature of light in tissue and the overlap of fluorescent-label emission spectra. The algorithm is shown to be efficient at extracting targets from multispectral images and determining the spot centers of biological interest. Further data analysis indicates that the TDCP method can be used for three-dimensional positioning of fluorescent targets in small animals. The study also suggests that combining a high-power laser with a deep-cooling charge-coupled device provides an attractive approach for detecting fluorescence from deep sources. This work demonstrates the potential of binocular stereovision theory for three-dimensional positioning in living-animal in-vivo imaging.

1.

Introduction

In-vivo optical sensing has proved to be an attractive technique.1–3 It enables quantitative and qualitative studies at the cellular or molecular level.4–6 Fluorescence is a versatile and useful tool for living-animal in-vivo imaging.7 Fluorescence is the emission of light that occurs when an orbital electron of a molecule, excited to a higher quantum state by some form of energy, relaxes to its ground state by emitting a photon.8 In-vivo optical imaging is cost-effective and easy to operate.9 The photon absorption of most biological tissues is comparatively low in the near-infrared (NIR) spectral range (650–900 nm), so photons can be detected through living organs.10 Many NIR fluorescent probes, such as quantum dots, have been developed for in-vivo imaging studies.11–15

A single-view two-dimensional (2-D) imaging system is the mainstream of in-vivo fluorescence imaging equipment.16–19 However, such systems cannot meet the need for accurate positioning. Three-dimensional (3-D) positioning provides more accurate target location information,20 which is of great significance for biological and medical applications: researchers can observe the target's spatial position, analyze target metastasis, and determine the relative position of the target and the organs. In this paper, an NIR in-vivo fluorescence imaging system is demonstrated, and a method for computing the 3-D coordinates of targets based on binocular stereovision theory is presented.

Generally, binocular stereo techniques comprise image acquisition, camera calibration, feature extraction, image matching, and 3-D positioning.21,22 They have been applied in computer vision,23 photogrammetry,24,25 and experimental mechanics.26,27 In in-vivo fluorescence imaging, the diffusive nature of light in tissue28,29 and the overlap of fluorescent-label emission spectra pose great challenges for 3-D positioning with binocular stereovision. An algorithm is therefore needed to extract targets from the multispectral image (MSI) and determine the spot center of biological interest.

Hu et al.30 proposed a technique combining binocular stereovision and fluorescence imaging for 3-D surface profilometry and deformation measurement, enabling noncontact, full-field measurements of biotissue and biomaterials at the microscale. In this paper, a similar technique is applied, and a novel system is developed for in-vivo 3-D positioning. A rotary platform is used instead of two cameras to reduce the size and cost of the system. The system detects fluorescent signals from deep sources rather than surface information, and it is powerful enough to perform in-vivo imaging at NIR wavelengths.

The fluorescence sensing system requires a high-intensity, narrow-bandwidth excitation source and highly sensitive photon detection. The relatively low quantum yield is the key limitation on penetration depth for in-vivo optical sensing. In this paper, a high-power diode-pumped solid-state laser (671 nm, 2 W) with a narrow spectral bandwidth (10 nm) is used as the excitation source. A deep-cooling charge-coupled device (CCD) with large pixels (minimum −100°C, pixel size 13 μm) detects the weak fluorescent signals in the NIR range [quantum efficiency (QE) >90% across the NIR]. The system is carefully optimized and can detect weak fluorescent signals from deep targets with high sensitivity.

In this paper, an NIR in-vivo fluorescence imaging system and a 3-D positioning algorithm based on binocular stereovision theory are presented. Experiments demonstrate the validity of the algorithm. The results and further data analysis show that the method can be used for 3-D positioning of targets in small animals, and the system can detect fluorescent signals from deep sources.

2.

Methods

2.1.

Experimental Setup

The in-vivo fluorescence sensing system consists of a darkroom, a high-energy diode-pumped solid-state laser, a liquid-core fiber that splits one beam into four, an emission filter wheel, a rotary platform module, a deep-cooling high-sensitivity CCD, and a computer running the 3-D positioning software (see Fig. 1).

Fig. 1

A schematic of the in vivo near-infrared fluorescence three-dimensional positioning system.


Figure 1 illustrates the schematic and key components of the in-vivo fluorescence sensing system. The darkroom provides a light-confined environment that shields external light. The high-energy laser serves as the excitation source, illuminating the small animal uniformly through the liquid-core fiber. A series of LEDs provides bright-field illumination. The deep-cooling CCD, focusing lens, and emission filter wheel acquire highly sensitive multispectral image data in the NIR.

The block diagram of the in-vivo fluorescence sensing system is shown in Fig. 2. The liquid-core fiber is coupled to the laser output, and its four outputs are located above the small animal to illuminate it uniformly from four directions. The axis of the optical imaging path intersects the center of the rotating platform at a 45-deg angle. With the aid of the anesthesia apparatus, the small animal remains very stable while rotating on the precision rotary platform. The high-sensitivity deep-cooling CCD greatly reduces dark-current noise and, with its large pixels and high QE at 650–900 nm, is well suited to acquiring very weak fluorescent signals in the NIR.

Fig. 2

Block diagram of the system.


The excitation source is a high-power diode-pumped solid-state laser (MRL-N, 2 W, New Industries Optoelectronics Technology Co., Ltd., Changchun, China) emitting at 671 nm with a narrow spectral bandwidth (10 nm). It replaces a xenon lamp, which would require a long warm-up time and exhibits 10%–50% changes in light intensity under working conditions;31 uniform excitation of the small animal is necessary, and a solid-state laser source shows only 0.1% intensity variation.32 The laser beam is transmitted via a liquid-core fiber (numerical aperture 0.52), whose four outputs are located above the small animal to illuminate it uniformly from four directions. Liquid-core optical fiber has a high numerical aperture,33 an important parameter because a large numerical aperture yields high coupling efficiency.34,35 It also offers good flexibility and reliability.36

The optical detection unit consists of a deep-cooling CCD, lens, and filter wheel. Fluorescence light passes through the lens and filters sequentially and is detected by the CCD. The central element of the detection unit is a 1024×1024 deep-cooling 16-bit CCD camera (DU-934NBRD, Andor Technology, Belfast, Ireland). The camera is designed for NIR applications37 and has very high QE in the NIR (>90%). QE measures a device's electrical sensitivity to light at each photon energy;38 a high QE means more incident photons are converted to electrons.39 The camera can therefore detect extremely weak NIR fluorescence signals. With deep cooling, the readout noise is extremely low [2.5 e− (50 kHz)], and dark current, one of the main noise sources in image sensors such as CCDs, is greatly reduced [0.008 e−/pixel/s (−100°C)].40 In the system, the filter is placed between the lens and the CCD. A customized focusing lens was fabricated to meet the requirements; its focal length and field of view can be adjusted. The filter wheel is fully enclosed and light-tight, with 12 filter holes holding 11 band-pass filters and one all-pass piece (K9 glass). The K9 glass is used for bright-field illumination. The center wavelengths of the filters are distributed evenly across the NIR window. The filters were purchased from China-Quantum Co., Ltd., Changchun, China; their full width at half maximum is around 10 nm (9.2 nm minimum to 11.6 nm maximum). The filters currently in use are centered at 681, 702, 711, 759, 780, 794, 808, 850, 880, 903, and 938 nm. The CCD is connected to an ordinary personal computer via USB 2.0, and images are acquired with the Andor SOLIS 4.7.3 software provided by the camera manufacturer.

The darkroom provides a light-tight experimental environment and holds the components of the imaging system. Its inner surface is coated with black extinction paint with reflectivity <0.5%. A double-sealed design greatly reduces light leakage around pass-throughs such as the fiber, anesthetic-gas pipes, and wires. To capture images with this system, the first step is to turn on the LEDs, adjust the field of view, and calibrate the CCD camera. The fluorescently labeled small animal is placed on the rotary platform, and under LED bright-field illumination an image of the animal is captured through the all-pass filter. The LEDs are then turned off and the laser turned on; the filters and rotary platform are switched in sequence to obtain multispectral fluorescence images from different perspectives. The images are subsequently processed and analyzed with customized software running on the computer.

2.2.

Target Extraction Based on Multispectral Imaging

In NIR fluorescence in-vivo imaging, the target is captured as a large light spot, so the center of the target spot must be determined for 3-D positioning.

2.2.1.

Multispectral imaging

By switching between filters, spectroscopic information is obtained in image form.41 Recording the emission at different wavelengths captures every pixel of the field of view at each band. The MSI provides a "data cube" of spectral information for the entire image at each wavelength of interest,42 acquiring both spatial and spectral information of the entire small animal across the NIR range.43 By comparing the spectral differences between fluorescent probes and background signals, target spots can be extracted. Besides 2-D spatial information, MSI provides the spectral dimension as a z-axis.44 Plotting the emission at each specific wavelength in one coordinate space forms a multispectral data cube, in which f(x,y,z) is the gray value of the pixel at coordinates (x,y,z). The normalized pixel grayscale is

Eq. (1)

F(x,y,z) = [f(x,y,z) − f_min(x,y,z)] / [f_max(x,y,z) − f_min(x,y,z)].

The spectral curve is reconstructed by plotting F(x,y,z) at each wavelength. Each image contains both spatial (x, y coordinates) and spectral (z, intensity) information. f_min(x,y,z) and f_max(x,y,z) are the minimum and maximum of the sample data, respectively. The spectral curve is the most intuitive expression of spectral features; cubic spline interpolation is used to reconstruct the curve.
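As an illustrative sketch (the function names and array shapes here are our own, not the authors' implementation), the min–max normalization of Eq. (1) and the cubic-spline reconstruction of the spectral curve can be written in Python:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def normalize_cube(cube):
    """Min-max normalize a multispectral data cube f(x, y, z), per Eq. (1).

    cube: ndarray of shape (nx, ny, n_bands); returns values in [0, 1].
    """
    fmin, fmax = cube.min(), cube.max()
    return (cube - fmin) / (fmax - fmin)

def spectral_curve(cube, wavelengths, x, y, n_dense=200):
    """Reconstruct the spectral curve at pixel (x, y) by cubic-spline
    interpolation over the sampled filter wavelengths."""
    samples = cube[x, y, :]
    spline = CubicSpline(wavelengths, samples)
    dense = np.linspace(wavelengths[0], wavelengths[-1], n_dense)
    return dense, spline(dense)
```

The spline passes exactly through the measured band values, so the reconstructed curve agrees with the data at each filter's center wavelength.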

2.2.2.

Multispectral unmixing

When capturing in-vivo fluorescent images, sensitivity is limited by autofluorescence from skin and organs, which makes it difficult to locate the fluorophores of interest accurately. At the same time, monitoring several biological processes simultaneously requires multiple fluorescent probes labeling different molecules. Multispectral unmixing can remove autofluorescence and separate multiple fluorophores of interest. The unmixing algorithm assumes that the measured fluorescence spectrum is a superposition of several pure spectra, each multiplied by a weighting factor determined by the local concentration, excitation efficiency, and relative luminance of the emitted fluorescence. The linear model is:

Eq. (2)

M=SA+R,
where M is the measured multispectral data matrix for each pixel (m columns: m pixels; n rows: n wavelengths), S is the pure-spectra matrix of the fluorophores of interest and autofluorescence (k columns: k fluorophores; n rows: n wavelengths), A is the weighting-factor matrix (m columns: m pixels; k rows: k fluorophores), and R is the error matrix. To solve this linear system, the number of spectral detection channels must exceed the number of fluorophores in the sample, autofluorescence must be counted as a fluorophore, and the pure spectra of the fluorophores must be obtained beforehand. The system is then solved by the least-squares method, minimizing

Eq. (3)

e² = ‖M − SA‖².

The pure spectra of the fluorophores can be obtained directly from a spectral library or extracted from multispectral fluorescence images.
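A minimal numpy sketch of this least-squares unmixing follows; the matrix shapes match the text, but the synthetic spectra and pixel counts are hypothetical, for illustration only:

```python
import numpy as np

def unmix(M, S):
    """Solve the linear mixing model M = S A + R for the weights A
    in the least-squares sense, minimizing ||M - S A||^2 (Eq. 3).

    M: (n_wavelengths, m_pixels) measured spectra.
    S: (n_wavelengths, k_fluorophores) pure spectra; autofluorescence
       is included as one extra column. Requires n_wavelengths > k.
    """
    A, *_ = np.linalg.lstsq(S, M, rcond=None)
    return A

# Hypothetical example: 8 spectral channels, 2 probes + autofluorescence
rng = np.random.default_rng(0)
S = rng.random((8, 3))        # pure spectra as columns
A_true = rng.random((3, 5))   # true weights for 5 pixels
M = S @ A_true                # noiseless synthetic measurement
A_est = unmix(M, S)           # recovers A_true up to numerical error
```

With noisy data the same call returns the least-squares estimate; the residual M − S·A_est then plays the role of the error matrix R.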

2.2.3.

Target spot center determination

The mathematical model of the fluorescent spot energy distribution is:

Eq. (4)

E(x,y) = E_max · exp[−(x − x_c)²/(2a_x²) − (y − y_c)²/(2a_y²)],
where E_max is the maximum light intensity, (x_c, y_c) are the coordinates of the energy center of the fluorescent spot, and a_x, a_y are the major and minor axes of the spot intensity distribution. The grayscale distribution of the spot on the camera imaging plane is

Eq. (5)

E(x,y) = E_max · exp[−(s·h_1·p − x_c)²/(2a_x²) − (s·h_2·p − y_c)²/(2a_y²)],
where h_1, h_2 are row vectors of H, H = MR; M is the internal (intrinsic) matrix of the camera, R is the rotation matrix, and p is the homogeneous coordinate vector of the spot center. The pixel on the imaging plane corresponding to the energy center of the spot is the maximum of the grayscale distribution function.

For a gray image I(i,j), the gray-weighted center (x_0, y_0) is

Eq. (6)

x_0 = Σ_{(i,j)∈S} i·W(i,j) / Σ_{(i,j)∈S} W(i,j),  y_0 = Σ_{(i,j)∈S} j·W(i,j) / Σ_{(i,j)∈S} W(i,j),
where W(i,j) are the weights, with W(i,j) = I(i,j) in general. This gray-gravity method is used to process the fluorescence spot after multispectral unmixing.
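The gray-gravity centroid of Eq. (6) can be sketched as follows; the synthetic spot follows the Gaussian model of Eq. (4), with arbitrary illustrative parameter values:

```python
import numpy as np

def gray_weighted_center(img):
    """Gray-gravity (intensity-weighted centroid) of a spot image, Eq. (6).

    Uses W(i, j) = I(i, j); returns (x0, y0) in pixel coordinates.
    """
    i, j = np.indices(img.shape)
    w = img.sum()
    return (i * img).sum() / w, (j * img).sum() / w

# Synthetic spot per Eq. (4): E = Emax * exp(-(x-xc)^2/(2ax^2) - (y-yc)^2/(2ay^2))
x, y = np.indices((64, 64))
xc, yc, ax_, ay_ = 30.0, 22.0, 4.0, 6.0
spot = np.exp(-(x - xc) ** 2 / (2 * ax_ ** 2) - (y - yc) ** 2 / (2 * ay_ ** 2))
x0, y0 = gray_weighted_center(spot)  # close to (xc, yc) for a centered spot
```

For a spot well inside the image, the centroid recovers the energy center with sub-pixel accuracy; truncation at the image borders introduces only a small bias.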

2.3.

3-D Positioning Based on Binocular Stereovision Theory

A vision measurement system calculates the 3-D position of a target from image information acquired by cameras. This paper presents a method to obtain the 3-D coordinates of fluorescence targets based on binocular stereovision theory.

2.3.1.

Camera calibration

The correspondence between the 3-D spatial position of a point and its 2-D image position is determined by the geometry of the camera; the parameters of this geometric model are called the camera parameters. Camera calibration is the experimental process of determining the internal and external parameters of the camera. In this paper, OpenCV functions45 are used to calculate the internal parameter matrix M based on Zhang's method.46 In a stereovision system, the relative position of the two cameras (or of a single camera at different perspectives) must also be obtained. The distortion parameter D is solved by Brown's method.47
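For intuition, the geometric model that calibration recovers is the pinhole projection sketched below in numpy. The intrinsic matrix, pose, and checkerboard points are hypothetical illustrative values (in practice they come from routines such as cv2.calibrateCamera), and Brown lens distortion is omitted:

```python
import numpy as np

def project(points_3d, M, R, t):
    """Pinhole projection: map world points to pixel coordinates.

    M: 3x3 intrinsic matrix (the internal parameters calibration recovers),
    R, t: extrinsic rotation (3x3) and translation (3,).
    Distortion is omitted in this sketch.
    """
    cam = (R @ points_3d.T).T + t          # world -> camera frame
    uvw = (M @ cam.T).T                    # camera -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide

# Hypothetical intrinsics: fx = fy = 800 px, principal point (512, 512)
M = np.array([[800.0, 0.0, 512.0],
              [0.0, 800.0, 512.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 1000.0])  # camera 1 m from the target
corners = np.array([[0.0, 0.0, 0.0],            # illustrative checkerboard
                    [10.0, 0.0, 0.0],           # corners on a 10-mm grid
                    [0.0, 10.0, 0.0]])
px = project(corners, M, R, t)
```

Calibration inverts this relation: given many known checkerboard corners and their detected pixel positions, it solves for M, R, t (and D).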

2.3.2.

Binocular stereovision

The basic principle of stereovision is to observe the same target from two (or more) points of view, acquiring images from different perspectives, and to solve for the 3-D information about the target via triangulation. In this study, the small animal is placed on the rotary platform, and the fixed CCD camera takes photos at different rotation angles. Because the animal and the platform move together, the process can equivalently be viewed as the CCD camera rotating around a fixed animal. Thus, an equivalent convergent stereoscopic model is obtained, which can be converted to a parallel stereoscopic model for further computation.48 The rotary-platform module, the equivalent convergent stereoscopic model, and the parallel binocular stereovision model are shown in Fig. 3.

Fig. 3

Imaging module using the rotary platform (a), the equivalent convergent stereoscopic model (b), and the parallel binocular stereovision model (c).


2.3.3.

3-D positioning

The epipolar constraint plays an important role in the stereo matching algorithm: it restricts the stereo matching search and greatly improves efficiency. For the rectified images, the calculated coordinates of the target spot center should have the same v coordinate. After solving for the reprojection matrix Q, the 3-D coordinates of the targets can be calculated from the 2-D coordinates (u, v) on the imaging plane and the associated disparity d:

Eq. (7)

[X Y Z W]ᵀ = Q · [u_1, (v_1 + v_2)/2, u_2 − u_1, 1]ᵀ.

The 3-D coordinates of the target are (X/W, Y/W, Z/W) in the left-camera coordinate system, where (u_1, v_1) are the center coordinates of the target spot in the left-camera image, (u_2, v_2) are the coordinates in the right-camera image, and the disparity is d = u_2 − u_1.
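Equation (7) can be sketched directly in numpy. The reprojection matrix Q below is a hypothetical rectified-stereo example (focal length, principal point, and baseline are illustrative values, not the system's calibrated ones):

```python
import numpy as np

def reproject_3d(Q, u1, v1, u2, v2):
    """Recover 3-D coordinates from matched spot centers via Eq. (7).

    Q is the 4x4 reprojection matrix from stereo rectification; for
    rectified images v1 ≈ v2 (epipolar constraint), and d = u2 - u1.
    """
    v = 0.5 * (v1 + v2)
    d = u2 - u1
    X, Y, Z, W = Q @ np.array([u1, v, d, 1.0])
    return np.array([X / W, Y / W, Z / W])

# Hypothetical rectified stereo: f = 800 px, principal point (512, 512),
# baseline Tx = 50 mm (standard OpenCV-style Q layout)
f, cx, cy, Tx = 800.0, 512.0, 512.0, 50.0
Q = np.array([[1.0, 0.0, 0.0, -cx],
              [0.0, 1.0, 0.0, -cy],
              [0.0, 0.0, 0.0, f],
              [0.0, 0.0, -1.0 / Tx, 0.0]])
point = reproject_3d(Q, 612.0, 512.0, 572.0, 512.0)  # left-camera coordinates
```

With these example values the disparity is d = −40 px, giving a depth of Z = f·Tx/(−d) = 1000 mm, consistent with the triangulation principle described above.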

3.

Experimental Results

3.1.

Camera Calibration

A calibration checkerboard of appropriate size for the field of view is printed and placed on the rotary platform at different angles and positions. After fixing it, a set of images is captured at rotation angles of 0 and 4 deg, ensuring no relative movement between the checkerboard and the rotary platform. OpenCV functions are used to calculate the internal parameter matrix M, the distortion parameter D, the rotation matrix R, and the translation vector T. The reprojection matrix Q for 3-D localization is also calculated.

3.2.

Target Extraction

To test the performance of the multispectral target extraction algorithm, three quantum dots with emission peaks at 720, 770, and 840 nm were chosen and injected hypodermically into mice at three positions around the dorsum. The fluorescent images were captured by the designed in-vivo fluorescence sensing system; the image data are shown in Fig. 4. Processing with the unmixing and extraction methods described above yields the result shown in Fig. 5, with pseudocolor applied to each quantum dot for demonstration. The method thus picks out target signals with different spectral properties well. The gray-gravity method is then used to calculate the centers of the fluorescence spots after multispectral unmixing; the result is shown in Fig. 6.

Fig. 4

Image data of three quantum dots with different emission spectra in mice. Each image is captured in a different spectral band: 711 nm (a), 759 nm (b), 780 nm (c), 794 nm (d), 850 nm (e), 880 nm (f), 903 nm (g), 938 nm (h).


Fig. 5

The results of target extraction. The three targets are shown in (a), (b), and (c), respectively; pseudocolor is imposed for demonstration in (d).


Fig. 6

The center coordinates of the fluorescence target spots calculated after multispectral unmixing using the gray-gravity method. The result is shown in (b).


3.3.

Quantitative Analyses Using Tissue Equivalent Material

To simulate the light absorption and scattering of small-animal tissue, an equivalent material with similar optical properties was selected for a simulation experiment. Agar gel was prepared as follows: 2.4 g agarose was added to 110 ml phosphate-buffered saline and heated with constant stirring until a clear, bubble-free solution was obtained. The solution was cooled to 70°C, the beaker gently shaken, and 10.5 ml of 20% fat emulsion added. The solution was then poured out, cooled, and solidified. The solidified agar gel was cut into the shape shown in Fig. 7(a), and a shallow hole was dug in the arc surface for quantum-dot injection. Agar-gel slices 2, 4, and 6 mm thick were prepared; covering the arc surface with a slice simulates a target at the corresponding depth [see Fig. 7(b)].

Fig. 7

The shape of the tissue-equivalent material for quantitative analyses (a). Slices covering the arc surface simulate targets at certain depths (b).


About 3 μl of quantum-dot (QD) solution was injected into the shallow hole on the arc surface. The uncovered agar gel was placed in the system, and multiperspective multispectral images were taken. The 2-, 4-, and 6-mm slices were then placed over the arc surface in turn, and multiperspective multispectral imaging was repeated for each. The results were processed by multispectral unmixing and target-spot-center determination as described above. Without covering, the spot-center coordinates calculated from the two perspectives are (212.8184, 236.4625) and (291.7266, 236.4625). With the 2-, 4-, and 6-mm slices, the coordinates are (212.5323, 236.7462) and (291.4989, 236.7462); (212.2375, 237.0362) and (291.4989, 237.0362); and (211.6789, 237.5191) and (290.9237, 237.5191), respectively. Applying the reprojection matrix Q to the four groups of data gives 3-D coordinates in the left-camera coordinate system of (46.83, 13.15, 240.10), (46.83, 13.10, 239.92), (46.82, 13.03, 239.62), and (46.80, 12.93, 239.17), respectively. As the slice thickness increases, the accuracy of the data decreases, but only very slightly. This experiment simulates an in-vivo fluorescent target at different depths.

3.4.

3-D Positioning

After obtaining the 2-D center coordinates of the target spot in both the left and right camera images, its 3-D coordinates in the left-camera coordinate system can be obtained from Eq. (7) and the calibration results. To obtain a more intuitive position, a standard checkerboard is placed on the rotary platform to create a reference coordinate system. By selecting feature points, the relationship between the reference coordinate system and the left-camera coordinate system is found, and the target coordinates are converted from the left-camera system to the reference system. The 3-D coordinates of the three targets shown in Fig. 4 in the reference coordinate system are (38.3, 48.2, 21.6), (53.5, 65.6, 28.5), and (62.5, 82.7, 26.2), respectively.

To test the accuracy of the system, a nonfluorescent rectangular plate was selected, and four QD solution drops (about 2 μl each) were placed on its four corners at accurately known distances. The actual distances between the drops are 40 and 80 mm. Imaging and calculation with the system give 3-D coordinates for the four drops in the reference coordinate system of (43.3, 49.9, 8.6), (43.5, 73.2, 8.8), (54.9, 49.6, 8.8), and (55.1, 72.9, 8.9). The calculated distances between the four drops are 79.86, 39.78, 80.12, and 39.83 mm, indicating that the system achieves high accuracy.

3.5.

Performance of Detecting Depth

We tested the deep-target detection capability of our in-vivo fluorescence sensing system against a commercial instrument (Maestro™, CRi, USA) by measuring the penetration of Au:CdHgTe quantum dots through muscle and adipose tissue. Au:CdHgTe quantum dots are NIR gold-doped CdHgTe QDs with high photoluminescence and low cytotoxicity. In the middle of a 96-well plate, 50 μl of Au:CdHgTe quantum dots (emission peak at 840 nm) was added and covered with muscle tissue. The results show that the designed system improves penetration depth in both muscle and adipose tissue. Figure 8 shows the penetration of the quantum dots through muscle tissue: row (a) shows images captured with the Maestro™ CRi instrument, and row (b) shows images obtained with our system, with 0 mm (I), 20 mm (II), 25 mm (III), and 30 mm (IV) of muscle tissue above the quantum dots. Figure 9 shows a similar experiment in adipose tissue, with adipose layers 30 mm (I) and 35 mm (II) thick. The difference between the two systems is insignificant when the target is shallow, but as the covering tissue thickens, the proposed system gives better imaging results. When the target is deep, the proposed system increases the penetration depth by 20% in muscle tissue and 16% in adipose tissue compared with the commercial instrument (Maestro CRi).

Fig. 8

The penetration of Au:CdHgTe quantum dots through muscle tissue. The images in row (a) are captured with the Maestro™ CRi instrument; row (b) shows images obtained with our system. The muscle tissue above the quantum dots is 0 mm (I), 20 mm (II), 25 mm (III), and 30 mm (IV) thick, respectively.


Fig. 9

The penetration of Au:CdHgTe quantum dots through adipose tissue. The adipose tissue is 30 mm thick in column (I) and 35 mm thick in column (II). Row (a) shows Maestro™ CRi photographs; row (b) shows images obtained with our imaging system.


4.

Conclusion

In this paper, the development of an NIR in-vivo fluorescence imaging system is presented, and an algorithm for computing 3-D coordinates based on binocular stereovision theory is demonstrated. The system includes a deep-cooling CCD, liquid-core fiber, high-energy laser source, rotary platform, and anesthesia apparatus, realizing a dependable setup that detects fluorescence signals from deep sources with high sensitivity. By calculating 3-D coordinates, the system provides more accurate target locations. The algorithm comprises image preprocessing, multispectral unmixing, target-spot-center calculation, camera calibration, stereo calibration, and 3-D positioning, and its validity has been verified by experiments on mice. The designed 3-D positioning will provide more accurate target location information, helping researchers analyze target metastasis and determine the relative position of the target and the organs.

Experimental results on mice and pork meat with NIR quantum dots demonstrate the potential of the designed imaging system for in-vivo optical diagnostics from deep sources and accurate position calculation. This NIR in-vivo fluorescence imaging system is expected to show its advantages in tumor-marker detection, drug tracking, pharmacokinetic studies, and the assessment of tissue response to therapy.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (31270907, 61106071, 21275129), the National Key Foundation for Exploring Scientific Instruments (2013YQ470781), and the State Key Laboratory of Industrial Control Technology, Zhejiang University, China.

References

1. 

R. Atreyaet al., “In vivo imaging using fluorescent antibodies to tumor necrosis factor predicts therapeutic response in Crohn’s disease,” Nat. Med., 20 (3), 313 –318 (2014). http://dx.doi.org/10.1038/nm.3462 1078-8956 Google Scholar

2. 

M. Autieroet al., “In vivo tumor detection in small animals by hematoporphyrin-mediated fluorescence imaging,” Photomed. Laser Surg., 28 S97 –S103 (2010). http://dx.doi.org/10.1089/pho.2009.2567 PLDHA8 1549-5418 Google Scholar

3. 

J. H. Ryuet al., “In vivo fluorescence imaging for cancer diagnosis using receptor-targeted epidermal growth factor-based nanoprobe,” Biomaterials, 34 (36), 9149 –9159 (2013). http://dx.doi.org/10.1016/j.biomaterials.2013.08.026 BIMADU 0142-9612 Google Scholar

4. 

R. V. PapineniG. FekeS. Orton, “In vivo fluorescence imaging of internally illuminated molecular probes,” Faseb J., 25 1126.2 (2011). FAJOEC 0892-6638 Google Scholar

5. 

A. F. Steinet al., “Intravascular near infrared fluorescence molecular imaging identifies macrophages in vivo in thrombosis-prone plaques,” Circulation, 128 (22), A16926 (2013). CIRCAZ 0009-7322 Google Scholar

6. 

P. Chtcheprovet al., “A high-resolution in vivo molecular imaging technique based on x-ray fluorescence,” Med. Phys., 39 (6), 3620 –3620 (2012). http://dx.doi.org/10.1118/1.4734698 MPHYA6 0094-2405 Google Scholar

7. 

Y. Ardeshirpouret al., “Using in-vivo fluorescence imaging in personalized cancer diagnostics and therapy, an image and treat paradigm,” Technol. Cancer Res. Treat., 10 (6), 549 –560 (2011). http://dx.doi.org/10.1177/153303461101000605 1533-0346 Google Scholar

8. 

S. Andersson-Engels et al., "In vivo fluorescence imaging for tissue diagnostics," Phys. Med. Biol., 42 (5), 815–824 (1997). http://dx.doi.org/10.1088/0031-9155/42/5/006

9. A. Rehemtulla et al., "Rapid and quantitative assessment of cancer treatment response using in vivo bioluminescence imaging," Neoplasia, 2 (6), 491–495 (2000). http://dx.doi.org/10.1038/sj.neo.7900121

10. V. V. Tuchin, "Tissue optics: tomography and topography," Proc. SPIE, 3726, 168–198 (1999). http://dx.doi.org/10.1117/12.341389

11. J. Napp et al., "Targeted luminescent near-infrared polymer-nanoprobes for in vivo imaging of tumor hypoxia," Anal. Chem., 83 (23), 9039–9046 (2011). http://dx.doi.org/10.1021/ac201870b

12. Y.-P. Gu et al., "Ultrasmall near-infrared Ag2Se quantum dots with tunable fluorescence for in vivo imaging," J. Am. Chem. Soc., 134 (1), 79–82 (2012). http://dx.doi.org/10.1021/ja2089553

13. M. A. Calfon et al., "In vivo near infrared fluorescence (NIRF) intravascular molecular imaging of inflammatory plaque, a multimodal approach to imaging of atherosclerosis," J. Vis. Exp., 54, e2257 (2011). http://dx.doi.org/10.3791/2257

14. Y. Cao et al., "Near-infrared quantum-dot-based non-invasive in vivo imaging of squamous cell carcinoma U14," Nanotechnology, 21 (47), 475104 (2010). http://dx.doi.org/10.1088/0957-4484/21/47/475104

15. S. H. Han et al., "Au:CdHgTe quantum dots for in vivo tumor-targeted multispectral fluorescence imaging," Anal. Bioanal. Chem., 403 (5), 1343–1352 (2012). http://dx.doi.org/10.1007/s00216-012-5921-y

16. J. Bec et al., "Multispectral fluorescence lifetime imaging system for intravascular diagnostics with ultrasound guidance: in vivo validation in swine arteries," J. Biophotonics, 7 (5), 281–285 (2014). http://dx.doi.org/10.1002/jbio.v7.5

17. M. Hassan et al., "Fluorescence lifetime imaging system for in vivo studies," Mol. Imaging, 6 (4), 229–236 (2007).

18. Y. Inoue et al., "In vivo fluorescence imaging of the reticuloendothelial system using quantum dots in combination with bioluminescent tumour monitoring," Eur. J. Nucl. Med. Mol. Imaging, 34 (12), 2048–2056 (2007). http://dx.doi.org/10.1007/s00259-007-0583-2

19. R. N. Razansky et al., "Near-infrared fluorescence catheter system for two-dimensional intravascular imaging in vivo," Opt. Express, 18 (11), 11372–11381 (2010). http://dx.doi.org/10.1364/OE.18.011372

20. X. F. Zhang et al., "Development of a noncontact 3-D fluorescence tomography system for small animal in vivo imaging," Proc. SPIE, 7191, 71910D (2009). http://dx.doi.org/10.1117/12.808199

21. X. L. Bao and M. G. Li, "Defocus and binocular vision based stereo particle pairing method for 3D particle tracking velocimetry," Opt. Laser Eng., 49 (5), 623–631 (2011). http://dx.doi.org/10.1016/j.optlaseng.2011.01.015

22. M. Cao, G. M. Zhang and Y. M. Chen, "Stereo matching of light-spot image points in light pen in binocular stereo visual," Optik, 125 (3), 1366–1370 (2014). http://dx.doi.org/10.1016/j.ijleo.2013.08.029

23. H. C. Longuet-Higgins, "A computer algorithm for reconstructing a scene from two projections," in Readings in Computer Vision: Issues, Problems, Principles, and Paradigms, 61–62, Morgan Kaufmann, San Francisco (1987).

24. U. Weidner and W. Förstner, "Towards automatic building extraction from high-resolution digital elevation models," ISPRS J. Photogramm., 50 (4), 38–49 (1995). http://dx.doi.org/10.1016/0924-2716(95)98236-S

25. Z. H. Zhang, "Digital photogrammetry and computer vision," Geomatics Sci. Wuhan Univ., 29 (12), 1035–1105 (2004).

26. P. F. Luo et al., "Accurate measurement of 3-dimensional deformations in deformable and rigid bodies using computer vision," Exp. Mech., 33 (2), 123–132 (1993). http://dx.doi.org/10.1007/BF02322488

27. Z. X. Hu et al., "Study of the performance of different subpixel image correlation methods in 3D digital image correlation," Appl. Opt., 49 (21), 4044–4051 (2010). http://dx.doi.org/10.1364/AO.49.004044

28. M. Tateiba, "Theoretical study on wave propagation and scattering in random media and its application," IEICE Trans. Electron., E93-C (1), 3–8 (2010). http://dx.doi.org/10.1587/transele.E93.C.3

29. Y. A. Kravtsov, "New effects in wave-propagation and scattering in random-media," Appl. Opt., 32 (15), 2681–2691 (1993). http://dx.doi.org/10.1364/AO.32.002681

30. Z. X. Hu et al., "Fluorescent stereo microscopy for 3D surface profilometry and deformation mapping," Opt. Express, 21 (10), 11808–11818 (2013). http://dx.doi.org/10.1364/OE.21.011808

31. X. Feas et al., "New near ultraviolet laser-induced native fluorescence detection coupled to HPLC to analyse residues of oxolinic acid and flumequine: a comparison with conventional xenon flash lamp," CyTA-J. Food, 7 (1), 15–21 (2009). http://dx.doi.org/10.1080/11358120902850552

32. R. Diamant et al., "High-brightness diode pump sources for solid-state and fiber laser pumping across 8xx-9xx nm range," Proc. SPIE, 8039, 80390E (2011). http://dx.doi.org/10.1117/12.883831

33. L. Zhang et al., "Colloidal PbSe quantum dot-solution-filled liquid-core optical fiber for 1.55 μm telecommunication wavelengths," Nanotechnology, 25 (10), 105704 (2014). http://dx.doi.org/10.1088/0957-4484/25/10/105704

34. D. C. Chen, "Research of the measuring technique of numerical aperture of optical fiber using near infrared light," Microw. Opt. Technol. Lett., 50 (3), 582–584 (2008). http://dx.doi.org/10.1002/(ISSN)1098-2760

35. D. L. Marks et al., "Study of an ultrahigh-numerical-aperture fiber continuum generation source for optical coherence tomography," Opt. Lett., 27 (22), 2010–2012 (2002). http://dx.doi.org/10.1364/OL.27.002010

36. R. Yang, H. L. Ma and H. P. Jin, "Influencing factors of external fluorescence seeding enhancing stimulated Raman scattering in liquid-core optical fiber," J. Raman Spectrosc., 44 (12), 1689–1692 (2013). http://dx.doi.org/10.1002/jrs.v44.12

37. P. C. Ashok, B. B. Praveen and K. Dholakia, "Near infrared spectroscopic analysis of single malt Scotch whisky on an optofluidic chip," Opt. Express, 19 (23), 22982–22992 (2011). http://dx.doi.org/10.1364/OE.19.022982

38. K. Sperlich and H. Stolz, "Quantum efficiency measurements of (EM)CCD cameras: high spectral resolution and temperature dependence," Meas. Sci. Technol., 25 (1), 015502 (2014). http://dx.doi.org/10.1088/0957-0233/25/1/015502

39. S. Nikzad et al., "Delta-doped electron-multiplied CCD with absolute quantum efficiency over 50% in the near to far ultraviolet range for single photon counting applications," Appl. Opt., 51 (3), 365–369 (2011). http://dx.doi.org/10.1364/AO.51.000365

40. Z. J. Cai et al., "Quantification and elimination of the CCD dark current in weak spectrum measurement by modulation and correlation method," Proc. SPIE, 7850, 785005 (2010). http://dx.doi.org/10.1117/12.869369

41. J. Hewett et al., "The application of a compact multispectral imaging system with integrated excitation source to in vivo monitoring of fluorescence during topical photodynamic therapy of superficial skin cancers," Photochem. Photobiol., 73 (3), 278–282 (2001). http://dx.doi.org/10.1562/0031-8655(2001)073<0278:TAOACM>2.0.CO;2

42. M. S. Kim, A. M. Lefcourt and Y. R. Chen, "Multispectral laser-induced fluorescence imaging system for large biological samples," Appl. Opt., 42 (19), 3927–3934 (2003). http://dx.doi.org/10.1364/AO.42.003927

43. B. M. W. de Vries et al., "Multispectral near-infrared fluorescence molecular imaging of matrix metalloproteinases in a human carotid plaque using a matrix-degrading metalloproteinase-sensitive activatable fluorescent probe," Circulation, 119 (20), E534–E536 (2009). http://dx.doi.org/10.1161/CIRCULATIONAHA.108.821389

44. M. E. Martin et al., "Development of an advanced hyperspectral imaging (HSI) system with applications for cancer detection," Ann. Biomed. Eng., 34 (6), 1061–1068 (2006). http://dx.doi.org/10.1007/s10439-006-9121-9

45. "Camera Calibration and 3D Reconstruction," (2014).

46. Z. Y. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal., 22 (11), 1330–1334 (2000). http://dx.doi.org/10.1109/34.888718

47. D. C. Brown, "Close-range camera calibration," Photogramm. Eng., 37 (8), 855–866 (1971).

48. T. Y. Young, Handbook of Pattern Recognition and Image Processing: Computer Vision, Academic Press, Orlando, FL (1994).

Biographies of the authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Bofan Song, Wei Jin, Ying Wang, Qinhan Jin, and Ying Mu "In vivo near-infrared fluorescence three-dimensional positioning system with binocular stereovision," Journal of Biomedical Optics 19(11), 116002 (3 November 2014). https://doi.org/10.1117/1.JBO.19.11.116002
Published: 3 November 2014
KEYWORDS
Luminescence, In vivo imaging, 3D acquisition, Imaging systems, Cameras, 3D image processing, Signal detection
