Open Access
16 February 2018

Fiber bundle endomicroscopy with multi-illumination for three-dimensional reflectance image reconstruction
Yoriko Ando, Hirohito Sawahata, Takeshi Kawano, Kowa Koida, Rika Numano
Abstract
Bundled fiber optics allow in vivo imaging at deep sites in the body. Intrinsic optical contrast reveals detailed structures in blood vessels and organs. We developed a bundled-fiber-coupled endomicroscope that enables stereoscopic three-dimensional (3-D) reflectance imaging with a multipositional illumination scheme. Two illumination sites were attached to obtain reflectance images under left and right illumination. Depth was estimated from the horizontal disparity between the two images under alternating illumination and was calibrated using targets with known depths. This depth reconstruction was applied to an animal model to obtain the 3-D structure of blood vessels of the cerebral cortex (Cereb cortex) and preputial gland (Pre gla). The 3-D endomicroscope could be instrumental to microlevel reflectance imaging, improving subjective depth perception, spatial orientation, and identification of anatomical structures.

Fiber bundle optics is one of the greatest advances in recent decades for medical applications that image areas deep inside living organisms, which are inaccessible to conventional optical imaging.1–3 Reflectance imaging through a fiber bundle can detect the intrinsic optical contrast of blood vessels and tissue structures. It has been used as part of multimodal imaging along with functional modalities such as fluorescence and bioluminescence.4,5 When an illumination source is placed adjacent to the imaging optics, superficial tissue in contact with the imaging plane is illuminated from behind by light scattered from deeper tissue layers [Fig. 1(a1)]. The scattered light provides a pseudovertical illumination source, and the observed image is similar to that of a transmission microscope [Fig. 1(a2)].5,6 The observed reflectance image usually appears as a “shadow” of the target object and thus loses all depth information. Depth information can be recovered by three-dimensional (3-D) techniques, improving precision when dealing with real structures. Stereoscopy is a technique that enables depth perception by binocular vision. Some endoscopes incorporate 3-D technology.7 In general, they have a pair of cameras located on the left and right sides. The images are delivered separately to each eye of the viewer, and these two-dimensional (2-D) images are integrated in the brain to give rise to 3-D depth perception. This technique enhances subjective depth perception, spatial orientation, and identification of anatomical structures compared with 2-D endoscopy.8 Here, we adopt 3-D techniques for reflectance imaging using a bundled-fiber-coupled endomicroscope with a multipositional illumination scheme [Fig. 1(b)]. Switching the illumination position (Illu. A and Illu. B) shifts the position of the shadow horizontally. The shift depends on the depth of the target (shadow A and shadow B), as shown in Fig. 1(a2).
Thus, switching the illumination acted like the pair of cameras in a conventional 3-D endoscope. If the depth of an object can be estimated by comparing the reflectance images, then 3-D reconstruction can be realized. The depth resolution is limited, however, compared with modern 3-D microscopy techniques such as confocal, two-photon, and photoacoustic microscopy. A part of this study was reported elsewhere.9
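The disparity-to-depth conversion described above reduces to a linear calibration. The sketch below (illustrative Python/NumPy only; the authors' analysis used MATLAB, and the calibration numbers here are hypothetical) fits measured shift against known depth and inverts the fit to estimate the depth of a new target:

```python
import numpy as np

# Hypothetical calibration targets: known depths (um) and the measured
# horizontal shifts (pixels) between the two illumination conditions.
depths_um = np.array([0.0, 300.0, 500.0])
shifts_px = np.array([0.0, 6.0, 10.0])

# Least-squares linear fit: shift = a * depth + b.
a, b = np.polyfit(depths_um, shifts_px, 1)

def estimate_depth(shift_px):
    """Invert the calibration to map a measured shift (px) to a depth (um)."""
    return (shift_px - b) / a
```

With these made-up calibration points the fit is exact, so a shift of 8 pixels maps back to a depth of 400 μm.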

Fig. 1

(a) Reflectance imaging by a fiber bundle with illumination fibers. (Illu. A, illumination A; Illu. B, illumination B). (b) Multipositional illumination. (b1) Tip of the fiber bundle with illumination fibers. (b2) Four illumination conditions. (c) Fiber-bundle-coupled endomicroscope system with multi-illumination. (OL, objective lens; TL, tube lens).

JBO_23_2_020502_f001.png

The experimental setup consisted of an objective lens (×20, NA 0.5) and a tube lens (f=180 mm) optically coupled to the fiber bundle (DP-SMD002; TAC), and a cooled EM-CCD camera (−90±5°C, iXon; Andor Technology) [Fig. 1(c)]. Light from an LED [white or 430-nm peak (99,430, LED-ON)] was optically coupled to 12 single-mode fibers (250-μm diameter, CK-10; Mitsubishi Rayon) surrounding the fiber bundle. Each fiber was used as a separate illumination source [Fig. 1(b1)]. For reflectance imaging of mouse blood vessels, a blue LED (peak wavelength of 430 nm) was used to highlight hemoglobin. Three of the 12 illumination fibers were coupled to a single LED; hence, the target could be illuminated alternately from four sites [Fig. 1(b2)]. After placing the fiber bundle tip at the target’s surface, light reflected from the target was collected by the lens system with integration times of 0.1 to 1 s and imaged by the CCD camera. Shading due to nonuniform illumination was eliminated by subtracting a 2-D quadratic function derived by least-squares fitting. Imaging data were analyzed with MATLAB® (MathWorks) and processed with MATLAB® and Photoshop (Adobe). We performed reflectance imaging on both artificial and biological targets. Microbeads and an ink-mark printed on a transparent polyester (PET) film (overhead projector film) of known thickness were used to confirm the ability of the multipositional illumination scheme to detect depth. Target objects located at unknown depths, including blood vessels of the mouse Cereb cortex and Pre gla, were used to examine the applicability of this approach. C57BL/6N mice (SLC) were anesthetized with a 30% urethane solution (1.5 g/kg body weight, i.p. injection). The hair around the target organs was trimmed, and the skin was cut for observation by bundled-fiber-coupled endomicroscopy. To image the Cereb cortex, the skull was opened and images were acquired from above the dura layer.
This study was carried out in strict accordance with the Guide for the Care and Use of Laboratory Animals from TUT and approved by the Animal Research Committee of TUT.
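The shading-correction step above, subtracting a least-squares-fitted 2-D quadratic surface, can be sketched as follows. This is an illustrative Python/NumPy reimplementation, not the authors' MATLAB code:

```python
import numpy as np

def remove_shading(img):
    """Remove slowly varying illumination shading by fitting a 2-D quadratic
    surface z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2 to the image by
    least squares and subtracting the fitted surface."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    # Design matrix: one column per quadratic basis term, one row per pixel.
    A = np.stack([np.ones_like(x), x, y, x**2, x*y, y**2], axis=-1).reshape(-1, 6)
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return img - (A @ coeffs).reshape(h, w)
```

Applied to an image whose background is a pure quadratic gradient, the residual is flat, leaving only the higher-frequency structure of interest.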

Reflectance imaging of the microbeads under the four illumination conditions was performed while changing the distance from the microbeads to the fiber tip [Fig. 2(a)]. Microbeads (10 to 15 μm, FluoSpheres polystyrene microspheres, F8833; Molecular Probes) with diameters of 10 μm were scattered on the surface of a 0.5% agar gel in water. Figure 2(b) shows the reflectance images of the 10-μm microbeads under the four illumination conditions, arbitrarily colored pink, blue, green, and yellow, at distances of 0, 300, and 500 μm. The bottom-right image is a magnification of the area enclosed by a square in the main image. When the distance was 0 μm, the reflectance images of the microbeads almost overlapped. However, the beads at distances of 300 and 500 μm shifted depending on the illumination position. We calculated the shift of the beads between the opposite-side illumination conditions (1–3 and 2–4). Figure 2(c) shows the shift as a function of distance. The data show the mean±SE (n=16). The shift increased linearly with the distance from the microbeads to the fiber tip. Next, we examined relative depth measurements of two targets separated by a known distance. A transparent PET film (100 μm thick) was used, and ink-marks were printed onto its surface by a laser printer (Docu Centre-V C2275; Fuji Xerox). Each distance condition was prepared using two films. A distance of 100 μm was realized by stacking the two films directly [left of Fig. 3(a)]; a distance of 200 μm was realized by flipping the deeper film [right of Fig. 3(a)]. The reflectance images of the two ink-marks under illuminations 3 and 1 are shown in Figs. 3(b1), 3(b2), 3(b4), and 3(b5). Composite images of the reflectance images under illuminations 1 and 3, binarized and colored red and green, respectively, are shown in Figs. 3(b3) and 3(b6) to indicate the differences between the reflectance images.
Yellow denotes the overlapping area in the composite image. A red or green area corresponds to a shift of the target that depends on its depth. The ink-mark at 0 μm under illuminations 1 and 3 overlapped in the composite image [left ink-mark in Figs. 3(b3) and 3(b6)]. On the other hand, the ink-marks on the far PET film, located at 100 and 200 μm, showed shifts (denoted by the triangles) in Figs. 3(b3) and 3(b6), respectively. Hence, the image of a deeper target shifts with the illumination position. We also obtained reflectance images of blood vessels of the mouse Cereb cortex and Pre gla [Figs. 3(b7), 3(b8), 3(b10), and 3(b11), respectively]. These targets lie at unknown depths. The terminals of the blood vessels of the mouse brain (indicated by triangles) showed large shifts [Fig. 3(b9)], implying that the terminals were much deeper than other areas. The blood vessels of the Cereb cortex usually run along its surface and extend into deep brain sites.10 Thus, our system can detect surface blood vessels, diving trunks, and their branches. The image of the Pre gla shows the structure of the blood vessels as well as the tissue structure [Figs. 3(b10)–3(b12)].
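The red/green composites described above can be reproduced with a few lines of array code. The sketch below is an illustrative Python/NumPy version with a hypothetical intensity threshold; targets are assumed darker than the background, as in the shadow images. Left-only pixels are colored red, right-only pixels green, and overlap yellow on a white background:

```python
import numpy as np

def stereo_composite(img_ill1, img_ill3, thresh):
    """Binarize two reflectance images (target pixels darker than `thresh`)
    and overlay them: red = illumination 1 only, green = illumination 3 only,
    yellow = both (overlap), white = background."""
    t1 = img_ill1 < thresh                      # target under illumination 1
    t3 = img_ill3 < thresh                      # target under illumination 3
    h, w = t1.shape
    rgb = np.ones((h, w, 3))                    # start from a white background
    rgb[..., 0] = np.where(t3 & ~t1, 0.0, 1.0)  # drop red where only ill. 3
    rgb[..., 1] = np.where(t1 & ~t3, 0.0, 1.0)  # drop green where only ill. 1
    rgb[..., 2] = np.where(t1 | t3, 0.0, 1.0)   # drop blue on any target pixel
    return rgb
```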

Fig. 2

(a) Schematic of the reflectance imaging of the microbeads. (b) Superposition of four reflectance images of microbeads with illuminations 1, 2, 3, and 4. (c) Shift values between the centers of the reflectance images of the microbeads with opposite illuminations. Black and white dots denote comparison pairs of 1–3 and 2–4, respectively.

JBO_23_2_020502_f002.png

Fig. 3

(a) Schematic of the reflectance imaging of the ink-mark on the PET film. (b) Reflectance images and composite images of the ink-mark on the PET film and the blood vessels of a mouse Cereb cortex and Pre gla under illuminations 3 and 1. The right column shows superpositions of the binarized reflectance images, displayed in green and red for illuminations 3 and 1, respectively.

JBO_23_2_020502_f003.png

The depth of each element in the image was estimated by comparing two images acquired under different illumination conditions. We used a window-based correlation calculation for stereo matching. The matching point for a region of interest (ROI) in the first image (ROI size was basically 16×16 pixels; if >90% of the area contained a blood vessel, a larger ROI of 32×32 to 128×128 pixels was used) was searched among slightly offset ROIs in the second image. After identifying the maximum correlation, the distance between the original ROI and the matched ROI was calculated as the shift value. Two shift values were obtained for each object from the two opposite-side illumination pairs (1–3 and 2–4), because these pairs provided larger shift values, and thus more reliable depth information, than the other possible pairs. The mean of the two values for each target is shown in color [Fig. 4(a)] and as a 3-D surface plot [Fig. 4(b)]. As shown in Figs. 4(a1) and 4(a2), the shift value of the ink-mark on the PET film became larger as the distance from the fiber bundle to the film surface increased. The red circles in Fig. 4(c) indicate the shift value against the depth of the ink-mark. The data show the mean±SE (0 μm, n=7; 100 μm, n=3; 200 μm, n=4). The shift values changed linearly with the depth from the fiber bundle. Next, we examined the reliability of the estimated depth by comparing the first and second experiments [Fig. 4(c)]. The gray circles indicate the average shift values of the microbeads calculated from Fig. 2(c). Although both shift values changed linearly, they differed by about 50%. There was a difference in size between the microbead and the ink-mark; however, the effect of size on the shift values was negligible (<1%), as confirmed by geometrical calculation (data not shown). One reason for the difference is the variation in the scattering properties of the materials (agar gel and PET film).
In other words, the bundled-fiber-coupled endomicroscope with a multipositional illumination scheme cannot estimate the absolute depth of an object precisely; instead, it provides a reliable relative index. The shift values of the blood vessels of the mouse Cereb cortex and Pre gla were also obtained, as shown in Figs. 4(a3), 4(a4), 4(b3), and 4(b4). Applying the calibration from the microbeads and the ink-marks on the PET film, whose depths are known [Fig. 4(c)], the deepest regions of the blood vessels of the Cereb cortex and Pre gla are estimated to lie 200 to 350 μm below the surface. However, the scattering properties of the material may introduce errors.
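The window-based correlation matching can be sketched as follows. This illustrative Python version (the authors used MATLAB; the window size and search range here are hypothetical defaults) slides an ROI horizontally over the second image and returns the offset that maximizes the normalized cross-correlation:

```python
import numpy as np

def roi_shift(img_a, img_b, top, left, size=16, max_shift=8):
    """Estimate the horizontal shift of the ROI at (top, left) in img_a by
    finding the horizontally offset ROI in img_b with maximum zero-mean
    normalized cross-correlation."""
    ref = img_a[top:top + size, left:left + size].astype(float)
    ref -= ref.mean()
    best_corr, best_dx = -np.inf, 0
    for dx in range(-max_shift, max_shift + 1):
        l = left + dx
        if l < 0 or l + size > img_b.shape[1]:
            continue                      # candidate window out of bounds
        cand = img_b[top:top + size, l:l + size].astype(float)
        cand -= cand.mean()
        denom = np.sqrt((ref ** 2).sum() * (cand ** 2).sum())
        if denom == 0:
            continue                      # flat window, correlation undefined
        corr = (ref * cand).sum() / denom
        if corr > best_corr:
            best_corr, best_dx = corr, dx
    return best_dx
```

Applied to a synthetic pair in which the second image is the first shifted horizontally, the function recovers the imposed shift exactly.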

Fig. 4

(a) Shift values obtained by cross-correlation analysis of the reflectance images. (b) 3-D surface plot of (a). (c) Shift values of the ink-mark in the reflectance image of the PET film (red-filled circles). Gray open circles indicate the average shift values of the microbeads calculated from Fig. 2(c).

JBO_23_2_020502_f004.png

The relative depth was also demonstrated simply by stereo viewing. Reflectance images with right illumination (illumination 3) and left illumination (illumination 1) correspond to the stereo pairs shown in Fig. 3(b). Observing these stereo pairs through a stereoscope (stereo mirror viewer; Kokon) or by the parallel viewing method, which makes the white dots in the left and middle panels of Fig. 3(b) overlap, results in the perception of 3-D depth. The two ink-marks appeared at different depths in the PET film. The blood vessels of the Cereb cortex appeared at the surface and then dived into the deep brain, whereas the blood vessels of the Pre gla appeared to float on the background structures. This stereo display can be extended to an online 3-D system. Online image acquisition and processing were implemented in LabVIEW (National Instruments). The illumination of two LEDs was controlled by an analog output (SCB-68; National Instruments) and switched alternately, as shown in Fig. 5. Image acquisition was synchronized with the onset of each illumination. The reflectance images under the left and right illumination conditions were displayed on the right and left sides of the screen, respectively. The experimenter successfully observed the images through the stereoscope and perceived relative depth, confirming that the system works properly. Optical techniques for 3-D surface reconstruction in endoscopy have been employed in clinical applications.11 The online 3-D endomicroscope could greatly assist the manipulation of a fiber tip to locate a target area using morphological features, as part of multimodal imaging in conjunction with functional imaging. We anticipate that endocytoscopic applications of 3-D techniques will facilitate examination, which could improve the effectiveness of optical biopsies and make them viable high-definition alternatives to histology.
Our current system has separate illumination fibers and an imaging fiber, resulting in a large size and increased invasiveness. Illumination through the imaging fiber itself12–14 may solve this problem.
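Synchronizing acquisition with the alternating illumination reduces, in software, to pairing consecutive frames by their illumination tag. The hardware control described above was done in LabVIEW; the sketch below is a hypothetical Python version of just the pairing logic, which also resynchronizes after a dropped frame:

```python
def pair_stereo_frames(frames):
    """Group an alternating stream of (illumination, image) tuples, where
    illumination is 'L' or 'R', into (left_image, right_image) stereo pairs.
    Unmatched frames (e.g., after a dropped frame) are skipped."""
    pairs = []
    i = 0
    while i + 1 < len(frames):
        (ill_a, img_a), (ill_b, img_b) = frames[i], frames[i + 1]
        if {ill_a, ill_b} == {'L', 'R'}:
            left, right = (img_a, img_b) if ill_a == 'L' else (img_b, img_a)
            pairs.append((left, right))
            i += 2
        else:
            i += 1   # skip one frame and try to resynchronize
    return pairs
```

Each resulting pair can then be displayed side by side for stereoscopic viewing, as in the online system of Fig. 5.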

Fig. 5

Schematic of the online 3-D endomicroscope system. DAQ, analog output system; Sync., synchronization.

JBO_23_2_020502_f005.png

Disclosures

The authors declare that they have no competing financial interests.

Acknowledgments

This work was supported by the Program to Foster Young Researchers in Cutting-Edge Interdisciplinary Research from the Ministry of Education, Culture, Sports, Science and Technology (MEXT)/Japan Science and Technology Agency (JST) (to K.K. and R.N.). It was also supported by Japan Society for the Promotion of Science (JSPS) KAKENHI (Grant Nos. 25135718 and 15H05917 to K.K., 26709024 and 17H03250 to T.K., and 24590350 and 15H03901 to R.N.). R.N. was supported in part by the Visionary Research fund from the Takeda Science Foundation and a Research Grant for Science and Technology Innovation from TUT. R.N. and Y.A. were also supported in part by a research fund from the Research Foundation for Opto-Science and Technology. We thank Dr. Takashi Sakurai (Juntendo University), Mitsuo Natsume (Denko-sha), and Dr. Shigeki Nakauchi (TUT) for developing the optical systems, Minako Matsuo (TUT) for technical support, and Naobumi Kimura (TUT) for animal care. K.K. and Y.A. conceived the concept and designed the experiments. Y.A., H.S., and K.K. built the experimental setup and performed the experiments. Y.A., K.K., and H.S. analyzed the data. The manuscript was written by Y.A., K.K., and R.N. This study was coordinated by R.N., K.K., and T.K. All authors read and approved the final manuscript.

References

1. B. A. Flusberg et al., “Fiber-optic fluorescence imaging,” Nat. Methods 2(12), 941–950 (2005). https://doi.org/10.1038/nmeth820

2. G. Keiser et al., “Review of diverse optical fibers used in biomedical research and clinical practice,” J. Biomed. Opt. 19(8), 080902 (2014). https://doi.org/10.1117/1.JBO.19.8.080902

3. A. D. Mehta et al., “Fiber optic in vivo imaging in the mammalian nervous system,” Curr. Opin. Neurobiol. 14(5), 617–628 (2004). https://doi.org/10.1016/j.conb.2004.08.017

4. Y. Ando et al., “In vivo bioluminescence and reflectance imaging of multiple organs in bioluminescence reporter mice by bundled-fiber-coupled microscopy,” Biomed. Opt. Express 7(3), 963–978 (2016). https://doi.org/10.1364/BOE.7.000963

5. M. Hughes, T. P. Chang, and G. Z. Yang, “Fiber bundle endocytoscopy,” Biomed. Opt. Express 4(12), 2781–2794 (2013). https://doi.org/10.1364/BOE.4.002781

6. T. Ohigashi et al., “Endocytoscopy: novel endoscopic imaging technology for in-situ observation of bladder cancer cells,” J. Endourol. 20(9), 698–701 (2006). https://doi.org/10.1089/end.2006.20.698

7. U. D. Mueller-Richter et al., “Possibilities and limitations of current stereo-endoscopy,” Surg. Endosc. 18(6), 942–947 (2004). https://doi.org/10.1007/s00464-003-9097-6

8. H. A. Zaidi et al., “Efficacy of three-dimensional endoscopy for ventral skull base pathology: a systematic review of the literature,” World Neurosurg. 86, 419–431 (2016). https://doi.org/10.1016/j.wneu.2015.10.004

9. Y. Ando et al., “Reflectance imaging by fiber bundle endoscope: vertical reconstruction by multipositional illumination,” 020009 (2016).

10. H. Uhlirova et al., “The roadmap for estimation of cell-type-specific neuronal activity from non-invasive measurements,” Philos. Trans. R. Soc. London B 371(1705), 20150356 (2016). https://doi.org/10.1098/rstb.2015.0356

11. L. Maier-Hein et al., “Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery,” Med. Image Anal. 17(8), 974–996 (2013). https://doi.org/10.1016/j.media.2013.04.003

12. M. Hughes, P. Giataganas, and G. Z. Yang, “Color reflectance fiber bundle endomicroscopy without back-reflections,” J. Biomed. Opt. 19(3), 030501 (2014). https://doi.org/10.1117/1.JBO.19.3.030501

13. X. Liu, Y. Huang, and J. U. Kang, “Dark-field illuminated reflectance fiber bundle endoscopic microscope,” J. Biomed. Opt. 16(4), 046003 (2011). https://doi.org/10.1117/1.3560298

14. J. Sun et al., “Needle-compatible single fiber bundle image guide reflectance endoscope,” J. Biomed. Opt. 15(4), 040502 (2010). https://doi.org/10.1117/1.3465558
CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Yoriko Ando, Hirohito Sawahata, Takeshi Kawano, Kowa Koida, and Rika Numano "Fiber bundle endomicroscopy with multi-illumination for three-dimensional reflectance image reconstruction," Journal of Biomedical Optics 23(2), 020502 (16 February 2018). https://doi.org/10.1117/1.JBO.23.2.020502
Received: 24 September 2017; Accepted: 17 January 2018; Published: 16 February 2018