Fiber bundle optics is one of the greatest advances of recent decades for medical applications that image regions deep inside living organisms, which are inaccessible to conventional optical imaging.1–3 Reflectance imaging through a fiber bundle can detect the intrinsic optical contrast of blood vessels and tissue structures, and it has been used in multimodal imaging together with functional modalities such as fluorescence and bioluminescence imaging.4,5 When an illumination source is placed adjacent to the imaging optics, superficial tissue in contact with the imaging plane is illuminated from behind by light scattered from deeper tissue layers [Fig. 1(a1)]. The scattered light provides a pseudovertical illumination source, and the observed image resembles that of a transmission microscope [Fig. 1(a2)].5,6 The observed reflectance image usually appears as a “shadow” of the target object and thus lacks depth information. Depth information can be recovered by three-dimensional (3-D) techniques, improving precision when dealing with real structures. Stereoscopy enables depth perception through binocular vision, and some endoscopes already incorporate 3-D technology.7 In general, they carry a pair of cameras on the left and right sides; the images are delivered separately to each eye of the viewer, and these two-dimensional (2-D) images are integrated in the brain to give rise to 3-D depth perception. This technique enhances subjective depth perception, spatial segmentation, and identification of anatomical structures compared with 2-D endoscopy.8 Here, we adopt 3-D techniques for reflectance imaging using a bundled-fiber-coupled endomicroscope with a multipositional illumination scheme [Fig. 1(b)]. Switching the illumination position (Ill. A and Ill. B) shifts the shadow horizontally, and the shift depends on the depth of the target (shadow A and shadow B), as shown in Fig. 1(a2).
Thus, switching the illumination acts like the paired cameras of a conventional 3-D endoscope: if the depth of an object can be estimated by comparing the reflectance images, 3-D reconstruction becomes possible. The depth resolution is limited compared with modern 3-D microscopy such as confocal, two-photon, and photoacoustic microscopy. Part of this study was reported elsewhere.9
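The depth dependence of the shadow shift follows from simple geometry. Below is a minimal sketch, assuming the scattered light reaches the target at an effective angle from the illumination side; the function and angle are illustrative assumptions, not values taken from the setup.

```python
import math

# Hedged geometric sketch (not taken from the paper): if scattered light
# reaches a target at depth d at an effective angle theta from the
# illumination side, similar triangles give a lateral shadow shift
# s = d * tan(theta). Opposite illuminations shift the shadow in
# opposite directions, so the measured disparity is 2 * s, linear in d.
def shadow_disparity(depth_um, theta_deg):
    """Disparity (same units as depth) between shadows cast under two
    opposite illumination positions."""
    return 2.0 * depth_um * math.tan(math.radians(theta_deg))
```

Under this model the disparity is strictly proportional to depth, which is consistent with the linear shift-versus-distance behavior reported below.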
The experimental setup consisted of an objective lens (, NA 0.5), a tube lens () optically coupled to the fiber bundle (DP-SMD002; TAC), and a cooled EM-CCD camera (, iXon; Andor Technology) [Fig. 1(c)]. Light from an LED [white or 430-nm peak (99,430, LED-ON)] was optically coupled to 12 single-mode fibers ( diameter, CK-10, Mitsubishi Rayon) surrounding the fiber bundle, and each fiber served as a separate illumination source [Fig. 1(b1)]. For reflectance imaging of mouse blood vessels, the blue LED (peak wavelength of 430 nm) was used to highlight hemoglobin absorption. Three of the 12 illumination fibers were coupled to a single LED, so the target could be illuminated alternately from four sites [Fig. 1(b2)]. After the fiber bundle tip was placed at the target’s surface, light reflected from the target was collected by the lens system with integration times of 0.1 to 1 s and imaged by the CCD camera. Shading due to nonuniform illumination was removed by subtracting a 2-D quadratic function derived by least-squares fitting. Imaging data were analyzed with MATLAB® (MathWorks) and processed with MATLAB® and Photoshop (Adobe). We performed reflectance imaging of both artificial and biological targets. Microbeads and an ink-mark printed on a transparent polyester (PET) film (overhead projector film) of known thickness were used to confirm the ability of the multipositional illumination scheme to detect depth. Targets at unknown depths, including blood vessels of the mouse cerebral cortex and Pre gla, were used to examine the applicability of this approach. C57BL/6N mice (SLC) were anesthetized with a 30% urethane solution ( body weight, i.p. injection). The hair around the target organs was trimmed, and the skin was cut for observation by bundled-fiber-coupled endomicroscopy. To image the cerebral cortex, the skull was opened and images were acquired from above the dura layer.
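The shading correction described above can be sketched as follows. This is a minimal illustration of subtracting a least-squares 2-D quadratic surface; the original analysis was performed in MATLAB, so the `remove_shading` helper and the exact form of the fitted surface are our assumptions.

```python
import numpy as np

def remove_shading(img):
    """Subtract a least-squares 2-D quadratic surface from an image to
    correct shading caused by nonuniform illumination."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    # Design matrix for the surface a + b*x + c*y + d*x^2 + e*y^2 + f*x*y.
    A = np.stack([np.ones_like(x), x, y, x**2, y**2, x * y],
                 axis=-1).reshape(-1, 6)
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return img - (A @ coef).reshape(h, w)
```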
This study was carried out in strict accordance with the Guide for the Care and Use of Laboratory Animals of TUT and was approved by the Animal Research Committee of TUT.
Reflectance imaging of the microbeads under the four illumination conditions was performed while changing the distance from the microbeads to the fiber tip [Fig. 2(a)]. Microbeads (10 to , FluoSpheres polystyrene microspheres, F8833; Molecular Probes) with diameters of were scattered on the surface of a 0.5% agar gel in water. Figure 2(b) shows the reflectance images of -microbeads under the four illumination conditions, arbitrarily colored pink, blue, green, and yellow, at distances of 0, 300, and . The bottom-right image is a magnification of the area enclosed by the square in the main image. When the distance was , the reflectance images of the microbeads almost overlapped; however, at distances of 300 and , the bead images shifted depending on the illumination position. We calculated the shift of the beads between the opposite-side illumination conditions (1–3 and 2–4). Figure 2(c) shows the shift as a function of distance. The data show the (). The shift increased linearly with the distance from the microbeads to the fiber tip. Next, we examined relative depth measurements of two targets separated by a known distance. A transparent PET film ( thick) was used, and ink-marks were printed onto its surface by a laser printer (Docu Centre-V C2275, Fuji Xerox). Each distance condition was prepared using two films: a separation of was realized by stacking the two films [left of Fig. 3(a)], and a separation of by flipping the deeper film [right of Fig. 3(a)]. The reflectance images of the two ink-marks under illuminations 3 and 1 are shown in Figs. 3(b1), 3(b2), 3(b4), and 3(b5). Composite images of the reflectance images under illuminations 1 and 3, binarized and colored red and green, respectively, are shown in Figs. 3(b3) and 3(b6) to indicate the differences between the reflectance images; yellow denotes the overlapping area.
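The linear shift-versus-distance relation in Fig. 2(c) can be checked with an ordinary least-squares line fit, e.g., via `numpy.polyfit`; the sample values below are illustrative only, not the measured data.

```python
import numpy as np

# Synthetic shift-vs-distance samples (illustrative only, not the
# measured data from Fig. 2(c)).
distance_um = np.array([0.0, 300.0, 600.0])
shift_px = np.array([0.1, 6.2, 12.0])

# Fit shift = slope * distance + intercept by least squares; a good
# linear fit (small residuals, positive slope) supports the linearity.
slope, intercept = np.polyfit(distance_um, shift_px, 1)
```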
A red or green area in the composite image corresponds to a shift of the target that depends on its depth. The ink-mark at under illuminations 1 and 3 overlapped in the composite image [left ink-mark in Figs. 3(b3) and 3(b6)]. In contrast, the ink-marks on the far PET film located at 100 and showed shifts (denoted by the triangles) in Figs. 3(b3) and 3(b6), respectively. Hence, the image of a target at a deeper site shifts with the illumination position. We also obtained reflectance images of blood vessels of the mouse cerebral cortex and Pre gla [Figs. 3(b7), 3(b8), 3(b10), and 3(b11), respectively]; these targets have unknown depths. The terminals of the blood vessels of the mouse brain (indicated by triangles) showed large shifts [Fig. 3(b9)], implying that the terminals lie much deeper than the other areas. The blood vessels of the cerebral cortex usually run along its surface and then dive into deeper brain sites.10 Thus, our system can detect surface blood vessels, diving trunks, and their branches. The image of Pre gla shows the structure of the blood vessels as well as the tissue structure [Figs. 3(b10)–3(b12)].
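A red/green composite of the kind shown in Figs. 3(b3) and 3(b6) can be sketched as follows; the threshold and the `stereo_composite` helper are illustrative assumptions, not the authors' exact processing.

```python
import numpy as np

def stereo_composite(img_left, img_right, thresh):
    """Binarize two reflectance images and overlay them as red/green.

    Dark targets (shadows) fall below the threshold. Features that do
    not shift between illuminations overlap and appear yellow; features
    that shift with the illumination stay pure red or pure green.
    """
    red = (img_left < thresh).astype(float)    # image under illumination 1
    green = (img_right < thresh).astype(float)  # image under illumination 3
    rgb = np.zeros(img_left.shape + (3,))
    rgb[..., 0] = red
    rgb[..., 1] = green
    return rgb
```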
The depth of each element in the image was estimated by comparing two images acquired under different illumination conditions. We used a window-based correlation calculation for stereo matching. The matching point for a region of interest (ROI; the ROI size was basically pix, but a larger ROI of to pix was used when the area contained a blood vessel) in the first image was sought among slightly offset ROIs in the second image. After identifying the maximum correlation, the distance between the original ROI and the matched ROI was taken as the shift value. Two shift values were obtained for each object using the two opposite illumination pairs (1–3 and 2–4), because these pairs yield larger shift values, and thus more reliable depth information, than the other possible pairs. The mean of the two values for each target is shown as a color map [Fig. 4(a)] and as a 3-D surface plot [Fig. 4(b)]. As shown in Figs. 4(a1) and 4(a2), the shift value of the ink-mark on the PET film increased as the distance from the fiber bundle to the film surface increased. The red circles in Fig. 4(c) indicate the shift value versus the depth of the ink-mark. The data show the (, ; , ; , ). The shift values changed linearly with the depth from the fiber bundle. Next, we examined the reliability of the estimated depth by comparing the first and second experiments [Fig. 4(c)]. The gray circles indicate the average shift value of the microbeads calculated from Fig. 2(c). Although both shift values changed linearly, they differed by about 50%. There was a difference in size between the microbead and the ink-mark; however, the effect of size on the shift values was negligible (), as confirmed by a geometrical calculation (data not shown). One reason for the difference is the variation in the scattering properties of the materials (agar gel and PET film).
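The window-based correlation matching described above can be sketched as follows; the ROI coordinates, search range, and normalized cross-correlation score are illustrative choices, not the exact MATLAB implementation.

```python
import numpy as np

def roi_shift(img1, img2, top, left, size, max_shift):
    """Horizontal shift of a square ROI in img1 that best matches img2.

    Scans horizontally offset ROIs in img2 and returns the offset with
    the highest normalized cross-correlation, i.e., the shift value.
    """
    ref = img1[top:top + size, left:left + size].ravel()
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best_dx, best_corr = 0, -np.inf
    for dx in range(-max_shift, max_shift + 1):
        cand = img2[top:top + size, left + dx:left + dx + size].ravel()
        cand = (cand - cand.mean()) / (cand.std() + 1e-12)
        corr = float(ref @ cand) / ref.size
        if corr > best_corr:
            best_dx, best_corr = dx, corr
    return best_dx
```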
In other words, the bundled-fiber-coupled endomicroscope with a multipositional illumination scheme cannot estimate the absolute depth of an object precisely; instead, it provides a reliable relative index. The shift values of the blood vessels of the mouse cerebral cortex and Pre gla were also obtained, as shown in Figs. 4(a3), 4(a4), 4(b3), and 4(b4). Applying the calibration from the microbeads and ink-marks of known depth [Fig. 4(c)], the depth of the deepest region of the blood vessels of the cerebral cortex and Pre gla is estimated as 200 to . However, the scattering properties of the material may introduce an error.
The relative depth was also demonstrated simply by stereo viewing. Reflectance images with right illumination (illumination 3) and left illumination (illumination 1) correspond to the stereo pairs shown in Fig. 3(b). Observing these stereo pairs through a stereoscope (stereo mirror viewer; Kokon) or by the parallel viewing method, which makes the white dots in the left and middle panels of Fig. 3(b) overlap, results in the perception of 3-D depth. The two ink-marks appeared at different depths in the OHP film. The blood vessels of the cerebral cortex appeared at the surface and then dived into the deep brain, whereas the blood vessels of Pre gla appeared to float on the background structures. This stereo display can also be applied to an online 3-D system. Online image acquisition and processing were implemented in LabVIEW (National Instruments). The illumination of the two LEDs was controlled by an analog output (SCB-68, National Instruments) and switched alternately, as shown in Fig. 5. Image acquisition was synchronized with the onset of each illumination, and the reflectance images under the left and right illumination conditions were displayed on the right and left sides of the screen, respectively. The experimenter then observed the images through the stereoscope and perceived relative depth, confirming that the system works properly. Optical techniques for 3-D surface reconstruction in endoscopes have already been employed in clinical applications.11 An online 3-D endomicroscope could greatly assist the manipulation of the fiber tip to locate the target area using morphological features as part of multimodal imaging in conjunction with functional imaging. We anticipate that endocytoscopic applications of 3-D techniques will facilitate examination, improving the effectiveness of optical biopsies and making them viable high-definition alternatives to histology.
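The alternating-illumination acquisition loop can be sketched as follows; the original system was built in LabVIEW, so the `camera` and `led` objects below are hypothetical placeholders for the hardware interfaces.

```python
def acquire_stereo_pairs(camera, led, n_pairs):
    """Alternate left/right illumination and pair consecutive frames.

    `camera` and `led` are hypothetical hardware wrappers: led.select()
    switches the LED (via the analog output in the actual system), and
    camera.grab() acquires one frame synchronized to the onset.
    """
    pairs = []
    for _ in range(n_pairs):
        led.select("left")
        frame_left = camera.grab()
        led.select("right")
        frame_right = camera.grab()
        pairs.append((frame_left, frame_right))
    return pairs
```

Each returned pair corresponds to one left/right stereo pair, ready to be displayed side by side for the stereoscope.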
Our current system has separate illumination fibers and an imaging fiber, resulting in a large size and greater invasiveness. Illumination through the imaging fiber itself12–14 may solve this problem.
This work was supported by the Program to Foster Young Researchers in Cutting-Edge Interdisciplinary Research from the Ministry of Education, Culture, Sports, Science and Technology (MEXT)/the Japan Science and Technology Agency (JST) (to K.K. and R.N.). It was also supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Nos. 25135718 and 15H05917 (to K.K.), 26709024 and 17H03250 (to T.K.), and 24590350 and 15H03901 (to R.N.). R.N. was supported in part by the Visionary Research fund from the Takeda Science Foundation and a Research Grant for Science and Technology Innovation from TUT. R.N. and Y.A. were also supported by a research fund from the Research Foundation for Opto-Science and Technology.