Endoscopic visualization during brain tumor removal is challenging because tumor tissue is often visually indistinguishable from healthy tissue. Fluorescence imaging can improve tumor delineation, but it impairs reflectance-based visualization of gross anatomical features. To navigate and resect tumors accurately, we created an ultrathin, flexible scanning fiber endoscope (SFE) that acquires wide-field reflectance and fluorescence images at high resolution. Furthermore, our miniature imaging system is affixed to a robotic arm that provides programmable motion of the SFE, from which we generate multimodal surface maps of the surgical field.
To test this system, synthetic phantoms of debulked brain tumor were fabricated with fluorescent spots representing residual tumor. Three-dimensional (3D) surface maps of this surgical field are produced by moving the SFE over the phantom during concurrent reflectance and fluorescence imaging (30 Hz video). SIFT-based feature matching between reflectance images selects a subset of key frames, which are reconstructed in 3D by bundle adjustment. The resulting reconstruction yields a multimodal 3D map of the tumor region that can improve visualization and robotic path planning.
Efficiency in creating these maps is important because they are generated multiple times during tumor-margin clean-up. By using pre-programmed vector motions of the robot arm holding the SFE, the computer vision algorithms are optimized for efficiency through reduced search times. Preliminary results indicate that the time to create these 3D multimodal maps of the surgical field can be reduced to one third by exploiting the known trajectories of the surgical robot moving the image-guided tool.
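The SIFT-based key-frame selection described above can be sketched with a simple overlap test: a new key frame is started whenever the number of ratio-test feature matches to the last key frame drops below a threshold. In this library-free sketch, the descriptors are generic arrays (in practice they would come from a SIFT detector such as OpenCV's `cv2.SIFT_create`); the threshold and descriptor sizes are illustrative, not from the paper.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Count matches passing Lowe's ratio test: the best neighbor must be
    clearly closer than the second-best neighbor."""
    # Pairwise Euclidean distances between descriptor sets (rows = descriptors).
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)
    best, second = d_sorted[:, 0], d_sorted[:, 1]
    return int(np.sum(best < ratio * second))

def select_key_frames(descriptor_sets, min_matches=20):
    """Start a new key frame when overlap with the last key frame drops
    below min_matches ratio-test matches."""
    key_frames = [0]
    for i in range(1, len(descriptor_sets)):
        overlap = ratio_test_matches(descriptor_sets[key_frames[-1]],
                                     descriptor_sets[i])
        if overlap < min_matches:
            key_frames.append(i)
    return key_frames
```

A frame whose descriptors are a slightly perturbed copy of the previous key frame's matches strongly and is skipped; an unrelated frame matches poorly and becomes a new key frame.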
Fluorescently labeled biomarkers can be detected during endoscopy to guide early cancer biopsies, such as for high-grade dysplasia in Barrett's esophagus. To enhance intraoperative visualization of fluorescence hot-spots, a mosaicking technique was developed to create full anatomical maps of the lower esophagus and the associated fluorescent hot-spots. The resulting mosaic map contains overlaid reflectance and fluorescence images and can be used to assist biopsy and document findings. The mosaicking algorithm uses reflectance images to calculate the registration between successive frames and applies this registration to the simultaneously acquired fluorescence images. During mosaicking, the fluorescence signal is strengthened through multi-frame averaging. Preliminary results show that the technique promises to improve the detectability of hot-spots owing to the strengthened fluorescence signal.
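The multi-frame averaging step can be illustrated with a toy numpy sketch: each fluorescence frame is pasted into the mosaic at the alignment estimated from its reflectance twin, and overlapping regions are averaged to suppress noise. Here the reflectance registration is replaced by known integer offsets for brevity; frame sizes, offsets, and noise levels are hypothetical.

```python
import numpy as np

def accumulate_fluorescence(canvas, weight, fluo_frame, offset):
    """Paste a fluorescence frame into the mosaic at the offset estimated
    from the corresponding reflectance frame; overlaps are later averaged."""
    r, c = offset
    h, w = fluo_frame.shape
    canvas[r:r + h, c:c + w] += fluo_frame
    weight[r:r + h, c:c + w] += 1.0
    return canvas, weight

# Toy mosaic of three shifted, noisy fluorescence frames.
rng = np.random.default_rng(0)
canvas = np.zeros((60, 100))
weight = np.zeros((60, 100))
frame = np.full((40, 40), 10.0)                  # true fluorescence level
for off in [(0, 0), (5, 20), (10, 40)]:
    noisy = frame + rng.normal(0, 1.0, (40, 40))  # simulated shot noise
    canvas, weight = accumulate_fluorescence(canvas, weight, noisy, off)

# Divide by the per-pixel frame count: overlaps are averaged, reducing noise.
mosaic = np.divide(canvas, weight, out=np.zeros_like(canvas), where=weight > 0)
```

Pixels covered by two frames carry an averaged signal, which is the mechanism behind the enhanced hot-spot detectability reported above.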
Bladder cancer is the most expensive cancer to treat due to its high rate of recurrence. Although white light cystoscopy is the gold standard for bladder cancer surveillance, the advent of fluorescence biomarkers provides an opportunity to improve sensitivity for early detection and to reduce recurrence through more accurate excision. Ideally, fluorescence information would be combined with standard reflectance images to provide multimodal views of the bladder wall. The scanning fiber endoscope (SFE), 1.2 mm in diameter, is able to acquire wide-field multimodal video from a bladder phantom with fluorescent cancer "hot-spots". The SFE generates images by scanning red, green, and blue (RGB) laser light and detecting the backscattered signal, producing reflectance video at 500-line resolution and 30 frames per second. We imaged a bladder phantom with painted vessels and mimicked fluorescent lesions by applying green fluorescent microspheres to the surface. By eliminating the green laser illumination, simultaneous reflectance and fluorescence images can be acquired at the same field of view, resolution, and frame rate. Moreover, the multimodal SFE is combined with a robotic steering mechanism and image-stitching software as part of a fully automated bladder surveillance system. Using this system, the SFE can be reliably articulated over the entire 360° bladder surface. Acquired images can then be stitched into a multimodal 3D panorama of the bladder using software developed in our laboratory. In each panorama, the fluorescence images are exactly co-registered with the RGB reflectance.
The high recurrence rate of bladder cancer requires patients to undergo frequent surveillance screenings over their lifetime following initial diagnosis and resection. Our laboratory is developing panoramic stitching software that compiles several minutes of cystoscopic video into a single panoramic image, covering the entire bladder, for review by a urologist at a later time or from a remote location. Global alignment of video frames is achieved by a bundle adjuster that simultaneously recovers both the 3D structure of the bladder and the scope motion using only the video frames as input. The result of the algorithm is a complete 360° spherical panorama of the bladder surface. The details of the software algorithms are presented here along with results from both a virtual cystoscopy and real endoscopic imaging of a bladder phantom. The software successfully stitched several hundred video frames into a single panorama with subpixel accuracy and with no prior knowledge of the intrinsic camera properties, such as focal length and radial distortion. In the discussion, we outline future work in development of the software and identify factors pertinent to clinical translation of this technology.
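Once bundle adjustment has recovered the scope pose for each frame, placing a pixel into a 360° spherical (equirectangular) panorama reduces to rotating its viewing ray and converting to longitude/latitude. A minimal sketch of that projection step follows; the focal length, rotation, and panorama dimensions are hypothetical inputs, and lens distortion is ignored.

```python
import numpy as np

def pixel_to_panorama(u, v, f, R, pano_w, pano_h):
    """Project a pixel (u, v) in centered image coordinates, with focal
    length f and camera rotation R, onto equirectangular panorama
    coordinates (x, y)."""
    ray = R @ np.array([u, v, f], dtype=float)   # rotate viewing ray to world
    ray /= np.linalg.norm(ray)
    lon = np.arctan2(ray[0], ray[2])             # longitude in [-pi, pi]
    lat = np.arcsin(ray[1])                      # latitude in [-pi/2, pi/2]
    x = (lon / (2.0 * np.pi) + 0.5) * pano_w
    y = (lat / np.pi + 0.5) * pano_h
    return x, y
```

With the identity rotation, the image center maps to the center of the panorama; as the scope rotates, frames sweep across the sphere and can be blended into the full panorama.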
The development of an ultrathin scanning fiber bronchoscope (SFB) at the University of Washington permits bronchoscopic examination of small peripheral airways inaccessible to conventional bronchoscopes. Due to the extensive branching in higher-generation airways, a form of bronchoscopic guidance is needed. For accurate intraoperative localization of the SFB, we propose a hybrid approach using electromagnetic tracking (EMT) and 2D/3D registration of bronchoscopic video images to a preoperative CT scan. Three similarity metrics were evaluated for CT-video registration: normalized mutual information (NMI), dark-weighted NMI (dw-NMI), and a surface gradient matching (SGM) strategy. Across four bronchoscopic sessions, CT-video registration using SGM proved more robust than the NMI-based metrics, averaging 320 frames of tracking before failure, compared with averages of 100 and 160 frames for the NMI and dw-NMI metrics, respectively. In the hybrid configuration, EMT and CT-video registration were blended using a Kalman filter to recursively refine the registration error between the EMT system and the airway anatomy. As part of the implementation, respiratory motion compensation (RMC) was implemented by adaptively estimating respiratory phase-dependent deformation. With the addition of RMC, average hybrid tracking disagreement with a set of manually registered key frames was 3.36 mm, compared with 6.30 mm without RMC. In peripheral airway regions that undergo larger respiratory-induced deformation, disagreement averaged only 2.01 mm with RMC, compared with 8.65 mm otherwise.
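The Kalman-filter blending idea can be illustrated with a scalar sketch: the filter treats the EMT-to-anatomy registration offset as a slowly drifting state and recursively refines it using each CT-video registration result as a measurement. The process and measurement noise values here are illustrative placeholders, not the parameters used in the paper.

```python
import numpy as np

def kalman_refine(measurements, q=0.01, r=1.0):
    """Recursively estimate a slowly drifting registration offset.
    q: random-walk process noise variance; r: measurement noise variance."""
    x, p = 0.0, 1.0                # state estimate and its variance
    for z in measurements:
        p += q                     # predict: offset may drift between frames
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # update with CT-video registration result
        p *= (1.0 - k)
    return x

# Noisy registration measurements around a true 3.0 mm offset.
rng = np.random.default_rng(0)
est = kalman_refine(3.0 + 0.1 * rng.normal(size=50))
```

The estimate converges toward the true offset while individual noisy registrations are smoothed out, which is the role the filter plays in the hybrid EMT / CT-video configuration.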
Deformable registration of chest CT scans taken of a subject at various phases of respiration provides a direct measure of the spatially varying displacements that occur in the lung due to breathing. This respiratory motion was studied as part of the development of a CT-based guidance system for a new electromagnetically tracked ultrathin bronchoscope. Fifteen scans of an anesthetized pig were acquired at five distinct lung pressures between full expiration and full inspiration. Deformation fields were computed by non-rigid registration using symmetric "demons" forces followed by Gaussian regularization in a multi-resolution framework. Variants of the registration scheme were tested, including initial histogram matching of the input images, the degree of field smoothing during regularization, and an adaptive smoothing method that weights elements of the smoothing kernel by the magnitude of the image gradient. Registration quality was quantified and compared using inverse and transitive consistency metrics. After optimizing the algorithm parameters, deformation fields were computed by registering each image in the set to a baseline image. Registering the baseline image at full inspiration to an image at full expiration produced the maximum deformation. Two hypotheses were tested: first, that each deformation can be modeled as a mathematical sub-multiple of the maximum deformation, and second, that the deformation scales linearly with respiratory pressure. The discrepancy between the deformation measured by image registration and that predicted by the linear model was 1.25 mm on average. At maximum deformation, this motion compensation constitutes an 87% reduction in respiration-induced localization error.
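The linear-scaling hypothesis can be sketched numerically: the deformation at pressure p is predicted as (p / p_max) times the maximum deformation field, and the model discrepancy is the mean vector-norm difference from the measured field. The fields, pressures, and noise below are synthetic stand-ins, not the pig-scan data.

```python
import numpy as np

def predict_deformation(d_max, pressure, p_max):
    """Linear model: scale the maximum deformation field by the
    pressure fraction, D(p) = (p / p_max) * D_max."""
    return (pressure / p_max) * d_max

# Synthetic 'measured' fields that are approximately linear in pressure.
rng = np.random.default_rng(0)
d_max = rng.normal(0.0, 5.0, size=(16, 16, 3))     # mm displacement vectors
pressures, p_max = [5.0, 10.0, 15.0, 20.0], 20.0
errors = []
for p in pressures:
    measured = (p / p_max) * d_max + rng.normal(0.0, 0.3, size=d_max.shape)
    predicted = predict_deformation(d_max, p, p_max)
    # Mean Euclidean discrepancy between measured and predicted vectors (mm).
    errors.append(np.mean(np.linalg.norm(measured - predicted, axis=-1)))
mean_error = float(np.mean(errors))
```

The resulting mean_error plays the same role as the 1.25 mm average discrepancy reported for the real registration data, though its value here reflects only the synthetic noise level.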
Recent studies have shown that more than 5 million bronchoscopy procedures are performed each year worldwide. The procedure usually involves biopsy of potentially cancerous tissue from the lung. Standard bronchoscopes are too large to reach into the peripheral lung, where cancerous nodules are often found. The University of Washington has developed an ultrathin, flexible scanning fiber endoscope that is able to advance into the periphery of the human lungs without sacrificing image quality. To accompany the novel endoscope, we have developed a user interface that serves as a navigation guide for physicians performing a bronchoscopy. The navigation system consists of a virtual surface mesh of the airways extracted from a computed tomography (CT) scan and an electromagnetic tracking system (EMTS). The complete system can be viewed as a global positioning system for the lung that provides pre-procedural planning functionality, virtual bronchoscopy navigation, and real-time tracking of the endoscope inside the lung. The real-time virtual navigation is complemented by a particle filter algorithm that compensates for registration errors and outliers and prevents the virtual view from passing through the surfaces of the virtual lung model. The particle filter tracks the endoscope tip based on real-time tracking data and attaches the virtual endoscopic view to the skeleton that runs inside the virtual airway surface. Experimental results on a dried sheep lung show that the particle filter converges and accurately tracks the endoscope tip in real time at both slow and fast insertion speeds.
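The particle-filter tracking step can be reduced to a minimal 1D sketch along the airway skeleton: particles carry a position along the centerline, are propagated with insertion-motion noise, weighted by agreement with the noisy EMTS measurement, and resampled. All parameters below are illustrative and the airway constraint is simplified to the 1D centerline itself.

```python
import numpy as np

def particle_filter_track(measurements, n=500, motion_std=1.0,
                          meas_std=1.0, seed=0):
    """Track a 1D position along the airway centerline from noisy
    tracking measurements."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(measurements[0], meas_std, n)
    estimates = []
    for z in measurements:
        particles += rng.normal(0.0, motion_std, n)     # predict: insertion motion
        w = np.exp(-0.5 * ((particles - z) / meas_std) ** 2)
        w /= w.sum()                                    # normalize weights
        estimates.append(float(np.sum(w * particles)))  # weighted-mean estimate
        idx = rng.choice(n, size=n, p=w)                # multinomial resampling
        particles = particles[idx]
    return estimates

# Track a tip advancing at constant speed with noisy measurements (mm).
true_path = np.linspace(0.0, 30.0, 60)
meas = true_path + np.random.default_rng(1).normal(0.0, 1.0, 60)
est = particle_filter_track(meas)
```

In the full system the particles would additionally be constrained to remain inside the airway mesh, which is how the filter prevents the virtual view from passing through the lung-model surfaces.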