Positron emission tomography (PET) and X-ray computed tomography (CT) serve as major diagnostic imaging modalities in the lung-cancer staging process. Modern scanners provide co-registered whole-body PET/CT studies, collected while the patient breathes freely, and high-resolution chest CT scans, collected under a brief patient breath hold. Unfortunately, no method exists for registering a PET/CT study into the space of a high-resolution chest CT scan. If this could be done, vital diagnostic information offered by the PET/CT study could be brought seamlessly into the procedure plan used during live cancer-staging bronchoscopy. We propose a method for the deformable registration of whole-body PET/CT data into the space of a high-resolution chest CT study. We then demonstrate its potential for procedure planning and subsequent use in multimodal image-guided bronchoscopy.
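Deformable multimodal registration of the kind proposed here is typically driven by an intensity-similarity metric such as mutual information, which rewards statistical alignment between modalities with very different intensity characteristics. The following is a minimal, generic sketch of a histogram-based mutual-information metric; it is not the paper's method, and the function name is illustrative only.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two co-sampled images.

    Higher values indicate better alignment between modalities; a
    deformable registration would maximize this score over the
    parameters of the deformation.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                   # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)         # marginal of image b
    nz = pxy > 0                                # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# An image shares maximal information with itself, so its MI score
# exceeds that of a randomly shuffled (statistically independent) copy.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shuffled = rng.permutation(img.ravel()).reshape(img.shape)
assert mutual_information(img, img) > mutual_information(img, shuffled)
```

In practice a registration framework would evaluate such a metric over resampled PET/CT and chest-CT volumes while optimizing a smooth deformation field, rather than over raw 2D arrays as shown here.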
Endobronchial ultrasound (EBUS) is now recommended as a standard procedure for in vivo verification of extraluminal diagnostic sites during cancer-staging bronchoscopy. Yet, physicians vary considerably in their ability to use EBUS effectively. Separately, studies have demonstrated the effectiveness of existing bronchoscopy guidance systems in the lung-cancer management process. With such a system, a patient's X-ray computed tomography (CT) scan is used to plan a procedure to regions of interest (ROIs). This plan is then used during follow-on guided bronchoscopy. Recent clinical guidelines for lung cancer, however, also dictate using positron emission tomography (PET) imaging to identify suspicious ROIs and aid in the cancer-staging process. While researchers have attempted to use guided bronchoscopy systems in tandem with PET imaging and EBUS, no true EBUS-centric guidance system exists. We now propose a complete multimodal image-based methodology for guiding EBUS. The methodology involves two components: 1) a procedure-planning protocol that gives bronchoscope movements appropriate for live EBUS positioning; and 2) a guidance strategy and associated system graphical user interface (GUI) designed for image-guided EBUS. We present results demonstrating the operation of the system.
Many technical innovations in multimodal radiologic imaging and bronchoscopy have emerged recently in the effort against lung cancer. Modern X-ray computed-tomography (CT) scanners provide three-dimensional (3D) high-resolution chest images, positron emission tomography (PET) scanners give complementary molecular imaging data, and new integrated PET/CT scanners combine the strengths of both modalities. State-of-the-art bronchoscopes permit minimally invasive tissue sampling, with vivid endobronchial video enabling navigation deep into the airway-tree periphery, while complementary endobronchial ultrasound (EBUS) reveals local views of anatomical structures outside the airways. In addition, image-guided intervention (IGI) systems have proven their utility for CT-based planning and guidance of bronchoscopy. Unfortunately, no IGI system exists that integrates all sources effectively through the complete lung-cancer staging workflow. This paper presents a prototype of a computer-based multimodal IGI system that strives to fill this need. The system combines a wide range of automatic and semi-automatic image-processing tools for multimodal data fusion and procedure planning. It also provides a flexible graphical user interface for follow-on guidance of bronchoscopy/EBUS. Human-study results demonstrate the system's potential.
Recently developed integrated PET-CT scanners give co-registered multimodal data sets that offer complementary three-dimensional (3D) digital images of the chest. PET (positron emission tomography) imaging gives highly specific functional information on suspect cancer sites, while CT (X-ray computed tomography) gives associated anatomical detail. Because the 3D CT and PET scans generally span the body from the eyes to the knees, accurate definition of the intrathoracic region is vital for focusing attention on the central-chest region. In this way, diagnostically important regions of interest (ROIs), such as central-chest lymph nodes and cancer nodules, can be isolated more efficiently. We propose a method for automatic segmentation of the intrathoracic region from a given co-registered 3D PET-CT study. Using the 3D CT scan as input, the method begins by finding an initial intrathoracic-region boundary on a given 2D CT section. Next, active contour analysis, driven by a cost function depending on local image gradient, gradient direction, and contour-shape features, iteratively estimates the contours spanning the intrathoracic region on neighboring 2D CT sections. This process continues until the complete region is defined. We next present an interactive system that employs the segmentation method for focused 3D PET-CT chest image analysis. A validation study over a series of PET-CT studies reveals that the segmentation method gives a Dice index accuracy of over 98%. In addition, further results demonstrate the utility of the method for focused 3D PET-CT chest image analysis, ROI definition, and visualization.
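The Dice index used in the validation study is a standard overlap measure between a computed segmentation and a reference mask. A minimal, generic implementation (not the authors' code) is:

```python
import numpy as np

def dice_index(seg, ref):
    """Dice similarity coefficient between two binary masks.

    Returns 1.0 for perfect overlap and 0.0 for disjoint masks.
    """
    seg = np.asarray(seg).astype(bool)
    ref = np.asarray(ref).astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, ref).sum() / denom

# Two masks of 4 voxels each, agreeing on 3 voxels:
a = np.array([1, 1, 1, 1, 0, 0])
b = np.array([1, 1, 1, 0, 1, 0])
print(dice_index(a, b))  # → 0.75  (2*3 / (4+4))
```

A Dice index above 0.98, as reported, indicates near-complete agreement between the automatic intrathoracic segmentation and the reference delineation.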
Integrated positron emission tomography (PET) / computed tomography (CT) scanners give 3D multimodal data sets of the chest. Such data sets offer the potential for more complete and specific identification of suspect lesions and lymph nodes for lung-cancer assessment. This in turn enables better planning of staging bronchoscopies. The richness of the data, however, makes the visualization and planning process difficult. We present an integrated multimodal 3D PET/CT system that enables efficient region identification and bronchoscopic procedure planning. The system first invokes a series of automated 3D image-processing methods that construct a 3D chest model. Next, the user interacts with a set of multimodal graphical tools that facilitate procedure planning for specific regions of interest (ROIs): 1) an interactive region candidate list that enables efficient ROI viewing in all tools; 2) a virtual PET-CT bronchoscopy rendering with quantitative SUV visualization to give a "fly through" endoluminal view of prospective ROIs; 3) transverse, sagittal, and coronal multi-planar reformatted (MPR) views of the raw CT, PET, and fused CT-PET data; and 4) interactive multimodal volume/surface rendering to give a 3D perspective of the anatomy and candidate ROIs. In addition, the ROI selection process is driven by a semi-automatic multimodal method for region identification. In this way, the system provides both global and local information to facilitate more specific ROI identification and procedure planning. We present results to illustrate the system's function and performance.
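The SUV values visualized by such a system follow the standard body-weight normalization: tissue activity concentration divided by injected dose per unit body weight. The sketch below shows this conventional formula only; the function name and unit choices are illustrative, not taken from the system described.

```python
def suv_bw(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight-normalized standardized uptake value (SUV).

    SUV = tissue activity concentration / (injected dose / body weight).
    With activity in kBq/mL, dose in MBq, weight in kg, and an assumed
    tissue density of 1 g/mL, the result is dimensionless.
    """
    dose_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
    weight_g = body_weight_kg * 1000.0      # kg  -> g (1 g/mL density)
    return activity_kbq_per_ml / (dose_kbq / weight_g)

# Example: 5 kBq/mL uptake, 370 MBq injected dose, 70 kg patient:
print(suv_bw(5.0, 370.0, 70.0))  # ≈ 0.95
```

In clinical use the measured activity would also be decay-corrected to the injection time before computing SUV; that correction is omitted here for brevity.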