Segmentation of regions of interest (ROIs), such as suspect lesions, is a preliminary but vital step for computer-aided breast cancer diagnosis, but the task is quite challenging due to image quality and the complicated phenomena usually involved with the ROIs. On the one hand, segmentation enables physicians and clinicians to extract more information from imaging; on the other hand, efficient, robust, and accurate segmentation of such anatomical lesions remains a difficult and open problem for researchers and technical development. As a counterbalance between automatic methods, which are usually highly application dependent, and manual approaches, which are too time consuming, live wire, which provides full user control during segmentation while minimizing user interaction, is a promising option for assisting in breast lesion segmentation in ultrasound (US) images. This work proposes a live-wire-based adjustment method to further extend its potential in computer-aided diagnosis (CAD) applications. It allows for local boundary adjustment of a given segmentation, based on the live-wire paradigm, and can be attached as a post-processing step to the live-wire method or other segmentation approaches.
Ultrasound imaging is an attractive modality for real-time image-guided interventions. Fusion of US imaging with a diagnostic imaging modality such as CT shows great potential in minimally invasive applications such as liver biopsy and ablation. However, the significantly different representation of the liver in US and CT makes this image fusion a challenging task, in particular when some of the CT scans are obtained without contrast agents. The liver surface, including the diaphragm immediately adjacent to it, typically appears as a hyper-echoic region in the ultrasound image if the proper imaging window and depth setting are used. The liver surface is also well visualized in both contrast and non-contrast CT scans, making the diaphragm or liver surface one of the few attractive common features for registration of US and non-contrast CT. We propose a fusion method based on point-to-volume registration of the liver surface segmented in CT to a processed, electromagnetically (EM) tracked US volume. In this approach, the US image is first pre-processed to enhance the liver surface features. In addition, non-imaging information from the EM-tracking system is used to initialize and constrain the registration process. We tested our algorithm against a manually corrected vessel-based registration method using 8 pairs of tracked US and contrast CT volumes. The registration method achieved an average deviation of 12.8 mm from the ground truth, measured as the root-mean-square Euclidean distance for control points distributed throughout the US volume. Our results show that if the US image acquisition is optimized for imaging of the diaphragm, high registration success rates are achievable.
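The core idea of point-to-volume registration, scoring a rigid transform by how well the CT-derived surface points land on bright (hyper-echoic) voxels in the processed US volume, can be sketched as follows. This is an illustrative reconstruction rather than the authors' implementation; the Powell optimizer, the mean-intensity score, and the 6-DOF parameterization are all assumptions, and the initial guess would come from the EM-tracking system in practice.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import map_coordinates

def rigid_transform(params, pts):
    """Apply a 6-DOF rigid transform (3 Euler angles in rad, 3 translations in voxels)."""
    rx, ry, rz, tx, ty, tz = params
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return pts @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])

def neg_point_to_volume_score(params, surface_pts, us_volume):
    """Negative mean US intensity sampled at the transformed CT surface points."""
    moved = rigid_transform(params, surface_pts)
    vals = map_coordinates(us_volume, moved.T, order=1, mode='constant', cval=0.0)
    return -vals.mean()

def register(surface_pts, us_volume, init=np.zeros(6)):
    # init would be derived from the EM-tracking information in a real system
    res = minimize(neg_point_to_volume_score, init,
                   args=(surface_pts, us_volume), method='Powell')
    return res.x
```

Maximizing the sampled intensity rewards transforms that place the segmented liver surface onto the enhanced diaphragm response in the US volume, which is why the pre-processing step above matters.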
Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, but the task is quite challenging in US images due to imaging artifacts that complicate detection and measurement of the suspect lesions. The lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application dependent, while manual tracing methods are extremely time consuming and subject to a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to counterbalance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction might be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach to improve the boundary-searching ability of the live-wire method, reducing the necessary user interaction while maintaining segmentation performance. Based on the results of segmenting 50 2D breast lesions in US images, less user interaction is required to achieve the desired accuracy, i.e. < 80%, when auto-enhancement is applied for live-wire segmentation.
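The abstract does not specify the enhancement operator itself, but the general idea of strengthening boundary features before the live-wire cost is computed might look like the sketch below: despeckle the US image, then add back a scaled edge-strength term. The median filter, `sigma`, and `alpha` are hypothetical choices for illustration, not the paper's method.

```python
import numpy as np
from scipy import ndimage

def enhance_boundaries(img, sigma=2.0, alpha=1.5):
    """Illustrative boundary enhancement for US images: suppress speckle with
    a median filter, then emphasize edges by adding a scaled Gaussian
    gradient-magnitude term (unsharp-masking style). sigma and alpha are
    hypothetical tuning parameters."""
    img = img.astype(float)
    smooth = ndimage.median_filter(img, size=5)            # despeckle
    grad = ndimage.gaussian_gradient_magnitude(smooth, sigma=sigma)
    enhanced = smooth + alpha * grad                       # strengthen edges
    # normalize to [0, 1] so downstream cost functions see a fixed range
    return (enhanced - enhanced.min()) / (np.ptp(enhanced) + 1e-9)
```

A live-wire cost map would then be derived from the enhanced image (for example, low cost where the gradient magnitude is high), so the boundary search snaps to the strengthened edges.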
Targeted fluorescence imaging agents such as IntegriSense 680 can be used to label integrin α<sub>v</sub>β<sub>3</sub> expressed
in tumor cells and to distinguish tumor from normal tissues. Coupled with endomicroscopy and image-guided
intervention devices, fluorescence contrast captured from the fiber-optic imaging technique can be used in a
Minimally Invasive Multimodality Image Guided (MIMIG) system for on-site peripheral lung cancer diagnosis. In
this work, we propose an automatic quantification approach for IntegriSense-based fluorescence endomicroscopy
image sequences. First, a sliding time-window is used to calculate the histogram of the frames at a given
time-point, also denoted as the IntegriSense signal. The intensity distributions of the endomicroscopy image
sequences can be broadly classified into three groups: high, middle, and low intensities, which may correspond
to tumor, normal tissue, and background (air) within the lungs, respectively. At a given time-point, the histogram
calculated from the sliding time-window is fit with a Gaussian mixture model, and the mean, standard
deviation (std), and weight of each Gaussian component are identified. Finally, a threshold is applied
to the weighting parameter of the high-intensity group to detect tumor information. This algorithm
can be used as an automatic tumor detection tool for IntegriSense-based endomicroscopy. In experiments,
we validated the algorithm using 20 IntegriSense-based fluorescence endomicroscopy image sequences collected
from 6 rabbit experiments, where VX2 tumor was implanted into the lung of each rabbit, and image-guided
endomicroscopy was performed. The automatic classification results were compared with manual results, and
high sensitivity and specificity were obtained.
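A minimal sketch of the pipeline described above, assuming scikit-learn's `GaussianMixture` for the three-component fit over the intensities pooled from a sliding time-window of frames; the weight threshold of 0.2 is a hypothetical value, not taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def tumor_signal_present(frames, weight_threshold=0.2):
    """Fit a 3-component Gaussian mixture to the pooled pixel intensities of
    the frames in a sliding time-window, then flag tumor signal when the
    weight of the highest-mean (bright, IntegriSense-positive) component
    exceeds a threshold. weight_threshold is a hypothetical value."""
    intensities = np.concatenate([np.ravel(f) for f in frames]).reshape(-1, 1)
    gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
    high = int(np.argmax(gmm.means_.ravel()))   # bright component ~ tumor
    return gmm.weights_[high] > weight_threshold, gmm.weights_[high]
```

Sliding the window across the sequence and recording the high-intensity weight per time-point yields the frame-level classification that would be compared against manual labels.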
Electromagnetic (EM) tracking has been recognized as a valuable tool for locating the interventional devices in
procedures such as lung and liver biopsy or ablation. The advantage of this technology is its real-time connection
to the 3D volumetric roadmap, i.e. CT, of a patient's anatomy while the intervention is performed. EM-based
guidance requires tracking of the tip of the interventional device, transforming the location of the device onto
pre-operative CT images, and superimposing the device in the 3D images to assist physicians to complete the
procedure more effectively. A key requirement of this data integration is to find automatically the mapping
between EM and CT coordinate systems. Thus, skin fiducial sensors are attached to patients before acquiring
the pre-operative CTs. Then, those sensors can be recognized in both the CT and EM coordinate systems and used
to calculate the transformation matrix. In this paper, to enable the EM-based navigation workflow and reduce
procedural preparation time, an automatic fiducial detection method is proposed to obtain the centroids of the
sensors from the pre-operative CT. The approach has been applied to 13 rabbit datasets derived from an animal
study and eight human images from an observation study. The numerical results show that it is a reliable and
efficient method for use in EM-guided applications.
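Once the fiducial centroids are available in both coordinate systems, the EM-to-CT transformation matrix can be computed in closed form from the paired points. The standard least-squares (Kabsch/Umeyama) solution is shown below as a generic sketch, not necessarily the paper's exact procedure.

```python
import numpy as np

def fiducial_registration(em_pts, ct_pts):
    """Least-squares rigid transform mapping EM fiducial coordinates onto the
    matching CT centroids (Kabsch/Umeyama). em_pts and ct_pts are (N, 3)
    arrays of corresponding points. Returns (R, t) with R @ p_em + t ~ p_ct."""
    em_c, ct_c = em_pts.mean(axis=0), ct_pts.mean(axis=0)
    H = (em_pts - em_c).T @ (ct_pts - ct_c)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct_c - R @ em_c
    return R, t
```

With the automatically detected CT centroids paired to the EM sensor readings, this computation replaces the manual correspondence step and shortens procedural preparation time.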
CT-fluoroscopy (CTF) is an efficient imaging method for guiding percutaneous lung interventions such as biopsy.
During a CTF-guided biopsy procedure, four to ten axial sectional images are captured in a very short time period to
provide nearly real-time feedback to physicians, so that they can adjust the needle as it is advanced toward the target
lesion. Although popularly used in clinics, this traditional CTF-guided intervention procedure may require frequent scans
and cause unnecessary radiation exposure to clinicians and patients. In addition, CTF generates only a limited
number of slices and thus provides limited anatomical information; it also responds poorly to respiratory movements
and captures only narrow local anatomical dynamics. To better utilize CTF guidance, we propose a fast CT-CTF registration algorithm
with respiratory motion estimation for image-guided lung intervention using electromagnetic (EM) guidance. With the
pre-procedural exhale and inhale CT scans, it would be possible to estimate a series of CT images of the same patient at
different respiratory phases. Then, once a CTF image is captured during the intervention, our algorithm can pick the best
respiratory phase-matched 3D CT image and perform a fast deformable registration to warp the 3D CT toward the CTF.
The new 3D CT image can be used to guide the intervention by superimposing the EM-guided needle location on it.
Compared to the traditional repetitive CTF guidance, the registered CT integrates both 3D volumetric patient data and
nearly real-time local anatomy for more effective and efficient guidance. In this new system, CTF is used as a nearly
real-time sensor to overcome the discrepancies between static pre-procedural CT and the patient's anatomy, so as to
provide global guidance that may be supplemented with EM tracking and to reduce the number of CTF
scans needed. In the experiments, the comparative results showed that our fast CT-CTF algorithm can achieve better
Computed Tomography (CT) has been widely used for assisting in lung cancer detection/diagnosis and treatment.
In lung cancer diagnosis, suspect lesions or regions of interest (ROIs) are usually analyzed in screening
CT scans. Then, CT-based image-guided minimally invasive procedures are performed for further diagnosis
through bronchoscopic or percutaneous approaches. Thus, ROI segmentation is a preliminary but vital step
for abnormality detection, procedural planning, and intra-procedural guidance. In lung cancer diagnosis, such
ROIs can be tumors, lymph nodes, nodules, etc., which may vary in size, shape, and other complicating characteristics.
Manual segmentation approaches are time consuming, user-biased, and cannot guarantee reproducible
results. Automatic methods do not require user input, but they are usually highly application-dependent. To
balance efficiency, accuracy, and robustness, considerable effort has been devoted to semi-automatic
strategies, which enable full user control while minimizing human interaction. Among available
semi-automatic approaches, the live-wire algorithm has been recognized as a valuable tool for segmentation of
a wide range of ROIs from chest CT images. In this paper, a new 3D extension of the traditional 2D live-wire
method is proposed for 3D ROI segmentation. In the experiments, the proposed approach is applied to a set of
anatomical ROIs from 3D chest CT images, and the results are compared with the segmentation derived from
a previously evaluated live-wire-based approach.
Central-chest lymph nodes play a vital role in lung-cancer staging. The three-dimensional (3D) definition of
lymph nodes from multidetector computed-tomography (MDCT) images, however, remains an open problem.
This is because of the limitations in the MDCT imaging of soft-tissue structures and the complicated phenomena
that influence the appearance of a lymph node in an MDCT image. In the past, we have made significant efforts
toward developing (1) live-wire-based segmentation methods for defining 2D and 3D chest structures and (2)
a computer-based system for automatic definition and interactive visualization of the Mountain central-chest
lymph-node stations. Based on these works, we propose new single-click and single-section live-wire methods
for segmenting central-chest lymph nodes. The single-click live wire only requires the user to select an object
pixel on one 2D MDCT section and is designed for typical lymph nodes. The single-section live wire requires
the user to process one selected 2D section using standard 2D live wire, but it is more robust. We applied
these methods to the segmentation of 20 lymph nodes from two human MDCT chest scans (10 per scan) drawn
from our ground-truth database. The single-click live wire segmented 75% of the selected nodes successfully
and reproducibly, while the success rate for the single-section live wire was 85%. We are able to segment the
remaining nodes using our previously derived (but more interaction-intensive) 2D live-wire method incorporated
in our lymph-node analysis system. Both proposed methods are reliable and applicable to a wide range of
pulmonary lymph nodes.
Lung cancer is the leading cause of cancer death in the United States. In lung-cancer staging, central-chest
lymph nodes and associated nodal stations, as observed in three-dimensional (3D) multidetector CT (MDCT)
scans, play a vital role. However, little work has been done on MDCT-based lymph-node analysis, due to the
complicated phenomena that influence lymph-node appearance. Using our custom computer-based system for 3D
MDCT-based pulmonary lymph-node analysis, we conduct a detailed study of lymph nodes as depicted in 3D
MDCT scans. In this work, the Mountain lymph-node stations are automatically defined by the system. These
defined stations, in conjunction with our system's image processing and visualization tools, facilitate lymph-node
detection, classification, and segmentation. An expert pulmonologist, chest radiologist, and trained technician
verified the accuracy of the automatically defined stations and indicated observable lymph nodes. Next, using
semi-automatic tools in our system, we defined all indicated nodes. Finally, we performed a global quantitative
analysis of the characteristics of the observed nodes and stations. This study drew upon a database of 32 human
MDCT chest scans. 320 Mountain-based stations (10 per scan) and 852 pulmonary lymph nodes were defined
overall from this database. Based on the numerical results, over 90% of the automatically defined stations were
deemed accurate. This paper also presents a detailed summary of central-chest lymph-node characteristics for the
Lung cancer remains the leading cause of cancer death in the United States and is expected to account for nearly 30% of
all cancer deaths in 2007. Central to the lung-cancer diagnosis and staging process is the assessment of the central chest
lymph nodes. This assessment typically requires two major stages: (1) location of the lymph nodes in a three-dimensional
(3D) high-resolution volumetric multi-detector computed-tomography (MDCT) image of the chest; (2) subsequent nodal
sampling using transbronchial needle aspiration (TBNA). We describe a computer-based system for automatically locating
the central chest lymph-node stations in a 3D MDCT image. Automated analysis methods are first run that extract the
airway tree, airway-tree centerlines, aorta, pulmonary artery, lungs, key skeletal structures, and major-airway labels. This
information provides geometrical and anatomical cues for localizing the major nodal stations. Our system demarcates these
stations, conforming to criteria outlined for the Mountain and Wang standard classification systems. Visualization tools
within the system then enable the user to interact with these stations to locate visible lymph nodes. Results derived from
a set of human 3D MDCT chest images illustrate the usage and efficacy of the system.
The definition of regions of interest (ROIs), such as suspect cancer nodules or lymph nodes in 3D CT chest images, is often difficult because of the complexity of the phenomena that give rise to them. Manual slice tracing has been used widely for years for such problems, because it is easy to implement and guaranteed to work. But the manual method is extremely time-consuming, especially for high-resolution 3D images that may have hundreds of slices, and it is subject to operator biases. Numerous automated image-segmentation methods have been proposed, but they are generally strongly application dependent, and even the "most robust" methods have difficulty defining complex anatomical ROIs. To address this problem, the semi-automatic interactive paradigm referred to as "live wire" segmentation has been proposed. In live-wire segmentation, the human operator interactively defines an ROI's boundary, guided by an active automated method that suggests what to define. This process is in general far faster, more reproducible, and more accurate than manual tracing, while at the same time permitting the definition of complex ROIs having ill-defined boundaries. We propose a 2D live-wire method employing an improved cost function over previous works. In addition, we define a new 3D live-wire formulation that enables rapid definition of 3D ROIs; the method only requires the human operator to consider a few slices in general. Experimental results indicate that the new 2D and 3D live-wire approaches are efficient, highly reproducible, and reliable for 2D and 3D object segmentation.
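At its core, the live-wire paradigm reduces to a shortest-path search on a cost grid between user-selected boundary points. The following minimal 2D sketch uses Dijkstra's algorithm; the per-pixel cost (assumed low on strong boundaries, e.g. inverse gradient magnitude) and the 8-connectivity are generic choices, not the improved cost formulation proposed here.

```python
import heapq
import numpy as np

def live_wire_path(cost, seed, target):
    """Dijkstra's shortest path on a pixel grid from seed to target, where
    cost[r, c] is low on strong boundaries. Returns the optimal boundary
    segment as a list of (row, col) tuples. A simplified illustration of
    the live-wire paradigm."""
    rows, cols = cost.shape
    dist = np.full((rows, cols), np.inf)
    prev = {}
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == target:
            break
        if d > dist[r, c]:
            continue                      # stale heap entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1),
                       (-1, -1), (-1, 1), (1, -1), (1, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # scale by step length so diagonal moves are not favored
                nd = d + cost[nr, nc] * np.hypot(dr, dc)
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], target
    while node != seed:                   # walk predecessors back to the seed
        path.append(node)
        node = prev[node]
    path.append(seed)
    return path[::-1]
```

As the operator moves the free point, the path from the last fixed seed is recomputed, so the contour "snaps" to nearby low-cost boundaries; the 3D formulation builds on chains of such optimal paths across slices.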
The standard procedure for diagnosing lung cancer involves two
stages. First, the physician evaluates a high-resolution three-dimensional (3D) computed-tomography (CT) chest image to produce a procedure plan. Next, the physician performs bronchoscopy on the patient, navigating the bronchoscope through the airways to the planned biopsy sites. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. In addition, these data sources differ greatly in what they physically represent, and no true 3D tools exist for planning and guiding procedures. This makes it difficult for the physician to translate a CT-based procedure plan to the video domain of the bronchoscope. Thus, the physician must essentially perform biopsy blindly, and skill levels differ greatly between physicians. We describe a system that enables direct 3D CT-based procedure planning and provides direct 3D guidance during bronchoscopy. 3D CT-based information on biopsy sites is provided interactively as the physician moves the bronchoscope. Moreover, graphical information through a live fusion of the 3D CT data and bronchoscopic video is provided during the procedure. This information is coupled with a series of computer-graphics tools to give the physician a greatly augmented-reality view of the patient's interior anatomy during a procedure. Through a series of controlled tests and studies with human lung-cancer patients, we have found that the system not only reduces the variation in skill level between physicians, but also increases the biopsy success rate.