Portable chest radiographic images play a critical role in examining and monitoring the condition and progress of critically ill patients in intensive care units (ICUs). For example, portable chest images are acquired to ensure that tubes inserted into patients are properly positioned for effective treatment. In this paper, we present a system that automatically detects the position of an endotracheal tube (ETT), which is inserted into the trachea to assist patients who have difficulty breathing. The system first detects the lung field, spine line, and aortic arch; these detections identify regions of interest (ROIs) within which the ETT and carina are subsequently detected. Our ETT and carina detection methods were trained and tested on a large number of images, with the locations of the ETT and carina confirmed by an experienced radiologist for the purpose of performance evaluation. Our ETT detection achieved an average sensitivity of 85% at fewer than 0.1 false-positive detections per image. The carina approach correctly identified the carina location within 10 mm of the truth location for 81% of the 217 testing images. We expect our system will assist ICU clinicians in detecting and repositioning malpositioned ETTs more effectively and efficiently.
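The evaluation criterion quoted above (a detected carina counts as correct when it falls within 10 mm of the radiologist-confirmed location) can be sketched as follows; the point coordinates and the assumption that coordinates are already in millimeters are illustrative, not from the paper.

```python
import math

# Hypothetical sketch of the carina evaluation criterion: a detection is a
# "hit" when its Euclidean distance to the truth location is <= 10 mm.
# Coordinates here are assumed to be in millimeters already.

def within_tolerance(detected, truth, tolerance_mm=10.0):
    # Euclidean distance between detected and truth points.
    return math.dist(detected, truth) <= tolerance_mm

def accuracy(detections, truths, tolerance_mm=10.0):
    # Fraction of cases whose detection falls within the tolerance.
    hits = sum(within_tolerance(d, t, tolerance_mm)
               for d, t in zip(detections, truths))
    return hits / len(truths)

dets = [(100.0, 200.0), (55.0, 60.0)]
gts = [(104.0, 206.0), (55.0, 80.0)]   # first is within 10 mm, second is not
acc = accuracy(dets, gts)              # 1 hit out of 2
```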
High-contrast bone structures are a major noise contributor in chest radiographic images. A signal of interest in a chest radiograph can be partially or completely obscured, or "overshadowed," by the highly contrasted bone structures in its surroundings. Thus, removing the bone structures, especially the posterior rib and clavicle structures, is highly desirable to increase the visibility of soft tissue density. We developed an innovative technology that suppresses bone structures, including posterior ribs and clavicles, on conventional and portable chest X-ray images. The bone-suppression image processing technology includes five major steps: 1) lung segmentation, 2) rib and clavicle structure detection, 3) rib and clavicle edge detection, 4) rib and clavicle profile estimation, and 5) suppression based on the estimated profiles. The bone-suppression software outputs an image with both the rib and clavicle structures suppressed. The rib suppression performance was evaluated on 491 images. On average, 83.06% (±6.59%) of the rib structures on a standard chest image were suppressed, based on the comparison of computer-identified rib areas against hand-drawn rib areas; by visual assessment, this is equivalent to an average of about one rib remaining visible on a rib-suppressed image. Reader studies were performed to evaluate reader performance in detecting lung nodules and pneumothoraces with and without a bone-suppression companion view. Results from the reader studies indicated that the bone-suppression technology significantly improved radiologists' performance in the detection of CT-confirmed possible nodules and pneumothoraces on chest radiographs. The results also showed that radiologists were more confident in making diagnoses regarding the presence or absence of an abnormality after rib-suppressed companion views were presented.
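The five-step pipeline above can be illustrated with a deliberately simplified sketch; the function names, the thresholds, and the simple "excess over local background" profile model are all assumptions for illustration, not the published implementation.

```python
import numpy as np

# Toy sketch of the five-step bone-suppression pipeline (all thresholds and
# the profile model are hypothetical, not the paper's method).

def segment_lung(image, threshold):
    # Step 1: crude lung segmentation by intensity thresholding.
    return image < threshold

def detect_bone_mask(image, lung_mask, bone_threshold):
    # Steps 2-3: flag bright (bone-like) pixels inside the lung field.
    return lung_mask & (image > bone_threshold)

def estimate_bone_profile(image, bone_mask):
    # Step 4: model the bone signal as the excess over the soft-tissue
    # background (here, the mean of the non-bone pixels).
    background = image[~bone_mask].mean()
    profile = np.zeros_like(image)
    profile[bone_mask] = image[bone_mask] - background
    return profile

def suppress_bones(image):
    # Step 5: subtract the estimated profile to suppress bone structures.
    lung = segment_lung(image, threshold=1.1)   # whole toy image is "lung"
    bones = detect_bone_mask(image, lung, bone_threshold=0.8)
    return image - estimate_bone_profile(image, bones)

# Toy 1-D "rib" (0.9) on a flat soft-tissue background (0.4).
row = np.array([0.4, 0.4, 0.9, 0.9, 0.4, 0.4])
suppressed = suppress_bones(row)   # the rib is flattened to the background
```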
The average workload per full-time equivalent (FTE) radiologist increased by 70% from 1991-1992 to 2006-2007. The increase is mainly due to the 34% growth in the number of procedures, particularly 3D imaging procedures. New technologies, such as picture archiving and communication systems (PACS) and their embedded viewing capabilities, were credited with improving the workflow environment and thus increasing productivity. However, further workflow improvement is still needed as the number of procedures continues to grow. Advanced and streamlined viewing capability in the PACS environment could potentially reduce reading time, further increasing productivity. Despite the increasing number of 3D imaging procedures, radiographic procedures (excluding mammography) have retained their critical role in the screening and diagnosis of various diseases. Although the share of radiographic procedures decreased from 70% to 49.5%, their total number remained the same from 1991-1992 to 2006-2007. Inconsistency in image quality for radiographic images has been identified as an area of concern. It affects the ability of clinicians to interpret images effectively and efficiently in applications where diagnosis requires a comparison of current images with priors, for example, in screening mammography and portable chest radiography. These priors can have different image quality. Variations in image acquisition technique (x-ray exposure), patient and apparatus positioning, and image processing are the factors contributing to the inconsistency in image quality. Inconsistency in image quality, for example in contrast, may require manual manipulation (i.e., windowing and leveling) of images to accomplish an optimal comparison for detecting subtle changes. We developed a tone-scale image rendering technique that improves the consistency of chest images across time and modality. The rendering controls both the global and local contrast for a consistent look. We expect the improvement could reduce the window and level manipulation time required for an optimal comparison of prior and current images, thus improving both the efficiency and effectiveness of image interpretation. This paper presents a technique for improving the consistency of portable chest radiographic images. The technique uses regions of interest (ROIs) to control both local and global contrast consistency.
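One simple way to enforce consistent contrast within an ROI, sketched below, is to linearly rescale the ROI so its mean and standard deviation match those of a reference; this moment-matching rule is an assumption for illustration, not the published tone-scale rendering technique.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): rescale an ROI so its
# first two moments match a reference ROI, giving the two images a
# consistent local contrast.

def match_contrast(roi, ref_mean, ref_std):
    # Linear map: subtract mean, normalize spread, re-target moments.
    std = roi.std()
    if std == 0:
        return np.full_like(roi, ref_mean, dtype=float)
    return (roi - roi.mean()) / std * ref_std + ref_mean

current = np.array([10.0, 20.0, 30.0, 40.0])
rendered = match_contrast(current, ref_mean=100.0, ref_std=5.0)
```

Applying this per-ROI controls local contrast; applying it to the full image controls global contrast.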
In intensive care units (ICUs), endotracheal (ET) tubes are inserted to assist patients who may have difficulty breathing. A malpositioned ET tube could lead to a collapsed lung, which is life-threatening. The purpose of this study is to develop a new method that automatically detects the positioning of ET tubes on portable chest X-ray images. The method determines a region of interest (ROI) in the image and processes the raw image to provide edge enhancement for further analysis. The search for ET tubes is performed within the ROI, which is determined from the positions of the detected lung area and spine in the image. Two feature images are generated: a Haar-like image and an edge image. The Haar-like image is generated by applying a Haar-like template to the raw ROI or to its enhanced version. The edge image is generated by applying a direction-specific edge detector. Both templates are designed to represent the characteristics of ET tubes. Thresholds are applied to the Haar-like image and the edge image to detect initial tube candidates. Region growing, combined with curve fitting of the initially detected candidates, is performed to detect the entire ET tube; this region growing, or "tube growing," is guided by the fitted curve of the initial candidates. After tube growing, the detected tubes are merged to combine broken segments: tubes within a predefined space can be merged if they meet a set of criteria. Features, such as the width and length of the detected tubes, tube positions relative to the lung and spine, and statistics from the analysis of the detected tube lines, are extracted to remove false-positive detections. The method was trained and evaluated on two different databases. Preliminary results show that computer-aided detection of tubes in portable chest X-ray images is promising. It is expected that automated detection of ET tubes could lead to timely detection of malpositioned tubes, thus improving overall patient care.
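The Haar-like feature image and its thresholding can be sketched in one dimension as follows; the template shape (a bright center flanked by darker bands, matching a radio-opaque tube line crossing a row of pixels) and the threshold value are assumptions for illustration.

```python
import numpy as np

# Hypothetical 1-D sketch of the Haar-like feature image used to find
# initial tube candidates: response = center minus mean of the flanks.

def haar_response(row, half_width=1):
    out = np.zeros_like(row, dtype=float)
    w = half_width
    for i in range(w, len(row) - w):
        flanks = 0.5 * (row[i - w] + row[i + w])
        out[i] = row[i] - flanks   # large where a bright line sits on a dark band
    return out

def tube_candidates(row, threshold):
    # Threshold the Haar-like image to mark initial candidate pixels.
    return haar_response(row) > threshold

profile = np.array([0.2, 0.2, 0.9, 0.2, 0.2])   # a bright tube line at index 2
mask = tube_candidates(profile, threshold=0.5)
```

In the full method these candidate pixels would then seed the curve-fitting and tube-growing steps described above.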
The purpose of this study is to identify the difference in nodule characteristics manifested on computed tomography (CT) and X-ray images and to evaluate the ability of radiographic features to differentiate between benign and malignant nodules, when compared to the features extracted from CT. We collected 79 consecutive computed radiographic (CR) chest images with one or more CT-documented lung nodules. Upon viewing the CT slices, corresponding nodules were localized on CR images by an experienced chest radiologist. Of the 79 CT nodules (19 benign, 60 malignant), 61 (14 benign, 47 malignant) were considered to be definitely visible on the CR; the rest were considered to be invisible or did not qualify for distinct feature assessment. Eleven nodule features each were visually extracted from the CT and CR images. These features were used to characterize the nodule in terms of size, shape, lobulation, spiculation, density, etc. Correlation between the CT and CR features was calculated for the 61 definitely CR-visible nodules. Receiver operating characteristic (ROC) analysis was performed to evaluate the ability of these features in the task of differentiating between benign and malignant nodules. Results showed that CR and CT images agreed well in characterizing nodules in terms of shape, lobulation, spiculation, and density features. We found that 40-50% of the cases had the same CR and CT ratings and 41-51% of cases were rated with a difference of one between their CT and CR ratings for the shape (3-point scale), lobulation (4-point scale), and spiculation (4-point scale) features. Ninety-two percent of the cases had the same CT and CR ratings on the density feature. Size yielded a correlation coefficient of 0.84. In the task of differentiating between benign and malignant lung nodules, ROC analysis of individual features yielded an Az value ranging from 0.52 to 0.77 for the 14 CT features and from 0.52 to 0.75 for the CR features.
In addition, we examined the characteristics of the 18 nodules that were excluded from feature analysis. On average, these 18 nodules were smaller in size (15.2 mm measured from CT) than the 61 CR-visible nodules (23.5 mm). We found that the CR features agreed reasonably well with the CT features, and their ability to differentiate between benign and malignant nodules was similar to that of the CT features.
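The per-feature Az values above can be computed nonparametrically as the empirical area under the ROC curve, which equals the Mann-Whitney statistic; the toy ratings below are illustrative, not the study's data.

```python
# Empirical Az (area under the ROC curve) for a single feature via the
# Mann-Whitney statistic: the fraction of benign/malignant pairs the
# feature ranks correctly, with ties counted as half.

def empirical_az(benign_scores, malignant_scores):
    wins = 0.0
    for b in benign_scores:
        for m in malignant_scores:
            if m > b:
                wins += 1.0
            elif m == b:
                wins += 0.5
    return wins / (len(benign_scores) * len(malignant_scores))

# Toy ordinal ratings (higher = more suspicious), not the study's ratings.
benign = [1, 2, 2, 3]
malignant = [2, 3, 4, 4]
az = empirical_az(benign, malignant)   # 13.5 correct half-pairs of 16
```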
We compared computerized methods that incorporate automated lesion characterization and methods for the assessment of the breast parenchymal pattern on mammograms in order to better predict the pathological status of a breast lesion. Computer-extracted mass features automatically characterized the shape, spiculation, contrast, and margin of each lesion. On the digitized mammogram of the contralateral breast, texture features were automatically extracted to characterize the radiographic breast parenchymal patterns. Three approaches were investigated. A computerized risk-modulated analysis system for mammographic images is expected to improve characterization of lesions by incorporating cancer-risk information into the decision-making process.
Mammographic parenchymal patterns have been shown to be associated with breast cancer risk. Fractal-based texture analyses, including box-counting methods and the Minkowski dimension, were performed within parenchymal regions of normal mammograms of BRCA1/BRCA2 gene mutation carriers and within those of women at low risk for developing breast cancer. Receiver operating characteristic (ROC) analysis was used to assess the performance of the computerized radiographic markers in the task of distinguishing between high- and low-risk subjects. A multifractal phenomenon was observed with the fractal analyses. The high-frequency component of the fractal dimension from the conventional box-counting technique yielded an Az value of 0.84 in differentiating between the two groups, while using the LDA to estimate the fractal dimension yielded an Az value of 0.91 for the high-frequency component. An Az value of 0.82 was obtained with fractal dimensions extracted using the Minkowski algorithm.
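The conventional box-counting estimate mentioned above works by counting occupied boxes at several scales and fitting the slope of log(count) versus log(1/size); the sketch below uses a trivially filled patch rather than mammographic texture, and the choice of box sizes is an assumption.

```python
import numpy as np

# Sketch of the conventional box-counting fractal dimension on a binary
# texture patch (toy data, not mammographic regions).

def box_count(mask, size):
    # Number of size x size boxes containing at least one "on" pixel.
    h, w = mask.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if mask[i:i + size, j:j + size].any():
                count += 1
    return count

def box_counting_dimension(mask, sizes=(1, 2, 4)):
    counts = [box_count(mask, s) for s in sizes]
    # The slope of log(count) vs log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

filled = np.ones((8, 8), dtype=bool)      # a filled plane has dimension 2
dim = box_counting_dimension(filled)
```

A mammographic parenchymal patch would yield a non-integer dimension between 1 and 2; the multifractal behavior noted in the abstract corresponds to the slope varying across scale ranges.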
Linear step-wise feature selection is performed for computerized analysis methods on a set of mammography features using a database of mammography cases, a set of ultrasound features using a database of ultrasound cases, and a set of mammography and sonography features using a multi-modality database of lesions with both mammograms and sonograms. The large mammography and sonography databases were randomly split 20 times into three subdatabases for feature selection, classifier training, and independent validation. The average validation Az value over the 20 random splits was 0.82 +/- 0.04 for the mammography database and 0.85 +/- 0.03 for the sonography database. The average consistency feature selection Az values for the mammography and sonography databases were 0.87 +/- 0.02 and 0.88 +/- 0.02, respectively. For the multi-modality database, the consistency feature selection Az value was 0.93.
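The repeated three-way splitting protocol can be sketched as follows; the split proportions and seeding scheme are assumptions, since the abstract does not specify them.

```python
import random

# Sketch of the repeated random three-way split: each repeat shuffles the
# case list and partitions it into feature-selection, training, and
# validation subsets. The 40/30/30 proportions are an assumption.

def three_way_split(cases, seed, fracs=(0.4, 0.3, 0.3)):
    rng = random.Random(seed)
    shuffled = cases[:]
    rng.shuffle(shuffled)
    n1 = int(fracs[0] * len(shuffled))
    n2 = n1 + int(fracs[1] * len(shuffled))
    return shuffled[:n1], shuffled[n1:n2], shuffled[n2:]

cases = list(range(100))
splits = [three_way_split(cases, seed) for seed in range(20)]  # 20 repeats
```

In the study's protocol, a validation Az would be computed on the third subset of each repeat and averaged over the 20 repeats.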
While investigators have been successful in developing methods for the computerized analysis of mammograms and ultrasound images, optimal output strategies for the effective and efficient use of such computer analyses are still undetermined. We have incorporated our computerized mass classification method into an intelligent workstation interface that displays known malignant and benign cases similar to lesions in question using a color-coding scheme that allows instant visual feedback to the radiologist. The probability distributions of the malignant and benign cases in the known database are also graphically displayed along with the graphical location of the unknown case relative to these two distributions. The effect of the workstation on radiologists' performance was demonstrated with two preliminary studies. In each study, participants were asked to interpret cases without and with the computer output as an aid for diagnosis. Results from our demonstration studies indicate that radiologists' performance, especially specificity, increases with the use of the aid.
One potential limitation of computer-aided diagnosis (CAD) studies is that a computerized method may be trained and tested on a database comprised of a limited number of cases. Thus, the performance of the CAD method may depend on the subtlety of the lesions (i.e., the case mix) in the database. The purpose of this study is to evaluate the effect of case-mix on feature selection and the performance of a computerized classification method trained on a limited database.
We have developed computerized methods for the analysis of lesions that combine results from different imaging modalities, in this case digitized mammograms and sonograms of the breast, for distinguishing between malignant and benign lesions. The computerized classification method -- applied here to mass lesions seen on both digitized mammograms and sonograms, includes: (1) automatic lesion extraction, (2) automated feature extraction, and (3) automatic classification. The results for both modalities are then merged into an estimate of the likelihood of malignancy. For the mammograms, computer-extracted lesion features include degree of spiculation, margin sharpness, lesion density, and lesion texture. For the ultrasound images, lesion features include margin definition, texture, shape, and posterior acoustic attenuation. Malignant and benign lesions are better distinguished when features from both mammograms and ultrasound images are combined.
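The final merging step, in which the two per-modality results are combined into one estimate of the likelihood of malignancy, could be implemented in several ways; the log-odds averaging below is purely an assumption for illustration, as the abstract does not specify the merging rule.

```python
import math

# Hypothetical sketch of merging per-modality malignancy estimates by
# averaging in the log-odds domain (an assumption, not the paper's rule).

def to_log_odds(p):
    return math.log(p / (1.0 - p))

def merge_likelihoods(p_mammo, p_ultrasound):
    # Average the two log-odds, then map back to a probability.
    z = 0.5 * (to_log_odds(p_mammo) + to_log_odds(p_ultrasound))
    return 1.0 / (1.0 + math.exp(-z))

merged = merge_likelihoods(0.8, 0.6)   # lies between the two inputs
```

Averaging in the log-odds domain keeps the merged value inside (0, 1) and treats the two modalities symmetrically.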
We developed a computerized method for the automated classification of benign and malignant mammographic mass lesions. An independent evaluation of the automatic method on a database consisting of 110 cases showed that the classification method is robust to variations in case mix and in film digitization technique. We also evaluated the effectiveness of the method as an aid to radiologists in differentiating between benign and malignant masses. A total of 6 mammographers and 6 community general radiologists participated in an observer study. In that study, the radiologists interpreted the 110 cases in the independent database, unknown to both the radiologist observers and the trained computer classification method, first without and then with the computer aid. Results from our observer study indicated that use of the computer aid improved the ability of both the expert and general radiologists in the task of differentiating between benign and malignant mammographic mass lesions, as indicated by statistically significant increases in Az values and sensitivities. With the database we used, however, we were unable to demonstrate an effect of the computer aid on radiologist performance regarding the number of benign cases sent for biopsy. In this study, we investigate the relationship between the value of the computer output and its effect on the observers in terms of changing their patient management decision upon viewing the computer output.
We previously developed a computerized method to classify mammographic masses as benign or malignant. In this method, mammographic features similar to the ones used by radiologists are automatically extracted to characterize a mass lesion. These features are then merged by an artificial neural network (ANN), which yields an estimated likelihood of malignancy for each mass. The performance of the method was evaluated on an independent database consisting of 110 cases (60 benign and 50 malignant). The method achieved an Az of 0.91 from round-robin analysis in the task of differentiating between benign and malignant masses using the computer-extracted features only. Age, the most important clinical risk factor for breast cancer, achieved a performance level (Az = 0.79) similar to that (Az = 0.77 and 0.80) of the computer-extracted spiculation features, which are the most important indicators of malignancy of a mass, in differentiating between the malignant and benign cases. In this study, age is included as an additional input feature to the ANN. The performance of the scheme (Az = 0.93) is improved when age is included; however, the improvement was not found to be statistically significant. Our results indicated that age may be a strong feature in predicting malignancy of a mass. For this database, however, the inclusion of age may not have a strong impact on the determination of the likelihood of malignancy for a mammographic mass lesion when the major mammographic characteristics (e.g., spiculation) of a mass are accurately extracted and analyzed along with other features using an artificial neural network.
We have developed a computerized method for the automatic segmentation of mass lesions on digitized mammograms using gray-level region growing. This segmentation technique has been incorporated into our automated classification scheme, which consists of (1) automated segmentation, (2) automated feature extraction, and (3) determination of the likelihood of malignancy using an automated classifier. The feature-extraction techniques extract various features from the neighborhoods of the computer-grown mass region to characterize the margin, shape, and density of the mass. The automated classifier is then used to merge these computer-extracted features into a number related to the likelihood of malignancy. To evaluate quantitatively the performance of the segmentation technique, we calculate the area of overlap between the computer-grown mass regions and radiologist-identified mass regions. In addition, we substitute the computer-identified margins with radiologist-identified margins in our classification scheme. The performances of individual features as well as the classification scheme in terms of their ability to differentiate between benign and malignant masses are evaluated using receiver operating characteristic (ROC) analysis. The performances obtained based on the mass regions identified by the automated segmentation technique and by radiologists are compared to evaluate the adequacy of the region growing. Results from this study show that the automated segmentation technique tends to undergrow the mass regions by approximately one quarter of the area identified by the radiologists. However, the superior performances of the computer-extracted features and the classification scheme based on the analysis of the computer-grown mass regions indicated that the computer-grown mass regions are sufficient for the subsequent feature-extraction and classification techniques to accurately characterize mass lesions.
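The overlap evaluation above can be sketched on binary masks; the exact definition used in the paper is not given in this abstract, so the fraction of the radiologist-identified area covered by the computer-grown region is an assumption.

```python
import numpy as np

# Sketch of an overlap measure between a computer-grown region and a
# radiologist-identified region: the fraction of the radiologist's mass
# area covered by the computer-grown mask (an assumed definition).

def overlap_fraction(computer_mask, radiologist_mask):
    intersection = np.logical_and(computer_mask, radiologist_mask).sum()
    return intersection / radiologist_mask.sum()

truth = np.zeros((4, 4), dtype=bool)
truth[0:2, 0:4] = True          # 8-pixel radiologist-identified region
grown = np.zeros((4, 4), dtype=bool)
grown[0:2, 0:3] = True          # computer undergrows by one quarter
frac = overlap_fraction(grown, truth)   # 6 of 8 truth pixels covered
```

The toy example mirrors the study's finding: a region undergrown by about one quarter yields an overlap fraction of 0.75.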
Artificial neural networks have been applied to the differentiation of masses from false-positive detections in digital mammograms. A database of 110 pairs of digital mammograms containing a total of 102 masses (54 malignant, 48 benign) was utilized in this study. Three hundred two false-positive regions were selected from these images for training the artificial neural network. Over 90 features were calculated for both the true masses and the false positives. Features that showed the most separation, in a one-dimensional analysis, between true positives and false positives were selected as artificial neural network inputs. A three-layer feed-forward neural network was used, with one input layer, one hidden layer, and an output layer. By varying the structure and learning rate of the neural network, an optimal structure was found. The performance of the ANN was evaluated by means of receiver operating characteristic (ROC) analysis and free-response receiver operating characteristic (FROC) analysis. Results from a round-robin evaluation yielded an Az of 0.97 in the task of differentiating between masses and false-positive detections. In the future, multi-dimensional feature analysis will be performed to obtain optimal performance using a combination of rule-based decision making and artificial neural networks.
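The three-layer feed-forward architecture described above can be sketched as a forward pass; the layer sizes, random weights, and input feature values below are toy assumptions (the study's trained weights and selected features are not given in the abstract).

```python
import numpy as np

# Minimal sketch of a three-layer feed-forward network: input layer,
# one hidden layer, and a sigmoid output scoring a candidate region as
# mass vs. false positive. Sizes and weights are toy assumptions.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(features, w_hidden, b_hidden, w_out, b_out):
    hidden = sigmoid(features @ w_hidden + b_hidden)   # hidden layer
    return sigmoid(hidden @ w_out + b_out)             # output score in (0, 1)

rng = np.random.default_rng(0)
w_h = rng.normal(size=(3, 4))      # 3 input features -> 4 hidden units
b_h = np.zeros(4)
w_o = rng.normal(size=4)           # 4 hidden units -> 1 output
b_o = 0.0
score = forward(np.array([0.2, -0.1, 0.4]), w_h, b_h, w_o, b_o)
```

In practice the network would be trained (e.g., by backpropagation) on the selected features of the 102 masses and 302 false-positive regions, and the output score thresholded or fed to ROC analysis.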
We are developing computer-aided diagnosis (CAD) schemes for the detection of clustered microcalcifications and masses in digital mammograms. Here, CAD refers to a diagnosis made by a radiologist who uses the computerized analyses of radiographic images as a 'second opinion'. The radiologist would make the final diagnostic decision. The aim of CAD is to improve diagnostic accuracy by reducing the number of missed diagnoses. In this preliminary evaluation, 30 clinical cases from December 1991 having a focal mammographic finding were analyzed.
We are developing an 'intelligent' workstation to assist radiologists in diagnosing breast cancer from mammograms. The hardware for the workstation will consist of a film digitizer, a high speed computer, a large volume storage device, a film printer, and 4 high resolution CRT monitors. The software for the workstation is a comprehensive package of automated detection and classification schemes. Two rule-based detection schemes have been developed, one for breast masses and the other for clustered microcalcifications. The sensitivity of both schemes is 85% with a false-positive rate of approximately 3.0 and 1.5 false detections per image, for the mass and cluster detection schemes, respectively. Computerized classification is performed by an artificial neural network (ANN). The ANN has a sensitivity of 100% with a specificity of 60%. Currently, the ANN, which is a three-layer, feed-forward network, requires as input ratings of 14 different radiographic features of the mammogram that were determined subjectively by a radiologist. We are in the process of developing automated techniques to objectively determine these 14 features. The workstation will be placed in the clinical reading area of the radiology department in the near future, where controlled clinical tests will be performed to measure its efficacy.