Predicting pseudo CT images from MRI data has received increasing attention for use in MRI-only radiation therapy planning and PET-MRI attenuation correction, eliminating the need for harmful CT scanning. Current approaches focus on voxelwise mean absolute error (MAE) and peak signal-to-noise-ratio (PSNR) for optimization and evaluation. Contextual losses such as structural similarity (SSIM) are known to promote perceptual image quality. We investigate the use of these contextual losses for optimization.
Patch-based 3D fully convolutional neural networks (FCNs) were optimized for prediction of pseudo CT images from 3D gradient echo pelvic MRI data and compared to ground truth CT data of 26 patients. The CT data were non-rigidly registered to the MRI for training and evaluation. We compared voxelwise L1 and L2 loss functions with contextual multi-scale L1 and L2 (MSL1 and MSL2), and with SSIM. Performance was evaluated using MAE, PSNR, SSIM and the overlap of segmented cortical bone in the reconstructions, measured by the Dice similarity coefficient. Evaluation was carried out in cross-validation.
All optimizations converged successfully, with PSNR between 25 and 30 dB, except for one of the folds of the SSIM optimization. MSL1 and MSL2 are at least on par with their single-scale counterparts. MSL1 overcomes some of the instabilities of the L1-optimized prediction models. MSL2 optimization is stable and, on average, outperforms all the other losses, although quantitative evaluations based on MAE, PSNR and SSIM show only minor differences. Direct optimization using SSIM visually excelled in terms of subjective perceptual image quality, at the expense of a drop in voxelwise quantitative performance.
Contextual loss functions can improve the prediction performance of FCNs without changing the network architecture. The suggested subjective superiority of contextual losses in reconstructing local structures merits further investigation.
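As an illustration of the contextual losses discussed above, the following is a minimal sketch of a multi-scale L1 (MSL1) loss in PyTorch. The number of scales, the use of average pooling for downsampling, and the equal weighting of the scales are assumptions made for illustration, not necessarily the configuration used in the study.

import torch.nn.functional as F

def multiscale_l1(pred, target, num_scales=3):
    # Hedged sketch of an MSL1 loss for 5D tensors (batch, channel, depth, height,
    # width): the L1 difference is accumulated over several resolutions obtained
    # by average pooling.
    loss = 0.0
    for s in range(num_scales):
        loss = loss + F.l1_loss(pred, target)
        if s < num_scales - 1:
            # Halve the resolution of both volumes for the next, coarser scale.
            pred = F.avg_pool3d(pred, kernel_size=2)
            target = F.avg_pool3d(target, kernel_size=2)
    return loss / num_scales

An MSL2 variant would replace F.l1_loss with F.mse_loss; direct SSIM optimization would instead maximize a differentiable SSIM between prediction and target.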
KEYWORDS: Image segmentation, Cartilage, Magnetic resonance imaging, Medical imaging, Bone, Machine learning, Algorithm development, Scanners, 3D image processing, 3D modeling
Many classification/segmentation tasks in medical imaging are particularly challenging for machine learning algorithms
because of the huge amount of training data required to cover biological variability. Learning methods that scale poorly with the number of training data points may not be applicable. This may exclude powerful classifiers with good generalization performance, such as standard non-linear support vector machines (SVMs). Further, many medical imaging problems have highly imbalanced class populations, because the object to be segmented occupies only a few pixels/voxels compared to
the background. This article presents a two-stage classifier for large-scale medical imaging problems. In the first stage,
a classifier that is easily trainable on large data sets is employed. The class imbalance is exploited and the classifier is
adjusted to correctly detect background with a very high accuracy. Only the comparatively few data points not identified as
background are passed to the second stage. Here a powerful classifier with high training time complexity can be employed
to make the final decision of whether a data point belongs to the object or not. We applied our method to the problem of
automatically segmenting tibial articular cartilage from knee MRI scans. We show that by using a k-nearest-neighbor (kNN) classifier in the first stage, we can reduce the amount of data needed for training a non-linear SVM in the second stage. The cascaded system
achieves better results than the state-of-the-art method relying on a single kNN classifier.
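A minimal sketch of such a cascade, assuming scikit-learn, is given below; the neighbor count, the background-confidence threshold, and the SVM kernel settings are illustrative choices rather than those used in the paper.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def train_cascade(X, y, background_label=0, bg_threshold=0.99, k=15):
    # Stage 1: a kNN classifier that is cheap to train on the full data set and
    # is used only to discard points that are almost certainly background.
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    bg_col = knn.classes_.tolist().index(background_label)
    p_bg = knn.predict_proba(X)[:, bg_col]
    keep = p_bg < bg_threshold          # points the first stage is unsure about
    # Stage 2: an expensive non-linear SVM trained only on the reduced set.
    svm = SVC(kernel="rbf").fit(X[keep], y[keep])
    return knn, svm

def predict_cascade(knn, svm, X, background_label=0, bg_threshold=0.99):
    bg_col = knn.classes_.tolist().index(background_label)
    p_bg = knn.predict_proba(X)[:, bg_col]
    labels = np.full(len(X), background_label)
    uncertain = p_bg < bg_threshold
    if uncertain.any():                 # only these points reach the second stage
        labels[uncertain] = svm.predict(X[uncertain])
    return labels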
Classification is widely used in medical image analysis. To illustrate the mechanism of a classifier, we introduce the notion of an exaggerated image stereotype based on the training data and the trained classifier. The stereotype of an image class of interest should emphasize/exaggerate the characteristic patterns of that class and visualize the information the employed classifier relies on. This is useful for gaining insight into the classification and serves as a basis for comparison with biological models of disease.
In this work, we build exaggerated image stereotypes by optimizing an objective function which consists of a
discriminative term based on the classification accuracy, and a generative term based on the class distributions.
A gradient descent method based on iterated conditional modes (ICM) is employed for optimization. We use
this idea with Fisher's linear discriminant rule and assume a multivariate normal distribution for samples within
a class. The proposed framework is applied to computed tomography (CT) images of lung tissue with emphysema.
The synthesized stereotypes illustrate the exaggerated patterns of lung tissue with emphysema, which is
underpinned by three different quantitative evaluation methods.
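To make the objective concrete, the toy sketch below combines a discriminative term (a score along Fisher's discriminant direction towards the class of interest) with a generative term (the multivariate normal log-density of that class) and optimizes it by ICM-style coordinate-wise updates over a fixed candidate grid. The weighting factor, the candidate grid and the number of sweeps are illustrative assumptions, not the exact scheme of the paper.

import numpy as np
from scipy.stats import multivariate_normal

def synthesize_stereotype(x0, w, mu, cov, alpha=1.0, n_sweeps=10,
                          candidates=np.linspace(-3.0, 3.0, 61)):
    # x0: initial feature/image vector; w: Fisher discriminant direction pointing
    # towards the target class; mu, cov: mean and covariance of that class.
    gen = multivariate_normal(mean=mu, cov=cov)
    x = np.asarray(x0, dtype=float).copy()

    def objective(v):
        return alpha * float(w @ v) + gen.logpdf(v)   # discriminative + generative

    for _ in range(n_sweeps):
        for i in range(len(x)):                       # one ICM sweep over dimensions
            best_val, best_obj = x[i], objective(x)
            for c in candidates:
                x[i] = c
                obj = objective(x)
                if obj > best_obj:
                    best_val, best_obj = c, obj
            x[i] = best_val                           # keep the best value found
    return x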
Osteoarthritis (OA) is a degenerative joint disease characterized by degradation of the articular cartilage, and is a
major cause of disability. At present, there is no cure for OA and currently available treatments are directed towards
relief of symptoms. Recently it was shown that cartilage homogeneity, visualized by MRI and representing the biochemical changes occurring in the cartilage, is a potential marker for early detection of knee OA. In this paper, based on homogeneity, we present an automatic technique, embedded in a variational framework, for localizing a region of interest in the knee cartilage that best indicates where the pathology of the disease is dominant. The technique is
evaluated on 283 knee MR scans. We show that OA affects certain areas of the cartilage more distinctly, and these are
more towards the peripheral region of the cartilage. We propose that this region in the cartilage corresponds anatomically
to the area covered by the meniscus in healthy subjects. This finding may provide valuable clues in the pathology and the
etiology of OA and thereby may improve treatment efficacy. Moreover, our method is generic and may be applied to
other organs as well.
Numerous studies have investigated the relation between mammographic density and breast cancer risk. These
studies indicate that women with dense breasts have a four- to six-fold increase in risk. There is currently no gold
standard for automatic assessment of mammographic density.
In previous work, two different automated methods for measuring the effect of HRT with respect to changes in breast density have been presented. One is a percentage density based on an adaptive global threshold, and the other is an intensity-invariant measure, which provides structural information orthogonal to intensity-based methods. In this article we investigate the ability of these measures to detect density changes induced by HRT, and compare them to a radiologist's BI-RADS rating and an interactive threshold percentage density.
In the experiments, two sets of mammograms of 80 patients from a double-blind, placebo-controlled HRT experiment are used. The p-values for the statistical significance of the separation of the density means, for the HRT group and the placebo group at end of study, are 0.2, 0.1, 0.02 and 0.02 for the automatic threshold, BI-RADS, the stripiness measure and the interactive threshold, respectively.
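For reference, the separation of the end-of-study density means between the two groups could be tested roughly as in the sketch below; the use of an unpaired two-sample t-test is an assumption made for illustration, since the exact test is not specified here.

from scipy.stats import ttest_ind

def separation_p_value(hrt_scores, placebo_scores):
    # Both arguments: 1D arrays of per-patient density scores at end of study.
    return ttest_ind(hrt_scores, placebo_scores).pvalue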
Numerous studies have investigated the relation between mammographic density and breast cancer risk. These studies indicate that women with high breast density have a four- to six-fold increase in risk. An investigation of whether or not this relation is causal is important for, e.g., hormone replacement therapy (HRT), which has been shown to actually increase the density. No gold standard for automatic assessment of mammographic density exists. Manual methods such as Wolfe patterns and BI-RADS are helpful for communication of diagnostic sensitivity, but they are both time-consuming and crude. They may be sufficient in certain cases and for single measurements, but for serial, temporal analysis it is necessary to be able to detect more subtle changes and, in addition, to be more reproducible. In this work an automated method for measuring the effect of HRT with respect to changes in biological density in the breast is presented. This is a novel measure that provides structural information orthogonal to intensity-based methods. Hessian eigenvalues at different scales are used as features, and a clustering of these is employed to divide a mammogram into four structurally different areas. Subsequently, based on the relative sizes of the areas, a density score is determined. In the experiments, two sets of mammograms of 50 patients from a double-blind, placebo-controlled HRT experiment were used. The change in density for the HRT group, measured with the new method, was significantly higher (p = 0.0002) than the change in the control group.
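A rough sketch of this structural measure, using scikit-image and scikit-learn, is given below. The scales, the use of k-means for the clustering, and the way the relative cluster areas are combined into a single score are illustrative assumptions.

import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals
from sklearn.cluster import KMeans

def structural_density_score(image, sigmas=(1.0, 2.0, 4.0), n_clusters=4, seed=0):
    # Per-pixel features: Hessian eigenvalues at several scales.
    feats = []
    for sigma in sigmas:
        H = hessian_matrix(image, sigma=sigma, order="rc")
        eigs = hessian_matrix_eigvals(H)              # shape: (2, rows, cols)
        feats.append(eigs.reshape(2, -1).T)           # one row per pixel
    X = np.concatenate(feats, axis=1)
    # Cluster the pixels into four structurally different classes.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    areas = np.bincount(labels, minlength=n_clusters) / labels.size
    # Placeholder scoring weights; in practice the clusters would be calibrated
    # against density before being combined into a single score.
    weights = np.arange(n_clusters, dtype=float)
    return float(weights @ areas)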
In this work we compare the performance of a number of vessel segmentation algorithms on a newly constructed retinal vessel image database. Retinal vessel segmentation is important for the detection of numerous eye diseases and plays an important role in automatic retinal disease screening systems. A large number of methods for retinal vessel segmentation have been published, yet an evaluation of these methods on a common database of screening images has not been performed. To compare the performance of retinal vessel segmentation methods, we have constructed a large database of retinal images. The database contains forty images in which the vessel trees have been manually segmented. For twenty of those forty images a second independent manual segmentation is available. This allows for a comparison between the performance of automatic methods and the performance of a human observer. The database is available to the research community. Interested researchers are encouraged to upload their segmentation results to our website (http://www.isi.uu.nl/Research/Databases). The performance of five different algorithms has been compared. Four of these methods have been implemented as described in the literature. The fifth, a pixel-classification-based method, was developed specifically for the segmentation of retinal vessels and is the only supervised method in this test. We define the segmentation accuracy with respect to our gold standard as the performance measure. Results show that the pixel classification method performs best, but the second observer still performs significantly better.
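The performance measure used above (per-pixel accuracy with respect to the gold standard inside the field of view) can be computed roughly as in the following sketch; the argument names and the field-of-view mask are illustrative.

import numpy as np

def segmentation_accuracy(segmentation, gold_standard, fov_mask):
    # Fraction of field-of-view pixels on which the binary segmentation agrees
    # with the manually segmented gold standard.
    seg = segmentation[fov_mask].astype(bool)
    gold = gold_standard[fov_mask].astype(bool)
    return float(np.mean(seg == gold))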
A computer-aided diagnosis scheme for the detection of interstitial disease in standard digital posteroanterior (PA) chest radiographs is presented. The detection technique is supervised (manually labelled data should be provided for training the algorithm) and fully automatic, and can be used as part of a computerized analysis scheme for X-ray lung images.
Prior to the detection, a segmentation should be performed which delineates the lung field boundaries.
Subsequently, a quadratic decision rule is employed to associate with each pixel within the lung fields a probabilistic measure indicating interstitial disease. The locally obtained per-pixel probabilities are fused into a single global probability indicating to what extent interstitial disease is present in the image. Finally, a threshold on this quantity classifies the image as containing interstitial disease or not.
The probability combination scheme presented utilizes the quantiles of the local posterior probabilities to fuse the local probabilities into a global one. Using this nonparametric technique, reasonable results are obtained on the interstitial disease detection task. The area under the receiver operating characteristic curve equals 0.92 for the optimal setting.
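A minimal sketch of the quantile-based fusion step is given below; the particular quantile and the decision threshold are illustrative assumptions rather than the tuned values behind the reported result.

import numpy as np

def fuse_pixel_probabilities(pixel_probs, quantile=0.9):
    # pixel_probs: 1D array of per-pixel posterior probabilities of interstitial
    # disease, taken from pixels inside the segmented lung fields.
    return float(np.quantile(pixel_probs, quantile))

def classify_image(pixel_probs, quantile=0.9, threshold=0.5):
    # The image is labelled as containing interstitial disease when the fused
    # (quantile-based) probability exceeds the threshold.
    return fuse_pixel_probabilities(pixel_probs, quantile) > threshold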
Supervised segmentation methods in which a model of the shape of an object and its gray-level appearance is used to segment new images have become popular techniques in medical image segmentation. However, the results of these methods are not always accurate enough. We show how to extend one of these segmentation methods, active shape models (ASM), so that user interaction can be incorporated. In this interactive shape model (iASM), a user drags points to their correct positions, thus guiding the segmentation process. Experiments for three medical segmentation tasks are presented: segmenting lung fields in chest radiographs, hand outlines in hand radiographs, and thrombus in abdominal aortic aneurysms from CTA data. By fixing only a small number of points, the proportion of sufficiently accurate segmentations can be increased from 20-70% without interaction to over 95%. We believe that iASM can be used in many clinical applications.
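One simple way of respecting user-fixed points in a shape-model fit is sketched below as a weighted least-squares projection onto the PCA shape modes, with a large weight on the dragged landmark coordinates; this is a hedged illustration and not necessarily how iASM itself enforces the constraints.

import numpy as np

def fit_constrained_shape(landmarks, mean_shape, modes, fixed_coord_idx, fixed_weight=1e3):
    # landmarks, mean_shape: (2*n_points,) flattened coordinates;
    # modes: (2*n_points, n_modes) PCA shape modes;
    # fixed_coord_idx: indices of the coordinates fixed by the user.
    w = np.ones(len(landmarks))
    w[fixed_coord_idx] = fixed_weight                 # trust the dragged points
    sw = np.sqrt(w)[:, None]                          # sqrt weights for least squares
    b, *_ = np.linalg.lstsq(modes * sw, (landmarks - mean_shape) * sw[:, 0],
                            rcond=None)
    return mean_shape + modes @ b                     # constrained shape estimate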
The task of segmenting the posterior ribs within the lung fields
is of great practical importance. For example, delineation of the
ribs may lead to a decreased number of false positives in
computerized detection of abnormalities, and hence analysis of
radiographs for computer-aided diagnosis purposes will benefit
from this.
We use an iterative, pixel-based, statistical classification
method---iterated contextual pixel classification (ICPC). It is
suited for a complex segmentation task in which a global shape
description is hard to provide. The method combines local gray
level and contextual information to come to an overall image
segmentation. Because of its generality, it is also useful for
other segmentation tasks. In our case, the variable number of
visible ribs in the lung fields complicates the use of a global
model. Additional difficulties arise from the poor visibility of
the lower and medial ribs.
Using cross validation, the method is evaluated on 35 radiographs
in which all posterior ribs were traced manually. ICPC obtains an
accuracy of 83%, a sensitivity of 79%, and a specificity of 86%
for segmenting the costal space. Further evaluation is done using
five manual segmentations from a second observer, whose
performance is compared with the five corresponding images from
the first manual segmentation, yielding 83% accuracy, 84%
sensitivity, and 83% specificity. On these five images, ICPC
attains 82%, 78%, and 86%, respectively.
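Schematically, the iterative scheme can be sketched as follows: an initial per-pixel classification from local gray-level features is refined over several iterations by re-classifying each pixel with features that also include the current labels of its neighbours. The 4-neighbourhood, the iteration count and the use of externally trained classifiers are illustrative assumptions, not the exact ICPC formulation.

import numpy as np

def neighbour_labels(label_img):
    # Stack the labels of the 4-connected neighbours as extra feature channels.
    padded = np.pad(label_img, 1, mode="edge")
    return np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                     padded[1:-1, :-2], padded[1:-1, 2:]], axis=-1)

def icpc_predict(gray_feats, initial_clf, contextual_clf, shape, n_iter=5):
    # gray_feats: (n_pixels, n_features) local gray-level features, in row-major
    # pixel order; shape: (rows, cols); both classifiers are assumed trained.
    labels = initial_clf.predict(gray_feats).reshape(shape)
    for _ in range(n_iter):
        ctx = neighbour_labels(labels).reshape(len(gray_feats), -1)
        feats = np.concatenate([gray_feats, ctx], axis=1)
        labels = contextual_clf.predict(feats).reshape(shape)
    return labels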
Segmentation of thrombus in abdominal aortic aneurysms is complicated by regions of low boundary contrast and by the presence of many neighboring structures in close proximity to the aneurysm wall. We present an automated method that is similar to the well-known Active Shape Models (ASM), combining a three-dimensional shape model with a one-dimensional boundary appearance model. Our contribution is twofold: we develop a non-parametric appearance modeling scheme that effectively deals with a highly varying background, and we propose a way of generalizing models of curvilinear structures from small training sets.
In contrast with the conventional ASM approach, the new appearance model trains on both true and false examples of boundary profiles. The probability that a given image profile belongs to the
boundary is obtained using k-nearest-neighbor (kNN) probability density estimation. The performance of this scheme is compared to that of the original ASMs, which minimize the Mahalanobis distance to the average true profile in the training set. The generalizability of the shape model is improved by modeling the object's axis deformation independently of its cross-sectional deformation.
A leave-one-out experiment was performed on 23 datasets. Segmentation using the kNN appearance model significantly outperformed the original ASM scheme; average volume errors were 5.9% and 46%, respectively.
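The non-parametric appearance model can be sketched as follows: the boundary probability of an intensity profile is estimated as the fraction of boundary-labelled examples among its k nearest neighbours in a training set of true and false boundary profiles. The value of k and the scikit-learn implementation are illustrative assumptions.

import numpy as np
from sklearn.neighbors import NearestNeighbors

class KNNBoundaryModel:
    def __init__(self, profiles, is_boundary, k=30):
        # profiles: (n, profile_length) training profiles sampled along the surface
        # normal; is_boundary: boolean array, True for on-boundary examples.
        self.k = k
        self.is_boundary = np.asarray(is_boundary, dtype=bool)
        self.nn = NearestNeighbors(n_neighbors=k).fit(profiles)

    def boundary_probability(self, profile):
        # Fraction of the k nearest training profiles that lie on the boundary.
        _, idx = self.nn.kneighbors(np.atleast_2d(profile))
        return float(self.is_boundary[idx[0]].mean())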