In this work, we propose a deep U-Net based model to tackle the challenging task of segmenting prostate cancer by aggressiveness in MRI from weak scribble annotations. The model extends the size-constraint loss proposed by Kervadec et al.1 to a multiclass detection and segmentation task. It is of high clinical interest because it allows training on prostate biopsy samples and avoids the time-consuming full annotation process. Performance is assessed on a private dataset (219 patients), where the full ground truth is available, as well as on the ProstateX-2 challenge database, where only biopsy results at different localisations serve as reference. We show that we can approach the fully supervised baseline in grading the lesions while using only 6.35% of the voxels for training. We report a lesion-wise Cohen's kappa score of 0.29 ± 0.07 for the weak model versus 0.32 ± 0.05 for the baseline. We also report a kappa score of 0.276 ± 0.037 on the ProstateX-2 challenge dataset with our weak U-Net trained on a combination of ProstateX-2 and our dataset, which to our knowledge is the highest value reported on this challenge dataset for a segmentation task.
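The size-constraint idea can be illustrated with a minimal sketch: each class's predicted "soft size" (the sum of its softmax probabilities over the volume) is pushed into an admissible interval derived from the scribbles. The function name and the quadratic penalty shape below are our own illustrative choices, not the paper's exact implementation:

```python
import numpy as np

def size_constraint_loss(probs, lower, upper):
    """Penalty on the predicted soft size of one class (the sum of its
    softmax probabilities over all voxels). Zero when the size lies in
    [lower, upper], growing quadratically outside the bounds -- the
    spirit of a Kervadec-style size constraint, applied per class.
    """
    size = float(probs.sum())
    if size < lower:
        return (lower - size) ** 2
    if size > upper:
        return (size - upper) ** 2
    return 0.0

# Toy example: a 10x10 map with uniform probability 0.5 has soft size 50.
probs = np.full((10, 10), 0.5)
in_bounds = size_constraint_loss(probs, 40, 60)    # 0.0
too_small = size_constraint_loss(probs, 60, 80)    # (60 - 50)^2 = 100.0
```

In a real training loop this term would be computed on differentiable tensors and added to the partial cross-entropy on the scribbled voxels.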
We propose a Computer Assisted Diagnosis Interview (CADi) scheme for determining a likelihood measure of prostate
cancer presence in the peripheral zone (PZ) based on multisequence magnetic resonance imaging, including T2-weighted
(T2w), diffusion-weighted (DWI) and dynamic contrast-enhanced (DCE) MRI at 1.5 Tesla (T). Based on a feature set
derived from the gray level images, including first order statistics, Haralick's features, gradient features, semi-quantitative
and quantitative (pharmacokinetic modeling) dynamic parameters, we trained and compared four kinds of
classifiers: Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), k-Nearest Neighbours (KNN) and
Naïve Bayes (NB). The aim is twofold: we try to discriminate between the relevant features as well as creating an
efficient classifier using these features. The database consists of 23 radical prostatectomy patients. Using histologic
sections as the gold standard, both cancers and non-malignant tissues (suspicious and clearly benign) were annotated in
consensus on all MR images by two radiologists, a histopathologist and a researcher. Diagnostic performances were
evaluated based on ROC curve analysis. From the outputs of all evaluated feature-selection methods on the test bench,
we identified a restricted set of about 20 highly informative features. Quantitative evaluation of the diagnostic
performance yielded a maximal Area Under the ROC Curve (AUC) of 0.89. Moreover, the optimal CADi scheme
outperformed, in terms of specificity, our human experts in differentiating malignant from suspicious tissues, thus
demonstrating its potential for assisting cancer identification in the PZ.
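The AUC figure of merit used above has a simple rank-statistic form: it is the probability that a randomly chosen malignant sample scores higher than a randomly chosen benign one. A minimal pure-Python sketch (function and variable names are ours):

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve computed as the normalized Mann-Whitney
    U statistic: the fraction of (positive, negative) pairs in which the
    positive sample's classifier score is higher (ties count as 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Example: scores for 2 malignant and 2 benign samples.
auc = roc_auc([2.0, 3.0], [1.0, 2.0])  # 3.5 / 4 = 0.875
```

This pairwise definition is equivalent to integrating the empirical ROC curve, and it is how AUC comparisons across the SVM, LDA, KNN and NB outputs can be made classifier-agnostic.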
In decision making processes where we have to deal with epistemic uncertainties, the Dempster-Shafer theory (DST) of
evidence and fuzzy logic have gained prominence as the methods of choice over traditional probabilistic methods. The
DST is unfortunately known to give wrong results in situations of high conflict. While some methods have been
proposed in the literature for improving the DST, such as the weighted DST which assumes that we have some
information about the relative reliabilities of the classifiers, we opted to incorporate fuzzy concepts in the DST
framework. This work was motivated by the desire to improve detection performance of a Computer-Aided Detection
(CAD) system under development for the detection of tumors in Positron Emission Tomography (PET) images by fusing
the outputs of multiple classifiers, such as SVM and LDA. A first implementation based on a simple binary
fusion scheme gave a result of 69% true detections with an average of 2.5 false positive detections per 3D image (FPI).
These results prompted the use of the DST, which resulted in 92% detection sensitivity and 25 FPI. As a way of further
reducing the false detections, we chose to tackle the limitations inherent to the DST by principally applying fuzzy
techniques in defining the hypotheses and experimenting with new combination rules. The best result of this modified
DST approach has been 92% true tumor detection with 12 FPI, a twofold reduction in false
detections while maintaining high sensitivity.
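The high-conflict failure mode of the DST mentioned above can be reproduced in a few lines. In this sketch (our own naming; focal elements are modeled as frozensets), Dempster's rule renormalizes the conflict mass K away, which is exactly what produces counterintuitive fusions when K approaches 1:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets of hypotheses. Masses on disjoint pairs are
    summed into the conflict K and then renormalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {h: v / (1.0 - conflict) for h, v in combined.items()}, conflict

# Zadeh's classic example: two sources strongly disagree (A vs C), yet the
# fused belief assigns ALL mass to B, which both considered very unlikely.
A, B, C = frozenset('A'), frozenset('B'), frozenset('C')
fused, K = dempster_combine({A: 0.99, B: 0.01}, {C: 0.99, B: 0.01})
# K = 0.9999 and fused[B] = 1.0
```

Fuzzifying the hypothesis definitions or changing the combination rule, as done in this work, are ways of avoiding this pathological renormalization.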
KEYWORDS: 3D image processing, 3D modeling, Positron emission tomography, Signal to noise ratio, Wavelets, 3D acquisition, Liver, Visualization, Oncology, Databases
This study evaluates new observer models for 3D whole-body Positron Emission Tomography (PET)
imaging based on a wavelet sub-band decomposition and compares them with the classical constant-Q CHO model. Our
final goal is to develop an original method that performs guided detection of abnormal activity foci in PET oncology
imaging based on these new observer models. This computer-aided diagnostic method would be of great benefit to clinicians
for diagnostic purposes and to biologists for large-scale screening of rodent populations in molecular imaging. Method:
We have previously shown good correlation of the channelized Hotelling observer (CHO) using a constant-Q model
with human observer performance for 3D PET oncology imaging. We propose an alternate method based on combining
a CHO observer with a wavelet sub-band decomposition of the image and we compare it to the standard CHO
implementation. This method performs an undecimated transform using a biorthogonal B-spline 4/4 wavelet basis to
extract the feature set for input to the Hotelling observer. This work is based on simulated 3D PET images of an
extended MCAT phantom with randomly located lesions. We compare three evaluation criteria: classification
performance using the signal-to-noise ratio (SNR), computation efficiency and visual quality of the derived 3D maps of
the decision variable λ. The SNR is estimated on a series of test images for a variable number of training images for
both observers. Results: Results show that the maximum SNR is higher with the constant-Q CHO observer, especially
for targets located in the liver, and that it is reached with a smaller number of training images. However, preliminary
analysis indicates that the visual quality of the 3D maps of the decision variable λ is higher with the wavelet-based
CHO, and the computation time to derive a 3D λ-map is about 350 times shorter than for the standard CHO. This
suggests that the wavelet-CHO observer is a good candidate for use in our guided detection method.
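Whichever channel set is used (constant-Q frequency bands or wavelet sub-band filters), the Hotelling step downstream is the same. A minimal NumPy sketch of the channelized Hotelling detectability, with names of our own choosing:

```python
import numpy as np

def cho_snr(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer SNR. Images are flattened to shape
    (n_images, n_pixels); `channels` is an (n_pixels, n_channels) matrix
    whose columns are the channel profiles (e.g. constant-Q bands or
    wavelet sub-band filters)."""
    vs = signal_imgs @ channels            # channel outputs, signal present
    vn = noise_imgs @ channels             # channel outputs, signal absent
    dmean = vs.mean(axis=0) - vn.mean(axis=0)
    # intra-class channel covariance, averaged over the two classes
    cov = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(cov, dmean)        # Hotelling template (channel space)
    return float(np.sqrt(dmean @ w))

# Toy check: unit-variance noise, a mean shift of 1.0 on the first pixel,
# and 4 trivial single-pixel "channels" should give an SNR near 1.
rng = np.random.default_rng(0)
noise = rng.standard_normal((500, 16))
signal = rng.standard_normal((500, 16))
signal[:, 0] += 1.0
snr = cho_snr(signal, noise, np.eye(16)[:, :4])
```

With few training images the channel covariance estimate can be ill-conditioned, which is one reason the number of training images matters in the comparison above.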
A dominant component of image quality for whole-body positron emission tomography (PET) imaging is attenuation, which is determined by patient thickness. This can be partially compensated for by adjusting scan duration. We evaluate the effect of changes in patient thickness and scan duration on lesion detection with model observers. We simulated 2D PET acquisitions of an anthropomorphic phantom with spherical target lesions. Three different anthropomorphic phantoms were used, with effective abdominal diameters of 20 cm, 27 cm, and 35 cm. The diameters of the lesions were varied from 1.0 to 3.0 cm, and the contrast ratios of the lesions were varied from 1.5 to 4.0. Noise-free scans were simulated with an analytical simulator. Poisson noise was added to simulate scan durations ranging from 1 to 10 minutes per bed position, using noise equivalent count rates previously measured using a modified NEMA NU2 countrate phantom. The average detectability of each target lesion under each condition was calculated using a non-prewhitening matched filter from 25 noisy realizations for each combination of parameters. Our results demonstrate the variation of the minimum scan duration required to detect a target of a given size and contrast ratio, for any fixed threshold of detectability. For image quality to remain constant for patients with larger cross-sectional areas, acquisition times should be increased accordingly, although in some cases this may not be possible due to practical constraints.
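The non-prewhitening matched-filter figure of merit used in this study has a compact form: the template is the expected (noise-free) difference image, and detectability is its self-response over the standard deviation of its response to noise. A sketch under those assumptions (names ours):

```python
import numpy as np

def npw_snr(delta_signal, noise_imgs):
    """Non-prewhitening matched-filter detectability. `delta_signal` is
    the flattened noise-free lesion-present minus lesion-absent image;
    `noise_imgs` is a stack of flattened noise-only realizations, shape
    (n_realizations, n_pixels)."""
    responses = noise_imgs @ delta_signal        # template response to noise
    return float((delta_signal @ delta_signal) / responses.std())

# Toy check: a flat unit template over 10 pixels in i.i.d. unit-variance
# noise has theoretical SNR sqrt(10) ~ 3.16.
rng = np.random.default_rng(1)
snr = npw_snr(np.ones(10), rng.standard_normal((5000, 10)))
```

Sweeping this SNR over lesion diameter, contrast ratio and scan duration (via the Poisson noise level) is what yields the minimum-scan-duration curves described above.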
KEYWORDS: 3D modeling, 3D acquisition, Signal to noise ratio, Positron emission tomography, Performance modeling, 3D image processing, Smoothing, Data modeling, Target detection, Optical spheres
This work presents initial results of comparisons between planar and volumetric observer detection task performance for both human and model observers. Positron Emission Tomography (PET) imaging acquires and reconstructs tomographic images as contiguous volumetric (3D) images. Consequently, physicians typically interpret these images by searching the image volume using linked orthogonal planar images in the three standard orientations (transverse, sagittal, and coronal). Most observer studies, however, have used planar images for evaluation. For human observer ROC studies, an observer scoring tool, similar to the display tool used in clinical PET oncology imaging, has been developed. For model observer studies, the non-prewhitening matched filter (NPWMF) and the channelized Hotelling observer (CHO) were used to compute detectabilities as figures-of-merit for class separation. For the volumetric (3D) model observers, the entire image volume is used with appropriate 3D templates. For the planar (2D) model observers, the transaxial plane centered on the target sphere is extracted and analyzed using 2D templates. Multiple realizations were generated using a non-Monte Carlo analytic simulator, which keeps the simulation time feasible while producing statistically accurate noise properties. For comparison, the correlations between each model observer's and the human observers' performance were computed. The results showed that 3D model observers have a higher correlation with human observers than 2D observers do when axial smoothing is not applied. With axial smoothing, however, the correlation of the 2D model observers generally increased to the level of the 3D model observer correlations with the human observer.
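The model-human agreement reported above is a correlation across the set of test conditions. For completeness, a minimal pure-Python sketch of the Pearson correlation used in such comparisons (our naming):

```python
import math

def pearson_r(model_scores, human_scores):
    """Pearson correlation between model-observer and human-observer
    detectability scores measured over the same set of conditions."""
    n = len(model_scores)
    mx = sum(model_scores) / n
    my = sum(human_scores) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(model_scores, human_scores))
    sxx = sum((a - mx) ** 2 for a in model_scores)
    syy = sum((b - my) ** 2 for b in human_scores)
    return sxy / math.sqrt(sxx * syy)
```

For example, `pearson_r([1, 2, 3], [2, 4, 6])` is exactly 1.0, since the scores are perfectly linearly related.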
This work presents initial results from observer detection performance studies using the same volume visualization software tools that are used in clinical PET oncology imaging. Research into the FORE+OSEM and FORE+AWOSEM statistical image reconstruction methods tailored to whole-body 3D PET oncology imaging has indicated potential improvements in image SNR compared to currently used analytic reconstruction methods (FBP). To assess the resulting impact of these reconstruction methods on the performance of human observers in detecting and localizing tumors, we use a non-Monte Carlo technique to generate multiple statistically accurate realizations of 3D whole-body PET data, based on an extended MCAT phantom and with clinically realistic levels of statistical noise. For each realization, we add a fixed number of randomly located 1 cm diameter lesions whose contrast is varied among pre-calibrated values so that the range of true positive fractions is well sampled. The observer is told the number of tumors and, similar to the AFROC method, asked to localize all of them. The true positive fraction for the three algorithms (FBP, FORE+OSEM, FORE+AWOSEM) as a function of lesion contrast is calculated, although other protocols could be compared. A confidence level for each tumor is also recorded for incorporation into later AFROC analysis.