Background. Prostate segmentation is a crucial step in computer-aided systems for prostate cancer detection. Multi-planar acquisitions are commonly used by clinicians to obtain a more accurate patient diagnosis, but their relevance for fully automated prostate segmentation algorithms has not been assessed. To date, this limited assessment stems from the fact that acquiring both axial and sagittal prostate imaging views, as opposed to a single view, doubles the acquisition time. In this work, we assess the relevance of multi-planar imaging for prostate segmentation within a deep learning segmentation framework.
Materials and Methods. We propose a deep learning framework for prostate segmentation from either axial alone, or axial and sagittal, T2-weighted magnetic resonance images (MRI). The system is based on an ensemble of convolutional neural networks, each trained independently on a single imaging view. We compare single-view (axial) segmentations to those obtained from two imaging views (axial and sagittal) to assess the relevance of using multi-planar acquisitions. Algorithm performance was assessed in two ways: 1) the global Dice score between the algorithm's predictions and the segmentations of an experienced reader, and 2) the number of lesions located within the algorithm's predicted segmentation. A subset of 80 patients from the public PROSTATEx-2 database containing both axial and sagittal T2-weighted MRIs was used for this study.
Results. The multi-planar network outperformed the network trained on axial views alone according to both proposed metrics: a statistically significant 4% increase in Dice score was found, along with a 9% increase in the number of lesions within the predicted segmentation.
Conclusions. The proposed method allows fully automatic segmentation of the prostate from single- or multi-view MRI and assesses the relevance of multi-planar MRI acquisitions for fully automatic prostate segmentation algorithms.
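For concreteness, the Dice score referenced above can be computed from two binary segmentation masks as sketched below. This is a generic illustration of the standard definition, not the authors' implementation; the function name and the use of NumPy are our own assumptions.

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    intersection = np.logical_and(pred, target).sum()
    # Dice = 2 |A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)
    return 2.0 * intersection / total
```

In a multi-planar setting such as the one described, this score would be computed between the reader's mask and the (possibly fused) prediction of the network ensemble.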
Background The extraction and analysis of image features (radiomics) is a promising field in the precision medicine era,
with applications to prognosis, prediction, and quantification of response to treatment. In this work, we present a
mutual-information-based method for quantifying the reproducibility of features, a necessary qualification step before their
inclusion in big data systems.
Materials and Methods Ten patients with Non-Small Cell Lung Cancer (NSCLC) lesions were followed over time (7
time points on average) with Computed Tomography (CT). Five observers segmented the lesions using a semi-automatic
method, and 27 features describing shape and intensity distribution were extracted. Inter-observer reproducibility was
assessed by computing the multi-information (MI) of feature changes over time and the variability of global extrema.
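As a rough sketch of the multi-information idea (our own illustration with hypothetical names, not the estimator used in the study, which would require careful binning choices for small samples), MI, also known as total correlation, is the sum of the marginal entropies of the observers' series minus their joint entropy:

```python
import numpy as np

def multi_information(series, bins=4):
    """Multi-information (total correlation) of k discretized series.

    series: array of shape (k, n), e.g. one row per observer and one
    column per time point, holding feature changes. MI is zero when the
    series are independent and grows with inter-observer agreement.
    """
    series = np.asarray(series, dtype=float)
    # Discretize each row into equal-width bins over its own range
    digitized = np.stack([
        np.digitize(row, np.histogram_bin_edges(row, bins=bins)[1:-1])
        for row in series
    ])

    def H(rows):
        # Shannon entropy (bits) of the joint distribution of the given rows
        _, counts = np.unique(rows.T, axis=0, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    marginals = sum(H(digitized[i:i + 1]) for i in range(len(digitized)))
    return marginals - H(digitized)
```

Under this definition, identical series across observers (perfect reproducibility) yield a large positive MI, while unrelated series yield values near zero, which is what allows MI to rank features by inter-observer reproducibility.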
Results The highest MI values were obtained for volume-based features (VBF). The lesion mass (M), surface-to-volume
ratio (SVR), and volume (V) presented statistically significantly higher MI values than the remaining features. Within the
same VBF group, SVR also showed the lowest variability of extrema. The correlation coefficient (CC) of feature values
was unable to discriminate between features.
Conclusions MI discriminated three features (M, SVR, and V) from the rest in a statistically significant manner.
This result is consistent with the ordering obtained when sorting features by increasing extrema variability. MI is a
promising alternative for selecting features to be considered as surrogate biomarkers in a precision medicine context.