For 3D ultrasound (US) images with large slice thickness, high-frequency information in the slice direction is missing and cannot be recovered by interpolation. Because this is an ill-posed problem, current super-resolution methods rely on external training atlases to learn the transform from low-resolution to high-resolution images. In this study, we propose a self-supervised learning method that uses no external atlas images, yet can still recover high-resolution images relying only on the acquired image with a large slice thickness. To circumvent the lack of training data, simulated training data were derived from the input image itself: each 2D sagittal slice was regarded as a high-resolution image, while each coronal and axial slice was regarded as a low-resolution image. By training a deep-learning-based model on the sagittal slices, we can apply the learned mapping to the low-resolution coronal and axial slices, and thereby estimate high-resolution images with thin slice thickness from an acquisition with large slice thickness. The proposed algorithm was evaluated on 30 sets of US breast data. The US image downsampled along the z-axis served as the low-resolution input, and the original US image served as the ground truth. The normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) were used to quantify the accuracy of the proposed algorithm. The NMAE, PSNR, and NCC were 0.011±0.02, 34.6±2.14 dB, and 0.98±0.01, respectively. The proposed method produced image quality similar to the ground truth.
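The three evaluation metrics above can be sketched as follows. This is a minimal sketch: the abstract does not specify the normalization constant for NMAE or the peak value for PSNR, so using the ground-truth intensity range and maximum, respectively, are assumptions.

```python
import numpy as np

def nmae(pred, gt):
    """Normalized mean absolute error: MAE divided by the ground-truth
    intensity range (normalization choice is an assumption)."""
    return np.mean(np.abs(pred - gt)) / (gt.max() - gt.min())

def psnr(pred, gt):
    """Peak signal-to-noise ratio in dB, taking the ground-truth maximum
    as the peak value (an assumption)."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(gt.max() ** 2 / mse)

def ncc(pred, gt):
    """Normalized cross-correlation of zero-mean, unit-variance intensities."""
    p = (pred - pred.mean()) / pred.std()
    g = (gt - gt.mean()) / gt.std()
    return np.mean(p * g)
```

In this evaluation, `gt` would be the original thin-slice US volume and `pred` the volume reconstructed from its z-downsampled counterpart.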
Glioblastoma (GBM) is the most frequent and lethal primary brain cancer. Due to its therapeutic resistance and aggressiveness, clinical management is challenging. This study aims to develop a machine-learning-based classification method using radiomic features of multiparametric MRI to distinguish high-grade (HG) from low-grade (LG) GBMs. Multiparametric MRI scans of 50 patients with GBM, 25 HG and 25 LG, were used. Each patient had T1, contrast-enhanced T1, T2, and FLAIR MRI, along with provided tumor contours, which were used to extract radiomic features from the multiparametric MRI. Once the features were extracted, the most significant and informative ones were selected to train random forests to differentiate HG and LG GBMs, while varying feature-correlation limits were applied to remove redundant features. Random forests with leave-one-out cross-validation were then applied to the dataset to classify each case as HG or LG. The prediction accuracy, receiver operating characteristic (ROC) curves, and area under the curve (AUC) were obtained at each correlation limit to evaluate the performance of the classification. The best-performing parameters consistently achieved a prediction accuracy of 0.920 (46 of 50 patients; 22/25 for HG and 24/25 for LG) with an AUC of 0.962. We investigated the process of distinguishing HG from LG GBM using multiparametric MRI, radiomic features, and machine learning. Our results show that the grade of GBM can be predicted accurately and consistently using the proposed machine-learning-based radiomics approach.
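The correlation-limited feature selection and leave-one-out random-forest evaluation described above can be sketched with scikit-learn on synthetic data. This is an illustrative sketch, not the authors' implementation: the greedy correlation filter, the `limit` value, and the forest size are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut

def drop_correlated(X, limit):
    """Greedily keep each feature (in column order) only if its absolute
    Pearson correlation with every already-kept feature is <= limit."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= limit for k in keep):
            keep.append(j)
    return keep

def loo_accuracy(X, y, limit, n_trees=100, seed=0):
    """Leave-one-out accuracy of a random forest trained on the
    correlation-filtered feature matrix."""
    Xs = X[:, drop_correlated(X, limit)]
    correct = 0
    for train, test in LeaveOneOut().split(Xs):
        clf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
        clf.fit(Xs[train], y[train])
        correct += int(clf.predict(Xs[test])[0] == y[test][0])
    return correct / len(y)
```

In the study, the rows of `X` would be the 50 patients, the columns the radiomic features extracted within the tumor contours, and `loo_accuracy` would be swept over a range of correlation limits to find the best-performing setting.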