Computer-aided diagnosis plays an important role in clinical image diagnosis. Current clinical image classification tasks usually focus on binary classification, which requires collecting samples for both the positive and negative classes in order to train a binary classifier. However, in many clinical scenarios, one class may have far more samples than the other, resulting in data imbalance. Data imbalance is a severe problem that can substantially degrade the performance of binary-class machine learning models. To address this issue, one-class classification, which learns features from the samples of a single given class, has been proposed. In this work, we assess the one-class support vector machine (OCSVM) on classification tasks over two highly imbalanced datasets, namely, space-occupying kidney lesion data (renal cell carcinoma versus benign lesions) and breast cancer distant metastasis/non-metastasis imaging data. Experimental results show that the OCSVM exhibits promising performance compared to binary-class and other one-class classification methods.
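The one-class setting described above can be illustrated with scikit-learn's `OneClassSVM`: the model is fit only on samples from the well-represented class and then flags unseen samples as inliers or outliers. The feature vectors and hyperparameters below are hypothetical stand-ins, not the values used in the study; this is a minimal sketch of the technique, not the paper's pipeline.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical toy data standing in for image-derived feature vectors:
# the model is trained only on samples from the majority ("normal") class.
rng = np.random.default_rng(0)
normal_features = rng.normal(loc=0.0, scale=1.0, size=(200, 8))
outlier_features = rng.normal(loc=5.0, scale=1.0, size=(10, 8))

# nu upper-bounds the fraction of training samples treated as outliers;
# the RBF kernel and gamma here are illustrative defaults, not tuned values.
ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
ocsvm.fit(normal_features)

# predict() returns +1 for samples judged to belong to the learned class
# and -1 for samples flagged as outliers.
pred_normal = ocsvm.predict(normal_features)
pred_outlier = ocsvm.predict(outlier_features)
accept_rate = (pred_normal == 1).mean()
reject_rate = (pred_outlier == -1).mean()
```

Because only one class is needed at training time, the severely under-represented class never has to be balanced or resampled, which is exactly what makes this family of methods attractive for imbalanced clinical data.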
Breast magnetic resonance imaging (MRI) plays an important role in high-risk breast cancer screening, clinical problem-solving, and imaging-based outcome prediction. Breast tumor segmentation in MRI is an essential step for quantitative radiomics analysis, where automated and accurate tumor segmentation is needed but very challenging. Automated breast tumor segmentation methods have been proposed and can achieve promising results. However, these methods still require a pre-defined region of interest (ROI) before performing segmentation, which prevents them from running fully automatically. In this paper, we investigated an automated localization and segmentation method for breast tumors in breast Dynamic Contrast-Enhanced MRI (DCE-MRI) scans. The proposed method takes advantage of a kinetic prior and deep learning for automatic tumor localization and segmentation. We implemented our method and evaluated its performance on a dataset of 74 breast MR images, quantitatively comparing the segmentations with manual annotations from an expert radiologist. Experimental results showed that the automated breast tumor segmentation method exhibits promising performance, with an average Dice coefficient of 0.89 ± 0.06.
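The abstract does not spell out the kinetic prior, but a common way to exploit kinetic information in DCE-MRI is a relative-enhancement map between a post-contrast and a pre-contrast frame, since tumors enhance strongly after contrast injection. The sketch below is an assumption-laden illustration of that idea on synthetic volumes, not a reconstruction of the paper's actual localization step.

```python
import numpy as np

# Synthetic stand-ins for one pre-contrast and one post-contrast DCE-MRI frame
# (z, y, x); the enhancing block plays the role of a tumor.
rng = np.random.default_rng(1)
pre = rng.uniform(0.0, 0.2, size=(32, 64, 64))
post = pre.copy()
post[10:16, 20:30, 20:30] += 0.8  # strongly enhancing "tumor" region

# Relative enhancement map: voxels that brighten sharply after contrast
# injection are candidate tumor locations.
eps = 1e-6
enhancement = (post - pre) / (pre + eps)

# Thresholding the map gives a coarse localization mask whose centroid
# could seed a downstream segmentation model (threshold is illustrative).
mask = enhancement > 2.0
zs, ys, xs = np.nonzero(mask)
center = (int(zs.mean()), int(ys.mean()), int(xs.mean()))
```

A prior of this kind can replace the manually drawn ROI that earlier methods required, which is what allows the pipeline to run end to end without user input.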
Breast magnetic resonance imaging (MRI) plays an important role in high-risk breast cancer screening, clinical problem-solving, and imaging-based outcome prediction. Breast tumor segmentation in MRI is an essential step for quantitative radiomics analysis, where automated and accurate tumor segmentation is needed but very challenging. Manual tumor annotation by radiologists requires medical knowledge and is time-consuming, subjective, error-prone, and subject to inter-user inconsistency. Several recent studies have demonstrated the capability of deep-learning models for image segmentation. In this work, we investigated a deep-learning-based method to segment breast tumors in Dynamic Contrast-Enhanced MRI (DCE-MRI) scans in both 2D and 3D settings. We implemented our method and evaluated its performance on a dataset of 1,246 breast MR images by comparing the segmentations to manual annotations from expert radiologists. Experimental results showed that the deep-learning-based methods exhibit promising performance, with a best Dice coefficient of 0.92 ± 0.02.
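The Dice coefficient reported above is a standard overlap measure between a predicted mask and the manual annotation, defined as 2|A∩B| / (|A| + |B|). A minimal implementation on toy 2D masks (the masks themselves are made up for illustration):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy example: two partially overlapping square "tumor" masks.
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True
pred = np.zeros((64, 64), dtype=bool)
pred[24:44, 24:44] = True
score = dice_coefficient(pred, truth)  # 2*256 / (400+400) = 0.64
```

The same formula applies unchanged to 3D volumes, which is why it serves as a common yardstick for both the 2D and 3D settings evaluated in the work.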
In this study, we proposed a multi-space-enabled deep learning method for predicting Oncotype DX recurrence risk categories from digital mammogram images of breast cancer patients. Our study included 189 estrogen receptor-positive (ER+), node-negative invasive breast cancer patients, all of whom had an Oncotype DX recurrence risk score available. Breast tumors were segmented manually by an expert radiologist. We built a 3-channel convolutional neural network (CNN) model that accepts tumor data from three spaces: the spatial intensity information and the phase and amplitude components in the frequency domain. We compared this multi-space model to a baseline model based solely on the intensity information. Classification performance was evaluated using 5-fold cross-validation and the average area under the receiver operating characteristic curve (AUC). Our results showed that the 3-channel multi-space CNN model achieved a statistically significant improvement over the baseline model.
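The three input spaces above can be assembled with a 2-D FFT: the amplitude and phase of the spectrum are stacked with the raw intensity patch into a 3-channel tensor. The patch, log scaling, and channels-last layout below are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

# Hypothetical tumor patch standing in for a segmented mammogram ROI.
rng = np.random.default_rng(2)
patch = rng.uniform(0.0, 1.0, size=(64, 64)).astype(np.float32)

# Frequency-domain components: amplitude and phase of the 2-D FFT,
# with the zero-frequency term shifted to the center of the spectrum.
spectrum = np.fft.fftshift(np.fft.fft2(patch))
amplitude = np.log1p(np.abs(spectrum))  # log scaling tames the dynamic range
phase = np.angle(spectrum)              # values in [-pi, pi]

# Stack intensity, amplitude, and phase into one 3-channel input
# (channels-last here; the actual layout depends on the CNN framework).
multi_space_input = np.stack([patch, amplitude, phase], axis=-1)
```

Feeding the frequency-domain channels alongside the spatial intensity lets the CNN see texture periodicity and structural information that the intensity-only baseline has to infer implicitly.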