Breast magnetic resonance imaging (MRI) plays an important role in high-risk breast cancer screening, clinical problem-solving, and imaging-based outcome prediction. Breast tumor segmentation in MRI is an essential step for quantitative radiomics analysis, where automated and accurate tumor segmentation is needed but very challenging. Automated breast tumor segmentation methods have been proposed and can achieve promising results. However, these methods still require a pre-defined region of interest (ROI) before performing segmentation, which prevents them from running fully automatically. In this paper, we investigated an automated localization and segmentation method for breast tumors in breast Dynamic Contrast-Enhanced MRI (DCE-MRI) scans. The proposed method takes advantage of a kinetic prior and deep learning for automatic tumor localization and segmentation. We implemented our method and evaluated its performance on a dataset consisting of 74 breast MR images. We quantitatively evaluated the proposed method by comparing the segmentation with the manual annotation from an expert radiologist. Experimental results showed that the automated breast tumor segmentation method exhibits promising performance with an average Dice Coefficient of 0.89±0.06.
Breast magnetic resonance imaging (MRI) plays an important role in high-risk breast cancer screening, clinical problem-solving, and imaging-based outcome prediction. Breast tumor segmentation in MRI is an essential step for quantitative radiomics analysis, where automated and accurate tumor segmentation is needed but very challenging. Manual tumor annotation by radiologists requires medical knowledge and is time-consuming, subjective, error-prone, and subject to inter-reader inconsistency. Several recent studies have shown the effectiveness of deep-learning models for image segmentation. In this work, we investigated a deep-learning based method to segment breast tumors in Dynamic Contrast-Enhanced MRI (DCE-MRI) scans in both 2D and 3D settings. We implemented our method and evaluated its performance on a dataset of 1,246 breast MR images by comparing the segmentation to the manual annotations from expert radiologists. Experimental results showed that the deep-learning-based methods exhibit promising performance with a best Dice Coefficient of 0.92 ± 0.02.
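Both segmentation studies above report performance as a Dice Coefficient against expert annotations. As a minimal illustration (not the authors' evaluation code), the metric can be computed from two binary masks as follows; the function name and the empty-mask convention are our own choices:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |pred ∩ truth| / (|pred| + |truth|),
    ranging from 0 (no overlap) to 1 (perfect overlap).
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        # Convention (our assumption): two empty masks agree perfectly.
        return 1.0
    return 2.0 * intersection / total
```

A reported score such as 0.92 ± 0.02 would then be the mean and standard deviation of this value over the test cases.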
Identification of malignancy and false recalls (women who are recalled in screening for additional workup, but later proven benign) in screening mammography has significant clinical value for accurate diagnosis of breast cancer. Deep learning methods have recently shown success in the area of medical imaging classification. However, many different training strategies exist that can significantly impact the overall model performance for a specific classification task. In this study, we aimed to investigate the impact of training strategy on classification of digital mammograms by performing a robustness analysis of deep learning models to distinguish malignancy and false recalls from normal (benign) findings. Specifically, we employed several pre-training strategies including transfer learning with medical and non-medical datasets, layer freezing, and varied network structure on both binary and three-class classification tasks of digital mammography images. We found that, overall, deep learning models appear to be robust to some modifications of network structure and pre-training strategy that we tested for mammogram-specific classification tasks. However, for specific classification tasks, some training strategies offer performance gains. The most notable performance gains in our experiments involved residual network models.
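The layer-freezing strategy mentioned above keeps early feature-extraction layers fixed while fine-tuning later layers and the classification head. A minimal PyTorch sketch of the idea is shown below; the tiny CNN stands in for a pretrained backbone (the study's actual architectures, such as residual networks, are not reproduced here), and the three-class head mirrors the malignant / false-recall / normal task:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained feature extractor.
backbone = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
# New classification head: malignant, false recall, normal.
head = nn.Linear(16, 3)
model = nn.Sequential(backbone, head)

# Layer freezing: exclude backbone weights from gradient updates,
# so only the head is trained on the mammography data.
for p in backbone.parameters():
    p.requires_grad = False
```

In practice one would load ImageNet or medical-dataset weights into the backbone before freezing, and could selectively unfreeze later stages to vary how much of the network adapts to the target task.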
The essential sequences in breast magnetic resonance imaging (MRI) are the dynamic contrast-enhanced (DCE) images, which are widely used in clinical settings. Diffusion-weighted imaging (DWI) MRI also plays an important role in many diagnostic applications and in developing novel imaging biomarkers. Compared to DCE MRI, technical advantages of DWI include a shorter acquisition time, no need for administration of any contrast agent, and availability on most commercial scanners. Segmenting the whole-breast region is an essential pre-processing step in many quantitative and radiomics breast MRI studies. However, it is a challenging task for computerized methods due to the low intensity contrast along the breast-chest wall boundary. While several studies have reported computational methods for automated whole-breast segmentation in DCE MRI, segmentation in DWI MRI is still underdeveloped. In this paper, we propose to use deep learning and transfer learning methods to segment the whole breast in DWI MRI, by leveraging pretraining on a DCE MRI dataset. Experiments are reported on multiple breast MRI datasets, including an external evaluation dataset, and encouraging results are demonstrated.