Using ResNet feature extraction in computer-aided diagnosis of breast cancer on 927 lesions imaged with multiparametric MRI
16 March 2020
In this study, we aim to develop a multiparametric breast MRI computer-aided diagnosis (CADx) methodology that uses residual neural network (ResNet) deep transfer learning to incorporate information from both dynamic contrast-enhanced (DCE)-MRI and T2-weighted (T2w) MRI in the task of distinguishing between benign and malignant breast lesions. This retrospective study included 927 unique lesions from 616 women who underwent breast MR exams. A pre-trained ResNet50 was used to extract features separately from the maximum intensity projection (MIP) images of the second postcontrast subtraction DCE series and from the center slice of the T2w series. Support vector machine classifiers were trained on the ResNet features to differentiate between benign and malignant lesions. The benefit of pooling features extracted from multiple levels of the network was examined on the DCE MIPs. Three multiparametric methods were investigated, in which information from the two sequences was integrated at the image level, the feature level, or the classifier level. Classification performance was evaluated with five-fold cross-validation using the area under the receiver operating characteristic curve (AUC) as the figure of merit. Pooling features extracted from multiple layers of the ResNet statistically significantly outperformed using only the features extracted from the end of the network (P = .002; 95% CI of ΔAUC: [0.007, 0.029]). The multiparametric classifiers using pooled features yielded AUC_ImageFusion = 0.85 ± 0.01, AUC_FeatureFusion = 0.87 ± 0.01, and AUC_ClassifierFusion = 0.86 ± 0.01, respectively. The feature fusion method statistically significantly outperformed using DCE alone (P = .01; 95% CI of ΔAUC: [0.004, 0.022]), and all three multiparametric methods statistically significantly outperformed using T2w alone (P < .001).
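As a rough illustration of the kind of pipeline described in the abstract (not the authors' implementation), the sketch below extracts features from a pre-trained ResNet50 at several network depths, global-average-pools and concatenates them, and evaluates an SVM with five-fold cross-validated AUC. The choice of PyTorch/torchvision and scikit-learn, the layer names hooked, the SVM kernel, and the placeholder variables mip_images, labels, and pooled_features are all assumptions for illustration; details such as MIP construction and lesion cropping are omitted.

```python
# Minimal sketch (assumptions noted above): pooled multi-level ResNet50 features + SVM with 5-fold CV AUC.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Pre-trained ResNet50 used purely as a fixed feature extractor (transfer learning without fine-tuning)
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.eval()

# Hook the outputs of several residual stages so features can be pooled from multiple network depths
taps = {}
for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(resnet, name).register_forward_hook(
        lambda module, inp, out, name=name: taps.__setitem__(name, out)
    )

preprocess = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224), antialias=True),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def pooled_features(image_rgb):
    """Global-average-pool feature maps from multiple ResNet50 stages and concatenate them."""
    x = preprocess(image_rgb).unsqueeze(0)
    with torch.no_grad():
        resnet(x)
    feats = [taps[k].mean(dim=(2, 3)).squeeze(0) for k in ["layer1", "layer2", "layer3", "layer4"]]
    return torch.cat(feats).numpy()

# `mip_images` (RGB lesion MIPs) and `labels` (benign/malignant) are hypothetical placeholders.
# X = np.stack([pooled_features(img) for img in mip_images])
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# aucs = cross_val_score(clf, X, labels, cv=5, scoring="roc_auc")
# print(f"Cross-validated AUC: {aucs.mean():.2f} ± {aucs.std():.2f}")
```

In this sketch, feature-level fusion as mentioned in the abstract would amount to concatenating the pooled feature vectors from the DCE MIP and the T2w center slice before training a single SVM, whereas classifier-level fusion would train one SVM per sequence and combine their output scores.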
Qiyuan Hu, Heather M. Whitney, Maryellen L. Giger, "Using ResNet feature extraction in computer-aided diagnosis of breast cancer on 927 lesions imaged with multiparametric MRI," Proc. SPIE 11314, Medical Imaging 2020: Computer-Aided Diagnosis, 1131411 (16 March 2020); https://doi.org/10.1117/12.2548872