Architectural distortion (AD) is one of the most important potentially ominous signs of breast cancer. As a 3D imaging modality, digital breast tomosynthesis (DBT) is an accurate tool for detecting AD. We developed a deep learning approach for AD detection in DBT guided by the mammary gland spatial pattern (MGSP). The approach consists of two stages: 2D detection and 3D aggregation. In 2D detection, prior MGSP information is obtained first; it includes 1) a magnitude image and an orientation field map produced by Gabor filters and 2) a mammary gland convergence map. A Faster R-CNN detection network is then employed: the region proposal network extracts features and determines the locations of AD candidates, and a soft classifier is used to reduce false positives. In 3D aggregation, a region fusion strategy is designed to fuse the 2D candidates into 3D candidates. For evaluation, 265 DBT volumes (138 with ADs and 127 without any lesion) were collected from 68 patients. A free-response receiver operating characteristic (FROC) curve was obtained, and the mean true positive fraction (MTPF) was used as the figure of merit of model performance. In six-fold cross-validation, our proposed approach achieved an MTPF of 0.50 ± 0.04, while a baseline model based on a convergence measure achieved 0.37 ± 0.03. The improvement of our approach was statistically significant (p ≪ 0.001).
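The Gabor-derived MGSP inputs (magnitude image and orientation field map) could be computed along the following lines. This is a minimal sketch, not the paper's implementation: the kernel parameters (`sigma`, `lambd`, `size`) and the 12-orientation bank are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernels(theta, sigma=4.0, lambd=10.0, size=21):
    """Even (cosine) and odd (sine) Gabor kernels at orientation theta (radians).
    Parameter values are illustrative assumptions, not the study's settings."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return gauss * np.cos(2 * np.pi * xr / lambd), gauss * np.sin(2 * np.pi * xr / lambd)

def gabor_magnitude_orientation(image, n_orient=12):
    """Filter with a bank of oriented Gabor pairs; return the per-pixel
    maximum magnitude (magnitude image) and the orientation that
    produced it (orientation field map)."""
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    mags = []
    for t in thetas:
        even, odd = gabor_kernels(t)
        re = fftconvolve(image, even, mode="same")
        im = fftconvolve(image, odd, mode="same")
        mags.append(np.hypot(re, im))  # quadrature-pair magnitude
    mags = np.stack(mags)
    magnitude = mags.max(axis=0)
    orientation = thetas[mags.argmax(axis=0)]
    return magnitude, orientation
```

The orientation field obtained this way is what a convergence map would be built from (each pixel votes along its dominant gland orientation); the convergence computation itself is omitted here.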
We are developing a U-Net-based deep learning (U-DL) model for bladder segmentation in CT urography (CTU) as part of a computer-assisted bladder cancer detection and treatment response assessment pipeline. We previously developed a bladder segmentation method that used a deep-learning convolutional neural network and level sets (DCNN-LS) within a user-input bounding box. The new method requires neither a user-input box nor level sets for postprocessing. To identify the best model for this task, we compared a number of U-DL configurations: 1) 2D CTU slices or the 3D volume as input, 2) different image resolutions, and 3) preprocessing with and without automated cropping of each slice. We evaluated the segmentation performance of the different U-DL models using 3D hand-segmented contours as the reference standard. Segmentation accuracy was quantified by the average volume intersection ratio (AVI), average percent volume error (AVE), average absolute volume error (AAVE), average minimum distance (AMD), and Jaccard index (JI) for a data set of 81 training/validation and 92 independent test cases. For the test set, the best 2D U-DL model achieved AVI, AVE, AAVE, AMD, and JI values of 93.4±9.5%, -4.2±14.2%, 9.2±11.5%, 2.7±2.5 mm, and 85.0±11.3%, respectively, while the best 3D U-DL model achieved 90.6±11.9%, -2.3±21.7%, 11.5±18.5%, 3.1±3.2 mm, and 82.6±14.2%, respectively. For comparison, the corresponding values obtained with our previous DCNN-LS method on the same test set were 81.9±12.1%, 10.2±16.2%, 14.0±13.0%, 3.6±2.0 mm, and 76.2±11.8%, respectively. The U-DL model provided highly accurate bladder segmentation and was more automated than the previous approach.
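The volume-overlap metrics above can be sketched from binary masks as follows. The exact definitions used in the study (denominator and sign conventions) are not given in the abstract, so this is one plausible reading; the surface-based AMD metric is omitted.

```python
import numpy as np

def volume_metrics(seg, ref):
    """Per-case overlap metrics from binary masks of the same shape.
    Assumed conventions (not confirmed by the abstract):
      VI = |seg ∩ ref| / |ref|   (volume intersection ratio)
      VE = (|seg| - |ref|) / |ref|  (signed percent volume error)
      JI = |seg ∩ ref| / |seg ∪ ref|  (Jaccard index)
    AVI/AVE/AAVE/JI in the study are these values averaged over cases,
    with AAVE averaging |VE|."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    v_seg, v_ref = seg.sum(), ref.sum()
    return {
        "VI": inter / v_ref,
        "VE": (v_seg - v_ref) / v_ref,
        "JI": inter / union,
    }
```

Note that VI alone can be inflated by over-segmentation (a mask covering the whole volume has VI = 1), which is why it is reported together with the signed/absolute volume errors and the Jaccard index.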
Breast density is one of the strongest risk factors for breast cancer. The purpose of this study is to develop a deep learning model for BI-RADS density classification on digital mammograms (DMs). With IRB approval, 2581 DMs were retrospectively collected from 672 women at our institution. We designed a multi-path deep convolutional neural network (MP-DCNN) to classify each DM into one of the four BI-RADS density categories. The MP-DCNN has four inputs: (1) the subsampled DM (800 μm pixel spacing), (2) a mask of the dense area (MDA) obtained with a U-Net (800 μm pixel spacing), (3) the largest square region of interest (ROI) within the mammographic breast (100 μm pixel spacing), and (4) the automated percentage of breast density (PD). As the baseline, a single-path DCNN with the subsampled DM (800 μm pixel spacing) as input was used. An experienced Mammography Quality Standards Act (MQSA) radiologist provided the BI-RADS density category and the PD by interactive thresholding as the reference standards. With ten-fold cross-validation, the BI-RADS categories assigned by the MP-DCNN for 2068 of the 2581 cases agreed with the radiologist's assessment (accuracy = 80.7%, weighted kappa = 0.83), and the accuracy reached 89.0% when the breasts were dichotomized as non-dense (BI-RADS A & B) versus dense (BI-RADS C & D). For comparison, the single-path DCNN baseline agreed with the radiologist in 1906 of the 2581 cases (accuracy = 73.8%, weighted kappa = 0.75). The improvement in BI-RADS classification from the baseline to the MP-DCNN was statistically significant (p < 0.001).
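The weighted kappa used to summarize four-category agreement can be sketched as below. The abstract does not state whether linear or quadratic weights were used; quadratic weighting is a common choice for ordinal scales such as BI-RADS density, so it is the default here as an assumption.

```python
import numpy as np

def weighted_kappa(rater_a, rater_b, n_classes=4, power=2):
    """Weighted Cohen's kappa for ordinal labels 0..n_classes-1.
    power=2 gives quadratic weights, power=1 linear (assumed, not
    confirmed by the study)."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    observed = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()
    # chance-expected agreement from the two raters' marginals
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    i, j = np.mgrid[0:n_classes, 0:n_classes]
    w = np.abs(i - j) ** power / float(n_classes - 1) ** power
    return 1.0 - (w * observed).sum() / (w * expected).sum()
```

Unlike plain accuracy, this metric penalizes a BI-RADS A-vs-D disagreement more heavily than an adjacent-category disagreement, which is why it is reported alongside accuracy.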
Accurate segmentation of the breast region is an essential step for quantitative analysis of breast parenchyma on mammograms. Pectoral muscle identification on mediolateral oblique (MLO) view mammograms remains a challenging problem. In this study, our purpose is to develop a supervised deep learning approach for automated identification of the pectoral muscle on MLO-view mammograms. With IRB approval, 756 MLO-view mammograms, including 656 digitized film mammograms (DFMs) and 100 full-field digital mammograms (DMs), were retrospectively collected. The film mammograms were digitized at a pixel size of 50 μm × 50 μm and the DMs were acquired with a GE Senographe system at a pixel size of 100 μm × 100 μm. All mammograms were subsampled to 800 μm × 800 μm before the pectoral muscle analysis. An experienced radiologist manually segmented the pectoral muscle boundary as the reference standard. We constructed a U-Net-like deep convolutional neural network (DCNN) to identify the boundary of the pectoral muscle. The DCNN consisted of a contracting path to capture multi-resolution image context and a symmetric expanding path for prediction of the pectoral muscle region. A total of 15 million parameters in the DCNN were trained with a mini-batch gradient descent algorithm by minimizing a binary cross-entropy cost function. Ten-fold cross-validation was used in training and evaluating the performance of our model. The DCNN-segmented pectoral muscle was compared to the reference standard with three criteria: 1) the percent overlap area (POA), 2) the Hausdorff distance (Hdist), and 3) the average Euclidean distance (AvgDist). We found that the mean POA, Hdist, and AvgDist were 96.0±5.3%, 2.14±1.50 mm, and 0.77±0.97 mm, respectively. Further study is underway to evaluate the effect of automated pectoral muscle segmentation on quantitative analysis of mammograms.
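The boundary-distance criteria (Hdist and AvgDist) can be sketched as below for two boundaries given as point sets. Whether the study's AvgDist is symmetric or one-directional is not stated, so the symmetric form here is an assumption; the 0.8 mm default matches the 800 μm subsampled images.

```python
import numpy as np
from scipy.spatial import cKDTree

def boundary_distances(seg_pts, ref_pts, pixel_mm=0.8):
    """Hausdorff and average Euclidean distance (both in mm) between two
    boundaries, each an (N, 2) array of pixel coordinates.
    Symmetric averaging is an assumption, not confirmed by the study."""
    d_sr = cKDTree(ref_pts).query(seg_pts)[0]  # seg point -> nearest ref point
    d_rs = cKDTree(seg_pts).query(ref_pts)[0]  # ref point -> nearest seg point
    hdist = max(d_sr.max(), d_rs.max()) * pixel_mm
    avgdist = np.concatenate([d_sr, d_rs]).mean() * pixel_mm
    return hdist, avgdist
```

The Hausdorff distance reports the single worst boundary deviation, while AvgDist summarizes typical agreement, which is why the mean Hdist (2.14 mm) is noticeably larger than the mean AvgDist (0.77 mm).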