DICOM header information is frequently used to classify medical image types; however, if a header is missing fields or contains incorrect data, its utility is limited. To expedite image classification, we trained convolutional neural networks (CNNs) on two classification tasks for thoracic radiographic views obtained from dual-energy studies: (a) distinguishing between frontal, lateral, soft tissue, and bone images and (b) distinguishing between posteroanterior (PA) and anteroposterior (AP) chest radiographs. CNNs with the AlexNet architecture were trained from scratch. For task (a), a network was trained on 1910 manually classified radiographs and tested on an independent set of 3757 images. Frontal radiographs from the two datasets were then combined to train a network for task (b), which was tested on an independent set of 1000 radiographs. ROC analysis was performed for each trained CNN, with the area under the curve (AUC) as the performance metric. Classification between frontal images (AP/PA) and other image types yielded an AUC of 0.997 [95% confidence interval (CI): 0.996, 0.998]. Classification between PA and AP radiographs resulted in an AUC of 0.973 (95% CI: 0.961, 0.981). The CNNs rapidly classified thoracic radiographs with high accuracy and could thus contribute to an effective and efficient clinical workflow.
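As a rough illustration of how such a view classifier could be set up, the PyTorch sketch below trains an AlexNet from scratch on the four-class task (a). The directory layout, input size, batch size, optimizer settings, and epoch count are illustrative assumptions and do not reflect the training configuration reported above.

```python
# Minimal sketch of training an AlexNet from scratch for four-class thoracic
# view classification (frontal, lateral, soft tissue, bone). Paths and
# hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # replicate single-channel radiographs to 3 channels
    transforms.Resize((224, 224)),                # AlexNet's conventional input size
    transforms.ToTensor(),
])

# Hypothetical folder layout: data/train/<class_name>/*.png, one
# subdirectory per class (frontal, lateral, soft_tissue, bone).
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = models.alexnet(weights=None, num_classes=4)  # weights=None -> trained from scratch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(30):  # epoch count is an assumption
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Evaluation on the independent test set would then score each image and compute per-class ROC curves, e.g., with sklearn.metrics.roc_auc_score.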
Deep learning can be used to classify images in order to verify or correct DICOM header information. One situation where this is useful is the classification of thoracic radiographs acquired anteroposteriorly (AP) or posteroanteriorly (PA). A convolutional neural network (CNN) trained previously performed strongly on this task, achieving an AUC of 0.97 ± 0.005 on an independent test set. However, 81% of the AP training set and 24% of the AP independent test set consisted of images with imprinted labels. To evaluate the effect of these labels on the training and testing of a CNN, the labels were removed from the training images by cropping, and the CNN was retrained on the cropped images with the same training parameters as before. The retrained CNN, tested on the same independent test set, yielded an AUC of 0.95 ± 0.007 in the task of classifying between AP and PA radiographs. The difference between the two AUCs was statistically significant (p = 0.002), showing a decrease in performance for the network trained on the cropped images. This decrease may be due to the original network having learned to recognize the imprinted labels, or to relevant anatomy being cropped away along with the labels; nevertheless, performance remains high enough for the classifier to be incorporated into a clinical workflow.
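The statistical test behind the reported p-value is not specified above. One common way to compare two classifiers' AUCs on a shared test set is a paired bootstrap over the test cases, sketched below; the function name, the resampling scheme, and the convention that AP is the positive class are assumptions for illustration (a DeLong test would be a standard alternative).

```python
# Minimal sketch: paired bootstrap comparison of two models' AUCs on the same
# test set. The study's exact statistical test is not stated here; this is one
# plausible approach, not necessarily the authors' method.
import numpy as np
from sklearn.metrics import roc_auc_score

def paired_bootstrap_auc_pvalue(y_true, scores_a, scores_b, n_boot=10000, seed=0):
    """Two-sided p-value for the difference between two AUCs on paired scores."""
    y_true = np.asarray(y_true)       # 1 = AP, 0 = PA (assumed convention)
    scores_a = np.asarray(scores_a)   # scores from the original network
    scores_b = np.asarray(scores_b)   # scores from the network trained on cropped images
    rng = np.random.default_rng(seed)
    observed = roc_auc_score(y_true, scores_a) - roc_auc_score(y_true, scores_b)
    diffs = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:    # an AUC needs both classes present
            continue
        diffs.append(roc_auc_score(y_true[idx], scores_a[idx])
                     - roc_auc_score(y_true[idx], scores_b[idx]))
    diffs = np.asarray(diffs)
    # Center the bootstrap distribution to approximate the null hypothesis of
    # equal AUCs, then count resampled differences at least as extreme as observed.
    return float(np.mean(np.abs(diffs - diffs.mean()) >= abs(observed)))
```

A call such as paired_bootstrap_auc_pvalue(labels, original_scores, cropped_scores) would return the estimated p-value for the AUC difference.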