The purpose of this study was to develop and evaluate a convolutional neural network (CNN) that uses a novel A-line-based classification approach to detect cancer in OCT images of breast specimens. Deep learning algorithms have previously been developed for OCT ophthalmology applications using pixel-based classification; here, a deep learning approach was developed that instead classifies individual OCT A-lines of breast tissue.
De-identified human breast tissues from mastectomy and breast reduction specimens were excised from patients at Columbia University Medical Center. A total of 82 specimens from 49 patients were imaged with OCT, including both normal and neoplastic tissues.
The proposed algorithm utilized a hybrid 2D/1D CNN to map each B-scan to a 1D label vector derived from manual annotation. Each A-line was labelled as one of four tissue types: ductal carcinoma in situ (DCIS), invasive ductal carcinoma (IDC), adipose, and stroma.
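The B-scan-to-label-vector mapping can be illustrated with a minimal sketch, assuming PyTorch. The layer counts, kernel sizes, and the depth-pooling step used to collapse the 2D feature map into one feature vector per A-line are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class ALineClassifier(nn.Module):
    """Hypothetical hybrid 2D/1D CNN sketch: 2D convolutions extract
    features from the B-scan, mean-pooling along the depth (z) axis
    collapses each A-line to a feature vector, and 1D convolutions
    emit one class prediction per A-line."""
    def __init__(self, n_classes=4):  # DCIS, IDC, adipose, stroma
        super().__init__()
        self.features2d = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head1d = nn.Sequential(
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, n_classes, kernel_size=1),
        )

    def forward(self, x):        # x: (batch, 1, depth, n_alines)
        f = self.features2d(x)   # (batch, 32, depth, n_alines)
        f = f.mean(dim=2)        # pool over depth -> (batch, 32, n_alines)
        return self.head1d(f)    # (batch, n_classes, n_alines)

model = ALineClassifier()
bscan = torch.randn(1, 1, 256, 512)  # one B-scan: depth 256, 512 A-lines
logits = model(bscan)                # (1, 4, 512)
labels = logits.argmax(dim=1)        # (1, 512): one tissue label per A-line
```

Training such a network against the manually annotated 1D label vectors would use a per-A-line cross-entropy loss, treating each column of the B-scan as an independent classification target.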
Five-fold cross-validation Dice scores across tissue types were: 0.82-0.95 for IDC, 0.54-0.75 for DCIS, 0.67-0.91 for adipose, and 0.61-0.86 for stroma. In a second experiment, IDC and DCIS were combined into a single tissue class (malignancy) while stroma and adipose were combined into a second class (non-malignancy). This binary setup yielded five-fold cross-validation Dice scores of 0.89-0.93.
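For reference, the per-class Dice score over A-line label vectors is 2|P∩T| / (|P| + |T|), where P and T are the sets of A-lines assigned to the class by the prediction and the ground truth. A minimal NumPy sketch (the integer class encoding is an illustrative assumption):

```python
import numpy as np

def dice_score(pred, target, cls):
    """Dice coefficient for one tissue class: 2*|P & T| / (|P| + |T|)."""
    p = (pred == cls)
    t = (target == cls)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# Toy A-line label vectors (0=IDC, 1=DCIS, 2=adipose, 3=stroma).
pred   = np.array([0, 0, 1, 2, 3, 3])
target = np.array([0, 1, 1, 2, 3, 2])
print(dice_score(pred, target, 0))  # 2*1/(2+1) ≈ 0.667
```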
Future work includes acquiring more patient samples and comparing the algorithm to previous work, including both deep learning and traditional automatic image-processing methods for classifying breast tissue in OCT images.