We are developing automated analysis of specular microscopy images of the corneal endothelial cell layer to determine quantitative biomarkers of corneal health following corneal transplantation. On these images of varying quality in particular, commercial automated image analysis systems can give inaccurate results, and manual methods are very labor intensive. We developed a method to automatically segment endothelial cells using a pipeline of image flattening, U-Net deep learning, and postprocessing that produces individual cell segmentations. We used 130 corneal endothelial cell images acquired after one type of corneal transplantation (Descemet stripping automated endothelial keratoplasty), with cell borders annotated by expert readers. We obtained very good pixelwise segmentation performance (e.g., Dice coefficient = 0.87 ± 0.17 and Jaccard index = 0.80 ± 0.18 across 10 folds). The automated method segmented cells left unmarked by analysts and sometimes segmented cells differently than analysts did (e.g., one cell was split or two cells were merged). A clinically informative visual analysis of the held-out test set showed that 92% of cells within manually labeled regions were acceptably segmented and that, compared with manual segmentation, automation added 21% more correctly segmented cells. We speculate that automation could reduce 15 to 30 min of manual segmentation to 3 to 5 min of manual review and editing.
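The Dice coefficient and Jaccard index reported above are standard pixel-overlap measures between a predicted and a reference binary mask. A minimal sketch of their computation (the function name and NumPy implementation are illustrative, not the authors' code):

```python
import numpy as np

def dice_and_jaccard(pred, truth):
    """Pixel-wise Dice coefficient and Jaccard index for two binary masks.

    Dice    = 2|P ∩ T| / (|P| + |T|)
    Jaccard =  |P ∩ T| / |P ∪ T|
    """
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jaccard = inter / np.logical_or(pred, truth).sum()
    return dice, jaccard
```

For example, a prediction covering two pixels that overlaps a one-pixel ground truth in one pixel yields Dice = 2/3 and Jaccard = 1/2; per-image values like these are then averaged across the 10 folds.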
Images of the corneal endothelial cell layer can be used to evaluate corneal health. Quantitative biomarkers extracted from these images, such as cell density, coefficient of variation of cell area, and cell hexagonality, are commonly used to assess the status of the endothelium. Fully automated endothelial image analysis systems currently in use often give inaccurate results, while semi-automated methods, which require trained image analysis readers to identify cells manually, are both challenging and time-consuming. We are investigating two deep learning methods to automatically segment cells in such images, comparing the performance of two deep neural networks, U-Net and SegNet. To train and test the classifiers, a dataset of 130 images was collected, with cell borders in each image annotated by an expert reader. We applied standard training and testing techniques to evaluate pixel-wise segmentation performance and report corresponding metrics such as the Dice and Jaccard coefficients. Visual evaluation showed that most pixel-wise errors made by the U-Net were inconsequential. Results from the U-Net approach are being used to create endothelial cell segmentations and quantify important morphological measurements for evaluating corneal health.
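The three biomarkers named above can be summarized from per-cell measurements once cells are segmented. The sketch below assumes per-cell areas (in µm²) and per-cell neighbor counts have already been extracted from the segmentation; the function name and exact definitions (e.g., hexagonality as the fraction of cells with six neighbors) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def endothelial_biomarkers(cell_areas_um2, neighbor_counts):
    """Summarize common endothelial biomarkers from per-cell measurements.

    cell_areas_um2  : per-cell areas in square micrometers (hypothetical input)
    neighbor_counts : number of neighboring cells for each cell (hypothetical input)
    """
    areas = np.asarray(cell_areas_um2, dtype=float)
    # Cell density in cells/mm^2: 1 mm^2 = 1e6 um^2, so density = 1e6 / mean area.
    density = 1e6 / areas.mean()
    # Coefficient of variation of cell area (polymegethism): std / mean.
    cv_area = areas.std(ddof=0) / areas.mean()
    # Hexagonality (pleomorphism): fraction of cells with exactly 6 neighbors.
    hexagonality = float(np.mean(np.asarray(neighbor_counts) == 6))
    return {
        "density_cells_per_mm2": density,
        "cv_area": cv_area,
        "hexagonality": hexagonality,
    }
```

For instance, four cells of 400 µm² each correspond to a density of 2500 cells/mm² with zero area variation; if three of the four have six neighbors, hexagonality is 0.75.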