Defects in retinal pigment epithelial (RPE) cells, which nourish retinal neurosensory photoreceptor cells, contribute to many blinding diseases. Recently, the combination of adaptive optics (AO) imaging with indocyanine green (ICG) has enabled the visualization of RPE cells directly in patients’ eyes, making it possible to monitor cellular status in real time. However, RPE cells visualized using AO-ICG have ambiguous boundaries and minimal intracellular contrast, making it difficult for computer algorithms to identify cells based solely on image appearance. Here, we demonstrate the importance of providing spatial information to deep learning networks. We used a training dataset containing 1,633 AO images and a separate dataset containing 250 images for validation. Whereas the original LinkNet was unable to reliably identify low-contrast RPE cells, our proposed spatially aware LinkNet, which has direct access to additional spatial information about the hexagonal arrangement of RPE cells (auxiliary spatial constraints), achieved better results. The overall precision, recall, and F1 score of the spatially aware deep learning method were 92.1±4.3%, 88.2±5.7%, and 90.0±3.8% (mean±SD), respectively, significantly better than the original LinkNet's 92.0±4.6%, 57.9±13.3%, and 70.0±10.6% (p<0.05). These experimental results demonstrate that the auxiliary spatial constraints are the key factor for improving RPE identification accuracy. Explicit incorporation of spatial constraints into existing deep learning networks may be useful for handling images with known spatial constraints and low image intensity information at cell boundaries.
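The abstract does not specify the mechanism by which the hexagonal spatial constraint is supplied to the network. One common way to give a segmentation network "direct access" to such a prior is to encode it as an auxiliary input channel stacked alongside the image. The sketch below is purely illustrative and not the authors' implementation: the function names, the lattice `spacing` parameter, and the binary-map encoding are all assumptions.

```python
import numpy as np

def hexagonal_prior(h, w, spacing=16.0):
    """Illustrative prior: mark expected cell-center locations on a
    hexagonal lattice. Alternate rows are offset by half a spacing,
    approximating the roughly hexagonal packing of RPE cells.
    Returns a binary (h, w) float32 map."""
    prior = np.zeros((h, w), dtype=np.float32)
    row_step = spacing * np.sqrt(3) / 2  # vertical gap between lattice rows
    y, row = 0.0, 0
    while y < h:
        # odd rows shifted by half a cell spacing
        x = spacing / 2 if row % 2 else 0.0
        while x < w:
            prior[int(y), int(x)] = 1.0
            x += spacing
        y += row_step
        row += 1
    return prior

def add_spatial_channel(image, spacing=16.0):
    """Stack the hexagonal prior onto a grayscale AO-ICG frame,
    producing a (2, h, w) array that a two-channel segmentation
    network (e.g., a LinkNet variant with a 2-channel input layer)
    could consume."""
    h, w = image.shape
    return np.stack([image.astype(np.float32),
                     hexagonal_prior(h, w, spacing)])
```

In this hypothetical encoding, the network sees the expected cell mosaic geometry explicitly rather than having to infer it from low-contrast boundaries; the lattice spacing would in practice be matched to the known RPE cell density at the imaged retinal eccentricity.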