In computational imaging by digital holography, the lateral resolution of retinal images is limited to about 20 microns by the aberrations of the eye. To overcome this limitation, the aberrations have to be canceled. Digital aberration compensation can be performed by post-processing of full-field digital holograms. Aberration compensation has been demonstrated from wavefront measurement, either by reconstruction of digital holograms in subapertures [Kumar, A. et al. Optics Express 21.9 (2013): 10850-10866.] or by measurement of a guide star hologram [Liu, C. et al. Applied Optics 52.12 (2013): 2940-2949.]. Yet these wavefront measurement methods have limited accuracy in practice. For holographic tomography of the human retina, image reconstruction has been demonstrated with iterative digital aberration compensation, by minimization of the local entropy of speckle-averaged tomographic volumes [Hillmann, D. et al. Scientific Reports 6 (2016): 35209.]. However, image-based aberration compensation is time-consuming, which prevents real-time image rendering. To circumvent the limitations of these correction methods, we are investigating a new digital aberration compensation scheme based on a deep neural network. To train the network, 28,000 anonymized eye fundus images from patients of the 15-20 hospital in Paris have been collected, and synthetic interferograms have been generated digitally by simulating the propagation of eye fundus images recorded with standard cameras. With a U-Net architecture [Ronneberger, O. et al. International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham (2015): 234-241.], we demonstrate defocus correction of these complex-valued synthetic interferograms. Other aberration orders will be corrected with the same method, to improve lateral resolution up to the diffraction limit in digital holographic imaging of the retina.
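As context for how synthetic interferograms can be generated from conventional fundus images, the following is a minimal sketch of free-space propagation by the angular spectrum method, followed by interference with a unit reference wave. All parameter values (wavelength, pixel pitch, distance) and the flat stand-in field are illustrative assumptions, not those of the actual instrument or dataset.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field over a distance z using the
    angular spectrum method. wavelength, dx and z share the same units;
    evanescent components are dropped."""
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(f, f)
    arg = 1.0 / wavelength ** 2 - fx ** 2 - fy ** 2  # (kz / 2*pi)^2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)       # zero out evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Illustrative parameters (assumptions, not the actual setup):
wavelength, dx, z = 850e-9, 5e-6, 1e-3
field = np.ones((128, 128), dtype=complex)           # stand-in fundus field

propagated = angular_spectrum(field, wavelength, dx, z)
round_trip = angular_spectrum(propagated, wavelength, dx, -z)
print(np.allclose(round_trip, field))                # True for this bandlimited field

# A synthetic interferogram: interference with a unit-amplitude reference wave.
interferogram = np.abs(propagated + 1.0) ** 2
```

Propagating forward and then backward by the same distance recovers the input field (up to the discarded evanescent components), which is a convenient sanity check for the transfer function.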
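To illustrate the principle of digital defocus compensation, here is a toy sketch in which defocus is modeled as a quadratic phase in the Fourier plane. It is cancelled either with the known conjugate phase, or by a brute-force search minimizing the Shannon entropy of the corrected intensity, a heavily simplified stand-in for the iterative image-based compensation cited above. The quadratic phase model, the point-like object and all coefficients are illustrative assumptions; a real pupil would use the Zernike defocus polynomial.

```python
import numpy as np

def defocus_phase(n, alpha):
    """Toy defocus: quadratic phase alpha*(fx^2 + fy^2) on a normalized
    frequency grid (an assumption; real pupils use Zernike Z_2^0)."""
    f = np.fft.fftfreq(n) * 2.0
    fx, fy = np.meshgrid(f, f)
    return alpha * (fx ** 2 + fy ** 2)

def apply_phase(field, phase):
    """Multiply the field's spectrum by exp(i*phase)."""
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * phase))

def shannon_entropy(intensity):
    """Shannon entropy of a normalized intensity distribution."""
    p = intensity / intensity.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def find_defocus(hologram, candidates):
    """Brute-force search: the candidate whose conjugate-phase correction
    yields the lowest-entropy intensity is taken as the defocus estimate."""
    n = hologram.shape[0]
    scores = [shannon_entropy(np.abs(apply_phase(hologram, -defocus_phase(n, a))) ** 2)
              for a in candidates]
    return candidates[int(np.argmin(scores))]

# Toy 'guide star': a single bright point, defocused by a known coefficient.
n, true_alpha = 64, 30.0
obj = np.zeros((n, n), dtype=complex)
obj[n // 2, n // 2] = 1.0
blurred = apply_phase(obj, defocus_phase(n, true_alpha))

estimate = find_defocus(blurred, np.linspace(0.0, 60.0, 13))
restored = apply_phase(blurred, -defocus_phase(n, estimate))
print(estimate)                    # 30.0 on this candidate grid
print(np.allclose(restored, obj))  # True: the conjugate phase cancels exactly
```

This exhaustive search makes plain why image-based compensation is slow: every candidate coefficient costs two FFTs, and the cost grows combinatorially with the number of aberration orders, which is the bottleneck a trained network is meant to remove.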