Full-waveform LIDAR records the entire backscattered signal of each laser pulse and can therefore obtain detailed information about the illuminated surface. In full-waveform LIDAR, system resolution is restricted by the source pulse width and the bandwidth of the data acquisition device. To improve the ranging resolution of the system, we discuss a temporal super-resolution method based on a deep learning network in this paper. In a full-waveform LIDAR system, each time the emitted laser beam encounters a target, it splits into a reflected echo signal and a transmitted beam, and the transmitted beam continues in the same direction as the emitted laser. When the transmitted beam reaches the ground, part of it is absorbed by the ground and the rest becomes the final echo signal. Because each beam travels a different distance, the backscattered beams arrive at different times and are collected and digitized by low-bandwidth detectors and A/D converters. To reconstruct a super-resolution backscattered signal, we designed a deep-learning framework for obtaining LIDAR data with higher resolution. We were inspired by the excellent performance of convolutional neural networks (CNN) and residual networks (ResNet) in image classification and image super-resolution. Considering that both images and LIDAR data can be regarded as binary sequences that a machine can read and process in the same manner, we propose a deep-learning architecture specially designed for super-resolution full-waveform LIDAR. After tuning the hyperparameters and training the network, we find that the deep-learning method is a feasible and suitable approach to super-resolution full-waveform LIDAR.
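The echo-formation and bandwidth-limitation process described above can be sketched numerically. The following is a minimal illustration, not the paper's actual forward model: it assumes each intercepted surface contributes a Gaussian echo whose width is set by the effective system pulse width, so that a narrow (high-bandwidth) pulse resolves two nearby surfaces while a wide (low-bandwidth) pulse merges them into a single return. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def simulate_waveform(echo_times_ns, amplitudes, pulse_fwhm_ns, t):
    """Model the received full waveform as a sum of Gaussian echoes,
    one per intercepted surface (canopy layers, ground, etc.).
    pulse_fwhm_ns is the effective system pulse width: a wider pulse
    (lower bandwidth) smears closely spaced echoes together."""
    sigma = pulse_fwhm_ns / 2.355  # convert FWHM to standard deviation
    w = np.zeros_like(t)
    for t0, a in zip(echo_times_ns, amplitudes):
        w += a * np.exp(-0.5 * ((t - t0) / sigma) ** 2)
    return w

def count_peaks(w, threshold=0.1):
    """Count strict local maxima above a small amplitude threshold."""
    interior = (w[1:-1] > w[:-2]) & (w[1:-1] > w[2:]) & (w[1:-1] > threshold)
    return int(np.sum(interior))

# Two surfaces whose echoes arrive 3 ns apart (~0.45 m apart in range).
t = np.arange(0.0, 40.0, 0.1)            # time axis in nanoseconds
echo_times, amps = [18.0, 21.0], [1.0, 0.8]

high_res = simulate_waveform(echo_times, amps, pulse_fwhm_ns=1.0, t=t)
low_res = simulate_waveform(echo_times, amps, pulse_fwhm_ns=6.0, t=t)

# The narrow pulse yields two distinct peaks; the wide pulse yields one,
# which is the resolution loss a super-resolution network aims to undo.
```

A learned super-resolution model would, in effect, map waveforms like `low_res` back toward `high_res`, recovering echoes that the acquisition bandwidth had merged.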