Video inpainting is a highly challenging task. Directly applying image inpainting methods frame by frame to a damaged video causes the inter-frame content to flicker because of temporal discontinuities. In this paper, we introduce a video inpainting model guided by spatial structure and temporal edge information to repair missing regions in high-resolution video. The model uses a convolutional neural network with residual blocks to fill in the missing intra-frame contents according to the spatial structure. At the same time, the temporal edge of a reference frame is introduced in the temporal domain, which strongly guides texture refinement and reduces inter-frame flicker. We train the model with regular and irregular masks on a YouTube high-resolution video dataset; the trained model is evaluated qualitatively and quantitatively on the test set, and the results show that our method outperforms previous methods.
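To illustrate the kind of temporal edge map used as guidance, here is a minimal NumPy sketch that computes the gradient-magnitude edge map of a grayscale reference frame with Sobel filters. This is only an illustration of the guidance signal; the paper's model itself is a CNN, and the function name and filter choice here are our own assumptions, not details from the paper.

```python
import numpy as np

def sobel_edges(frame):
    """Gradient-magnitude edge map of a grayscale frame (illustrative;
    the actual guidance extraction in the paper may differ)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal-gradient kernel
    ky = kx.T                                  # vertical-gradient kernel
    pad = np.pad(frame.astype(float), 1, mode="edge")
    h, w = frame.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Correlate each 3x3 window with the two Sobel kernels
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy)  # edge strength per pixel
```

An edge map like this, taken from an adjacent (reference) frame, can supply the temporal structure cue that keeps the filled texture consistent across frames.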
Based on the original exemplar-based Criminisi algorithm, we propose two improvements to image inpainting. First, to address the problem that the single block found in the optimal-block search may not be optimal, this paper proposes a fusion repair strategy: the n best blocks are selected as matching blocks during the search, and their weighted average is used to fill the target block to be repaired. Second, considering the size of the block to be repaired, a layered repair strategy is adopted: the image to be repaired is first downsampled to obtain images at different scales, and repair then proceeds from the topmost (coarsest) image. Experimental results show that the proposed algorithm improves repair quality both subjectively and objectively.
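The two strategies above can be sketched in NumPy as follows. The function names, the SSD matching metric, the inverse-distance weighting, and the 2x2 mean-pool downsampling are our own assumptions for illustration; the paper's exact weighting scheme and pyramid construction may differ. For simplicity the distance is computed over the whole block, whereas exemplar-based inpainting would compare only the known pixels of the target block.

```python
import numpy as np

def fuse_best_patches(target, candidates, n=3, eps=1e-8):
    """Fusion repair sketch: instead of copying the single best-matching
    block, average the n best candidates, weighted inversely by their
    SSD distance to the target block (illustrative metric)."""
    dists = np.array([np.sum((c - target) ** 2) for c in candidates])
    best = np.argsort(dists)[:n]            # indices of the n closest candidates
    weights = 1.0 / (dists[best] + eps)     # closer match -> larger weight
    weights /= weights.sum()                # normalize to sum to 1
    stacked = np.stack([candidates[i] for i in best])
    return np.tensordot(weights, stacked, axes=1)  # weighted average block

def image_pyramid(img, levels=3):
    """Layered repair sketch: halve resolution per level via 2x2 mean
    pooling; repair would start at pyr[-1], the topmost (coarsest) image."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
        a = a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyr.append(a)
    return pyr
```

In use, each hole block at the coarsest level would be filled with `fuse_best_patches` over candidate blocks sampled from the known region, and the result upsampled to initialize the next finer level.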