We propose a model framework, inspired by generative adversarial networks, for video conversion. Our goal is to synchronize motion (such as the head displacement and facial movements of a person) across two different target videos, even when that motion does not exist in the original video. Our key idea is to add a video prediction model to the standard generative adversarial network framework, so that the generated video inherits the temporal characteristics of the target video, improving motion consistency and temporal stability. During training, we detect landmark points to obtain and align the spatial position of the action in each video, ensuring that generated samples do not exhibit spatial misalignment. At each training step, we generate sample t, obtain sample t+1 through a pre-trained temporal predictor, and compute a loss on the predicted sample that is fed back to the generative model. Using this framework, we can: (1) prepare usable training samples more conveniently and broaden the applicable range of the model; and (2) improve the accuracy of the generated target video.
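To make the training procedure concrete, below is a minimal PyTorch sketch of one training step. The abstract does not specify architectures or loss weights, so the tiny `Generator`, `Discriminator`, and `TemporalPredictor` networks, the weight `lambda_t`, and the choice of an L1 prediction loss are all illustrative assumptions; frames are assumed to be landmark-aligned before they reach this step.

```python
import torch
import torch.nn as nn

# Placeholder architectures: stand-ins for the real generator,
# discriminator, and pre-trained video prediction model.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

class TemporalPredictor(nn.Module):
    """Pre-trained model that predicts frame t+1 from frame t."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1))
    def forward(self, x):
        return self.net(x)

G, D, P = Generator(), Discriminator(), TemporalPredictor()
P.eval()
for p in P.parameters():        # the predictor stays frozen during GAN training
    p.requires_grad_(False)

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
lambda_t = 10.0                 # assumed weight on the temporal-consistency term

def train_step(src_t, tgt_t, tgt_next):
    """One step on landmark-aligned frames: source frame at time t,
    target frames at times t and t+1."""
    # Discriminator step: real target frames vs. generated frames.
    fake_t = G(src_t)
    d_real, d_fake = D(tgt_t), D(fake_t.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: adversarial loss plus temporal-consistency loss.
    # The frozen predictor rolls the generated frame t forward to t+1;
    # penalizing its deviation from the real frame t+1 feeds the target
    # video's temporal structure back into the generator.
    d_fake = D(fake_t)
    pred_next = P(fake_t)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) \
             + lambda_t * l1(pred_next, tgt_next)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```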