The exponential increase in demand for high-quality user-generated content (UGC) videos, together with limited bandwidth, poses great practical challenges for hosting platforms, making efficient optimization of UGC video compression critical. Since the ultimate receiver is the human visual system, there is a growing consensus that video coding and processing should be fully driven by perceptual quality, so traditional rate-control-based methods may not be optimal. In this paper, a novel perceptual model of compressed UGC video quality is proposed that exploits characteristics extracted from the source video only. In the proposed method, content-aware features and quality-aware features are explored to estimate quality curves against quantization parameter (QP) variations. Specifically, content-relevant deep semantic features from pre-trained image classification neural networks and quality-relevant handcrafted features from various objective video quality assessment (VQA) models are utilized. Finally, a machine-learning approach is proposed to predict the quality of compressed videos at different QP values. The quality curve can thus be derived, and by estimating the QP for a given target quality, a quality-centered compression paradigm can be built. Experimental results show that the proposed method accurately models quality curves for various UGC videos and controls compression quality well.
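As a minimal sketch of the quality-centered paradigm described above, the following illustrates the final step: given a (hypothetical) learned regressor mapping source-only features and a QP value to a predicted quality score, the quality curve can be inverted to find the QP that achieves a target quality. The regressor here is a stand-in parametric logistic curve, and the scalar `features` input is a hypothetical placeholder for the paper's deep semantic and handcrafted VQA features, not the authors' actual model.

```python
import math

def predict_quality(features, qp):
    """Stand-in for the learned quality regressor: maps (features, QP)
    to a predicted quality score in [0, 1]. A real system would use
    deep semantic features plus handcrafted VQA features; here
    'features' is a single hypothetical scalar that shifts the curve."""
    midpoint = 30.0 + 10.0 * features  # easier content tolerates higher QP
    return 1.0 / (1.0 + math.exp(0.25 * (qp - midpoint)))

def qp_for_target_quality(features, target, qp_min=0, qp_max=51):
    """Invert the monotonically decreasing quality curve by bisection,
    returning the largest QP whose predicted quality meets the target
    (larger QP means coarser quantization and a lower bitrate)."""
    lo, hi = float(qp_min), float(qp_max)
    for _ in range(60):  # monotonicity in QP makes bisection valid
        mid = 0.5 * (lo + hi)
        if predict_quality(features, mid) >= target:
            lo = mid  # quality still sufficient: push QP higher
        else:
            hi = mid
    return lo
```

For example, `qp_for_target_quality(0.5, 0.8)` returns the QP at which the sketched curve crosses a quality of 0.8; in a real pipeline this QP would then be handed to the encoder, replacing bitrate-driven rate control with a quality-driven target.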