Wyner-Ziv video coding has gained considerable interest in the research community. In this paper, we examine the performance of Wyner-Ziv video coding and compare it with conventional motion-compensated prediction (MCP) based video coding. Theoretical and simulation results show that although Wyner-Ziv video coding can achieve as much as a 6 dB gain over conventional video coding without motion search, it still falls 6 dB or more behind the best current MCP-based INTER-frame video coding. We further investigate the use of sub-pixel and multi-reference motion search methods to improve Wyner-Ziv video coding efficiency.
The coding efficiency of a Wyner-Ziv video codec relies significantly on the quality of the side information extracted at the decoder. Constructing efficient side information is difficult, due in part to the fact that the original video sequence is not available at the decoder. Conventional motion search methods are widely used in the Wyner-Ziv video decoder to extract side information, which substantially increases decoding complexity. In this paper, we propose a new method for constructing the side estimate based on the idea of universal prediction. This method, referred to as Wyner-Ziv video coding with Universal Prediction (WZUP), performs no motion search at the decoder and assumes no underlying model of the original input video sequence. Instead, WZUP estimates the side information from its observations of past reconstructed video data. We show that WZUP significantly reduces decoding complexity while achieving fair side estimation performance, thus making it possible to design both the video encoder and the decoder with low computational complexity.
In this paper we present a statistical analysis of motion prediction with drift in video coding. The drift effect occurs when, in a hybrid video codec using motion prediction, the decoder does not have access to the same reference information used by the encoder. Although the drift effect has long been known to the video research community, there has been no systematic theoretical treatment of this mechanism; the performance of motion prediction with drift is generally evaluated experimentally. In this paper we derive a closed-form expression for the drift error. Based on this result, we derive an efficient rate-distortion optimization scheme given statistical knowledge of the channel.
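The drift mechanism itself can be illustrated with a toy scalar simulation: the encoder codes residuals against its own reference, the decoder adds the same residuals to a reference that has been perturbed (e.g. by channel loss), and the mismatch propagates through the prediction loop, decaying geometrically when the predictor has a leak factor below one. This is an illustrative model only, not the paper's closed-form analysis; all names and parameters are assumptions.

```python
# Toy drift simulation: identical residuals on both sides, but the
# decoder reference is perturbed once, so a mismatch propagates through
# the prediction loop scaled by the leak factor alpha at each frame.
def simulate_drift(signal, alpha=0.9, loss_at=3):
    """Return per-frame |decoder - encoder| reconstruction mismatch."""
    enc_ref = dec_ref = 0.0
    drift = []
    for t, x in enumerate(signal):
        pred_enc = alpha * enc_ref
        residual = x - pred_enc          # residual the encoder transmits
        enc_ref = pred_enc + residual    # encoder-side reconstruction
        pred_dec = alpha * dec_ref
        dec_ref = pred_dec + residual    # decoder adds the SAME residual
        if t == loss_at:
            dec_ref += 5.0               # channel error perturbs decoder reference
        drift.append(abs(dec_ref - enc_ref))
    return drift
```

With alpha = 0.9, a unit perturbation at frame t contributes alpha^k to the mismatch k frames later, which is the geometric decay a closed-form drift analysis has to capture.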
This paper investigates the motion prediction techniques used in hybrid video coding. We first present a unified interpretation of motion prediction in terms of the prediction of motion threads, and demonstrate that most current motion prediction techniques can be regarded as linear predictors of motion threads. Based on this interpretation, we discuss the optimal motion predictor in the framework of Markov universal prediction. We define Markov predictability so that it upper bounds the optimal prediction performance in the perfect reconstruction scenario. Since most current video applications use lossy coding, the motion threads used in prediction are reconstructed imperfectly; however, the optimality established for the perfect reconstruction scenario still holds in this case in an almost sure sense.
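The "linear predictor of a motion thread" view can be sketched concretely: given the sequence of motion vectors a thread has taken so far, the next vector is extrapolated as a fixed linear combination of the most recent ones. The function name and the example tap choices below are illustrative assumptions, not the paper's specific predictors.

```python
# Sketch of a linear motion-thread predictor: the next motion vector is a
# fixed linear combination of the thread's most recent vectors.
def predict_mv(thread, coeffs=(2.0, -1.0)):
    """Linearly predict the next motion vector of a motion thread.

    thread : list of (dx, dy) motion vectors along one thread
    coeffs : predictor taps over the most recent vectors, newest first.
             (2, -1) extrapolates constant acceleration; (1,) is a
             constant-velocity copy predictor.
    """
    if len(thread) < len(coeffs):
        return thread[-1] if thread else (0.0, 0.0)
    recent = thread[-len(coeffs):][::-1]   # newest first, matching the taps
    dx = sum(c * v[0] for c, v in zip(coeffs, recent))
    dy = sum(c * v[1] for c, v in zip(coeffs, recent))
    return (dx, dy)
```

A Markov universal predictor would, roughly speaking, compete with every such fixed tap choice simultaneously, which is why Markov predictability serves as an upper bound on what any of these linear predictors can achieve.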
The Block-based Discrete Cosine Transform (BDCT) is one of the most widely used transforms in image and video coding. However, it introduces annoying blocking artifacts at low data rates, and a great deal of work has been done to reduce them. In this paper, we propose a transform-domain Markov Random Field (TD-MRF) model to address this problem. Based on this new model, a transform-domain maximum a posteriori (MAP) algorithm is presented to remove blocking artifacts in images and video. We show that our new approach reduces computational complexity dramatically while achieving significant visual improvements.
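The flavor of transform-domain MAP deblocking can be sketched on DC coefficients alone, where blocking is most visible: a smoothness prior pulls neighbouring blocks' DC values together, while a projection keeps each value inside its quantization interval so the result stays consistent with the received bitstream. The function, parameters, and the simple pairwise prior below are illustrative assumptions, not the paper's TD-MRF model.

```python
# Toy transform-domain deblocking: alternate a gradient step on a
# pairwise-difference smoothness prior over block DC values with a
# projection onto each coefficient's quantization interval.
def deblock_dc(dc, qstep, n_iters=20, lam=0.5):
    """dc: dequantized DC coefficients of a row of adjacent blocks."""
    levels = [round(v / qstep) for v in dc]     # transmitted quantizer levels
    x = list(dc)
    for _ in range(n_iters):
        smoothed = []
        for i in range(len(x)):
            left = x[i - 1] if i > 0 else x[i]
            right = x[i + 1] if i < len(x) - 1 else x[i]
            smoothed.append(x[i] + lam * ((left + right) / 2 - x[i]))
        # project: each DC must stay within half a step of its level,
        # so the smoothed image still quantizes to the received data
        x = [min(max(s, (k - 0.5) * qstep), (k + 0.5) * qstep)
             for s, k in zip(smoothed, levels)]
    return x
```

Working directly on transform coefficients like this avoids transforming back to the pixel domain inside the iteration, which is one intuition for why a transform-domain formulation can cut complexity.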