The peak signal-to-noise ratio (PSNR) is one of the most popular video quality metrics. This ratio is computed from both the original and the processed images. We propose a new method to estimate the PSNR from an encoded bit stream without access to the original video sequences. In the proposed method, the transform coefficients of images or video frames are modeled by a generalized Gaussian distribution. Using the parameters of this distribution, the PSNR can be estimated. We also propose a fast method that estimates the model parameters of the original transform coefficient distribution from the quantized transform coefficients and the quantization information extracted from the encoded bit stream. Experimental results with H.264 bit streams show that the proposed generalized Gaussian modeling outperforms the standard Laplacian modeling for PSNR estimation. The proposed method can be applied to image or video streams compressed with standard coding algorithms, such as MPEG-1, 2, 4, H.264, and JPEG, and it can also be used in image or video quality monitoring systems at the receiver side.
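As an illustration of generalized Gaussian modeling, the shape parameter can be estimated from sample moments, using the classical identity E[x²]/(E|x|)² = Γ(1/β)Γ(3/β)/Γ(2/β)² for a zero-mean GGD. This is a minimal moment-matching sketch, not the paper's fast estimation method (which works from quantized coefficients and quantization information); the bisection bounds and tolerances here are illustrative assumptions.

```python
import math
import random

def ggd_ratio(beta):
    # rho(beta) = Gamma(1/beta) * Gamma(3/beta) / Gamma(2/beta)^2,
    # which equals E[x^2] / (E|x|)^2 for a zero-mean GGD with shape beta.
    return math.gamma(1.0 / beta) * math.gamma(3.0 / beta) / math.gamma(2.0 / beta) ** 2

def estimate_ggd_shape(samples):
    # Moment matching: compute the sample ratio m2 / m1^2 and invert
    # rho(beta) by bisection (rho is strictly decreasing in beta).
    n = len(samples)
    m1 = sum(abs(x) for x in samples) / n
    m2 = sum(x * x for x in samples) / n
    target = m2 / (m1 * m1)
    lo, hi = 0.1, 5.0  # illustrative search range for the shape parameter
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) > target:
            lo = mid  # ratio too large -> beta too small
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For Laplacian-distributed coefficients (a GGD with β = 1) the estimated shape should come out close to 1; for Gaussian data (β = 2) the moment ratio is π/2 and the estimate should approach 2.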
In this paper, we present a comparison of three subjective testing methods: the double stimulus continuous quality scale (DSCQS) method, the single stimulus continuous quality evaluation (SSCQE) method, and the absolute category rating (ACR) method. The DSCQS method was used to validate objective models in the VQEG Phase II FRTV test. The SSCQE method was chosen for the VQEG RRTV test, and the ACR method was chosen for the VQEG Multimedia test. Since a different subjective test method is used in each test, analyses of the three methods provide helpful information for understanding human perception of video quality.
We propose a new method for objective measurement of video quality. By analyzing subjective scores of various video sequences, we find that the human visual system is particularly sensitive to degradation around edges. In other words, when the edge areas of a video sequence are degraded, evaluators tend to give the video low quality scores even though the overall mean squared error is not large. Based on this observation, we propose an objective video quality measurement method based on degradation around edges. In the proposed method, we first apply an edge detection algorithm to the videos and locate the edge areas. Then, we measure the degradation of those edge areas by computing mean squared errors and, after some postprocessing, use it as a video quality metric. Experiments show that the proposed method significantly outperforms the conventional peak signal-to-noise ratio (PSNR). The method was also evaluated by independent laboratory groups in the Video Quality Experts Group (VQEG) Phase II test, where it consistently provided good performance. As a result, the method was included in international recommendations for objective video quality measurement.
In this paper, we investigate video quality on various LCD monitors. There exists a large variance in video quality among LCD monitors, and due to this unavoidable variance there has been concern about the stability and repeatability of subjective and objective testing on LCD monitors. We performed subjective testing using the DSCQS method and compared subjective quality ratings on five LCD monitors. The experimental results show that the correlation coefficients among the DMOSs obtained on each monitor are acceptably high. Thus, it may be possible to develop models for objective measurement of video quality on LCD monitors. In the paper, physical parameters such as color temperature, contrast, brightness, and response time are presented, and thorough analyses are provided.
In this paper, we propose a new method for objective measurement of video quality based on edge degradation. One of the most important requirements for an objective video quality measurement method is that it provide consistent performance over a wide range of video sequences not used in the design stage. By analyzing subjective scores of various video sequences, we found that the human visual system is sensitive to degradation around edges. In other words, when the edge areas of a video are blurred, evaluators tend to give the video low scores even though the overall mean squared error is not large. Based on this observation, we propose an objective video quality measurement method that measures degradation around edges. In the proposed method, we first apply an edge detection algorithm to the videos and find the edge areas. Then, we measure the degradation of those edge areas by computing the mean squared error, from which we compute a PSNR that is used as the video quality metric. Experimental results show that the proposed method compares favorably with current objective methods for video quality measurement. Furthermore, when the proposed method is applied to test video sequences not used in the design stage, it still consistently provides satisfactory performance.
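The pipeline of edge detection, edge-area MSE, and PSNR conversion can be sketched as follows. This is a bare-bones illustration, not the paper's exact algorithm: the Sobel operator, the gradient threshold of 100, and the 8-bit peak value of 255 are all illustrative assumptions, and the postprocessing step is omitted.

```python
import math

def sobel_edge_mask(frame, thresh):
    # frame: 2-D list of luma values; returns a boolean mask marking
    # pixels whose Sobel gradient magnitude reaches the threshold.
    h, w = len(frame), len(frame[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (frame[y-1][x+1] + 2*frame[y][x+1] + frame[y+1][x+1]
                  - frame[y-1][x-1] - 2*frame[y][x-1] - frame[y+1][x-1])
            gy = (frame[y+1][x-1] + 2*frame[y+1][x] + frame[y+1][x+1]
                  - frame[y-1][x-1] - 2*frame[y-1][x] - frame[y-1][x+1])
            mask[y][x] = math.hypot(gx, gy) >= thresh
    return mask

def edge_psnr(src, proc, thresh=100.0):
    # MSE restricted to the edge pixels of the source frame,
    # then converted to PSNR assuming an 8-bit peak value of 255.
    mask = sobel_edge_mask(src, thresh)
    se, n = 0.0, 0
    for y, row in enumerate(mask):
        for x, is_edge in enumerate(row):
            if is_edge:
                d = src[y][x] - proc[y][x]
                se += d * d
                n += 1
    if n == 0:
        return float('inf')  # no edges found; treat as undistorted
    mse = se / n
    return float('inf') if mse == 0 else 10.0 * math.log10(255.0 ** 2 / mse)
```

A uniform error of 5 on every edge pixel gives an edge-area MSE of 25 and hence an edge PSNR of about 34.15 dB, while a frame compared with itself yields infinity.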
In this paper, we propose a fast and efficient registration method for full reference objective video quality assessment. Instead of using entire videos to register source and processed video sequences, we propose to use a number of reference frames, selected according to a certain criterion, and, within those reference frames, a number of sub-regions that have large variances. The wavelet transform is used to locate the sub-regions. Since registration is performed only on the sub-regions, the method is fast. Experiments show promising results.
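As a toy illustration of picking high-variance sub-regions for registration: the paper locates sub-regions with the wavelet transform, whereas this sketch uses plain block variance as a stand-in, and the block size and number of regions are illustrative parameters.

```python
def select_subregions(frame, block=16, k=4):
    # Rank non-overlapping block x block regions by sample variance
    # and return the top-k (row, col) origins; high-variance regions
    # carry more texture and so anchor registration more reliably.
    h, w = len(frame), len(frame[0])
    scored = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            vals = [frame[y][x]
                    for y in range(by, by + block)
                    for x in range(bx, bx + block)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            scored.append((var, by, bx))
    scored.sort(reverse=True)  # highest variance first
    return [(by, bx) for _, by, bx in scored[:k]]
```

On a frame that is flat everywhere except for one textured block, that block is selected first, since the flat blocks all score zero variance.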
In this paper, we investigate the performance of an objective video quality assessment method based on the wavelet transform on a large data set. The method applies the wavelet transform to each frame of the source and processed videos in order to compute spatial frequency components. Then, the difference (squared error) of the wavelet coefficients in each subband is computed and summed. By repeating this procedure over all frames of a video, a sequence of difference vectors and their average vector are obtained. Each component of the average vector represents the difference at a certain spatial frequency. To take temporal frequencies into account, a modified 3-D wavelet transform can be applied. Although this evaluation method performs well on training data, its performance on new test videos remains to be seen because of its large number of parameters. In this paper, we apply the method to a large video data set and analyze its performance.
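The per-subband squared-error step can be sketched with a one-level 2-D Haar transform. The choice of Haar, a single decomposition level, and unweighted sums are illustrative assumptions; the paper's wavelet, depth, and subband weighting are not specified here.

```python
def haar2d(frame):
    # One-level 2-D Haar transform; the output quadrants are
    # LL (top-left), HL (top-right), LH (bottom-left), HH (bottom-right).
    # Frame dimensions must be even.
    h, w = len(frame), len(frame[0])
    rows = [[(r[2*i] + r[2*i+1]) / 2 for i in range(w // 2)] +
            [(r[2*i] - r[2*i+1]) / 2 for i in range(w // 2)] for r in frame]
    out = [[0.0] * w for _ in range(h)]
    for x in range(w):
        col = [rows[y][x] for y in range(h)]
        for j in range(h // 2):
            out[j][x] = (col[2*j] + col[2*j+1]) / 2
            out[j + h // 2][x] = (col[2*j] - col[2*j+1]) / 2
    return out

def subband_sq_error(src, proc):
    # Squared error of wavelet coefficients, summed separately per subband;
    # each entry is one component of the frame's difference vector.
    a, b = haar2d(src), haar2d(proc)
    h, w = len(a), len(a[0])
    corners = {'LL': (0, 0), 'HL': (0, w // 2),
               'LH': (h // 2, 0), 'HH': (h // 2, w // 2)}
    return {name: sum((a[y][x] - b[y][x]) ** 2
                      for y in range(y0, y0 + h // 2)
                      for x in range(x0, x0 + w // 2))
            for name, (y0, x0) in corners.items()}
```

A uniform brightness shift affects only the LL (low-frequency) component, while blur or ringing would show up in the detail subbands, which is what lets the difference vector separate distortions by spatial frequency.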