Due to the coarse quantization of block-based discrete cosine transform (DCT) coefficients in prevalent video compression techniques, neighboring blocks may exhibit discontinuous border effects that are particularly eye-catching at low bit rates. So-called postprocessing schemes are designed to reduce these blocking artifacts and thus improve the subjective quality of the video. Many deblocking methods have been proposed; they can be roughly classified into three categories according to their operating domain: the spatial domain (Refs. 1, 2, 3, 4), the DCT domain (Refs. 5, 6), and the wavelet transform domain (Ref. 7). Algorithms operating in the spatial domain are usually simple, but their results are not very satisfactory. Algorithms operating in the DCT or wavelet domain yield better results, but the transform itself is complex and is not easy to implement in hardware. Many methods utilize prior knowledge of quantization parameters (Refs. 2, 4, 5, 6), but deblocking methods that do not require quantization parameters are more versatile in practical applications.
In this letter, we propose an adaptive postprocessing algorithm that requires no quantization parameters and that preserves object edges and image details while reducing blocking artifacts significantly. The proposed method is based on the simple but effective discrete Hadamard transform (DHT); thus, the computational complexity of the algorithm is quite low. Furthermore, the algorithm implicitly exploits cues of the human visual system (HVS) and thus improves visual quality well.
Figure 1 shows a flowchart of the proposed deblocking algorithm, which takes the decoded sequences as input. To preserve object edges, an edge detection module acquires the edge information. We then calculate the local activity of each block using the DHT; the local activity adaptively controls the size of a low-pass filter in the DHT domain. Finally, an inverse discrete Hadamard transform (IDHT) produces the output.
Adaptive Edge Detection
Adaptive edge detection consists of two steps: direct current (DC) image generation and edge detection on the DC image. First, the input frame is divided into nonoverlapping blocks. The mean value of every block is calculated, and the DC image is formed as the 2-D array of these mean values. Second, the Sobel operator is employed to distinguish edges from monotone areas in the DC image. An edge pixel in the DC image is then identified with an adaptive threshold (Ref. 6) given by

T = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} G[d(i,j)],  (1)

where d(i,j) is the pixel at row i and column j of the DC image; M and N are the numbers of rows and columns in the DC image, which equal the numbers of rows and columns in the original frame divided by the block size; and G[\cdot] is the Sobel edge detector. If G[d(i,j)] > T, then pixel d(i,j) is an edge pixel in the DC image, corresponding to a block in the original frame that will not be filtered.
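As a concrete illustration, the two-step edge detection above can be sketched as follows. The block size of 8, the edge-replicating padding, and the use of the mean gradient magnitude as the adaptive threshold are assumptions for illustration only; Ref. 6 defines the exact threshold form.

```python
import numpy as np

def dc_image(frame, b=8):
    """Mean of each b x b block (b = 8 is an assumed block size)."""
    m, n = frame.shape[0] // b, frame.shape[1] // b
    return frame[:m * b, :n * b].reshape(m, b, n, b).mean(axis=(1, 3))

def sobel_magnitude(img):
    """|Gx| + |Gy| Sobel gradient magnitude with edge-replicating padding."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img.astype(float), 1, mode='edge')
    g = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = p[i:i + 3, j:j + 3]
            g[i, j] = abs((win * kx).sum()) + abs((win * ky).sum())
    return g

def edge_block_mask(frame, b=8):
    """True marks DC-image pixels (i.e., blocks) classified as edges,
    which are left unfiltered. Mean-gradient threshold is assumed."""
    g = sobel_magnitude(dc_image(frame, b))
    t = g.mean()  # assumed adaptive threshold; Ref. 6 gives the exact form
    return g > t
```

A flat frame yields no edge blocks, while a frame containing a luminance step is flagged only around the step.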
Here we adopt a sequency-ordered Hadamard matrix, given in Eq. 2.
The DHT is computed exactly in integer arithmetic, avoiding the inverse-transform mismatch problems of the DCT and reducing computational complexity significantly. The IDHT matrix is identical to that in Eq. 2, so the forward- and inverse-transform modules are reusable in a hardware implementation.
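A minimal sketch of the transform pair follows, assuming a 4 x 4 sequency-ordered Hadamard matrix (the letter does not state the block order; 4 is chosen here purely for illustration). Because the matrix entries are all +1 or -1, the forward transform is integer-exact, and the same matrix serves as the inverse up to a scale factor:

```python
import numpy as np

# Sequency-ordered 4x4 Hadamard matrix (rows sorted by sign-change count:
# 0, 1, 2, 3). The order 4 is an assumption for illustration.
H = np.array([[ 1,  1,  1,  1],
              [ 1,  1, -1, -1],
              [ 1, -1, -1,  1],
              [ 1, -1,  1, -1]], dtype=np.int64)

def dht2(block):
    """Forward 2-D DHT: integer-exact since H contains only +/-1."""
    return H @ block @ H.T

def idht2(coeffs):
    """Inverse 2-D DHT; H is its own inverse up to the scale factor N^2."""
    n = H.shape[0]
    return (H @ coeffs @ H.T) // (n * n)

block = np.arange(16, dtype=np.int64).reshape(4, 4)
assert np.array_equal(idht2(dht2(block)), block)  # exact integer round trip
```

The exact round trip is what lets transform and inverse-transform share one hardware module, as noted above.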
Let F(u,v) be the DHT coefficients of the block with top-left point (m,n). The activity value can be calculated as

A(m,n) = \frac{1}{|F(0,0)|} \sum_{(u,v) \neq (0,0)} w(u,v)\,|F(u,v)|,

where the weights w(u,v) should increase rapidly with the high-frequency components. In this implementation, we choose w(u,v) by compromising between the precision of activity estimation and the convenience of hardware implementation. A block with a large activity value corresponds to a coarse area or to edges, where blocking artifacts might be masked and not visually detectable; a block with a small activity value indicates a smooth region. Since the DC coefficient F(0,0) is proportional to the local mean luminance of the block, normalization by the DC coefficient implicitly exploits local luminance adaptation in line with Weber's law (Ref. 8).
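The activity measure can be sketched as follows. The specific weights w(u,v) = u + v, which grow with sequency, are an assumed choice, since the letter leaves its exact weights unspecified; the DC normalization follows the text.

```python
import numpy as np

def activity(F):
    """Block activity from DHT coefficients F (sequency-ordered).

    Weights w(u,v) = u + v grow with frequency -- an assumed choice for
    illustration; the letter does not state its exact weights. The sum
    is normalized by the DC coefficient (Weber-law luminance adaptation).
    """
    n = F.shape[0]
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    w = (u + v).astype(float)        # zero at DC, largest at high sequency
    dc = abs(float(F[0, 0])) + 1e-9  # guard against division by zero
    return float((w * np.abs(F)).sum()) / dc
```

A block whose only nonzero coefficient is the DC term (a perfectly smooth block) has activity zero; energy in high-sequency coefficients raises the score.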
Motivated by the fact that blocking artifacts in smooth regions are more eye-catching, and to preserve image details and object edges, the adaptive filter with a variable-size window is formulated as

\hat{F}_{m,n}(u,v) = \frac{1}{W} \sum_{(k,l) \in \Omega} w_{k,l}\,F_{k,l}(u,v),

where \hat{F}_{m,n}(u,v) are the filtered coefficients of the DHT block; w_{k,l} are adaptive filter weights for the neighboring DHT blocks; F_{k,l}(u,v) are the DHT coefficients of the block with its top-left point at row k and column l; \Omega is the filter window centered at (m,n); and W is the sum of the weights, W = \sum_{(k,l) \in \Omega} w_{k,l}. The filter of Eq. 8 adapts well to blocks with different activities. For blocks of low activity, where blocking effects appear more visible, the filter window is enlarged to remove the artifacts; conversely, blocks with high activity are far less blurred by a small window size.
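The variable-window filtering can be sketched as below. The window-radius rule (radius 2 for low activity, 1 otherwise) and the inverse-distance weights are assumed choices for illustration; they are not the letter's exact weights.

```python
import numpy as np

def filter_dht_blocks(F, act, m, n, act_thresh=1.0):
    """Weighted average of co-located DHT coefficients over a window of
    neighboring blocks.

    F has shape (BM, BN, u, v) holding one DHT block per grid position;
    act holds the per-block activities. The radius rule and the
    inverse-distance weights are assumptions for illustration.
    """
    r = 2 if act[m, n] < act_thresh else 1   # enlarge window when smooth
    bm, bn = act.shape
    out = np.zeros_like(F[m, n], dtype=float)
    wsum = 0.0
    for k in range(max(0, m - r), min(bm, m + r + 1)):
        for l in range(max(0, n - r), min(bn, n + r + 1)):
            w = 1.0 / (1.0 + abs(k - m) + abs(l - n))  # assumed weights
            out += w * F[k, l]
            wsum += w
    return out / wsum
```

When all blocks in the window are identical, the weighted average returns the block unchanged, so flat regions pass through without bias.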
To avoid overfiltering a block centered at (m,n) in a textured area, its neighboring block located at (k,l) is excluded from the filtering operation if Eq. 9 is satisfied:

\frac{|A(k,l) - A(m,n)|}{A(m,n)} > T,  (9)

where the threshold T is set to 0.1 empirically in this letter.
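The exclusion test is a one-line predicate. The relative-difference form below is an assumption consistent with the activity measure; the letter fixes only the empirical threshold of 0.1.

```python
def exclude_neighbor(act_center, act_nbr, t=0.1):
    """Skip a neighboring block whose activity differs too much from the
    center block's (relative-difference form assumed; T = 0.1 as in the
    letter). This protects textured areas from overfiltering."""
    return abs(act_nbr - act_center) > t * (act_center + 1e-9)
```

A neighbor with nearly the same activity is kept, while a much busier (or much smoother) neighbor is dropped from the average.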
The proposed algorithm was applied to video sequences compressed by the SVC codec from Microsoft Research Asia, which covers all testing points of core experiment 1 (Ref. 9). The Microsoft SVC coding scheme is based on block-based motion-compensated temporal filtering followed by a 2-D spatial wavelet decomposition.
A "Foreman" sequence decoded by this codec is taken as input. To evaluate the performance of the proposed algorithm, three existing methods (Refs. 1, 3, 7) that do not use prior knowledge of quantization parameters are compared. Since some reference methods are designed for postprocessing of still images, for fairness of comparison only the luminance (Y) components are used. The postprocessed images are given in Figs. 2b, 2c, and 2d. From Fig. 2, it is evident that the proposed method outperforms the compared methods, removing blocking artifacts effectively while retaining edge sharpness. This validates the adaptive filtering process in the DHT domain.
In the preceding experiments, the per-frame running time of our method is comparable to that of the simple spatial-domain methods in Refs. 1, 3, while the wavelet-based method in Ref. 7 consumes considerably more time. We also chose a larger video as input to validate the simplicity of the DHT: the DCT-based method in Ref. 5 takes markedly longer per frame than our method. All experiments were run on a Pentium PC. Of course, the evaluated algorithms were not optimized for real-time applications; the computational-complexity data given here show only that the proposed method may be closer to practical applications from the viewpoint of hardware simplicity.
Table 1 gives the peak SNR (PSNR-Y) results comparing objective quality. Although PSNR is not a good measure for evaluating such techniques, the proposed approach achieves a higher PSNR gain than the method in Ref. 7.
PSNR-Y comparison in decibels.
| “Foreman” Frame | Decoded Video | Pixel (Ref. 1) | Wavelet (Ref. 7) | H.263 (Ref. 3) | Proposed |
Because human eyes are the final judge of video quality, we conducted a subjective test of the deblocking results according to the double-stimulus continuous quality scale (DSCQS) method recommended by ITU-R BT.500-10 (Ref. 10). The mean opinion scores (MOS) were rescaled to a range of 0 to 100, and the difference mean opinion score (DMOS) was calculated as the difference between the scores of the original video and the test video. The DMOS values of the method in Ref. 3 and of the proposed method are compared in Table 2, which shows that the subjective rating of the proposed method is significantly better.
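The score processing described above amounts to a linear rescaling followed by a difference. The raw 1-to-5 opinion-score range below is an assumption (typical of BT.500 category scales), used only to make the sketch concrete:

```python
def rescale_mos(raw, lo=1.0, hi=5.0):
    """Map a raw opinion score onto 0..100 (1..5 raw range assumed)."""
    return 100.0 * (raw - lo) / (hi - lo)

def dmos(mos_original, mos_test):
    """Difference mean opinion score: original minus test, both on 0..100.
    Larger DMOS means larger perceived quality loss in the test video."""
    return mos_original - mos_test
```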
| Sequence | Decoded Video | H.263 (Ref. 3) | Proposed |
A postprocessing algorithm for blocking-artifact removal in the DHT domain was proposed. The algorithm removes blocking artifacts effectively while preserving image details and object edges well. It is a versatile method that requires no prior knowledge of quantization parameters and features low computational complexity. Since the basic operation unit of the method is a block, and the DHT is inherently simple and computationally efficient, the algorithm is well suited to hardware implementation and promising for real-time video postprocessing in handheld devices.
This work was supported by National Natural Science Foundation of China under Grant No. 60502034 and the Shanghai Rising-Star Program under Grant No. 05QMX1435.