Image denoising aims to recover a digital image from its noisy observation by exploiting statistical features within the
given noisy image. Most denoising methods perform well at low noise levels but lose efficiency at higher ones. In this
paper, we propose a novel image denoising method, which restores an image by exploiting the correlations between the
noisy image and the images retrieved from the cloud. Given a noisy image, we first retrieve relevant images based on
feature-level similarity. These images are then geometrically aligned to the noisy image to increase global statistical
correlation. Using the aligned images as references, we propose recovering the image with patch-level noise removal.
For each noisy patch, we first retrieve similar patches from the references and stack these patches (including the noisy
one) into a three-dimensional (3D) group. We then obtain noise-free (NF) patches by collaborative filtering over the
3D groups. The recovered NF patches are aggregated to produce the desired NF image. Experimental results
demonstrate that our scheme significantly outperforms state-of-the-art methods in terms of both
objective and subjective quality.
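The patch-level pipeline above (stack the noisy patch with matched reference patches into a 3D group, filter the group collaboratively, keep the filtered noisy slice) can be sketched as follows. The hard-thresholding in a 3D DCT domain and the threshold value are illustrative assumptions, not the paper's exact filter:

```python
import numpy as np
from scipy.fft import dctn, idctn

def collaborative_filter(group, threshold):
    """Hard-threshold the 3D group in the DCT domain (a BM3D-style sketch)."""
    coeffs = dctn(group, norm="ortho")
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return idctn(coeffs, norm="ortho")

def denoise_patch(noisy_patch, reference_patches, threshold=0.5):
    """Recover one patch from its 3D group of similar reference patches."""
    # Stack the noisy patch together with its matched reference patches.
    group = np.stack([noisy_patch] + list(reference_patches), axis=0)
    filtered = collaborative_filter(group, threshold)
    # The first slice is the filtered estimate of the noisy patch.
    return filtered[0]
```

In the full scheme each recovered patch would then be aggregated (e.g. averaged over overlapping positions) into the output image.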
Inter prediction is an important component of video coding: it exploits the temporal correlation between frames and
significantly reduces redundancy in video sequences. In this paper, we propose a predictive patch matching scheme for inter
prediction based on template matching prediction. In addition to the surrounding reconstructed pixels that form the template
in template matching prediction, the proposed patch matching can also utilize the predicted pixels generated by the
traditional motion prediction. A linear combination of the reconstructed template and the predicted pixels makes it possible to
synthesize a prediction while preserving the local variations of the target block. Furthermore, a mode selection
mechanism is introduced to adaptively select the predictive patch matching at the sub-block level. Experimental results
demonstrate the effectiveness of the proposed predictive patch matching: consistent coding gains are achieved by our
scheme at both low and high bit rates.
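As a sketch of the matching criterion, a candidate patch can be scored by blending its template distortion with its distortion against the motion-compensated prediction. The SAD metric and the equal weighting `alpha = 0.5` are illustrative assumptions, not values from the paper:

```python
import numpy as np

def match_cost(candidate_tmpl, candidate_block, template, mc_pred, alpha=0.5):
    """Blend template distortion with distortion against the
    motion-compensated prediction (alpha is an assumed weighting)."""
    tmpl_cost = np.abs(candidate_tmpl - template).sum()   # SAD on the template
    mc_cost = np.abs(candidate_block - mc_pred).sum()     # SAD on predicted pixels
    return alpha * tmpl_cost + (1.0 - alpha) * mc_cost

def best_candidate(candidates, template, mc_pred, alpha=0.5):
    """candidates: list of (template_pixels, block_pixels) pairs sampled
    from previously reconstructed frames; returns the best block."""
    costs = [match_cost(t, b, template, mc_pred, alpha) for t, b in candidates]
    return candidates[int(np.argmin(costs))][1]
```

A mode-selection mechanism, as in the paper, would then compare this prediction against the conventional one per sub-block and signal the choice.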
The discrete wavelet transform (DWT) has been widely used in scalable video coding for its advantages in multi-resolution analysis and subband decomposition. In this paper, a spatially scalable video coding system based on the H.264 coding method and the in-band overcomplete discrete wavelet transform (ODWT) is proposed, which combines the good compression performance of H.264 in the low-frequency subband with the efficient in-band motion estimation of ODWT in the wavelet domain. Intra prediction, the coefficient scan order, and inter prediction are improved to overcome the inefficiency of H.264 coding in the high-frequency subbands caused by their different pixel distribution properties. Through a series of subband analyses and statistical measurements on the three high-frequency decompositions, the intra prediction directions are optimized and three subsets of prediction modes are presented for the three high-frequency subbands,
respectively; they save over 30% of the intra-mode bits with similar performance. Moreover, novel zigzag scan tables are proposed to improve coding efficiency by exploiting the oriented frequency characteristics of each high band. For inter prediction, an adaptive motion estimation method is proposed in which the motion information of the low band is adaptively and effectively reused to obtain more accurate motion vectors and more efficient motion compensation when coding the high bands. Experimental results show that the proposed methods together endow the spatially scalable video coding system with over 0.4 dB gain in PSNR and a 10.4% bit-rate reduction.
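For concreteness, a one-level 2D Haar DWT (the simplest critically sampled stand-in for the overcomplete transform used in the paper) splits a frame into the low band LL and the three high-frequency subbands LH, HL, and HH that the intra-prediction, scan, and motion-estimation improvements target:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT; a minimal illustrative stand-in, not the
    paper's ODWT. Returns (LL, LH, HL, HH) subbands at half resolution."""
    # Average/difference along rows (vertical direction).
    a = (img[0::2] + img[1::2]) / 2.0
    d = (img[0::2] - img[1::2]) / 2.0
    # Average/difference along columns (horizontal direction).
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0  # low band, carried by H.264 coding
    HL = (a[:, 0::2] - a[:, 1::2]) / 2.0  # horizontal-frequency detail
    LH = (d[:, 0::2] + d[:, 1::2]) / 2.0  # vertical-frequency detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0  # diagonal detail
    return LL, LH, HL, HH
```

Each of the three high-frequency subbands has a distinct orientation, which is why the paper tailors a separate intra-mode subset and scan table to each.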