Imperceptibility and robustness are two key but conflicting requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness, while high-strength watermarking achieves good robustness but often suffers from embedding distortions that degrade the visual quality of the host media. This paper proposes a video watermarking algorithm that strikes a fine balance between imperceptibility and robustness using a motion-compensated wavelet-based visual attention model (VAM). The proposed VAM includes both spatial and temporal cues for visual saliency: the spatial modelling uses the spatial wavelet coefficients, while the temporal modelling accounts for both local and global motion, to arrive at a spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm in which a two-level watermarking weighting parameter map is generated from the VAM saliency maps and data are embedded into the host video frames according to the visual attentiveness of each region. By avoiding high-strength watermarking in visually attentive regions, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms state-of-the-art video visual attention methods in joint saliency detection and low computational complexity. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (non-blind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use a VAM. The proposed visual attention-based video watermarking thus attains visual quality similar to that of low-strength watermarking and robustness similar to that of high-strength watermarking.
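The two-level weighting described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function name, the threshold, and the two strength values `alpha_low`/`alpha_high` are all assumed for the example, and the saliency map is a toy array rather than the output of the proposed VAM.

```python
import numpy as np

def two_level_weighting_map(saliency, alpha_low=0.1, alpha_high=0.9, threshold=0.5):
    """Map a saliency map to a two-level watermark-strength map.

    Visually attentive regions (high saliency) receive the low embedding
    strength; unattentive regions receive the high strength. All parameter
    values here are illustrative assumptions, not the paper's settings.
    """
    return np.where(saliency > threshold, alpha_low, alpha_high)

# Toy saliency map: an attentive centre region inside an unattentive border.
sal = np.zeros((4, 4))
sal[1:3, 1:3] = 0.8

alpha = two_level_weighting_map(sal)
# Border pixels get alpha_high (strong embedding); centre pixels get alpha_low.
```

The embedder would then scale the watermark signal for each region by the corresponding entry of `alpha`, keeping distortion low exactly where the viewer's attention is expected to rest.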

A framework for evaluating wavelet-based watermarking schemes against scalable-coded visual media content adaptation attacks is presented. The framework, Watermark Evaluation Bench for Content Adaptation Modes (WEBCAM), aims to facilitate controlled evaluation of wavelet-based watermarking schemes under MPEG-21 Part 7 digital item adaptation (DIA). WEBCAM accommodates all major wavelet-based watermarking schemes in a single generalised framework by considering a global parameter space, from which the optimum parameters for a specific algorithm may be chosen. WEBCAM considers the traversal of media content along various links and the required content adaptations at various nodes of media supply chains. In this paper, content adaptation is emulated by JPEG2000 coded bit-stream extraction at various spatial resolutions and quality levels of the content. The proposed framework is beneficial not only as an evaluation tool but also as a design tool for new wavelet-based watermarking algorithms, allowing the picking and mixing of available tools and the finding of optimum design parameters.
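The kind of parameter sweep such an evaluation bench performs can be sketched as a loop over adaptation modes. This is a hypothetical skeleton, not WEBCAM's implementation: the adaptation levels, the `detect_after_adaptation` stub, and its decaying correlation model are all illustrative assumptions standing in for real JPEG2000 bit-stream extraction and watermark detection.

```python
import itertools

# Assumed adaptation modes: how many resolution levels are discarded and how
# many quality layers are retained during bit-stream extraction.
resolution_discards = [0, 1, 2]
quality_layers_kept = [1, 2, 4]

def detect_after_adaptation(res_discard, layers_kept):
    """Stub detector: in a real bench this would extract the code stream at the
    given resolution/quality point, decode, and run the watermark detector.
    Here a dummy correlation simply decays with heavier adaptation."""
    return max(0.0, 1.0 - 0.2 * res_discard - 0.1 / layers_kept)

# Sweep the (emulated) adaptation parameter space and record detection scores.
results = {(r, q): detect_after_adaptation(r, q)
           for r, q in itertools.product(resolution_discards, quality_layers_kept)}
```

Recording a detection score per node of the parameter space is what allows the bench to compare schemes under identical adaptation conditions and to pick the operating point that best balances robustness against content scalability.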

In this paper, a universal embedding distortion model for wavelet-based watermarking is presented. The present work extends our previous work on modelling embedding distortion for watermarking algorithms that use orthonormal wavelet kernels to non-orthonormal wavelet kernels, such as biorthogonal wavelets. Using a common framework for major wavelet-based watermarking algorithms and Parseval's energy conservation theorem for orthonormal transforms, we propose that the distortion performance, measured using the mean square error (MSE), is proportional to the sum of the energy of the wavelet coefficients modified by watermark embedding. The extension of the model to non-orthonormal wavelet kernels is obtained by rescaling this sum of energy with a weighting parameter that follows the energy conservation theorems of wavelet frames. The proposed model is useful for finding optimum input parameters, such as the wavelet kernel, coefficient selection and subband choice, for a given wavelet-based watermarking algorithm.
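The orthonormal case of the model can be verified numerically. The sketch below, assuming a single-level orthonormal Haar kernel on a 1-D signal (the paper's setting is more general), adds a watermark perturbation to the detail coefficients and checks that the pixel-domain MSE equals the energy of the coefficient changes divided by the signal length, as Parseval's theorem predicts; for a biorthogonal kernel this equality would only hold after rescaling by the frame-energy weighting parameter.

```python
import numpy as np

def haar_fwd(x):
    """Single-level orthonormal Haar analysis: approximation and detail bands."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_inv(a, d):
    """Single-level orthonormal Haar synthesis (exact inverse of haar_fwd)."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal(64)              # toy host signal
a, d = haar_fwd(x)

delta = 0.05 * rng.standard_normal(d.size)   # additive watermark in the detail band
xw = haar_inv(a, d + delta)                  # watermarked signal

mse = np.mean((xw - x) ** 2)
predicted = np.sum(delta ** 2) / x.size      # energy of modified coefficients / N
# Orthonormality implies the two agree (Parseval).
assert np.isclose(mse, predicted)
```

Because the transform is orthonormal, the coefficient-domain perturbation energy passes unchanged into the signal domain; the model's usefulness is that `predicted` can be computed before synthesis, letting one compare kernels, coefficient selections and subbands without reconstructing the watermarked media.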