In the context of stereo video, disparity-coherent watermarking has been introduced to provide superior robustness against virtual view synthesis, as well as to improve perceived fidelity. Still, a number of practical considerations have been overlooked, in particular the impact of the underlying depth estimation tool on performance. In this article, we explore the interplay between various stereo video processing primitives and highlight a few take-away lessons that should be accounted for to improve the performance of future disparity-coherent watermarking systems. In particular, we highlight how correspondences lost during the stereo warping process impact watermark detection, thereby calling for innovative designs.
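The notion of lost correspondences can be illustrated with a toy forward-warping model. The sketch below (a simplified, hypothetical model, not the paper's actual pipeline) warps a per-pixel watermark from the left view to a synthesized view using a horizontal disparity map; samples that fall outside the frame simply never reach the synthesized view, which is one way watermark energy is lost during view synthesis.

```python
import numpy as np

def warp_by_disparity(watermark, disparity):
    """Forward-warp a per-pixel watermark signal using a horizontal
    disparity map (toy model).  Pixels whose target position falls
    outside the frame are lost correspondences: their watermark
    samples are unavailable to a detector operating on the
    synthesized view."""
    h, w = watermark.shape
    warped = np.zeros_like(watermark)
    covered = np.zeros((h, w), dtype=bool)  # which target pixels received a sample
    for y in range(h):
        for x in range(w):
            xt = x + int(round(disparity[y, x]))
            if 0 <= xt < w:
                warped[y, xt] = watermark[y, x]
                covered[y, xt] = True
    return warped, covered
```

For a constant disparity of d pixels, a fraction d/w of each row is left uncovered, giving a rough sense of how detection statistics degrade as the synthesized view moves away from the captured one.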
Stereo video content calls for new watermarking strategies, e.g. to achieve robustness against virtual view synthesis. Prior works focused either on inserting the watermark in an invariant domain or on guaranteeing that the watermarks introduced in the left and right views are coherent with the disparity of the scene. However, the first approach raises fidelity issues, while the second requires side information at detection, i.e. the detector is not blind. In this paper, we propose a new blind detection procedure for disparity-coherent watermarks. In a nutshell, the detector relies on cross-correlation to aggregate the scattered pieces of the embedded reference watermark pattern, rather than warping the reference pattern according to the parameters of the current view prior to detection. Reported experimental results indicate that this revisited detector successfully retrieves embedded watermarks even after lossy compression.
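The core idea of correlation-based blind detection can be sketched in one dimension. The following is only an illustration under assumed parameters (signal length, embedding strength), not the paper's detector: instead of warping the reference pattern to the current view, the detector cross-correlates the received signal with the un-warped reference and looks for a correlation peak, which makes it insensitive to the unknown view-dependent displacement.

```python
import numpy as np

def blind_detect(signal, pattern):
    """Return the peak of the normalized circular cross-correlation
    between a received signal and the reference watermark pattern.
    A large peak at *some* lag indicates the watermark is present,
    without knowing the displacement (toy 1-D sketch)."""
    corr = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(pattern))).real
    corr /= np.linalg.norm(signal) * np.linalg.norm(pattern) + 1e-12
    return corr.max()

rng = np.random.default_rng(1)
pattern = rng.standard_normal(4096)   # reference watermark
host = rng.standard_normal(4096)      # unmarked host content
shift = 137                           # unknown view-dependent displacement
marked = host + 0.5 * np.roll(pattern, shift)

score_marked = blind_detect(marked, pattern)  # high: peak at lag 137
score_clean = blind_detect(host, pattern)     # low: noise-level correlations
```

The FFT-based correlation evaluates all lags at once, so detection cost does not grow with the range of displacements to be searched.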
A number of technologies claim to be robust against content re-acquisition with a camera recorder, e.g. watermarking and content fingerprinting. However, the benchmarking campaigns required to evaluate the impact of the camcorder path are tedious, and such evaluation is routinely overlooked in practice. Due to the interaction between numerous devices, camcording displayed content modifies the video essence in various ways, including geometric distortions, temporal transforms, non-uniform and varying luminance transformations, saturation, color alteration, etc. It is necessary to clearly understand the different phenomena at stake in order to design efficient countermeasures or to build accurate simulators which mimic these effects. As a first step in this direction, we focus in this study solely on luminance transforms. In particular, we investigate three different alterations, namely: (i) the spatial non-uniformity, (ii) the steady-state luminance response, and (iii) the transient luminance response.
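Two of the three alterations can be mimicked with a very simple simulator. The sketch below uses assumed, hypothetical models and parameter values (radial vignetting for spatial non-uniformity, a gamma curve for the steady-state response); it is not the model fitted in the study, and the transient response would additionally require temporal filtering across frames.

```python
import numpy as np

def camcorder_luminance(frame, gamma=1.8, vignette_strength=0.35):
    """Toy camcorder-path luminance simulator (assumed models):
    (i) spatial non-uniformity as radial vignetting, and
    (ii) steady-state luminance response as a gamma curve.
    `frame` holds luminance values in [0, 1]."""
    h, w = frame.shape
    y, x = np.mgrid[0:h, 0:w]
    # normalized radial distance from the frame center, in [0, 1]
    r = np.hypot((y - h / 2) / (h / 2), (x - w / 2) / (w / 2)) / np.sqrt(2)
    gain = 1.0 - vignette_strength * r**2            # (i) spatial non-uniformity
    out = np.clip(frame * gain, 0.0, 1.0) ** gamma   # (ii) steady-state response
    return out
```

Such a forward model is mainly useful for cheap benchmarking: applying it to marked content approximates the camcorder path without running a physical re-acquisition campaign.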