Identity documents, such as ID cards, passports, and driver's licenses, contain textual information, a portrait of the legitimate holder, and possibly other biometric characteristics such as a fingerprint or handwritten signature. As prices for digital imaging technologies fall and the equipment becomes more widely available, forging documents has become dramatically easier, and the number of counterfeiters able to forge them effectively has grown accordingly. Today, with only limited technical knowledge and a small amount of money, a counterfeiter can effortlessly replace a photo or modify identity information on a legitimate document, to the point that the result is very difficult to distinguish from the original.
This paper proposes a virtually fraud-proof ID document based on a combination of three data-hiding technologies: digital watermarking, 2-D bar codes, and Copy Detection Patterns, plus additional biometric protection. As will be shown, this combination of data-hiding technologies protects the document against any forgery, in principle without requiring any other security features. To prevent a genuine document from being used by an illegitimate user, biometric information is also covertly stored in the ID document, to be used for identification at the detector.
Technologies for making high-quality copies of documents are becoming more available, cheaper, and more efficient. As a result, the counterfeiting business engenders huge losses, ranging from 5% to 8% of worldwide sales of brand products, and endangers the reputation and value of the brands themselves. Moreover, the growth of the Internet drives the business of counterfeited documents (fake IDs, university diplomas, checks, and so on), which can be bought easily and anonymously from hundreds of companies on the Web. The incredible progress of digital imaging equipment has put in question the very possibility of verifying the authenticity of documents: how can we discern genuine documents from seemingly "perfect" copies? This paper proposes a solution based on creating digital images with specific properties, called copy-detection patterns (CDPs), which are printed on arbitrary documents, packages, etc. CDPs make optimal use of an "information loss principle": every time an image is printed or scanned, some information about the original digital image is lost. That principle applies even to the highest-quality scanning, digital imaging, printing, or photocopying equipment available today, and will likely remain true tomorrow. By measuring the amount of information contained in a scanned CDP, the CDP detector can decide on the authenticity of the document.
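The information loss principle can be illustrated with a toy numeric sketch. The print-scan channel model below (a fixed low-pass blur plus Gaussian sensor noise) and the correlation-based score are illustrative assumptions, not the paper's actual channel model or detector; the point is only that a counterfeit, which must survive two print-scan cycles, retains measurably less information about the original digital CDP than a genuine print, which survives one.

```python
import numpy as np

rng = np.random.default_rng(1)

def print_and_scan(img, noise=0.4):
    """Crude stand-in for one print-scan cycle (assumed model, not a
    physical simulation): a slight low-pass blur plus sensor noise."""
    blurred = 0.5 * img + 0.25 * (np.roll(img, 1) + np.roll(img, -1))
    return blurred + rng.normal(0.0, noise, img.shape)

def cdp_score(scan, original):
    # Detection score: correlation with the original digital CDP.
    return np.corrcoef(scan, original)[0, 1]

cdp = rng.choice([0.0, 1.0], 4096)   # maximum-entropy digital CDP

genuine = print_and_scan(cdp)        # one print-scan cycle
# A counterfeiter must scan the genuine print and reprint it,
# losing information a second time.
fake = print_and_scan(genuine)

print(cdp_score(genuine, cdp))       # higher score
print(cdp_score(fake, cdp))          # lower score -> rejected by a threshold
```

A threshold placed between the two score populations then separates originals from copies; the real system would calibrate that threshold on the actual printing and scanning equipment.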
Proc. SPIE. 5020, Security and Watermarking of Multimedia Contents V
KEYWORDS: Signal to noise ratio, Sensors, Interference (communication), Distortion, Linear filtering, Digital watermarking, Signal processing, Distance measurement, Electronic filtering, Signal detection
Most watermarking applications require that the embedded watermark be imperceptible. Accordingly, perceptual masking models that identify unperceived regions of the signal were adapted in a straightforward manner to watermarking. The derived mask -- or slack -- is often interpreted as the maximal allowed distortion within a given region of the signal; it is used in many watermark embedding methods to shape a white-spectrum message in the relevant transform domain (space, frequency). Such usage of the mask is intuitively satisfying, since imperceptibility is indeed guaranteed; yet it discards any guarantee of robustness to attacks -- another fundamental, necessary property of watermarks. The trade-off between fidelity and robustness has so far been little addressed, due in great part to the absence of an accurate measure of perceptual distortion. In this paper we study this trade-off using Watson's measure of perceptual distance between two images as the measure of fidelity. Under a constrained perceptual distance, the embedder must maximize the watermark's robustness while assuming that a knowledgeable attacker will attempt to remove the watermark. Solving this problem leads to an optimized watermark strength for each location of the content.
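The flavor of "an optimized strength for each location" can be sketched with a deliberately simplified stand-in for the paper's formulation: take the sum of per-location strengths as a robustness proxy and a slack-normalized quadratic form as the perceptual-distance constraint (not Watson's full model). Under those assumptions, Cauchy-Schwarz gives the optimum in closed form: the strength at each location is proportional to the *square* of its slack, not to the slack itself.

```python
import numpy as np

def optimal_strengths(slacks, distance_budget):
    """Per-location strengths s_i maximizing the robustness proxy
    sum(s_i) subject to the perceptual constraint
    sum((s_i / m_i)**2) <= D**2, where m_i is the slack at location i.
    By Cauchy-Schwarz, the optimum is s_i proportional to m_i**2
    (equality holds when s_i / m_i is proportional to m_i)."""
    slacks = np.asarray(slacks, dtype=float)
    return distance_budget * slacks**2 / np.sqrt(np.sum(slacks**2))

m = np.array([0.5, 1.0, 2.0, 4.0])       # hypothetical per-location slacks
s = optimal_strengths(m, distance_budget=1.0)

# The perceptual budget is met with equality:
print(np.sum((s / m)**2))                 # ~ 1.0
```

Note how this already departs from the common heuristic of simply embedding at full slack (s_i = m_i): once robustness is the objective, the energy is concentrated more aggressively in the high-slack locations.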
The two main objectives of this paper are: (1) to better define the public-key (PK) watermarking problem -- in terms of properties, design requirements, and usage -- and (2) to propose one solution to the problem using neural network functions. Our survey of public-key watermarking begins with a review of the state of the art. Different aspects of PK watermarking are then discussed, including basic robustness properties, usage of PK systems, attacks on the public and secret detectors, types of PK strategies, and strong vs. weak PK watermarking systems. Accordingly, a PK system using multi-layer neural network (NN) functions is proposed to match many PK system requirements. The approach is briefly presented for the linear case. Theoretical results are given, showing that it is possible to design PK systems approaching the detection performance of secret-key watermarking -- a very unusual feature of PK systems. Experimental results are given on both simulated signals and images, confirming the predicted results and showing great resistance to JPEG compression. The paper ends with openings for new research directions.
Masking models are mostly used in data compression algorithms, where they serve to shape the quantization noise. They were introduced in watermarking to indicate the regions where the watermark could be inserted without perceptible artifacts. This made it possible to embed more watermark energy, for a given absolute distortion constraint, than without a mask. Yet little attention has been paid to the consequences of using these masks with respect to detection performance. In this work, it is shown that blind use of masking models facilitates the attacker's role, and can result in severe decreases in the detection statistic at the detector, even for reasonable attack distortions.
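Why blind masking helps the attacker can be seen in a toy spread-spectrum simulation. All of the modeling below is assumed for illustration (the slack distribution, the attenuation attack, the normalized-correlation detector); it is not the paper's experimental setup. The key observation is that the mask is computable from the image, so an attacker can attenuate hardest exactly where the mask -- and therefore the watermark energy -- is largest, hurting a mask-shaped watermark more than a flat watermark of the same embedding energy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
x = rng.normal(0.0, 10.0, n)                    # host signal samples
mask = np.abs(rng.normal(1.0, 0.5, n)) + 0.1    # per-sample perceptual slack
p = rng.choice([-1.0, 1.0], n)                  # white spreading sequence (key)

# Mask-shaped watermark (full slack used everywhere) vs. a flat
# watermark with the same total embedding energy.
w_masked = mask * p
w_flat = p * np.sqrt(np.mean(mask**2))

def detect(y, w):
    # Normalized-correlation detection statistic.
    return y @ w / np.linalg.norm(w)

# Mask-aware attack: attenuate more where the slack (hence the
# watermark) is large -- perceptually cheap, since large slack means
# large tolerated distortion.
atten = 1.0 - 0.5 * mask / mask.max()

stats = {name: detect((x + w) * atten, w)
         for name, w in [("masked", w_masked), ("flat", w_flat)]}
print(stats)   # the mask-shaped watermark loses more of its statistic
```

Both watermarks start from the same unattacked statistic (their common embedding energy), yet the attack strips proportionally more energy from the mask-shaped one, which is the effect the abstract describes.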