In earlier work we proposed a watermarking algorithm for JPEG/MPEG streams that is based on selectively discarding high-frequency DCT coefficients. Like any watermarking algorithm, the performance of our method must be evaluated by the robustness of the watermark, the size of the watermark, and the visual degradation the watermark introduces. These performance factors are controlled by three parameters, namely the maximal coarseness of the quantizer used in re-encoding, the number of DCT blocks used to embed a single watermark bit, and the lowest DCT coefficient that we permit to be discarded. It is possible to determine these parameters experimentally. In this paper, however, we follow a more rigorous approach and develop a statistical model for the watermarking algorithm. Using this model we derive the probability that a label bit cannot be embedded. The resulting model can be used, for instance, for maximizing the robustness against re-encoding and for developing adequate error-correcting codes for the label bit string.
Digital watermarks are used to protect digital images against illegal reproduction and tampering. In the selective block assignment process, the image is divided into N X N pixel blocks and each block is Discrete Cosine Transformed (DCT). A set of blocks is then selectively chosen to encode the copyright message. Each selected block is incremented by a value; to maintain the invisibility of the watermark, the increment must lie within a certain range. The selection of blocks is based on a measurement of the content. Depending on the amount of message data to be stored and the signal-to-noise ratio (SNR) required of the resultant image, a threshold is decided. In practice, the threshold is set such that a duplicated message or an error correction mechanism can also be included in order to increase robustness. The decoding process is carried out by using the threshold values to recover the locations that carry watermark information. The watermarked image is then subtracted from the original image to obtain the secret data. Simulation results show that the watermarked image looks visually identical to the original, with an SNR of 44.7 dB for Lenna and 43 dB for Airplane at a size of 256 X 256 pixels.
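The selection-and-increment logic described above can be sketched in a simplified spatial-domain form (the paper operates on DCT coefficients; the block size, content threshold, and increment used here are illustrative assumptions):

```python
import random

BLOCK = 8        # N x N block size
THRESH = 100.0   # content threshold for selecting blocks (assumed value)
DELTA = 2        # small increment, kept low so the mark stays invisible

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def block_starts(img):
    for r in range(0, len(img), BLOCK):
        for c in range(0, len(img[0]), BLOCK):
            yield r, c

def block_pixels(img, r, c):
    return [img[r + i][c + j] for i in range(BLOCK) for j in range(BLOCK)]

def embed(img, bits):
    # add +DELTA (bit 1) or -DELTA (bit 0) to every pixel of each selected block
    out = [row[:] for row in img]
    k = 0
    for r, c in block_starts(img):
        if k < len(bits) and variance(block_pixels(img, r, c)) > THRESH:
            d = DELTA if bits[k] else -DELTA
            for i in range(BLOCK):
                for j in range(BLOCK):
                    out[r + i][c + j] += d
            k += 1
    return out

def extract(orig, marked, nbits):
    # re-run the selection on the original, then read the sign of the difference
    bits = []
    for r, c in block_starts(orig):
        if len(bits) < nbits and variance(block_pixels(orig, r, c)) > THRESH:
            bits.append(1 if marked[r][c] - orig[r][c] > 0 else 0)
    return bits
```

As in the paper, extraction here requires the original image, matching the subtraction-based decoding.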
Digital watermarks have been proposed as a method for discouraging illicit copying and distribution of copyright material. One approach to transform domain image watermarking is to divide the image into separate blocks and compute the transform of each block. The watermark is inserted in the transform domain and the inverse transform is then computed. Such an approach is particularly effective against JPEG compression, where 8 X 8 blocks are used in conjunction with the DCT. Using small blocks allows the watermark to be embedded adaptively as a function of the luminance and texture. However, for small block sizes, blocking artifacts are observed when the strength of the watermark is increased. In order to circumvent this problem, we propose a new approach based on Lapped Orthogonal Transforms (LOT) in which the watermark is inserted adaptively into the LOT domain. Robustness of the watermark to operations such as lossy compression is achieved by using a spread spectrum signal which is added in the LOT domain. The keys used to embed the spread spectrum signal are generated, certified, authenticated and securely distributed using a public key infrastructure containing an electronic copyright office and a certification authority. In addition to the above, we propose using an invisible template to reverse the effects of rotation, rescaling and cropping on a watermarked image. This separate invisible template is based on the properties of the Fourier Transform. Finally, we objectively evaluate the performance of the proposed algorithm in order to demonstrate the robustness of the proposed technique with respect to a number of common image processing operations, including JPEG compression, rotation, scaling and cropping.
The growth of the Internet and the diffusion of multimedia applications require the development of techniques for embedding identification codes into images, in such a way that their authenticity can be guaranteed and/or their copyright protected. In this paper a novel system for image watermarking is presented, which exploits the similarity exhibited by the Discrete Wavelet Transform with respect to models of the Human Visual System in order to robustly hide watermarks. In particular, a model for estimating the sensitivity of the eye to noise, previously proposed for compression applications, is used to adapt the watermark strength to the local content of the image. Experimental results are shown supporting the validity of the approach.
The growth of new imaging technologies has created a need for techniques that can be used for copyright protection of digital images. One approach for copyright protection is to introduce an invisible signal known as a digital watermark into the image. In this paper, we describe digital image watermarking techniques known as perceptual watermarks that are designed to exploit aspects of the human visual system in order to produce a transparent, yet robust watermark.
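One common way such perceptual watermarks exploit the human visual system is to scale the embedding strength by local image activity, since busy (textured) regions mask more noise than flat ones. A minimal sketch, with illustrative base/gain constants:

```python
def local_variance(img, r, c, rad=1):
    # sample variance over a (2*rad+1)^2 neighborhood, clipped at the border
    h, w = len(img), len(img[0])
    vals = [img[i][j]
            for i in range(max(0, r - rad), min(h, r + rad + 1))
            for j in range(max(0, c - rad), min(w, c + rad + 1))]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def adaptive_strength(img, r, c, base=0.5, gain=0.01):
    # embed more strongly where texture masks it; base and gain are assumed
    return base + gain * local_variance(img, r, c)
```

On an image whose left half is flat and whose right half is a checkerboard, the strength stays at the base value in the flat region and rises sharply in the textured one.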
Video authentication techniques are used to prove the originality of received video content and to detect malicious tampering. Existing authentication techniques protect every single bit of the video content and do not allow any form of manipulation. In real applications, this may not be practical. In several situations, compressed videos need to be further processed to accommodate various application requirements. Examples include bitrate scaling, transcoding, and frame rate conversion. The concept of asking each intermediate processing stage to add authentication codes is flawed in practical cases. In this paper, we extend our prior work on JPEG-surviving image authentication techniques to video. We first discuss issues of authenticating MPEG videos under various transcoding situations, including dynamic rate shaping, requantization, frame type conversion, and re-encoding. Different situations pose different technical challenges in developing robust authentication techniques. In the second part of this paper, we propose a robust video authentication system which accepts some MPEG transcoding processes but is able to detect malicious manipulations. It is based on unique invariant properties of the transcoding processes. Digital signature techniques as well as public key methods are used in our robust video authentication system.
Current 'invisible' watermarking techniques aim at producing watermarked data that suffer little or no quality degradation and are perceptually identical to the original versions. The most common uses of a watermarked image are (1) image viewing and display, and (2) extraction of the embedded watermark in subsequent copy protection applications. The issue is often centered on the robustness of the watermark for detection and extraction. In addition to robustness studies, a fundamental question centers on the utilization value of the watermarked images beyond perceptual quality evaluation. Essentially we have to study how the inserted watermarks affect the subsequent processing and utility of images, and what watermarking schemes we can develop to cater to these processing tasks. This work focuses on the study of watermarking of images used in automatic personal identification technology based on fingerprints. We investigate the effects of watermarking fingerprint images on recognition and retrieval accuracy, using a proposed invisible fragile watermarking technique for image verification applications on a specific fingerprint recognition system. We also describe the watermarking scheme, fingerprint recognition and feature extraction techniques used. We believe that watermarking of images will provide value-added protection, as well as copyright notification capability, to the fingerprint data collection processes and subsequent usage.
We propose a watermarking scheme which allows a watermarked image to be authenticated by an authentication agent, without revealing the human-readable content of the image, by combining privacy control with watermarking and authentication mechanisms. This watermarking scheme has universal applicability to data sets such as image, video and audio bit streams. The watermark can be made imperceptible to humans. The use of public key cryptography allows the authentication agent to authenticate without the capability to watermark an image.
The increasing availability of digitally stored information and the development of new multimedia broadcasting services have recently motivated research on copyright protection and authentication schemes for these services. Possible solutions range from low-level systems based upon header descriptions associated with the bit-stream (labeling), up to high-level, holographically inlayed, non-deletable systems (watermarking). This paper is focused on authentication using the labeling approach; a generic framework is first presented and two specific methods are then proposed for the particular cases of still images and videos. The resistance of both methods to JPEG and MPEG-2 compression, as well as their sensitivity to image manipulations, is evaluated.
Spread spectrum has been used as a technique for secure communications for a long time. For still image watermarking, spread spectrum has been the method of choice for some time. In this paper we argue that spread spectrum in the form of CDMA has a more natural application in the watermarking of uncompressed digital video. The reason for this sentiment is that digital video, by virtue of its time-space property, fits direct sequence spread spectrum more readily. The problem, however, is that conventional CDMA achieves its data-hiding capability by a massive increase in bitrate. Successful implementation of CDMA for video watermarking therefore requires a reformulation of the concept. To this end, digital video is modeled as a bitplane stream along the time axis. Using a modified m-sequence, bitplanes of specific order are pseudorandomly marked for watermarking. Then, the desired watermark is mapped to a single bitplane and spread via another 2D m-sequence, not necessarily related to the first one, along a stream parallel to that of the video. The tagged planes are removed and replaced by the spread watermark. We show that the above approach resists noise as well as attacks that destroy synchronization at the watermark detector. Such attacks may include regular and/or random frame removal.
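The XOR-based spreading of a watermark bitplane can be illustrated as follows; the LFSR below is a generic maximal-length generator standing in for the paper's modified m-sequence, and its register width, taps, and seed are assumptions:

```python
def lfsr_bits(seed, taps, n):
    # Fibonacci LFSR; with maximal-length taps this yields an m-sequence.
    # The 16-bit register and tap set used below are illustrative.
    state = seed
    out = []
    for _ in range(n):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << 15)
    return out

def spread(plane, key_bits):
    # XOR a bitplane with a pseudorandom pattern; applying the same
    # key a second time recovers the original plane
    h, w = len(plane), len(plane[0])
    return [[plane[r][c] ^ key_bits[r * w + c] for c in range(w)]
            for r in range(h)]
```

Because XOR is an involution, the detector de-spreads a tagged plane simply by repeating the operation with the shared key.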
This paper presents a video watermarking technology for broadcast monitoring. The technology has been developed at the Philips Research Laboratories in Eindhoven in the context of the European ESPRIT project VIVA (Visual Identity Verification Auditor). The aim of the VIVA project is to investigate and demonstrate a professional broadcast surveillance system. The key technology in the VIVA project is a new video watermarking technique by the name of JAWS (Just Another Watermarking System). The JAWS system has been developed such that the embedded watermarks (1) are invisible, (2) are robust with respect to all common processing steps in the broadcast transmission chain, (3) have a very low probability of false alarms, (4) have a large payload at high rate, and (5) allow for a low complexity and a real-time detection. In this paper we present the basic ingredients of the JAWS technology. We also briefly discuss the performance of JAWS with respect to the requirements of broadcast monitoring.
This paper proposes a new approach for digital watermarking and secure copyright protection of videos, the principal aim being to discourage illicit copying and distribution of copyrighted material. The method presented here is based on the discrete Fourier transform (DFT) of three-dimensional chunks of a video scene, in contrast with previous works on video watermarking where each video frame was marked separately, or where only intra-frame or motion compensation parameters were marked in MPEG compressed videos. Two kinds of information are hidden in the video: a watermark and a template. Both are encoded using an owner key to ensure the system security and are embedded in the 3D DFT magnitude of video chunks. The watermark is copyright information encoded in the form of a spread spectrum signal. The template is a key-based grid and is used to detect and invert the effect of frame-rate changes, aspect-ratio modification and rescaling of frames. The template search and matching is performed in the log-log-log map of the 3D DFT magnitude. The performance of the presented technique is evaluated experimentally and compared with a frame-by-frame 2D DFT watermarking approach.
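A sketch of the magnitude-domain embedding and correlation detection on a small chunk; the multiplicative embedding with strength alpha is an assumption for illustration, and the paper's template and log-log-log search are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
chunk = rng.random((8, 16, 16))          # toy video chunk: frames x height x width

F = np.fft.fftn(chunk)                   # 3D DFT of the whole chunk
mag, phase = np.abs(F), np.angle(F)      # phase would be left untouched on inversion

key = np.random.default_rng(42)          # the owner key seeds the spread pattern
w = key.choice([-1.0, 1.0], size=mag.shape)

alpha = 0.05                             # embedding strength (assumed)
mag_marked = mag * (1.0 + alpha * w)     # watermark lives in the DFT magnitude

# detection: correlate the relative magnitude change with the key pattern
corr = float(np.mean((mag_marked / np.maximum(mag, 1e-12) - 1.0) * w)) / alpha
```

A correlation near 1 indicates the key's watermark is present; an unrelated key would yield a value near 0.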
Watermarking techniques have proved to be a good method to protect intellectual copyrights in digital formats. But the ease of processing information on digital platforms also offers many chances to eliminate marks embedded in the data, owing to the wide variety of techniques for modifying information in digital formats. This paper analyzes a selection of the most interesting methods for image watermarking in order to test their qualities. The comparison of these watermarking techniques has revealed new interesting lines of work. Some changes and extensions to these methods are proposed to increase their robustness against common and watermark-specific attacks. This work has been carried out in order to provide the HYPERMEDIA project with an efficient tool for protecting IPR. The objective of this project is to establish an experimental stage for handling and delivering continuous multimedia material (audiovisuals) in a multimedia service environment, allowing the user to navigate in the hyperspace through databases which belong to actors of the service chain, while protecting the IPR of authors or owners.
Many invisible watermarking schemes can be modeled as the addition of a watermark signal to an image, yielding a watermarked variant perceptually similar to the original image. In this paper, we describe a method by which an attacker, given only an image watermarked with such a technique, can attempt to construct an approximation to the watermark. The technique estimates an embedded signal by taking advantage of the inter-pixel correlation typically found in natural images, thereby allowing for a method of both watermark removal and watermark forgery. Furthermore, the approach is computationally inexpensive, making it suitable for attacks on proposed video watermarking schemes.
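The attack described above can be sketched as a denoising step: because neighboring pixels in natural images are highly correlated, the residual after smoothing approximates an additive watermark. The 3x3 mean filter below is a stand-in for whatever predictor an attacker might actually use:

```python
def smooth(img):
    # 3x3 mean filter: a simple predictor exploiting inter-pixel correlation
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [img[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if 0 <= r + dr < h and 0 <= c + dc < w]
            out[r][c] = sum(vals) / len(vals)
    return out

def estimate_watermark(marked):
    # high-frequency residual: the marked image minus its smoothed prediction
    s = smooth(marked)
    return [[marked[r][c] - s[r][c] for c in range(len(marked[0]))]
            for r in range(len(marked))]
```

Subtracting the estimate attempts removal; adding it to another image attempts forgery, all without access to the original.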
Most watermarking methods for images and video that have been proposed are based on ideas from spread spectrum radio communications, namely additive embedding of a (signal-adaptive or non-adaptive) pseudo-noise watermark pattern, and watermark recovery by correlation. Even methods that are not presented as spread spectrum methods often build on these principles. Recently, some skepticism about the robustness of spread spectrum watermarks has arisen, specifically with the general availability of watermark attack software that claims to render most watermarks undetectable. In fact, spread spectrum watermarks and watermark detectors in their simplest form are vulnerable to a variety of attacks. However, with appropriate modifications to the embedding and extraction methods, spread spectrum methods can be made much more resistant to such attacks. In this paper, we systematically review proposed attacks on spread spectrum watermarks. Further, modifications for watermark embedding and extraction are presented to avoid and counter these attacks. Important ingredients are, for example, to adapt the power spectrum of the watermark to the host signal power spectrum, and to employ an intelligent watermark detector with a block-wise multi-dimensional sliding correlator, which can recover the watermark even in the presence of geometric attacks.
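The sliding-correlator idea, re-searching the correlation peak after a geometric displacement, can be sketched in 1D; the key here is a length-7 m-sequence, whereas real detectors slide block-wise in two dimensions:

```python
def correlate(a, b):
    return sum(x * y for x, y in zip(a, b))

def sliding_detect(signal, key):
    # try every cyclic shift of the signal and keep the best correlation
    n = len(key)
    def score(s):
        return correlate([signal[(i + s) % n] for i in range(n)], key)
    best = max(range(n), key=score)
    return best, score(best)
```

Because the m-sequence's off-peak periodic autocorrelation is -1, the correct shift stands out unambiguously even after the signal has been displaced.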
Digital watermarking of multimedia (e.g., audio, images, video, etc.) can be viewed as a communications problem in which the watermark must be transmitted and received through a 'watermark channel.' The watermark channel includes distortions resulting from attacks and may include interference from the original digital data. Most current techniques for embedding digital watermarks in multimedia are based on spread spectrum (SS), although the connection is not always explicit. Current analyses of the watermarking channel are typically limited to the additive white Gaussian noise (AWGN) channel. However, new channel distortions not considered in classical SS are now possible. This paper describes several such manipulations and focuses on signal re-indexing (i.e., re-ordering of samples). The re-indexing channel is shown to behave like a linear filter on average, and the optimal detector, which has linear complexity, is derived. The channel is studied further for the case of spatial direct-sequence SS watermarks. For the conventional detector, it can yield a probability of bit error (PE) of 0.5, the worst possible case. A linear prefilter or the optimal (maximum-likelihood) detector can be used for re-synchronization and reduction of PE. Both methods are analyzed and compared with experimental results. These results include both synthetic data and standard test images and video.
Speech carries semantic information as well as speaker-specific information: the semantic information can be used for speech recognition, and the speaker-specific information for identity verification. In order to realize a secure and user-friendly human interface, both characteristics of speech should be exploited through simple procedures. In this paper, an identity verification method using speech is proposed. The proposed method utilizes the CELP parameters used in speech coding schemes for mobile communication systems, such as PDC in Japan, and can verify a speaker using only the coded information. This yields the following merits: (1) a speaker verification function can be easily realized in mobile terminals or networks by adding only a few functions; (2) CELP parameters contain characteristics of articulation, so a speaker can be verified regardless of what he/she speaks. We apply the latter merit to a challenge-response type of verification, named the text-indicated speaker verification method, which is realized by a text verification function as well as a speaker verification function, both of which use only CELP parameters. In the proposed method, the system indicates texts that the user should speak during the verification process, thereby increasing the security against impersonation. The reliability of the proposed method is discussed with simulation results in this paper.
Nowadays multimedia technology in distributed environments is becoming realistic, and the multimedia copyright protection issue is becoming more and more important. Various digital watermarking techniques have been proposed in recent years as methods to protect the copyright of multimedia data. Although, conceptually, these techniques can be easily extended to protect digital audio data, it is challenging to apply them to MPEG Audio streams because the watermarking schemes need to work directly in the compressed data domain. In this paper, we present watermarking methods which embed the watermark directly into MPEG audio bit streams, rather than going through an expensive decoding/encoding process in order to apply watermarking schemes in the uncompressed data domain. Of the two presented schemes, one embeds the watermark into the scale factors of the MPEG audio streams and the other embeds the watermark into the MPEG encoded samples. Our experimental results show that both methods perform well and that the distortion can be kept at a minimal level. While we use MPEG Audio Layer II streams in our experimental tests, the proposed schemes can be applied to MPEG Audio Layers I and III. Furthermore, by enforcing creation of the watermark through a standard encryption function such as DES, the proposed schemes can help resolve rightful ownership of watermarked MPEG audio.
Two classes of digital watermarks have been developed to protect the copyright ownership of digital images. Robust watermarks are designed to withstand attacks on an image (such as compression or scaling), while fragile watermarks are designed to detect minute changes in an image. Fragile marks can also identify where an image has been altered. This paper compares two fragile watermarks. The first method uses a hash function to obtain a digest of the image. An altered or forged version of the original image is then hashed and its digest is compared to the digest of the original image. If the image has changed, the digests will be different. We describe how images can be hashed so that any changes can be spatially localized. The second method uses the Variable-Watermark Two-Dimensional algorithm (VW2D). The sensitivity to changes is user-specific. Either no changes are permitted (similar to a hard hash function), or an image can be altered and still be labeled authentic. The latter type of algorithm is known as a semi-fragile watermark. We describe the performance of these two techniques and discuss under what circumstances one would use a particular technique.
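The localizing-hash idea can be sketched by hashing each block of the image independently, so that a digest mismatch pinpoints the altered region; the block size and the use of SHA-256 are illustrative choices:

```python
import hashlib

BLOCK = 8

def block_digests(img):
    # hash each BLOCK x BLOCK block separately so changes can be localized
    digests = {}
    for r in range(0, len(img), BLOCK):
        for c in range(0, len(img[0]), BLOCK):
            data = bytes(img[r + i][c + j]
                         for i in range(BLOCK) for j in range(BLOCK))
            digests[(r, c)] = hashlib.sha256(data).hexdigest()
    return digests

def altered_blocks(img, reference):
    # any block whose digest no longer matches has been tampered with
    return [pos for pos, d in block_digests(img).items() if reference[pos] != d]
```

Flipping a single bit of one pixel flags exactly the block containing it, while all other blocks still authenticate.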
A methodology for comparing the robustness of watermarking techniques is proposed. The techniques are first modified into a standard form to make comparison possible. The watermark strength is adjusted for each technique so that a certain perceptual measure of image distortion based on spatial masking is below a predetermined value. Each watermarking technique is further modified into two versions for embedding watermarks consisting of one bit and 60 bits, respectively. Finally, each detection algorithm is adjusted so that the probability of false detections is below a specified threshold. A family of typical image distortions is selected and parametrized by a distortion parameter. For the one-bit watermark, the robustness with respect to each image distortion is evaluated by increasing the distortion parameter and registering at which value the watermark bit is lost. The bit error rate is used for evaluating the robustness of the 60-bit watermark. The methodology is explained with two frequency-based spread spectrum techniques. The paper closes with an attempt to introduce a formal definition of robustness.
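The evaluation loop amounts to sweeping the distortion parameter and recording where detection first fails; for the 60-bit case the bit error rate is recorded instead. A generic sketch of both helpers (the detection callback is whatever embed/attack/detect pipeline is under test):

```python
def breaking_point(detect_ok, params):
    # return the first distortion-parameter value at which the watermark
    # bit is lost, or None if it survives the whole sweep
    for p in params:
        if not detect_ok(p):
            return p
    return None

def bit_error_rate(sent, received):
    # robustness measure for the multi-bit watermark
    assert len(sent) == len(received)
    return sum(s != r for s, r in zip(sent, received)) / len(sent)
```

For example, a watermark that survives JPEG quality factors down to (but not including) some threshold would report that threshold as its breaking point.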
Since the early 1990s a number of papers on 'robust' digital watermarking systems have been presented, but no two of them use the same robustness criteria. This is not practical for comparison and slows down progress in this area. To address this issue, we present an evaluation procedure for image watermarking systems. First we identify all parameters necessary for proper benchmarking and investigate how to quantitatively describe the image degradation introduced by the watermarking process. For this, we show the weaknesses of usual image quality measures in the context of watermarking and propose a novel measure adapted to the human visual system. Then we show how to efficiently evaluate watermark performance in such a way that fair comparisons between different methods are possible. The usefulness of three graphs, 'attack vs. visual quality,' 'bit error vs. visual quality,' and 'bit error vs. attack,' is investigated. In addition, receiver operating characteristic (ROC) graphs are reviewed and proposed to describe the statistical detection behavior of watermarking methods. Finally we review a number of attacks that any system should survive to be really useful, and propose a benchmark and a set of suitable images.
In this paper, benchmarking results for watermarking techniques are presented. The benchmark includes evaluation of watermark robustness and of subjective visual image quality. Four different algorithms are compared and exhaustively tested. One goal of these tests is to evaluate the feasibility of the Common Functional Model (CFM) developed in the European project OCTALIS and to determine parameters of this model, such as the length of one watermark. This model solves the problem of image trading over an insecure network, such as the Internet, and employs hybrid watermarking. Another goal is to evaluate the resistance of the watermarking techniques when subjected to a set of attacks. Results show that the tested techniques do not behave in the same way and that no tested method has optimal characteristics. A final conclusion is that, as for the evaluation of compression techniques, clear guidelines are necessary to evaluate and compare watermarking techniques.
The paper investigates the use of image histograms as watermarks. First, the problem of exact histogram specification is addressed and a method for exact histogram specification, consistent with the human perception of brightness, is developed. Next, two watermarking techniques based on exact histogram specification are proposed. The first one directly considers image histograms as watermarks. Thus, a particular histogram is assigned as a watermark and images are further transformed to have exactly the assigned histogram. Since quite large variations in an image histogram are not perceived by humans, an unlimited number of invisible watermarks can be defined for which images appear visually undistorted. Moreover, by selecting histograms which are variations of the uniform histogram, the transformed images are not only uniquely marked but also enhanced. The second approach preserves, for each image, its original histogram. The watermarking procedure consists of two histogram specification transforms: a transform to the assigned watermark followed by an inverse transform to recover the original histogram. Since image recovery after a histogram specification transform is not exact, the error obtained after the two consecutive transforms is further used to track each watermark.
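Exact histogram specification can be sketched by ranking pixels and handing out the target gray levels in rank order, so the output histogram matches the assigned one exactly; ties here are broken by scan order, whereas the paper uses a perceptually motivated ordering:

```python
def specify_histogram(pixels, target_hist):
    # rank pixels by value, then assign target gray levels in rank order,
    # so the output histogram is exactly target_hist
    assert sum(target_hist) == len(pixels)
    order = sorted(range(len(pixels)), key=lambda i: pixels[i])
    out = [0] * len(pixels)
    k = 0
    for level, count in enumerate(target_hist):
        for _ in range(count):
            out[order[k]] = level
            k += 1
    return out
```

Because the assignment follows rank order, darker pixels always receive lower output levels, preserving the image's brightness ordering.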
Watermarking schemes are becoming more and more robust to classical degradations. The NEC system developed by Cox, using both the original and marked images, can detect the mark at a JPEG compression ratio of 30. Nevertheless, a very simple geometric attack performed by the program Stirmark can remove the watermark. Most present watermarking schemes simply map a mark onto the image without any geometric reference and are therefore not robust to geometric transformations. We present a scheme based on the modification of a collage map (derived from a fractal code used in fractal compression). We add a mark introducing similarities in the image. The embedding of the mark is done by selecting points of interest supporting blocks in which similarities are hidden. This selection is done with the Harris-Stephens detector. Each similarity is embedded locally to be robust to cropping. Contrary to many schemes, the reference mark used for detection comes from the marked image and thus undergoes the same geometric distortions. The detection of the mark is done by searching for interest blocks and their similarities. It does not use the original image, and the robustness is guaranteed by a key. Our first results show that similarity-based watermarking is quite robust to geometric transformations such as translations, rotations and cropping.
A new watermarking method is presented for still images and video streams. The method is different from nearly all known methods for image watermarking, which are based on adding pseudo-random noise to luminance or color components of the pixels. The new method is based on biasing the geometric locations of salient points in an image. The watermark is formed by a pre-defined dense pixel pattern, such as a collection of lines. So-called 'salient points' in the image are then modified, e.g. by warping or by changing the local luminance pattern around a salient point, such that after watermarking a majority of the new salient points lies on the watermark pattern. This paper describes the details of the new watermarking method and discusses the results of a series of tests performed on watermarked images. The feasibility and robustness of the method are shown.
We describe novel methods of watermarking data using quadratic residues and random numbers. Our methods are fast, generic and improve the security of the watermark in most known watermarking techniques.
Digital watermarks have recently been proposed for the purposes of copy protection and copy deterrence for multimedia content. In copy deterrence, a content owner (seller) inserts a unique watermark into a copy of the content before it is sold to a buyer. If the buyer resells unauthorized copies of the watermarked content, then these copies can be traced to the unlawful reseller (original buyer) using a watermark detection algorithm. One problem with such an approach is that the original buyer whose watermark has been found on unauthorized copies can claim that the unauthorized copy was created or caused (for example, by a security breach) by the original seller. In this paper we propose an interactive buyer-seller protocol for invisible watermarking in which the seller does not get to know the exact watermarked copy that the buyer receives. Hence the seller cannot create copies of the original content containing the buyer's watermark. In cases where the seller finds an unauthorized copy, the seller can identify the buyer from a watermark in the unauthorized copy, and furthermore the seller can prove this fact to a third party using a dispute resolution protocol. This prevents the buyer from claiming that an unauthorized copy may have originated from the seller.
Cartoon/map images are synthetic graphics without complicated color and texture variation, which makes the embedding of invisible and robust digital watermarks difficult. In this research, we propose a wavelet-based, threshold-adaptive watermarking scheme (TAWS) which can embed invisible robust watermarks into various kinds of graphical images. TAWS selects significant subbands and inserts watermarks in selected significant coefficients. The inserted watermarks are adaptively scaled by different threshold values to maintain the perceptual integrity of watermarked images and achieve robustness against compression and signal processing attacks. Another major contribution of this work is that the cast watermark is retrieved without knowledge of the original image. This so-called blind watermark retrieval technique is very useful in managing large cartoon, trademark and digital map databases. Finally, a company logo that clearly identifies the copyright information can be embedded in cartoon and map images without serious perceptual loss. Experimental results are given to demonstrate the superior performance of TAWS.
Image watermarking concerns embedding information in images in a manner that does not affect the visual quality of the image. This paper focuses on watermarking of dither halftone images. The basic idea is to use a sequence of two dither matrices (instead of one) to encode the watermark information. Analyzing a specific statistical model of input images leads to an optimal decoding algorithm in terms of the rate-distortion trade-off. Furthermore, we characterize optimal dither matrix pairs (i.e., dither matrix pairs whose use results in the most favorable rate-distortion). Finally, the results are demonstrated in a synthetic example. The example is synthetic in the sense that it does not resort to printing and re-scanning of the image.
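The two-matrix encoding can be sketched as follows; for clarity this decoder re-halftones the grayscale original, whereas the paper's decoder works statistically from the halftone alone, and the 2x2 matrices are illustrative:

```python
D0 = [[0, 128], [192, 64]]   # 2x2 dither matrix encoding bit 0 (illustrative)
D1 = [[192, 64], [0, 128]]   # permuted variant encoding bit 1

def halftone_block(block, d):
    return [[1 if block[r][c] > d[r][c] else 0 for c in range(2)]
            for r in range(2)]

def embed(blocks, bits):
    # each bit selects which of the two dither matrices halftones its block
    return [halftone_block(b, D1 if bit else D0) for b, bit in zip(blocks, bits)]

def decode(halftones, blocks):
    # re-halftone with D0 and see which matrix reproduces the observed block
    # (assumes the two matrices yield different halftones for the block)
    return [0 if ht == halftone_block(b, D0) else 1
            for ht, b in zip(halftones, blocks)]
```

The rate-distortion question the paper studies is precisely how to choose D0 and D1 so that the halftones differ often (high rate) while staying visually equivalent (low distortion).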
We investigate the use of frequency domain techniques to watermark text documents. A text image is essentially binary and hence contains large high-frequency components. This has several implications for the obtrusiveness and detection performance of frequency domain marking of text images, as illustrated by our extensive experiments. Generally, marking is more obtrusive on a text image than on a pictorial image. It almost always creates a 'dirty' background. 'Cleaning' the background by thresholding light grays to white renders the watermark less obtrusive but also sharply reduces the detector response, making it not robust against noise. Both text and pictorial images seem very susceptible to shifting; this contrasts with the extreme robustness against shifting of spatial domain marking through line or word shifting. Finally, we explore the combination of spatial domain marking and frequency domain detection and present preliminary experimental results on the combined approach.
This paper presents a watermarking algorithm suitable for embedding private watermarks into three-dimensional polygon-based models. The algorithm modifies the model's normal distribution to store information solely in the geometry of the model. The watermarks show significant robustness against mesh simplification methods.
We consider the problem of embedding one signal (e.g., a digital watermark) within another 'host' signal to form a third, 'composite' signal. The embedding must be done in a way that minimizes the distortion between the host and composite signals, maximizes the information-embedding rate, and maximizes the robustness of the embedding. In general, these three goals conflict, and the embedding process must be designed to trade off the three quantities efficiently. We propose a new class of embedding methods, which we term quantization index modulation (QIM), and develop a convenient realization of a QIM system that we call dither modulation, in which the embedded information modulates a dither signal and the host signal is quantized with an associated dithered quantizer. QIM and dither modulation systems have considerable performance advantages over previously proposed spread-spectrum and low-bit(s) modulation systems in terms of the achievable trade-offs among distortion, rate, and robustness of the embedding. We also demonstrate these performance advantages in the context of 'no-key' digital watermarking applications, in which attackers can access watermarks in the clear. Finally, we examine the fundamental limits of digital watermarking from an information-theoretic perspective and discuss the achievable limits of QIM and alternative systems.
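The dither-modulation idea can be sketched in a few lines: a bit selects one of two dithered uniform quantizers (lattices offset by half a step), and the decoder picks the lattice that reconstructs the received sample with the smaller error. The step size `delta` and the specific dither values are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np

def qim_embed(x, bits, delta=8.0):
    """Dither modulation: bit b shifts the quantizer lattice by d[b];
    the sample is quantized on the shifted lattice and the shift removed."""
    d = np.where(bits == 1, delta / 2.0, 0.0)
    return np.round((x + d) / delta) * delta - d

def qim_decode(y, delta=8.0):
    """Decide each bit by which dithered quantizer lies closer to y
    (for an unattacked signal, one of the two errors is exactly zero)."""
    err0 = np.abs(np.round(y / delta) * delta - y)
    d = delta / 2.0
    err1 = np.abs(np.round((y + d) / delta) * delta - d - y)
    return (err1 < err0).astype(int)
```

Note the embedding distortion is bounded by `delta / 2` per sample, which is one concrete instance of the rate/distortion/robustness trade-off the abstract describes.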
Watermark recovery is often based on cross-correlating images with pseudo-noise sequences, since this does not require access to un-watermarked originals. Successful recovery of such watermarks is determined by the (periodic or aperiodic) auto- and cross-correlation properties of the sequences. This paper presents several methods of extending the dimensionality of 1D sequences in order to exploit the advantages this offers. A new type of 2D array construction is described that meets the above requirements. The arrays are constructed from 1D sequences with good auto-correlation properties by appending rows of cyclic shifts of the original sequence. The sequence values, formed from the roots of unity, offer additional diversity and security over binary arrays. A family of such arrays is described which have low cross-correlation and can be folded and unfolded, rendering them robust to cryptographic attack. Row and column products of 1D Legendre sequences can also produce equally useful 2D arrays (with interesting properties resulting from the Fourier invariance of Legendre sequences). A metric to characterize all these 2D correlation-based watermarks is proposed.
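A minimal sketch of the row-of-cyclic-shifts construction, using a ±1 Legendre sequence (a simple special case of the roots-of-unity alphabets mentioned above): for a prime p ≡ 3 (mod 4) with the zero position set to +1, every off-peak periodic autocorrelation of the base sequence equals −1. The function names are illustrative.

```python
def legendre_sequence(p):
    """+/-1 Legendre sequence of prime length p, with index 0 set to +1:
    +1 at quadratic residues mod p, -1 at non-residues."""
    residues = {(i * i) % p for i in range(1, p)}
    return [1 if i == 0 or i in residues else -1 for i in range(p)]

def shift_array(seq):
    """p x p array whose k-th row is the base sequence cyclically shifted by k."""
    n = len(seq)
    return [[seq[(j + k) % n] for j in range(n)] for k in range(n)]
```

The flat off-peak autocorrelation of the rows is what gives the resulting 2D array its sharp correlation peak under the cross-correlation detector.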
With the drastic growth of the Internet, securing multimedia content against illegal use has recently become a critical problem. To address it, data hiding has drawn great attention as a promising method that plays a complementary role to conventional cryptographic techniques. The idea behind this approach is found in ancient Greek literature as 'steganography,' meaning 'covered writing' for secret communication. This paper presents a new method for steganographic image transformation, which differs from conventional data-hiding techniques. The transformation is performed in the frequency domain using the concept of Fourier filtering. An input image is transformed into a fractal image, which can be used in computer graphics (CG) applications. Unauthorized users will not notice the 'secret' original image behind the fractal image, and even if they know that a hidden image exists, it will be difficult for them to estimate the original image from the transformed one. Only authorized users who know the proper keys can regenerate the original image. The proposed method is applicable not only as a security tool for multimedia content on web pages but also as a steganographic secret-communication method through fractal images.
State-of-the-art audio coders exploit the redundancy in audio signals by shaping their quantization noise below the signal's masking curve, which is a signal-dependent threshold of audibility. This framework can be extended to the context of data hiding, where the data play the role of noise. To minimize audio distortion, the data power should be closely adapted to the time-varying masking curve; each power switch, however, reduces the net throughput via its associated side information. This trade-off can be cast in a rate-distortion framework: the optimal sequence of power levels and the optimal sequence of power switchpoints are found by minimizing a Lagrangian cost functional relating perceptual audio distortion to throughput, implemented as a linear-time trellis search. For 16-bit, 44.1 kHz PCM stereo signals, a net throughput on the order of 30 kbits/s can usually be achieved at no perceptual cost in an algorithmically efficient way.
A new technique is presented for embedding image data that can be recovered in the absence of the original host image. The data to be embedded, referred to as the signature data, are inserted into the host image in the DCT domain. The signature DCT coefficients are encoded using a lattice coding scheme before embedding. Each block of host DCT coefficients is first checked for its texture content, and the signature codes are inserted accordingly, depending on a local texture measure. Experimental results indicate that high-quality embedding is possible, with no visible distortions. Signature images can be recovered even when the embedded data are subjected to significant lossy JPEG compression.
A method has been developed to hide one image inside another with little loss in image quality. If the second image is a logo or watermark, the method may be used to protect the ownership rights of the first image and to guarantee its authenticity. The two images to be combined may be either black-and-white or color continuous-tone images. A reversible image is created by incorporating the first image in the upper 4 bits and the second image in the lower 4 bits. When viewed normally, the reversible image appears to be the first image. To view the hidden image, the bits of the combined image are reversed, exchanging all of the lower- and higher-order bits. When viewed in this reversed mode, the image appears to be the second, hidden image. To maintain a high level of image quality for both images, two simultaneous error-diffusion calculations are run to ensure that both views of the reversible image have the same visual appearance as the originals. Any local alteration of one of the images destroys the other image at the site of the alteration, providing a means to detect alterations of the original image.
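The upper/lower 4-bit packing and the bit-reversal viewing mode can be sketched per pixel as follows (byte operations only; the paper's simultaneous error-diffusion step, which preserves visual quality in both views, is omitted). Images are represented here simply as flat lists of 8-bit values, an illustrative assumption.

```python
def combine(first, second):
    """Pack image A into the upper 4 bits and image B into the lower 4 bits."""
    return [(a & 0xF0) | (b >> 4) for a, b in zip(first, second)]

def reverse_view(pixels):
    """Swap the nibbles of every byte, revealing the hidden image."""
    return [((p & 0x0F) << 4) | (p >> 4) for p in pixels]
```

Because the nibble swap is an involution, applying `reverse_view` twice returns the normally viewed image.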
Digital watermarking has recently been proposed as a means for intellectual-property-right protection of multimedia data. We present some ways to 'visualize' invisible watermarks, both statistically and perceptually, for proving ownership. We propose a system capable of embedding a good-resolution, meaningful binary watermark image and later extracting different versions of that watermark image at varying resolutions. The system has the attractive feature that the watermark detector (rather than the encoder) can adaptively choose the trade-off between the degree of robustness and the resolution of the extracted watermark image. It takes advantage of the high spatial correlation of the watermark image and the human visual system's remarkable ability to recognize a correlated pattern to enhance detection performance. While a statistical technique that can quantify the false-alarm detection probability should be considered the fundamental measure for a valid ownership claim, the ability to extract a meaningful watermark image will greatly facilitate the process of convincing a jury of an ownership claim.
While most researchers focus on developing robust and imperceptible watermarking technology, in this paper we stress error analysis in recovering modified (attacked) watermarked images with reference to their original images. Here 'recovering' means first finding the modification parameters of these images and then transforming them back to be maximally similar to the original images. We argue that correctly and accurately estimating the transform parameters of a modified watermarked image, based on the given original image or on invariant parameters extracted from it, is very useful and often essential for watermark detection and verification. We first briefly introduce several basic image-registration techniques. We then evaluate their applicability and limitations when used directly for recovering modified watermarked images, considering some common attacks on image watermarks. Furthermore, we propose a method for estimating the transform parameters of a modified watermarked image with reference to the original image (or its invariant feature parameters), and then analyze the recovery errors of the proposed method. After reviewing several typical image watermarking schemes and systems designed for different purposes, we outline, for each system, the conditions under which the embedded watermark can be extracted without error. We believe that a slightly modified version of the proposed method could be used as a module for image authentication.
This paper explores possible methods to detect the watermark signal. We show how sequence-detection techniques, such as maximum-likelihood sequence detection (MLSD), can provide the same performance as the conventionally used correlation detection while simplifying the detection process. As a result, we can increase the dimension of the watermark signal (achieving higher capacity) while keeping the increase in computational complexity manageable. We also show that with MLSD we can obtain a confidence measure for the detected signal that depends directly on the watermark signal-to-noise ratio (SNRwm).
An evaluation is given of the number of bits that can be hidden within an image by means of frequency-domain watermarking. Watermarking is assumed to consist of modifying a set of full-frame DCT (DFT) coefficients. The amount of modification each coefficient undergoes is proportional to the magnitude of the coefficient itself, so that an additive-multiplicative embedding rule results. The watermark channel is modeled by letting the watermark be the signal and the image coefficients the noise introduced by the channel. To derive the capacity of each coefficient, the input (i.e., the watermark) and the output (i.e., the watermarked coefficients) of the channel are quantized, leading to a discrete-input, discrete-output model. Capacity is evaluated by computing the channel transition matrix and maximizing the mutual information between input and output. Although the results do not take attacks into account, they represent a useful indication of the amount of information that can be hidden within a single image.
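Maximizing the input/output mutual information of a discrete-input, discrete-output channel, given its transition matrix, can be done numerically with the Blahut-Arimoto algorithm. The sketch below assumes a strictly positive transition matrix `W[x][y]` and is illustrative of the general procedure, not the authors' exact computation.

```python
import math

def blahut_arimoto(W, iters=50):
    """Capacity (bits per use) of a discrete memoryless channel.

    W[x][y] = P(output y | input x); entries assumed strictly positive here."""
    nx, ny = len(W), len(W[0])
    p = [1.0 / nx] * nx  # input distribution, start uniform
    for _ in range(iters):
        # posterior q(x|y) induced by the current input distribution
        py = [sum(p[x] * W[x][y] for x in range(nx)) for y in range(ny)]
        q = [[p[x] * W[x][y] / py[y] for y in range(ny)] for x in range(nx)]
        # multiplicative update of p toward the capacity-achieving distribution
        r = [math.exp(sum(W[x][y] * math.log(q[x][y]) for y in range(ny)))
             for x in range(nx)]
        s = sum(r)
        p = [v / s for v in r]
    # mutual information at the (converged) pair (p, q)
    return sum(p[x] * W[x][y] * math.log2(q[x][y] / p[x])
               for x in range(nx) for y in range(ny))
```

As a sanity check, for a binary symmetric channel with crossover probability e the result matches the closed form 1 − H2(e).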
In this paper, we discuss the possibility of introducing chaotic sequences into digital watermarking systems as potential substitutes for the commonly used m-sequences. Chaotic sequences have several good properties, including their great number, the ease of their generation, and their sensitive dependence on initial conditions; moreover, quantization does not destroy these properties. We focus our discussion on discrete-time dynamical systems operating in the chaotic regime, including Chebyshev maps and logistic maps. Both real-valued and binary chaotic sequences are studied and tested with a digital watermarking system similar to the well-known NEC system. The chaotic sequences are used to modulate information bits into white-noise-like wideband watermark signals that are added to the cover objects. The robustness of these systems against common signal processing, lossy compression, and benchmark attack tools such as StirMark is tested and compared, and the preliminary results are satisfactory. In this paper we test the chaotic-sequence scheme only with digital images; tests with other media, including video and audio signals, will be carried out in a next step, and the good properties of chaos will be explored further, both theoretically and experimentally.
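The logistic map mentioned above gives a concrete example of such a generator: with r = 4 it is fully chaotic, and thresholding the real-valued orbit yields a ±1 spreading sequence. The function names and the threshold of 0.5 are illustrative assumptions.

```python
def logistic_sequence(x0, n, r=4.0):
    """Real-valued chaotic orbit of the logistic map x_{k+1} = r x_k (1 - x_k)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def binarize(seq, thresh=0.5):
    """Quantize the orbit into a +/-1 spreading sequence by thresholding."""
    return [1 if v >= thresh else -1 for v in seq]
```

Sensitive dependence on the seed is what provides the 'great number' of distinct sequences: two seeds differing by 1e-8 yield orbits that diverge completely after a few dozen iterations.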
We present a simple, efficient, and secure multicast protocol with copyright protection for an open and insecure network environment. A wide variety of multimedia applications can benefit from our secure multicast protocol, e.g., commercial pay-per-view multicast or highly secure military intelligence video conferences. Our secure multicast protocol is designed to achieve the following goals. (1) It can run in any open network environment and does not rely on any security mechanism in intermediate network switches or routers. (2) It can be built on top of any existing multicast architecture. (3) Its key-distribution protocol is both secure and robust in the presence of long delays or lost membership messages. (4) It supports dynamic group membership, e.g., JOIN/LEAVE/EXPEL operations, in a network-bandwidth-efficient manner. (5) It provides copyright protection for the information provider. (6) It can help to identify insiders in the multicast session who are leaking information to the outside world.
In such growing areas as remote applications in large public networks, electronic commerce, digital signatures, intellectual-property and copyright protection, and even operating-system extensibility, the hardware security level offered by existing processors is insufficient: they lack protection mechanisms that prevent the user from tampering with critical data owned by those applications. Some devices (e.g., smart cards) are exceptions, but they have neither enough processing power nor enough memory to stand up to such applications. This paper proposes a secure-processor architecture in which the classical memory management unit is extended into a new security management unit that allows ciphered code execution and ciphered data processing. An internal permanent memory can store cipher keys and critical data for several client agents simultaneously. The ordinary supervisor-privilege scheme is replaced by a privilege-inheritance mechanism that is better suited to operating-system extensibility. The result is a secure processor that has hardware support for extensible multitasking operating systems and can be used both for general applications and for critical applications needing strong protection. The security management unit and the internal permanent memory can be added to an existing CPU core without loss of performance and do not require the core to be modified.
Electronic publishing faces one major technical and economic challenge: how to prevent individuals from easily copying and illegally distributing electronic documents. Conventional cryptographic systems permit only valid key holders access to encrypted data, but once such data are decrypted there is no way to track their reproduction or retransmission. They therefore provide little protection against data piracy, in which a publisher is confronted with unauthorized reproduction of information. In this paper, we explore the use of intelligent-agent, digital-watermarking, and cryptographic techniques to discourage the distribution of illegal electronic copies, and we propose an agent-based strategy to protect the copyright of on-line electronic publishing. In fact, it is impossible to develop an absolutely secure copyright-protection architecture for on-line electronic publishing that can prevent a malicious customer who spends a great deal of effort analyzing the software from finally obtaining the plaintext of the encrypted electronic document. Our work therefore aims at making the cost of analyzing the agent and removing the watermark much greater than the value of the electronic document itself.
Nowadays the World Wide Web (WWW) is an established service used by people all over the world, most of whom do not realize that they reveal plenty of information about themselves, their affiliation, and their computer equipment to the providers of the web pages they connect to. As a result, many services allow users to access web pages unrecognized or without risk of being traced; this kind of anonymity is called user or client anonymity. On the other hand, no equivalent protection exists for content providers, although this feature is desirable in many situations in which the identity of a publisher or content provider should be hidden. We call this property server anonymity. We introduce the first system whose primary goal is to offer anonymity to providers of information in the WWW; besides this property, it also provides client anonymity. Based on David Chaum's idea of mixes, and in the context of the WWW, we explain the term 'server anonymity,' motivating the system JANUS, which offers both client and server anonymity.
The perceptual redundancy of many media carriers, such as images and video, allows for the invisible embedding of an information sequence; applications include so-called watermarking and data hiding. The media carrier can be modeled as a communication channel in which the noise comes from manipulation of the carrier. To maximize the robustness of the embedded data against this noise, coded modulation is used to encode the information sequence.
Digital watermarking is the enabling technology for proving ownership of copyrighted material, detecting the originators of illegally made copies, monitoring the usage of copyrighted multimedia data, and analyzing the spread of the data over networks and servers. Embedding a unique customer identification as a watermark into the data, called fingerprinting, serves to identify illegal copies of documents. Basically, watermarks embedded into multimedia data for enforcing copyright must uniquely identify the data and must be difficult to remove, even after various media-transformation processes. Digital fingerprinting raises the additional problem that different copies are produced for each customer: attackers can compare several fingerprinted copies to find and destroy the embedded identification string by altering the data in those places where a difference is detected. In this paper we present a technology that combines a collusion-secure fingerprinting scheme based on finite geometries with a watermarking mechanism using special marking points for digital images. The only marking positions the pirates cannot detect are those that contain the same letter in all the compared documents, called the intersection of the different fingerprints. For a maximal number d of pirates, the proposed technology puts enough information into the intersection of up to d fingerprints to uniquely identify all the pirates.