A digital watermark is a short sequence of information containing an owner identity or copyright information embedded in a way that is difficult to erase. We present a new oblivious digital watermarking technique for copyright protection of still images. The technique embeds the watermark in a subset of low to mid frequency coefficients. A key is used to randomly select a group of coefficients from that subset for watermark embedding. The original phases of the selected coefficients are removed and the new phases are set in accordance with the embedded watermark. Since the coefficients are selected at random, the powers of the low magnitude coefficients are increased to enhance their immunity against image attacks. To cope with small geometric attacks, a replica of the watermark is embedded by dividing the image into sub-blocks and taking the DCT of these blocks. The watermark is embedded in the DC component of some of these blocks selected in an adaptive way using quantization techniques. A major advantage of this technique is its complete suppression of the noise due to the host image. The robustness of the technique to a number of standard image processing attacks is demonstrated using the criteria of the latest Stirmark benchmark test.
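As a rough illustration of the key-driven, oblivious phase embedding described above, the following Python sketch selects a band of low-to-mid frequency DFT coefficients with a keyed generator, replaces their phases with the watermark bits, and boosts weak magnitudes. The exact band, the power-boosting rule, and the block-DCT replica used against geometric attacks are assumptions, not the paper's specification.

```python
import numpy as np

def embed_phase_watermark(image, bits, key, min_mag=None):
    """Minimal sketch of key-driven phase embedding in the 2D DFT.
    Band choice, boosting floor and bit-to-phase mapping are illustrative."""
    rows, cols = image.shape
    F = np.fft.fft2(image.astype(float))
    if min_mag is None:
        min_mag = 0.01 * np.abs(F).max()          # assumed boosting floor

    # Candidate low-to-mid frequency positions (one half-plane only, so
    # conjugate symmetry can be restored explicitly below).
    band = [(u, v) for u in range(1, rows // 4) for v in range(1, cols // 4)]
    rng = np.random.default_rng(key)              # the key drives selection
    picks = rng.choice(len(band), size=len(bits), replace=False)

    for bit, idx in zip(bits, picks):
        u, v = band[idx]
        mag = max(abs(F[u, v]), min_mag)          # boost weak coefficients
        F[u, v] = mag * np.exp(1j * (np.pi if bit else 0.0))  # phase = data
        F[-u, -v] = np.conj(F[u, v])              # keep the image real-valued
    return np.real(np.fft.ifft2(F))

def detect_phase_watermark(image, n_bits, key):
    """Oblivious detection: re-derive the positions from the key and read
    each bit back from the coefficient phase alone."""
    rows, cols = image.shape
    F = np.fft.fft2(image.astype(float))
    band = [(u, v) for u in range(1, rows // 4) for v in range(1, cols // 4)]
    picks = np.random.default_rng(key).choice(len(band), size=n_bits, replace=False)
    return [1 if abs(np.angle(F[band[i][0], band[i][1]])) > np.pi / 2 else 0
            for i in picks]
```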
In this paper, we propose a digital watermarking method for color images. To embed the watermark signal, we consider the characteristics of the human visual system (HVS) and focus on the relatively insensitive components of a color image. In the YCrCb color space, the Y component is achromatic (luminance), while the Cr and Cb components are chromatic (color). In the Cr-Cb chrominance plane, the angle of a pixel represents the hue of a color, which refers to its average spectral wavelength and differentiates colors, while the magnitude of a pixel determines the purity of the color. Because variations in saturation are less perceptible than variations in hue, we modify the saturation value, i.e., the magnitude in the Cr-Cb chrominance plane. When changing the chrominance data, the phase of a point is kept fixed and only the magnitude, which represents the saturation, is changed within an acceptable degree of color difference. The proposed digital watermarking method achieves good invisibility.
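A minimal sketch of the saturation-only modification, assuming 8-bit RGB input, standard ITU-R BT.601 conversion, and an illustrative strength parameter `alpha` (the paper's acceptable-color-difference criterion is not reproduced):

```python
import numpy as np

def modulate_saturation(rgb, wm, alpha=0.03):
    """Scale the Cr-Cb magnitude (saturation) per pixel while keeping its
    angle (hue) fixed.  `wm` is a +/-1 pattern of the same height/width;
    `alpha` is an assumed, not paper-specified, strength."""
    rgb = rgb.astype(float)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 conversion to luminance and chrominance.
    Y  = 0.299 * R + 0.587 * G + 0.114 * B
    Cb = 0.564 * (B - Y)
    Cr = 0.713 * (R - Y)

    # Hue = angle, saturation = magnitude in the Cr-Cb plane.
    mag = np.hypot(Cr, Cb)
    ang = np.arctan2(Cb, Cr)

    mag *= (1.0 + alpha * wm)                 # modify saturation only
    Cr, Cb = mag * np.cos(ang), mag * np.sin(ang)

    # Back to RGB (inverse of the BT.601 equations above).
    R = Y + Cr / 0.713
    B = Y + Cb / 0.564
    G = (Y - 0.299 * R - 0.114 * B) / 0.587
    return np.clip(np.stack([R, G, B], axis=-1), 0, 255)
```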
This work extends the watermarking method proposed by Kutter et al. to increase watermark decoding performance for textured or busy images. The proposed algorithm modifies the watermark embedding rule to exploit image characteristics, such as local standard deviation and gradient magnitude, in order to increase decoding accuracy for busy images. The method does not need the original image for decoding and controls the watermark embedding process at the encoder side, resulting in more accurate decoding.
Large, high-resolution images usually have a high commercial value and are therefore very good candidates for watermarking. If many images have to be signed in a client-server setup, memory and computational requirements can become unrealistic for current and near-future systems. In this paper, we propose to tile the image into sub-images; the watermarking scheme is then applied to each sub-image during embedding and retrieval. With this solution, a first optimization consists in creating different threads to read and write the image tile by tile, reducing the time spent in input/output operations, which can be a bottleneck for large images. Beyond this optimization, we show that the memory consumption of the application is also greatly reduced for large images. Moreover, the application can be multithreaded so that different tiles are watermarked in parallel, allowing the scheme to take advantage of the multiple processors available in current servers. We show that the correct tile size and the right number of threads have to be chosen to distribute the workload efficiently. Finally, security, robustness and invisibility issues are addressed, considering the signal redundancy.
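The tile-parallel idea can be sketched as follows; `watermark_tile` is a hypothetical placeholder for whatever embedding algorithm is applied per sub-image, and the per-tile key derivation is an assumption:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def watermark_tile(tile, key):
    """Placeholder for any 2D image watermarking routine applied per tile
    (the actual embedding algorithm is unspecified in this sketch)."""
    rng = np.random.default_rng(key)
    return tile + rng.normal(0, 1, tile.shape)       # illustrative only

def watermark_tiled(image, key, tile=256, workers=4):
    """Split the image into sub-images and watermark them in parallel,
    so a large image never has to be processed as one monolithic block."""
    out = np.empty(image.shape, dtype=float)
    boxes = [(r, c) for r in range(0, image.shape[0], tile)
                    for c in range(0, image.shape[1], tile)]

    def work(box):
        r, c = box
        out[r:r + tile, c:c + tile] = watermark_tile(
            image[r:r + tile, c:c + tile].astype(float), key + r + c)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(work, boxes))                   # one task per tile
    return out
```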
A plausible motivation for video tampering is to alter the sequence of events. This goal can be achieved by re-indexing attacks such as frame removal, insertion, or shuffling. In this work, we report on the development of an algorithm to identify and, subject to certain limitations, reverse such acts. At the heart of the algorithm lies the concept of a frame-pair [f, f*]. Frame-pairs are unique in two ways. First, the first frame is the basis for watermarking the second frame at some point in the future. Second, a key that is unique to the location of frame f governs the frame-pair temporal separation. Watermarking is done by producing a low-resolution version of the 24-bit frame, spreading it, and then embedding it in the color space of f*. As such, watermarking f* amounts to embedding a copy of frame f in a future frame. Having tied one frame, in content and timing, to another frame downstream, frame removal and insertion can be identified and, subject to certain limitations, reversed.
Digital watermarking is a technology for copyright protection and for protection against unauthorized access to and modification of multimedia material. The most important properties of digital watermarking techniques are robustness, security, imperceptibility/transparency, complexity, capacity and the possibility of verification. Robustness means resistance to 'blind', non-targeted modifications or common media operations. A transparent watermark causes no artifacts or quality loss. Maximum robustness cannot be achieved at the same time as maximum transparency, since higher robustness requires stronger media modifications. Transparency is based on the properties of the human visual system or the human auditory system, and it is an often-neglected part of watermarking evaluation schemes. We introduce a computer-aided visual model based on a visual modulation threshold function which is used to test the degree of transparency of the watermark in watermarked multimedia material using linear system theory. We describe our test environment for the model and show how it is implemented in five steps representing the essential parts of the visual model: sampling, band-pass contrast response, oriented response, transducer and distance. The model takes two digital images as input and returns the probability that an observer can distinguish the two pictures.
In this paper, we present a new video watermarking technique which utilizes the temporal redundancy of video sequences to improve the quality and robustness of the watermarked sequence. The proposed technique can be combined with many existing 2D image watermarking algorithms to take advantage of their robustness against various attacks. For every watermark bit (formula available in paper), the pseudo-random sequence (formula available in paper) is added to the mid-band coefficients of a block, and the complementary sequence (formula available in paper) is added to the same block in another frame of the insertion pair. To retrieve the embedded watermark bit, the same block in the second frame is subtracted from the block in the first frame, resulting in a watermark with double the magnitude (formula available in paper). Since adjacent video frames are highly correlated and their DCT coefficients are almost the same (especially in non-moving regions), the subtraction of the block pair also cancels out most of the interfering DCT coefficients originating from the host signal, so that the interference from the host signal to the watermark signal is minimized during detection. Experiments show that the bit error probability can be reduced, or the picture quality can be improved while maintaining the same error probability.
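A hedged sketch of the complementary embedding and the pair-subtraction detection, with an assumed mid-band mask, strength, and block size (the paper's formulas are not reproduced here):

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit_pair(block_a, block_b, bit, key, alpha=2.0):
    """Add a pseudo-random sequence to the mid-band DCT coefficients of a
    block in one frame and its complement to the same block in the paired
    frame.  Band and strength are illustrative assumptions."""
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=block_a.shape)
    band = np.zeros(block_a.shape, dtype=bool)
    band[2:6, 2:6] = True                     # illustrative "mid-band" mask

    sign = 1.0 if bit else -1.0
    A, B = dctn(block_a, norm="ortho"), dctn(block_b, norm="ortho")
    A[band] += sign * alpha * pn[band]
    B[band] -= sign * alpha * pn[band]        # complementary sequence
    return idctn(A, norm="ortho"), idctn(B, norm="ortho")

def detect_bit_pair(block_a, block_b, key):
    """Subtract the paired blocks: the host largely cancels while the
    watermark doubles; the sign of the correlation gives the bit."""
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=block_a.shape)
    band = np.zeros(block_a.shape, dtype=bool)
    band[2:6, 2:6] = True
    diff = dctn(block_a, norm="ortho") - dctn(block_b, norm="ortho")
    return int(np.sum(diff[band] * pn[band]) > 0)
```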
A digital video watermarking algorithm using the 3D-DCT and intra-cubic correlation is proposed. More specifically, we divide the video sequence into non-overlapping image cubes and take the 3D-DCT of each image cube. The frequency-domain coefficients of the cube are randomly selected and partitioned into two equal sets. Then, referring to the user-defined logo, a small value is added to the coefficients of one set while the same amount is subtracted from those of the other set. By taking the difference of the mean values of the two sets in each cube, we can extract the watermark bits embedded in the cube. By collecting all watermark bits and visually inspecting the reconstructed logo image, one can assert the copyright of the video. Experimental results show that we can extract over 90% of the binary logo image under various possible attacks such as MPEG compression, frame-rate changes, format conversion and frame skipping.
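The two-set embedding and mean-difference extraction for one cube might look like the sketch below; the number of selected coefficients and the value of `delta` are illustrative assumptions, and the cube is assumed to contain at least 200 coefficients (e.g., 8x8x8):

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_logo_bit(cube, bit, key, delta=1.5):
    """Split randomly chosen 3D-DCT coefficients of an image cube into two
    sets, then add delta to one set and subtract it from the other,
    depending on the logo bit."""
    C = dctn(cube.astype(float), norm="ortho")
    rng = np.random.default_rng(key)
    idx = rng.permutation(C.size)[:200]        # assumed coefficient count
    flat = C.flatten()                          # work on a flat copy
    sign = 1.0 if bit else -1.0
    flat[idx[:100]] += sign * delta
    flat[idx[100:]] -= sign * delta
    return idctn(flat.reshape(C.shape), norm="ortho")

def extract_logo_bit(cube, key):
    """The bit is the sign of the difference of the two sets' means."""
    C = dctn(cube.astype(float), norm="ortho")
    rng = np.random.default_rng(key)
    idx = rng.permutation(C.size)[:200]
    flat = C.ravel()
    return int(flat[idx[:100]].mean() - flat[idx[100:]].mean() > 0)
```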
Digital video distribution through the internet is becoming more common. Film trailers, video clips and even video footage from computer and video games are now seen as very powerful means to boost sales of the aforementioned products. These materials need to be protected to avoid copyright infringement. However, they are encoded at a low bit-rate to facilitate internet distribution, and this poses a challenge to the watermarking operation. In this paper we present an extension of the Differential Energy Watermarking algorithm for use in low bit-rate environments. We present the extension scheme and evaluate its performance in terms of watermark capacity, robustness and visual impact.
The need for protection mechanisms for multimedia content is widely recognized. In the past, digital watermarking algorithms for images have been developed that provide a certain level of protection for color or gray-scale images. Since classical raster-oriented watermarking algorithms do not satisfy the needs of symbol-oriented music score images, we present in this paper a solution that promises good watermark robustness with minimal visual impact. This solution respects the content of binary images and can be considered a symbolic interpretation and modification of music scores. Selected music symbols are used to hide an information string in a music score by changing their features. The advantages are robustness and invisibility. Regarding invisibility, a musician should under no circumstances be impeded in reading the music; even unconscious influence must be considered, since it might, for example, be more difficult to concentrate on a music sheet whose symbols have been changed invisibly. The most common way of distributing music scores is in analog (paper) form; scores are copied and redistributed, so watermarks should remain readable even after multiple copy generations. By choosing suitable features, blind detection of the watermark is possible.
A watermark is commonly used to hide information in digital media. The watermarked information may be used for copyright protection or for user and media identification. In this paper we propose a watermarking scheme for digital audio signals that allows automatic identification of musical pieces transmitted in TV broadcasting programs. In our application the watermark must obviously be imperceptible to the users, should be robust to standard TV and radio editing, and must have very low complexity. This last requirement is essential to allow a software real-time implementation of watermark insertion and detection using only a minimal amount of the computational power of a modern PC. In the proposed method the input audio sequence is subdivided into frames. For each frame a watermark spread-spectrum sequence is added to the original data. A two-step filtering procedure is used to generate the watermark from a Pseudo-Noise (PN) sequence. The filters approximate, respectively, the threshold and frequency masking of the Human Auditory System (HAS). In the paper we first discuss the watermark embedding system and then the detection approach. The results of a large set of subjective tests are also presented to demonstrate the quality and robustness of the proposed approach.
In this paper, we describe an audio watermarking algorithm that can embed a multiple-bit message which is robust against wow-and-flutter, cropping, noise addition, pitch shift, and audio compression such as MP3. The algorithm calculates and manipulates the magnitudes of segmented areas in the time-frequency plane of the content using short-term DFTs. The detection algorithm correlates the magnitudes with a pseudo-random array that maps to a two-dimensional area in the time-frequency plane. The two-dimensional array makes the watermark robust because, even when some portions of the content are heavily degraded, other portions of the content can match the pseudo-random array and contribute to watermark detection. Another key idea is the manipulation of magnitudes. Because magnitudes are less influenced than phases by fluctuations of the analysis windows caused by random cropping, the watermark resists degradation. When signal transformation causes pitch fluctuations in the content, the frequencies of the pseudo-random array embedded in the content shift, and that causes a decrease in the volume of the watermark signal that still correctly overlaps with the corresponding pseudo-random array. To keep the overlapping area wide enough for successful watermark detection, the widths of the frequency subbands used for the detection segments should increase logarithmically as frequency increases. We theoretically and experimentally analyze the robustness of the proposed algorithm against a variety of signal degradations.
Video streaming, or the real-time delivery of video over a data network, is the underlying technology behind many applications including video conferencing, video-on-demand, and the delivery of educational and entertainment content. In many applications, particularly ones involving entertainment content, security issues such as conditional access and copy protection must be addressed. To resolve these security issues, techniques that include encryption and watermarking need to be developed. Since the video sequences will often be compressed using a scalable compression technique and transported over a lossy packet network using the Internet Protocol, the security techniques must be compatible with the compression method and data transport and be robust to errors. In this paper, we address the issues involved in the watermarking of rate-scalable video streams delivered over a practical network. Watermarking is the embedding of a signal (the watermark) into a video stream that is imperceptible when the stream is viewed but can be detected by a watermark detector. Many watermarking techniques have been proposed for digital images and video, but the issues of streaming have not been fully investigated. A review of streaming video is presented, including scalable video compression and network transport, followed by a brief review of video watermarking and a discussion of watermarking for streaming video.
Efficient encryption algorithms are essential to multimedia data security, since the data size is large and real-time processing is often required. After discussing the limitations of previous work on multimedia encryption, we propose a novel methodology for confidentiality which turns entropy coders into encryption ciphers by using multiple statistical models. The choice of statistical models and the order in which they are applied are kept secret as the key. Two encryption schemes are constructed by applying this methodology to the Huffman coder and the QM coder. It is shown that security is achieved without sacrificing compression performance or computational speed. The schemes can be applied to most modern compression systems such as MPEG audio, MPEG video and JPEG/JPEG2000 image compression.
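As a toy illustration of the methodology (not the paper's actual Huffman/QM constructions), the sketch below lets a key choose among prebuilt prefix-code tables for every symbol, so the bitstream cannot be parsed consistently without the key:

```python
import numpy as np

# Two different prefix-code tables for a toy 4-symbol alphabet; a real
# system would train several Huffman/QM models on the source statistics.
TABLES = [
    {"a": "0",   "b": "10", "c": "110", "d": "111"},
    {"a": "111", "b": "0",  "c": "10",  "d": "110"},
]

def encode_encrypt(symbols, key):
    """The entropy coder doubles as a cipher: the key decides which
    statistical model (code table) is used for every symbol."""
    rng = np.random.default_rng(key)
    bits = []
    for s in symbols:
        table = TABLES[rng.integers(len(TABLES))]   # secret model schedule
        bits.append(table[s])
    return "".join(bits)

def decode_decrypt(bitstream, n_symbols, key):
    """Decoding replays the same key-driven model schedule."""
    rng = np.random.default_rng(key)
    out, pos = [], 0
    for _ in range(n_symbols):
        table = TABLES[rng.integers(len(TABLES))]
        inv = {code: sym for sym, code in table.items()}
        code = ""
        while code not in inv:                      # read one prefix code
            code += bitstream[pos]
            pos += 1
        out.append(inv[code])
    return out

# Example: decode_decrypt(encode_encrypt("abcd", 7), 4, 7) -> ['a','b','c','d']
```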
In today's digital world, multimedia content is delivered to homes via the Internet, satellite, terrestrial and cable networks. Scrambling is a common approach used by conditional access systems to prevent unauthorized access to audio/visual data. The descrambling keys are securely distributed to the receivers in the same transmission channel. Their protection is an important part of the key management problem. Although public-key cryptography provides a viable solution, alternative methods are sought for economy and efficiency. This paper presents a key transport protocol based on secret sharing. It eliminates the need for a cipher, yet combines the advantages of symmetric and public-key ciphers.
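The secret-sharing idea behind such a key transport can be illustrated with a minimal n-of-n XOR split; the paper's actual protocol, including any threshold structure, is not reproduced here:

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int):
    """n-of-n XOR secret sharing: n - 1 random shares plus one share that
    completes the XOR back to the descrambling key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, key))
    return shares

def recover_key(shares):
    """XOR of all shares restores the descrambling key; any subset short
    of all n shares reveals nothing about it."""
    return reduce(xor, shares)
```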
In this paper, we investigate the restoration of geometrically altered digital images with the aim of recovering embedded watermark information. More precisely, we focus on the distortion introduced when an image is acquired with a camera. Indeed, in the cinema industry, a large part of early movie piracy comes from copies made in the theater itself with a camera. The evolution towards digital cinema broadcast enables watermark-based fingerprinting protection systems. The first step in fingerprint extraction from counterfeit material is the compensation of the geometrical deformation inherent to the acquisition process. To compensate the deformations, we use a modified 12-parameter bilinear transformation model which closely matches the deformations introduced by the analog acquisition process. The estimation of the parameters can either be global or vary across regions within the image. Our approach consists of estimating the displacement of a number of pixels via a modified block-matching technique, followed by a minimum mean square error optimization of the parameters on the basis of those estimated displacement vectors. The estimated transformation is applied to the candidate image to obtain a reconstruction as close as possible to the original image. A classical watermark extraction procedure can then follow.
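A sketch of the parameter estimation step, assuming a [1, x, y, xy, x^2, y^2] basis per coordinate as a stand-in for the paper's modified 12-parameter bilinear model, fitted by least squares to the displacement vectors produced by block matching:

```python
import numpy as np

def _basis(pts):
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def estimate_warp(src_pts, dst_pts):
    """Least-squares fit of a 12-parameter polynomial warp (6 coefficients
    per coordinate, an assumed basis) from matched point displacements."""
    A = _basis(src_pts)
    px, *_ = np.linalg.lstsq(A, dst_pts[:, 0], rcond=None)
    py, *_ = np.linalg.lstsq(A, dst_pts[:, 1], rcond=None)
    return px, py

def apply_warp(px, py, pts):
    """Map points through the estimated model; used to resample the
    candidate image back towards the original geometry before extraction."""
    A = _basis(pts)
    return np.column_stack([A @ px, A @ py])
```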
In the following presentation, the case of a pre-existing database containing objects that describe the owner of a multimedia element and the image's features will be examined. A trusted party adds this information to an entry in its database and computes a hash over the data of the multimedia element. Additional operations will be performed to extract the main features of the multimedia element.
There has been a vast increase in the accumulation and communication of digital computer data in both the private and public sectors; much of this information has significant value and requires protection. Encryption is an effective measure for data security applications. With proper management controls, adequate implementation specifications, and applicable usage guidelines, data encryption not only aids in protecting data communications but can also provide protection for a myriad of specific data processing applications. DSPs are becoming more and more popular for their fast instruction execution, wide applicability and relatively low cost, so using a DSP as the processing unit to implement complex data encryption algorithms is a good idea. We developed a DSP-based Data Encryption Communication System (D-DECS) to implement real-time data security applications. This paper not only presents a new architecture, Single-Program Multiple-Data stream (SPMD), for building the D-DECS, but also describes the complete hardware and software features of the D-DECS we developed. Finally, we show that the D-DECS based on the SPMD architecture offers both good performance and the flexibility to meet various requirements in different situations.
Fragile watermarking addresses the recognition of manipulations. At present, most fragile watermarks are very sensitive to any change. This is of interest to parties who want to verify that an image has not been edited, damaged, or altered since it was marked. However, in many applications we have to expect several allowed post-production editing processes which do not manipulate the content of the image or video data. Semi-fragile schemes address this problem: these techniques are moderately robust, and the value identifying the presence of the watermark can serve as a measure of tampering. Unfortunately, such schemes cannot recognize whether the content or the message of the media was affected or manipulated. Approaches are needed that can distinguish malicious changes from innocent image processing operations. Such techniques can be termed authentication of the visual content. We propose to extract the visual content, called the feature, and embed these content features into the image data with a robust watermarking scheme, which we call content-fragile watermarking. An integrity decision is based on the feature extracted from the actual image and the embedded watermark features. Our idea is based on edge characteristics, which are already known and used as content features in several other approaches. To minimize the length of the content feature we introduce and compare five new methods to encode the edge characteristic: two edge-shape-based feature codes and three statistical feature codes. Our first test results on a selected set of images show that content-fragile watermarking based on robust watermarking and feature extraction tolerates post-production processes and recognizes manipulation.
In many multimedia applications, there is a need to authenticate a source that has been subjected to benign degradations in addition to potential tampering attacks. We develop one information-theoretic formulation of this problem, and identify and interpret the associated fundamental performance limits. A consequence of our results is that there is a tradeoff between embedding distortion and robustness to channel noise, but no such tradeoff between these parameters and security to forgery. To develop some intuition, we outline a sphere packing analogy and show that the results from sphere packing and information theory have the same form. One important benefit of our framework is a coherent way to analyze and design authentication schemes for general source models, distortion metrics, and noisy channel models. We illustrate this by an example construction of a realizable authentication scheme. An important ingredient of our construction is the use of forward error correcting codes. We show that the application of fairly simple codes decreases the embedding distortion required by more than 5 dB without decreasing security or robustness. Our approach is general enough to be used in a wide variety of applications.
In this paper, we present two new methods for authentication of digital images using invertible watermarking. While virtually all watermarking schemes introduce some small amount of non-invertible distortion in the image, the new methods are invertible in the sense that, if the image is deemed authentic, the distortion due to authentication can be removed to obtain the original image data. Two techniques are proposed: one is based on robust spatial additive watermarks combined with modulo addition, and the second on lossless compression and encryption of bit-planes. Both techniques provide cryptographic strength in verifying image integrity, in the sense that the probability of making a modification to the image that will not be detected can be directly related to a secure cryptographic element, such as a hash function. The second technique can be generalized to data types other than bitmap images.
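The modulo-addition idea can be illustrated with the following minimal sketch; the robust spatial watermark, the bit-plane technique, and the authentication check itself are omitted, and the point is only to show why the embedding distortion is exactly removable:

```python
import numpy as np

def embed_invertible(image, key, strength=4):
    """Add a key-derived pattern modulo 256 so the marked image differs
    only slightly, yet the original pixel values remain recoverable."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, strength, size=image.shape)
    return ((image.astype(np.int32) + pattern) % 256).astype(np.uint8)

def remove_watermark(marked, key, strength=4):
    """If the image is deemed authentic, subtracting the same pattern
    modulo 256 restores the original data exactly."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, strength, size=marked.shape)
    return ((marked.astype(np.int32) - pattern) % 256).astype(np.uint8)
```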
One major application domain for digital watermarks is copyright protection. Besides the design of watermarking algorithms, technologies for copyright holder identification have to be investigated. To ensure the authenticity of an individual person, a wide range of biometric procedures exist. We define and describe new biometric watermarks, which denote the application of biometric reference data of individuals within digital watermarks to identify and verify ownership. Of the two classes of physiological and senso-motoric biometric schemes, the latter appears more appropriate for biometric watermarks, as only these provide implicit expressions of intention. We therefore choose on-line handwriting as an appropriate base technology for our three new scenarios in biometric watermarking. In the first approach, embedding keys are generated from biometric reference data, which requires stable and robust features and leads to rather complex keys. To overcome the complexity boundaries, the second approach develops a biometric reference hash, allowing key look-ups in key-certifying servers. Although this approach leads to less complex keys, it still requires stable features. The third approach embeds biometric reference data within a watermark, allowing owner verification with more variable features, but limitations apply due to the capacity of watermarking systems, and protection of the reference data is also required. While most handwriting-based verification systems are limited to signature contexts, we discuss two additional context types for user authentication: passphrases and sketches.
With the rapid growth of digital image distribution, there has been a corresponding surge in digital counterfeiting of confidential documents. Document secrecy and copyright protection techniques have been introduced in an attempt to address this growing concern. Recently, digital image watermarks have been proposed for copyrighting image data. In this paper a new Digital Image Authentication System is proposed. The system is designed to detect any small change made to an image, including any alteration of pixel values or image size. It uses a watermarking scheme which is based on correlation coefficient statistics, the Secure Hash Algorithm (SHA-1), and the Elliptic Curve Digital Signature Algorithm (ECDSA). The generated signature is then embedded in the image spatial domain. Experimental results show that this system can accurately authenticate images while preserving their quality.
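A hedged sketch of the hash-and-sign step, assuming the third-party `cryptography` package is available and still accepts SHA-1 for ECDSA signatures; the correlation-coefficient statistics and the spatial-domain embedding of the signature are not shown:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def sign_image(pixel_bytes: bytes, private_key):
    """Sign the raw pixel data: ECDSA over the SHA-1 digest of the image."""
    return private_key.sign(pixel_bytes, ec.ECDSA(hashes.SHA1()))

def verify_image(pixel_bytes: bytes, signature: bytes, public_key) -> bool:
    """Any change to pixel values or image size alters the digest and
    makes verification fail."""
    try:
        public_key.verify(signature, pixel_bytes, ec.ECDSA(hashes.SHA1()))
        return True
    except InvalidSignature:
        return False

# Example usage (hypothetical key pair):
# priv = ec.generate_private_key(ec.SECP256R1())
# sig = sign_image(image.tobytes(), priv)
# ok = verify_image(image.tobytes(), sig, priv.public_key())
```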
The control of the integrity and authentication of medical images is becoming ever more important within Medical Information Systems (MIS). The intra- and inter-hospital exchange of images, such as in PACS (Picture Archiving and Communication Systems), and the ease of copying, manipulating and distributing images have brought security aspects to the fore. In this paper we focus on the role of watermarking for MIS security and address the problem of integrity control of medical images. We discuss alternative schemes to extract verification signatures and compare their tamper detection performance.
Recently, a number of authentication schemes have been proposed for multimedia data such as images and sound data. They include both label based systems and semifragile watermarks. The main requirement for such authentication systems is that minor modifications such as lossy compression which do not alter the content of the data preserve the authenticity of the data, whereas modifications which do modify the content render the data not authentic. These schemes can be classified into two main classes depending on the model of image authentication they are based on. One of the purposes of this paper is to look at some of the advantages and disadvantages of these image authentication schemes and their relationship with fundamental limitations of the underlying model of image authentication. In particular, we study feature-based algorithms which generate an authentication tag based on some inherent features in the image such as the location of edges. The main disadvantage of most proposed feature-based algorithms is that similar images generate similar features, and therefore it is possible for a forger to generate dissimilar images that have the same features. On the other hand, the class of hash-based algorithms utilizes a cryptographic hash function or a digital signature scheme to reduce the data and generate an authentication tag. It inherits the security of digital signatures to thwart forgery attacks. The main disadvantage of hash-based algorithms is that the image needs to be modified in order to be made authenticatable. The amount of modification is on the order of the noise the image can tolerate before it is rendered inauthentic. The other purpose of this paper is to propose a multimedia authentication scheme which combines some of the best features of both classes of algorithms. The proposed scheme utilizes cryptographic hash functions and digital signature schemes and the data does not need to be modified in order to be made authenticatable. Several applications including the authentication of images on CD-ROM and handwritten documents will be discussed.
Digital watermarking, the technique of hiding information in multimedia content, has attracted considerable attention in recent years. This paper proposes a new watermarking technique for 3D geometric models. The algorithm retriangulates a part of a triangle mesh and embeds the watermark into the positions of the newly added vertices. Up to 8 bytes of data can be invisibly embedded into an edge of the triangle mesh without causing any change to the geometry of the original 3D model. The embedded watermark resists affine transformations and can be extracted from the stego-model alone, without using the original cover model.
In this paper, we present an analysis of feature-based geometry invariant watermarking algorithms. A discussion of the requirements on each building block is followed by potential solutions to meet these requirements. Furthermore, we present theoretical and practical limitations of these solutions via examples. In particular, segmentation based feature point extractors and triangulation based elementary patch formations are evaluated.
Asymmetric schemes belong to the second generation of watermarking. Whereas their need and advantages are well understood, many doubts have been raised about their robustness and security. Four different asymmetric schemes have been proposed so far. Although they seemingly rely on completely different concepts, they share the same performance. Exploring these concepts in detail, the authors propose a common formulation of the four different detector processes. This allows us to stress common features concerning the security of asymmetric schemes.
This paper proposes a new asymmetric watermarking system. Digital watermarking is a method to imperceptibly embed certain information in digital multimedia data. Watermarking systems are classified as symmetric and asymmetric ones. The former uses the same key for encoding and decoding and is suitable for private applications in the sense that the key has to be maintained private among the authorized embedder and detectors. The latter, which uses different keys for encoding and decoding, is for public applications.
The two main objectives of this paper are: (1) to better define the public-key (PK) watermarking problem in terms of properties, design requirements and usage, and (2) to propose one solution to the problem using neural network functions. Our survey of public-key watermarking begins with a review of the state of the art. Different aspects of PK watermarking are then discussed, among them: basic robustness properties, usage of PK systems, attacks on the public and secret detectors, types of PK strategies, and strong vs. weak PK watermarking systems. Accordingly, a PK system using multi-layer neural network (NN) functions is proposed to match many PK system requirements. The approach is briefly presented for the linear case. Theoretical results are given, showing that it is possible to design PK systems approaching the detection performance of secret-key watermarking, a very unusual feature for PK systems. Experimental results are given on both simulated signals and images, confirming the predicted results and showing great resistance to JPEG compression. The paper ends with openings for new research directions.
Digital steganography is the art of secretly hiding information inside a multimedia signal in such a way that its very existence is concealed. In this paper, we present a new steganographic technique for covert communications. The technique embeds the hidden information in the transform domain after decorrelating the image samples in the spatial domain using a key. This results in a significant increase in the number of transform coefficients that can be used to transmit the hidden information, and therefore, increases the data embedding capacity. The hidden information is embedded in the transform domain after taking a block DCT of the decorrelated image. A quantization technique is used to embed the hidden data. The decoding process requires the availability of the same key that was used to decorrelate the image samples. By using quantization techniques, the hidden information can be recovered reliably. If the key is not available at the decoder it is impossible to recover the hidden information. Hence, this system is secure against removal attacks. The statistical properties of the cover and the stego image remain identical for small quantization steps. Therefore, the hidden data cannot be detected. The data embedding system is modeled as transmitting information through a Gaussian channel.
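A minimal sketch of the idea, with an assumed key-driven pixel permutation for decorrelation and a simple parity-of-quantizer-index rule on one mid-frequency DCT coefficient per 8x8 block; decoding would repeat the same permutation and block DCT and read back the parity:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_secret(image, bits, key, step=8.0):
    """Permute (decorrelate) the pixels with a key, take a block DCT, and
    embed each bit by quantizing one coefficient to an even or odd
    multiple of the step.  Coefficient position and step are assumptions."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(image.size)            # key-driven decorrelation
    scrambled = image.astype(float).ravel()[perm].reshape(image.shape)

    h, w = image.shape
    blocks = [(r, c) for r in range(0, h - 7, 8) for c in range(0, w - 7, 8)]
    out = scrambled.copy()
    for bit, (r, c) in zip(bits, blocks):
        C = dctn(out[r:r + 8, c:c + 8], norm="ortho")
        q = np.round(C[3, 3] / step)
        if (int(q) & 1) != bit:                   # force quantizer-index parity
            q += 1
        C[3, 3] = q * step
        out[r:r + 8, c:c + 8] = idctn(C, norm="ortho")

    # Undo the permutation so the stego image resembles the cover image.
    restored = np.empty(image.size)
    restored[perm] = out.ravel()
    return restored.reshape(image.shape)
```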
JPEG is a common image format on the world wide web. JPEG-compressed images can be used to hide data for secret internet communication, or simply to carry auxiliary data. In this paper, we propose an algorithm called J-Mark to embed invisible watermark information into JPEG-compressed images in the compressed domain. J-Mark has three parts: block selection, DCT coefficient selection, and modification of the selected DCT coefficients. Only texture blocks with significant masking properties are chosen in block selection, and only the DCT coefficients with significant energy in the selected blocks are used. The watermark data are embedded as the 'randomized parity' of the selected DCT coefficients. The embedded data can be recovered perfectly in the compressed domain without fully decoding the JPEG image. Experimental results suggest that J-Mark can hide the watermark data without detectable visual artifacts. Although the data hiding capacity differs among images, a parameter of J-Mark can be used to trade off data hiding capacity against visual quality.
With the widespread use of the internet, image databases containing large amounts of information are used in various applications such as product briefings, personal information services, and so on. These image databases are difficult to search in real time, whereas text-based databases can be searched relatively quickly. In this paper, an image retrieval method is proposed in which images are retrieved using hidden text information related to them. By invisibly modifying the images, they can be retrieved without any headers or separate files linked to them. This paper presents a robust data hiding method that separates the image into edge and non-edge regions. The sum of each 8 x 8 image block is quantized by adding pseudo-noise that depends on the data bit, and the amount of pseudo-noise is controlled adaptively based on the gradient magnitude of the image block. Real-time extraction of the extra data by the proposed algorithm makes it suitable for a practical image retrieval system. Experiments with various test image sets show that the proposed data embedding algorithm is robust to Joint Photographic Experts Group (JPEG) compression.
In this work, we propose new multiresolution techniques for data embedding in imagery. The wavelet extrema of the image are exploited to embed data. We use the wavelet extrema of the dyadic non-orthogonal wavelet transform. These extrema correspond to high-frequency points in the image, so modifications in their neighborhoods cause only minor visual distortion.
Internet bandwidth is in high demand, and one way that web sites lower the amount of bandwidth they use is by compressing their images, which also makes the site load much faster. There are of course many other useful applications for compressed images. Bit-Plane Complexity Segmentation (BPCS) steganography is a technique to hide data inside an image file. BPCS achieves high embedding rates with low distortion, based on the theory that noise-like regions in a bit-plane can be replaced with noise-like secret data without discernible loss of image quality. This is possible because the human eye, while very good at spotting anomalies in areas of homogeneous texture, is bad at spotting anomalies in visually complex areas. However, BPCS is not a robust embedding scheme, and any lossy compression usually destroys the data. Wavelet image compression using the Discrete Wavelet Transform (DWT) is the basis of many modern compression schemes. The coefficients generated by certain wavelet transforms have many image-like qualities, which can be exploited to allow BPCS to be performed on the coefficients. The results can then be losslessly encoded, combining the good compression of the DWT with the high embedding rates of BPCS.
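The BPCS complexity test that decides which regions can carry secret data can be sketched as follows; the 0.3 threshold is a typical, assumed value rather than a prescription from this paper:

```python
import numpy as np

def bitplane_complexity(block):
    """Border-length complexity used by BPCS: the fraction of adjacent
    pixel pairs (horizontal and vertical) whose bits differ in a square
    binary block; noise-like blocks score near 0.5."""
    changes = np.sum(block[:, 1:] != block[:, :-1]) + \
              np.sum(block[1:, :] != block[:-1, :])
    max_changes = 2 * block.shape[0] * (block.shape[1] - 1)  # square block
    return changes / max_changes

def embeddable_regions(bitplane, threshold=0.3):
    """Top-left corners of 8x8 regions complex enough to be replaced by
    noise-like secret data."""
    h, w = bitplane.shape
    return [(r, c) for r in range(0, h - 7, 8) for c in range(0, w - 7, 8)
            if bitplane_complexity(bitplane[r:r + 8, c:c + 8]) >= threshold]
```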
Watermarking algorithms for copyright protection are usually classified as belonging to one of two classes: detectable and readable. The aim of this paper is to present a possible approach for transforming an optimum, detectable technique previously proposed by the authors into a readable one. Similarly to what has been done previously by other authors, we embed multiple copies of the watermark into the image, with their relative positions in the frequency domain related to the informative message. The main drawback of this approach is that all copies of the watermark have to be detected without knowing their positions, i.e., all possible positions (many tens of thousands in our case) have to be tested, which is prohibitive in terms of computational cost. Correlation-based watermark detectors can overcome this problem by exploiting the Fast Fourier Transform (FFT) algorithm, but they are not optimum in the case of non-additive watermarks. In this paper we demonstrate how the formula of the optimum watermark detector can be recast in a correlation structure, thus allowing us to use the FFT for testing the watermark presence at all possible positions: in this way a fast optimum decoding system is obtained.
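The FFT trick that makes testing all positions feasible is the standard one sketched below: a single frequency-domain product evaluates the correlation at every cyclic shift at once. The paper's optimum, non-additive detector recast into this form is not reproduced here.

```python
import numpy as np

def correlate_all_shifts(image, watermark):
    """Cross-correlation at every possible (cyclic) position computed with
    the FFT instead of an explicit loop over tens of thousands of shifts."""
    F_img = np.fft.fft2(image.astype(float))
    F_wm = np.fft.fft2(watermark.astype(float), s=image.shape)
    corr = np.real(np.fft.ifft2(F_img * np.conj(F_wm)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, peak        # the peak location encodes the copy's position
```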
A secure Internet infrastructure and IBM image watermarking technology have been integrated for the production and authentication of duplication-resistant hard copy documents that may be transmitted to remote sites before being printed. Envisioned applications include the issuance of certificates, contracts, public records, receipts, coupons, ...even college transcripts.
In this paper, we propose a novel halftone image data hiding method called Data Hiding in Block Parity (DHBP), which hides the data in the block-sum parity. DHBP assumes that the original multi-tone image is available and the halftoning method is ordered dithering. DHBP can hide a relatively large amount of invisible watermarking data in halftone images while retaining good visual quality. In DHBP, one bit of information is hidden in a block of size MxN by forcing the parity of the sum of the MN pixels to be even or odd according to the data bit to be embedded. To alter the parity, one out of the MN pixels is chosen by minimizing the local image intensity change. Some custom-made quality measures are proposed to evaluate DHBP. Simulation results suggest that DHBP can hide a large amount of data while maintaining good visual quality.
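A sketch of the parity-forcing rule for one block, where the pixel to toggle is chosen to minimize the local intensity change relative to the original multi-tone block; the paper's custom quality measures are not included:

```python
import numpy as np

def embed_dhbp_bit(halftone_block, multitone_block, bit):
    """Force the parity of the block's pixel sum to match the data bit by
    flipping the single binary pixel whose flip moves least from the
    underlying multi-tone intensity (assumed to be 8-bit)."""
    block = halftone_block.copy()              # binary 0/1 pixels
    if int(block.sum()) % 2 == bit:
        return block                           # parity already correct

    flipped = 1 - block
    cost = np.abs(flipped - multitone_block / 255.0)   # intensity change
    r, c = np.unravel_index(np.argmin(cost), block.shape)
    block[r, c] = flipped[r, c]
    return block

def extract_dhbp_bit(halftone_block):
    """The hidden bit is simply the parity of the block sum."""
    return int(halftone_block.sum()) % 2
```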
With the proliferation of digital media such as digital images, digital audio, and digital video, robust digital watermarking and data hiding techniques are needed for copyright protection, copy control, annotation, and authentication. While many techniques have been proposed for digital color and grayscale images, not all of them can be directly applied to binary text images. The difficulty lies in the fact that changing pixel values in a binary document could introduce irregularities that are very visually noticeable. We propose a new method for data hiding in binary text documents by embedding data in the 8-connected boundary of a character. We have identified a fixed set of pairs of five-pixel long boundary patterns for embedding data. One of the patterns in a pair requires deletion of the center foreground pixel, whereas the other requires the addition of a foreground pixel. A unique property of the proposed method is that the two patterns in each pair are dual of each other -- changing the pixel value of one pattern at the center position would result in the other. This property allows easy detection of the embedded data without referring to the original document, and without using any special enforcing techniques for detecting embedded data.
Visible watermarking schemes are common IPR protection mechanisms for digital images and videos that have to be released for certain purposes while illegal reproduction of them is prohibited. Digital data embedded with visible watermarks contain recognizable but unobtrusive copyright patterns, while the details of the host data are supposed to remain discernible. The embedded pattern of a useful visible watermarking scheme should be difficult or impossible to remove unless exhaustive and expensive labor is involved. In this paper, we propose a general attacking scheme against current visible image watermarking techniques.
Watermark attacks are first categorized and explained with examples in this paper. We then propose a new image watermark attack called the 'Pixel Reallocation Attack'. The proposed attack is a hybrid approach that aims to decorrelate the embedded watermark from the original watermark. Since many watermark detectors work by correlating the test image with the target watermark, detection fails once the embedded watermark has been decorrelated. For example, geometrical transformation attacks desynchronize the correlation detector from the test image, leading to detection failure; however, inserting a template or grid into the watermarked image can make the inverse transformation possible, so the watermark can still be retrieved. If instead we apply transformations to every single pixel locally, independently and randomly, no inverse transformation is possible and the attack succeeds, since the embedded watermark is no longer correlated with the original watermark. Experiments show that a single-technique approach needs a larger distortion of the image to attack it successfully. We also tested our attack against commercial watermarking software, which could no longer detect the watermark after the proposed hybrid attack was applied to the watermarked image.
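A rough sketch of the local, independent, random per-pixel transformation idea (not the authors' exact attack; the jitter amplitude and nearest-neighbour resampling are illustrative):

import numpy as np

def pixel_reallocation_attack(img, max_shift=1.0, seed=0):
    # resample every pixel from a slightly and independently jittered position,
    # so no single global inverse transform can undo the distortion
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    ys += rng.uniform(-max_shift, max_shift, size=(h, w))
    xs += rng.uniform(-max_shift, max_shift, size=(h, w))
    ys = np.clip(np.round(ys), 0, h - 1).astype(int)
    xs = np.clip(np.round(xs), 0, w - 1).astype(int)
    return img[ys, xs]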
This paper presents a new attack, called the watermark template attack, for watermarked images. In contrast to the Stirmark benchmark, this attack does not severely reduce the quality of the image and therefore maintains the commercial value of the watermarked image. In contrast to previous approaches, the aim of the attack is not to change the statistics of the embedded watermark in order to fool the detection process, but to exploit specific concepts that have recently been developed for more robust watermarking schemes. The attack estimates the corresponding template points in the FFT domain and then removes them using local interpolation. We demonstrate the effectiveness of the attack on different test cases that have been watermarked with commercially available watermarking products. The approach presented is not limited to the FFT domain; other transform domains may be exploited by very similar variants of the described attack.
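A simplified sketch of the template-removal step, under the assumption that template points appear as isolated peaks in the FFT magnitude (the peak test and the interpolation rule are illustrative; the paper's estimator may differ):

import numpy as np

def remove_fft_peaks(img, win=5, factor=10.0):
    F = np.fft.fftshift(np.fft.fft2(img))
    mag = np.abs(F)
    h, w = mag.shape
    cy, cx = h // 2, w // 2
    r = win // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            if abs(y - cy) < 8 and abs(x - cx) < 8:
                continue                                  # leave the DC neighbourhood alone
            nb = mag[y - r:y + r + 1, x - r:x + r + 1].copy()
            nb[r, r] = 0.0
            local = nb.sum() / (win * win - 1)            # mean of the neighbours
            if mag[y, x] > factor * local:                # isolated peak -> likely template point
                F[y, x] *= local / mag[y, x]              # interpolate the magnitude, keep the phase
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))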
The loss of synchronization caused by geometrical modifications of an image, such as cropping, rotation and scaling, increases the difficulty of watermark detection, especially for block-based watermarking schemes. In this research, we consider an algorithm to embed an invisible grid structure into watermarked images to overcome this problem. A fixed-size 2D pseudo-random pattern is repeatedly embedded along the horizontal and vertical directions of an image after the watermark is embedded in the image. In watermark detection, the affine matrix as well as the horizontal/vertical shifts associated with certain geometrical attacks are determined by calculating the autocorrelation of the extracted grid structure and the cross-correlation between the folded grid and the embedded pattern. Synchronization is then recovered, and the watermark can be more easily detected. The applicability and advantages of the proposed algorithm are demonstrated by experimental results.
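A simplified illustration of how a repetition period can be read off the autocorrelation of an estimate of the embedded grid (the high-pass residual used as the estimate and the simple peak search are my own simplifications, not the paper's detector; recovering the full affine matrix would additionally require the cross-correlation step):

import numpy as np

def estimate_tile_period(img, min_lag=8):
    img = img.astype(float)
    residual = img - np.roll(img, 1, axis=0)          # crude high-pass estimate of the hidden grid
    F = np.fft.fft2(residual)
    ac = np.real(np.fft.ifft2(F * np.conj(F)))        # circular autocorrelation via the FFT
    col = ac[:, 0].copy()                             # lags along the vertical direction
    row = ac[0, :].copy()                             # lags along the horizontal direction
    for v in (col, row):
        v[:min_lag] = -np.inf                         # suppress the zero-lag peak
        v[-min_lag:] = -np.inf                        # and its wrap-around mirror
    return int(np.argmax(col)), int(np.argmax(row))   # (vertical period, horizontal period)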
In this paper, we present a hybrid digital watermarking scheme for copyright protection purposes. Our scheme is based on a multi-resolution wavelet analysis. For robust watermarking, the watermark should be embedded in the lowest possible spatial frequencies; however, modifications to very low frequencies are difficult to make without perceptible distortion of the original image during the watermarking process. We found that the wavelet transform offers a good trade-off for this problem: it is known for its good localization in the time and frequency domains, a feature that allows the watermark to be embedded in any particular frequency band and/or orientation. By selecting an appropriate frequency band in which to fuse the watermark, robustness can be achieved for both copyright protection and tamper proofing. To evaluate our algorithm, we demonstrate the effects of various image distortions and attack operators such as mean filtering and JPEG compression. Our results indicate that the proposed algorithm is more robust than other algorithms in the same category. In addition, our watermarked images do not show any visible artifacts, and the original image is not required to extract the watermark.
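A minimal additive wavelet-domain embedding sketch along these lines (it assumes the PyWavelets package; the wavelet, decomposition level, band choice and strength are illustrative rather than the paper's values):

import numpy as np
import pywt

def embed_wavelet(img, bits, strength=2.0, level=3, key=0):
    coeffs = pywt.wavedec2(img.astype(float), 'db4', level=level)
    cH, cV, cD = coeffs[1]                               # detail band at the coarsest level
    rng = np.random.default_rng(key)
    sites = rng.choice(cH.size, size=len(bits), replace=False)
    idx = np.unravel_index(sites, cH.shape)
    cH[idx] += strength * (2 * np.asarray(bits, dtype=float) - 1)   # additive antipodal embedding
    coeffs[1] = (cH, cV, cD)
    return pywt.waverec2(coeffs, 'db4')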
The invisibility, robustness, and capacity of watermarks are critical for data authentication. This paper proposes a watermarking scheme that takes advantage of both the AC and DC coefficients of the DCT to increase the robustness of watermarks against different attacks caused by processing or degradation. In this scheme, watermark components are inserted into the DC coefficient and some lower-frequency AC coefficients. This enlarges the embedding capacity while keeping the embedded watermarks invisible. The algorithms for embedding both meaningful and meaningless watermarks and for detecting them are described in detail. The image is first divided into blocks, and each block is classified into one of three categories according to its luminance and texture characteristics. The embedded watermarks are then distributed over both AC and DC coefficients to reach a suitable compromise between invisibility and robustness. Experimental results with real images demonstrate that the watermarks generated with the proposed algorithms are more robust against noise and commonly used image-processing techniques than watermarks generated using only DC or AC coefficients. The capacity enabled by the proposed algorithms is also larger than that of algorithms that do not use DC coefficients.
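A bare-bones sketch of spreading one bit over the DC and two low-frequency AC coefficients of each 8x8 DCT block (the strengths, the coefficient positions and the absence of the paper's block classification are all simplifications):

import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_dc_ac(img, bits, dc_strength=4.0, ac_strength=2.0):
    out = img.astype(float).copy()
    h, w = out.shape
    idx = 0
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            if idx >= len(bits):
                return out
            B = dct2(out[r:r+8, c:c+8])
            s = 2 * bits[idx] - 1                 # antipodal symbol for the current bit
            B[0, 0] += dc_strength * s            # DC coefficient
            B[0, 1] += ac_strength * s            # two low-frequency AC coefficients
            B[1, 0] += ac_strength * s
            out[r:r+8, c:c+8] = idct2(B)
            idx += 1
    return out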
We propose a content-based watermarking technique using the MPEG-7 texture descriptor. In the proposed technique, the content description tools of MPEG-7 are adopted for watermarking: the MPEG-7 texture descriptor is used to locate perceptually significant portions of the data, and the positions at which the watermark is inserted are chosen according to the texture descriptor in the DFT domain. In detection, the texture descriptor is again used to weight certain frequency regions in the detection procedure. Experimental results show that the proposed method outperforms conventional transform-domain watermarking techniques.
This paper proposes an adaptive 2D image watermarking algorithm in the DCT domain. To embed an image watermark, we split the watermark image into blocks and transform them into the DCT domain. These DCT coefficients are then quantized and adjusted, and the non-zero DCT coefficients of each block are chosen to constitute the watermark. We also split the cover image into blocks, classify these blocks based on the human visual system, and transform them into the DCT domain. According to this classification, the watermark components are enhanced with different intensities and embedded into some low-frequency DCT coefficients of the cover image. Experimental results demonstrate the effectiveness of the proposed scheme.
Masking models are mostly used in data compression algorithms, where they serve to shape the quantization noise. They were introduced into watermarking to indicate the regions where the watermark can be inserted without perceptible artifacts. This allows more watermark energy to be embedded, for a given absolute distortion constraint, than if no mask were used. Yet little attention has been paid to the consequences of using these masks with respect to detection performance. In this work, it is shown that blind use of masking models facilitates the attacker's task and can eventually result in severe decreases of the detection statistic at the detector, even for reasonable attack distortions.
In digital watermarking, one aim is to insert the maximum possible watermark signal without significantly affecting image quality. Advantage can be taken of the masking effect of the eye to increase the signal strength in busy or high-contrast image areas. The application of such a human visual system model to watermarking has been proposed by several authors. However, if a simple contrast measurement is used, an objectionable ringing effect may become visible on connected directional edges. In this paper we describe a method which distinguishes between connected directional edges and high-frequency textured areas that have no preferred edge direction. The watermark gain on connected directional edges is suppressed, while the gain in high-contrast textures is increased. Overall, such a procedure accommodates a more robust watermark for the same level of visual degradation, because the watermark is attenuated where it is truly objectionable and enhanced where it is not. Furthermore, some authors propose that the magnitude of a signal which can be imperceptibly placed in the presence of a reference signal can be described by a non-linear mapping of magnitude to local contrast. In this paper we derive such a mapping function experimentally by determining the point of just noticeable difference between a reference image and a reference image with watermark.
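One way to obtain such a gain map, sketched here with a structure-tensor coherence measure (my own approximation of the idea, not the authors' measure or constants): connected directional edges give high coherence and are attenuated, while busy texture gives high local energy with low coherence and is boosted.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def watermark_gain_map(img, base_gain=1.0, sigma=2.0):
    img = img.astype(float)
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    # smoothed structure tensor components
    Jxx = gaussian_filter(gx * gx, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    tr = Jxx + Jyy
    det = Jxx * Jyy - Jxy * Jxy
    tmp = np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + tmp, tr / 2.0 - tmp               # tensor eigenvalues
    coherence = np.where(tr > 1e-8, (l1 - l2) / (l1 + l2 + 1e-8), 0.0)
    energy = tr / (tr.max() + 1e-8)                        # local contrast / activity
    # high energy + low coherence -> texture -> boost; high coherence -> edge -> suppress
    return base_gain * (1.0 + energy * (1.0 - 2.0 * coherence))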
A good watermark needs to be perceptually invisible, undetectable without the key, and robust to spatial/temporal data modification. In this paper, we exploit the characteristics of the human visual system (HVS), in which the response of the visual cortex decomposes the image spectrum into perceptual channels that are bands in spatial frequency. In the HVS, the spatial frequency domain is divided into octave bands along the radial axis. Based on this octave-band division, the watermark is inserted into the channels according to the number of watermark bits, so that more watermark bits are embedded in the significant portions, which lie in the low-to-middle frequency region of the frequency domain. This insertion scheme provides robustness because the watermark placed in the significant portions can resist strong attacks on the image. Experimental results show that the proposed HVS-based method is more robust to attacks than conventional DCT, wavelet and DFT watermarking methods.
The problem of evaluating the maximum number of information bits that can be hidden within an image is considered. It is usually addressed by viewing the watermarking process as a communication task in which the signal, i.e. the watermark, is transmitted over a channel whose role is played by the host data; the maximum number of information bits is then the capacity of the watermark channel. Experimental results show that the watermark capacity depends on the watermark strength G, so the maximum watermark level allowed under the constraint of watermark invisibility must be known. G is often adjusted interactively for the image at hand, because no simple algorithm exists for fitting the watermark level to the characteristics of the host image. Hence, a novel algorithm for modelling the Human Visual System has been developed which considers frequency sensitivity, local luminance and contrast masking effects. The proposed method exploits a block-based DCT decomposition of the image, which permits a trade-off between the spatial and frequency localisation of image features and disturbances. With this model, the maximum allowable watermark strength is determined in a completely automatic way, and the watermark capacity is then computed.
The work presented here deals with watermarking algorithms. The goal is to show how properties of the Human Visual System (H.V.S.) can be taken into account in the design of such algorithms. The watermarking algorithm presented in this paper is constructed in three steps. The first step selects favourable sites for watermark embedding. The selection exploits a multi-channel model of the Human Visual System which decomposes the visual input into seventeen perceptual components; medium and high frequencies are then selected to generate a map of sites, which is further refined by taking certain highly uniform areas into account. The second step deals with the choice of the strength to apply at the selected sites. The strength is determined from the H.V.S. sensitivity to the local band-limited contrast. In the third step, examples of spatial watermark embedding and extraction are given. The same perceptual mask has been used successfully in other studies. The watermark results from a binary pseudo-random sequence of length 64, which is circularly shifted so as to occupy all the sites mentioned above. The watermark extraction exploits detection theory and requires both the perceptual mask and the original watermark. The extracted watermark is compared to the original and a normalized correlation coefficient is computed; the value of this coefficient allows the copyright to be detected.
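The detection side of such a scheme can be sketched as follows: the extracted 64-element watermark is compared against circular shifts of the reference sequence with a normalized correlation coefficient, and a threshold decides presence (the perceptual mask and the extraction step are abstracted away; the threshold value is illustrative):

import numpy as np

def normalized_correlation(extracted, reference):
    a = extracted - extracted.mean()
    b = reference - reference.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def detect(extracted, reference, threshold=0.2):
    # try every circular shift of the reference, since the mark is shift-embedded
    scores = [normalized_correlation(extracted, np.roll(reference, k))
              for k in range(len(reference))]
    return max(scores) > threshold, int(np.argmax(scores))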
In this paper, we will provide an overview of the wavelet-based watermarking techniques available today. We will see how previously proposed methods such as spread-spectrum watermarking have been applied to the wavelet transform domain in a variety of ways and how new concepts such as the multi-resolution property of the wavelet image decomposition can be exploited. One of the main advantages of watermarking in the wavelet domain is its compatibility with the upcoming image coding standard, JPEG2000. Although many wavelet-domain watermarking techniques have been proposed, only a few fit the independent block coding approach of JPEG2000. We will illustrate how different watermarking techniques relate to image compression and examine the robustness of selected watermarking algorithms against image compression.
We consider a data hiding channel that is not perfectly known by the encoder and the decoder. The imperfect knowledge could be due to channel estimation error, a time-varying active adversary, etc. A mathematical model for this scenario is proposed, and many important attacks such as scaling and geometrical transformations fall under it. Minimal assumptions are made regarding the probability distributions of the data-hiding channel. Lower and upper bounds on the data hiding capacity are derived. It is shown that the popular additive Gaussian noise channel model may not suffice in real-world scenarios: capacity estimates using the additive Gaussian channel model tend to either over- or under-estimate the capacity under different scenarios. The asymptotic value of the capacity as the signal-to-noise ratio becomes arbitrarily large is also given. Many existing data hiding capacity estimates are observed to be special cases of the formulas derived in this paper. We also observe that the proposed mathematical model applies to real-life applications such as data hiding in image and video. The theoretical results are further illustrated with numerical values.
In this paper, we present techniques for steganalysis of images that may have been subjected to a watermarking algorithm. Our hypothesis is that a particular watermarking scheme leaves statistical evidence or structure that can be exploited for detection with the aid of proper selection of image features and multivariate regression analysis. We use some sophisticated image quality metrics as the feature set to distinguish between watermarked and unwatermarked images. To identify the specific quality measures that provide the best discriminative power, we use analysis of variance (ANOVA) techniques. Multivariate regression analysis is then used on the selected quality metrics to build the optimal classifier, using images and their blurred versions. The idea behind blurring is that the distance between an unwatermarked image and its blurred version is smaller than the distance between a watermarked image and its blurred version. Simulation results with a specific feature set and a well-known, commercially available watermarking technique indicate that our approach can accurately distinguish between watermarked and unwatermarked images.
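A toy version of the central observation, with a plain mean-squared distance standing in for the paper's image-quality metrics and a one-dimensional threshold standing in for the regression-based classifier (all names here are hypothetical):

import numpy as np
from scipy.ndimage import gaussian_filter

def blur_distance_feature(img, sigma=1.0):
    # distance between an image and its blurred version; tends to be larger
    # when a high-frequency watermark signal is present
    img = img.astype(float)
    blurred = gaussian_filter(img, sigma)
    return float(np.mean((img - blurred) ** 2))

def classify(img, threshold):
    # threshold would be learned from training images in practice
    return blur_distance_feature(img) > threshold     # True -> "likely watermarked"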
The performance of data hiding techniques in still images may be greatly improved by means of coding. In previous approaches, repetition coding was first used to obtain N identical Gaussian channels, over which block and convolutional codes were then applied. Knowing that repetition coding can be improved upon, we turn our attention to coding directly at the sample level. Bounds on both hard- and soft-decision decoding performance are provided, and the use of concatenated coding and turbo coding for this approach is explored.
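For reference, the repetition-coding baseline referred to above, in both hard-decision (majority vote on signs) and soft-decision (sum of received samples) form:

import numpy as np

def repeat_encode(bits, n):
    return np.repeat(2 * np.asarray(bits) - 1, n)       # antipodal symbols +-1, repeated n times

def repeat_decode_hard(received, n):
    signs = np.sign(received).reshape(-1, n)
    return (signs.sum(axis=1) > 0).astype(int)          # majority vote per bit

def repeat_decode_soft(received, n):
    return (received.reshape(-1, n).sum(axis=1) > 0).astype(int)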
In this paper, we present a non-linear method of embedding a signature in colour and monochrome images and demonstrate its recovery. The embedding process can be viewed as pseudo-random perturbations to angles between vector elements. The derived angles are dithered by the addition of a watermark, and encoded as a pseudo-noise sequence of dither angle offsets. This is followed by a re-quantisation for storage or transmission. The dither angles are recovered by scaling according to the pre-determined angle quantisation intervals. These intervals may be fixed according to some pattern, or they could be obtained adaptively from the local image. Performing a complex correlation with the known sequence enables recovery of sub-degree dither angles embedded in 8-bit data. This occurs without recourse to the original image. This embedding process is additive in the angular domain and therefore multiplicative in the signal domain. Since the magnitude of the image vector is conserved, the image energy is largely unaltered by the embedding process. Colour watermarks can be treated as sets of ordered triples (RGB), as pixel pairs in spatial or YIQ/YCbCr colour domain, or in a transform domain.
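A sketch of the angular embedding idea for a pair of chrominance planes, as I read the description: each (Cb, Cr) vector is rotated by a small pseudo-random dither angle, so its magnitude, and hence most of the signal energy, is preserved. For brevity, the recovery below uses a reference pair and a plain normalized correlation, whereas the paper recovers the dither blindly from pre-determined angle quantisation intervals and a complex correlation.

import numpy as np

def embed_angle_dither(cb, cr, key=0, max_dither_deg=1.0):
    rng = np.random.default_rng(key)
    dither = np.deg2rad(rng.uniform(-max_dither_deg, max_dither_deg, size=np.shape(cb)))
    z = (cb + 1j * cr) * np.exp(1j * dither)       # rotate each chrominance vector slightly
    return np.real(z), np.imag(z), dither          # magnitude (saturation) is unchanged

def correlate_dither(cb_w, cr_w, cb_ref, cr_ref, dither):
    # recovered angle offsets, then normalized correlation against the known dither sequence
    recovered = np.angle((cb_w + 1j * cr_w) * np.conj(cb_ref + 1j * cr_ref)).ravel()
    d = dither.ravel()
    return float(np.dot(recovered, d) /
                 (np.linalg.norm(recovered) * np.linalg.norm(d) + 1e-12))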
BPCS-Steganography is a steganographic method that hides secret messages in digital images. It extracts local regions of the image for embedding using image segmentation based on a complexity measure that separates the image into "informative" and "noise-like" regions. The human visual system is unable to perceive any difference caused by replacing noise-like regions with random binary data. This property allows us to embed secret data into such noise-like regions, provided the secret data is a random pattern. To avoid suspicion, an image should look innocent after being embedded with secret information, not only visually but also under analysis. A complexity histogram represents the relative frequency of occurrence of the various complexities. In previous work, we studied the complexity histogram of an image embedded with secret data using BPCS-Steganography and pointed out an anomaly in its shape. In this paper, we analyze other features of the image theoretically and practically: we consider the intensity of pixels in the color components and the luminance, and analyze the shape of those histograms. Based on the experimental results, we show a more secure way to embed data with BPCS-Steganography.
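The complexity measure on which BPCS segmentation is commonly based: the number of 0/1 transitions along the rows and columns of a binary bit-plane block, normalised by the maximum possible count, with a threshold separating "noise-like" from "informative" blocks (the threshold value below is a typical choice, not necessarily the one used in the paper):

import numpy as np

def bpcs_complexity(block):
    # block: 2D array of {0, 1} taken from one bit plane
    block = np.asarray(block)
    h, w = block.shape
    transitions = np.abs(np.diff(block, axis=0)).sum() + np.abs(np.diff(block, axis=1)).sum()
    max_transitions = h * (w - 1) + w * (h - 1)
    return transitions / max_transitions

def is_noise_like(block, alpha=0.3):
    return bpcs_complexity(block) >= alpha              # noise-like blocks may carry secret data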
During the past few years, a variety of techniques have emerged to hide specific information within multimedia data for copyright protection, tamper-proofing and secret communication. The information hiding schemes proposed so far use either digital signal processing software or hardware, so they inevitably pose a problem for applications such as automatic copyright control systems, which need a fast data-extraction scheme. In this paper, we show that the newly proposed optical correlator-based information hiding system has an advantage in this respect. In this scheme, it is possible to simultaneously extract all the data hidden in one stego image, and furthermore to simultaneously extract all the data hidden in several stego images, using optical correlators such as the matched spatial filter and the joint transform correlator.
One of the main problems clouding the future of digital watermarking technologies is the lack of detailed evaluation of existing marking schemes. This lack of benchmarking of current algorithms is blatant; it confuses rights holders as well as software and hardware manufacturers and prevents them from choosing the solution appropriate to their needs. Indeed, basing long-lived protection schemes on badly tested watermarking technology does not make sense. In this paper we present the architecture of a public automated evaluation service we have developed for still images, sound and video. We detail and justify our choice of evaluation profiles, that is, the series of tests applied to different types of watermarking schemes. These evaluation profiles allow us to measure the reliability of a marking scheme at different levels, from low to very high. Besides the known StirMark transformations, we also detail new tests that will be included in this platform. One of them is intended to measure the real size of the key space: if one is not careful, two different watermarking keys may produce interfering watermarks, and as a consequence the actual space of keys is much smaller than it appears. Another set of tests is related to audio data and addresses the usual equalisation and normalisation, but also time stretching and pitch shifting. Finally, we propose a set of tests for fingerprinting applications. This includes averaging of copies with different fingerprints, random exchange of parts between different copies, and comparison between copies with selection of the most/least frequently used position differences.
We propose, for the first time, a multiple description framework for oblivious watermarking. Parallels between multiple description source coding and watermarking are drawn, and an information-theoretic definition of the problem is given. A spread-spectrum watermarking algorithm for DCT-based multiple descriptions is described. Performance of the proposed framework for various attack channels, such as additive white Gaussian noise, MPEG compression, and random bit error channels, shows that the proposed method performs reasonably well compared to non-oblivious schemes.
In many blind watermarking proposals, the unwatermarked host data is viewed as unavoidable interference. Recently, however, it has been shown that blind watermarking corresponds to communication with side information (i.e., the host data) at the encoder. For Gaussian host data and a Gaussian channel, Costa showed that blind watermarking can theoretically eliminate all interference from the host data. Our previous work presented a practical blind watermarking scheme based on Costa's idea, called the 'scalar Costa scheme' (SCS). SCS watermarking was analyzed theoretically and initial experimental results were presented. This paper discusses further practical implications of implementing SCS. We focus on the following three topics: (A) high-rate watermarking, (B) low-rate watermarking, and (C) restrictions due to finite codeword lengths. For (A), coded modulation is applied for a rate of 1 watermark bit per host-data element, which is interesting for information-hiding applications. For (B), low rates can be achieved either by repeating watermark bits or by projecting them in a random direction in signal space (spread-transform SCS). We show that spread-transform SCS watermarking performs better than SCS watermarking with repetition coding. For (C), Gallager's random-coding exponent is used to analyze the influence of codeword length on SCS performance.
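A minimal scalar Costa scheme sketch consistent with the description: each host sample carries one bit by moving it a fraction alpha of the way towards the nearest point of a bit-dependent, dithered lattice; detection picks the bit whose lattice lies closest to the received sample. The step size, the fixed alpha and the dither handling below are illustrative; in the published scheme alpha is typically chosen from the watermark-to-noise ratio.

import numpy as np

def scs_embed(x, bits, delta=8.0, alpha=0.7, key=0):
    # x: 1D array of host samples, one per bit
    rng = np.random.default_rng(key)
    k = rng.uniform(0, delta, size=len(bits))            # secret dither
    d = k + np.asarray(bits) * delta / 2.0               # bit-dependent lattice offset
    q = np.round((x - d) / delta) * delta + d            # nearest dithered lattice point
    return x + alpha * (q - x)                           # distortion-compensated move

def scs_detect(y, n_bits, delta=8.0, key=0):
    rng = np.random.default_rng(key)
    k = rng.uniform(0, delta, size=n_bits)
    bits = []
    for yi, ki in zip(y, k):
        errs = []
        for b in (0, 1):
            d = ki + b * delta / 2.0
            q = np.round((yi - d) / delta) * delta + d
            errs.append(abs(yi - q))                     # distance to each bit's lattice
        bits.append(int(np.argmin(errs)))
    return np.array(bits)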
Image registration is essential in spread spectrum based watermark detection of a distorted image. Traditional registration approaches include the use of templates and feature matching between the distorted image and a reference image. However, neither of these techniques adequately addresses localized transformation. We propose a new registration scheme based on motion estimation between the distorted image and a reference copy. Complex wavelets provide a hierarchical framework for our motion estimation algorithm and radial basis functions provide the means to correct erroneous motion vectors. Experimental results show that our approach can estimate the distortion quite accurately and allow correct watermark detection.
We propose a new key distribution scheme for multimedia multicast by exploiting the characteristics of multimedia signals. First, a basic rekeying message construction is presented that can be used to distribute rekeying material to a group of users using number-theoretic techniques. We then map the basic rekeying form to a logical tree structure to achieve logarithmic scalability and discuss several key updating strategies. Furthermore, we present a general key distribution scheme using data embedding for multimedia multicast. The performance and system feasibility of the proposed scheme are also analyzed. We also extend the key management scheme to multilayer multimedia applications in heterogeneous networks where group members have different quality requirements.
A common application of digital watermarking is to encode a small packet of information in an image, such as some form of identification that can be represented as a bit string. One class of digital watermarking techniques employs spread-spectrum-like methods in which each bit is redundantly encoded throughout the image in order to mitigate bit errors. We typically require that all bits be recovered with high reliability to effectively read the watermark. In many watermarking applications, however, a straightforward application of spread spectrum techniques is not enough for reliable watermark recovery, so we resort to additional techniques such as error correction coding. M-ary modulation, as proposed by M. Kutter [1], is one such technique for decreasing the probability of error in watermark recovery. It was shown in [1] that M-ary modulation techniques could provide a performance improvement over binary modulation, but direct comparisons to systems using error correction codes were not made. In this paper we examine the comparative performance of watermarking systems using M-ary modulation and watermarking systems using binary modulation combined with various forms of error correction. We do so in a framework that addresses both computational complexity and performance issues.
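A compact sketch of M-ary spread-spectrum signalling (hypothetical helper names): each group of log2(M) message bits selects one of M pseudo-noise sequences to add, and the decoder picks the codebook entry with the largest correlation against the residual.

import numpy as np

def mary_codebook(M, length, key=0):
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=(M, length))     # M pseudo-noise sequences

def mary_embed(host, symbol, codebook, strength=1.0):
    # symbol in 0..M-1 encodes log2(M) bits at once
    return host + strength * codebook[symbol]

def mary_detect(received, host_estimate, codebook):
    residual = received - host_estimate                   # or the received signal itself in the blind case
    scores = codebook @ residual
    return int(np.argmax(scores))                         # most correlated sequence wins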
In this paper, we propose a new watermark estimation technique based on error probability that gives a universal scheme for watermark estimation. The method treats the bits of the extracted watermark as random variables and calculates the probability that the image does or does not contain a certain watermark. The calculated probability is a universal measure that accounts for differences in the number of watermark bits and shows how reliably the watermark is contained in an image. Experiments using still images and movies demonstrate that the probability-based method is robust and efficient for estimating the extracted watermarks.
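One common way to turn a bit-match count into this kind of certainty measure, shown as a sketch rather than the paper's exact formula: under the null hypothesis each extracted bit matches the reference by chance with probability 1/2, so the probability of observing at least k matches out of n is a binomial tail, and the resulting value is comparable across watermarks of different lengths.

from math import comb

def false_positive_probability(n_bits, n_matches):
    # P(at least n_matches agreements out of n_bits purely by chance)
    return sum(comb(n_bits, k) for k in range(n_matches, n_bits + 1)) / 2 ** n_bits

# e.g. false_positive_probability(64, 58) is about 4.5e-12, so 58 matching bits
# out of 64 already make an accidental match extremely unlikely.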
Many of the proposed image watermarking algorithms belong to the class of spread-spectrum techniques, in which a pseudo-noise signal (itself a function of the mark) is added to the host signal, either in the spatial or in the frequency domain. Under these approaches, several results and concepts from digital communication theory, such as M-ary digital modulation, channel coding and optimum detection, are easily applicable.
Watermark signals are weakly inserted in images due to imperceptibility constraints, which makes them prone to errors in the extraction stage. Although error-correcting codes can potentially improve their performance, one must pay attention to the fact that the watermarking channel is in general very noisy. We have considered the trade-off between BCH codes and repetition codes in various concatenation modes. At the higher error rates that can be encountered in watermarking channels, such as those due to low-quality JPEG compression, codes like BCH codes cease to be useful; repetition coding seems to be the last resort at error rates of 25% and beyond. It has been observed that there is a range of bit error rates where their concatenation turns out to be more useful. In fact, a concatenation of repetition and BCH codes, judiciously dimensioned given the available number of insertion sites and the payload size, achieves a higher reliability level.
This work advocates the formulation of digital watermarking as a communication problem. We consider watermarking as communication with side information available for both encoder and decoder. A generalized watermarking channel is considered that includes geometrical attacks, fading and additive non-Gaussian noise. The optimal encoding/decoding scenario is discussed for the generalized watermarking channel.
Quantization Index Modulation (QIM) has been shown to be a promising method of digital watermarking. It has recently been argued that a version of QIM can provide the best information embedding performance possible in an information-theoretic sense. This performance can be demonstrated via random coding using a sequence of vector quantizers of increasing block length, with both channel capacity and optimal rate-distortion performance being reached in the limit of infinite quantizer block length. For QIM, the rate-distortion performance of the component quantizers is unimportant. Because the quantized values are not digitally encoded in QIM, the number of reconstruction values in each quantizer is not a design constraint, as it is in the design of a conventional quantizer. The lack of a rate constraint in QIM suggests that quantizer design for QIM involves different considerations than does quantizer design for rate-distortion performance. Lookabaugh has identified three types of advantages of vector quantizers over scalar quantizers, called the space-filling, shape, and memory advantages. This paper investigates whether all of these advantages are useful in the context of QIM. The QIM performance of various types of quantizers is presented, and a heuristic sphere-packing argument is used to show that, in the case of high-resolution quantization and a Gaussian attack channel, only the space-filling advantage is necessary for nearly optimal QIM performance. This is important because relatively simple quantizers are available that do not provide shape and memory gain but do give a space-filling gain.
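For orientation, the scalar baseline the paper builds on, i.e. plain dither modulation with two interleaved uniform quantizers (the step size is illustrative); the paper's argument is that replacing this scalar lattice with one having a better space-filling gain is essentially all that is needed for near-optimal QIM performance.

import numpy as np

def qim_embed(x, bits, delta=8.0):
    # each bit selects one of two uniform quantizers offset by half a step
    d = np.asarray(bits) * delta / 2.0
    return np.round((x - d) / delta) * delta + d

def qim_decode(y, delta=8.0):
    bits = []
    for yi in np.atleast_1d(y):
        # distance to the nearest reconstruction point of each quantizer
        errs = [abs(yi - (np.round((yi - b * delta / 2.0) / delta) * delta + b * delta / 2.0))
                for b in (0, 1)]
        bits.append(int(np.argmin(errs)))
    return np.array(bits)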