In this paper, we propose a new theoretical framework for the data-hiding problem in digital and printed text documents. We explain how this problem can be seen as an instance of the well-known Gel'fand-Pinsker problem. The main idea behind this interpretation is to consider a text character as a data structure consisting of multiple quantifiable features such as shape, position, orientation, size, and color. We also introduce <i>color quantization</i>, a new semi-fragile text data-hiding method that is fully automatable, has a high information embedding rate, and can be applied to both digital and printed text documents. The main idea of this method is to quantize the color or luminance intensity of each character in such a manner that the human visual system cannot distinguish between the original and quantized characters, while a specialized reader machine can do so easily. We also describe halftone quantization, a related method that applies mainly to printed text documents. Since these methods may not be completely robust to printing and scanning, an outer coding layer is proposed to address this issue. Finally, we describe a practical implementation of the color quantization method and present experimental results comparing it with existing methods.
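The character-level quantization idea can be illustrated as a scalar quantization-index-modulation scheme on a character's luminance. The sketch below is not the paper's actual embedding rule: the quantization step and the even/odd bit mapping are illustrative assumptions, chosen only so the perturbation stays below a visibility threshold.

```python
def embed_bit(luminance, bit, step=8):
    """Quantize a character's luminance to a multiple of `step`:
    even multiples encode bit 0, odd multiples encode bit 1.
    A small `step` keeps the change imperceptible (assumed threshold)."""
    q = round(luminance / step)
    if q % 2 != bit:
        # move to the nearest quantization level of the right parity
        q += 1 if luminance / step >= q else -1
    return q * step

def extract_bit(luminance, step=8):
    """A reader recovers the hidden bit by re-quantizing the luminance."""
    return round(luminance / step) % 2
```

The perturbation is bounded by one quantization step, which is why the method is semi-fragile: mild distortions keep the luminance within the correct cell, while printing and scanning may push it out, motivating the outer coding layer.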
In this paper we consider the problem of document authentication in electronic and printed forms. We formulate this problem from an information-theoretic perspective and present joint source-channel coding theorems that establish the performance limits of such protocols. We analyze the security of document authentication methods and present the optimal attacking strategies with corresponding complexity estimates that, contrary to existing studies, crucially rely on the information leaked by the authentication protocol. Finally, we present the results of an experimental validation of the developed concept that justifies the practical efficiency of the elaborated framework.
In this paper we introduce and develop a framework for interactive document navigation in multimodal databases. First, we analyze the main open issues of existing multimodal interfaces and then discuss two applications that include interaction with documents in several human environments, i.e., the so-called smart rooms. Second, we propose a system set-up dedicated to efficient navigation in printed documents. This set-up is based on the fusion of data from several modalities, including images and text. Both modalities can serve as cover data for hidden indexes using data-hiding technologies, as well as source data for robust visual hashing. The particularities of the proposed robust visual hashing are described in the paper. Finally, we address two practical applications of smart rooms for tourism and education and demonstrate the advantages of the proposed solution.
This paper presents a new approach to the protection of travel documents. We propose a digital watermarking technique whose verification process requires a scanning resolution as low as 72 dpi. The approach is based on the wavelet decomposition and supports three key aspects. Message encoding is accomplished by iterative error-correction codes operating close to channel capacity; this encoding is based on a specific modulation whose implementation has significantly lower complexity than the often-applied M-ary modulation. Watermark embedding is performed in the wavelet domain, driven by stochastic perceptual criteria to ensure good quality and invisibility. The watermarking process is treated as communication with side information: the approach utilizes two different watermarks, one for channel-state estimation and one informative watermark that carries the hidden information as payload.
In this paper a novel "Smart Media" concept for semantic-based multimedia security and management is proposed. This concept is based on interactive object segmentation (considered as side information in the visual human-computer interface) with hidden object-based annotations. An information-theoretic formalism is introduced that considers the human-computer interface as a multiple-access channel. We do not consider an image as a set of pixels but rather as a set of annotated regions that correspond to objects or their parts, where these objects are associated with hidden descriptive text about their features. The presented approach to "semantic" segmentation is addressed by means of a human-computer interface that allows a user to easily incorporate information related to image objects and to store it in a secure way. Since each selected image object carries its own embedded description, it is self-contained and formally independent of the particular image format used for storage in image databases. The proposed object-based hidden descriptors are invariant to changes of the image filename and/or image headers, and are resistant to the object cropping/insertion operations that are usual in multimedia processing and management. This is well harmonized with the "Smart Media" concept, where the image contains additional information about itself, and this information is securely integrated inside the image while remaining perceptually invisible.
Novel functional possibilities provided by recent data-hiding technologies carry the danger of uncontrolled (unauthorized) and unlimited information exchange that might be exploited by people with hostile interests. The multimedia industry as well as the research community recognize the urgent necessity for network security and copyright protection, and the lack of adequate laws for digital multimedia protection. This paper advocates the need for detecting hidden data in digital and analog media as well as in electronic transmissions, and for attempting to identify the underlying hidden data. Solving this problem calls for the development of an architecture for blind stochastic hidden-data detection in order to prevent unauthorized data exchange. The proposed architecture, called StegoWall, builds on a thorough investigation, deep understanding, and prediction of possible tendencies in the development of advanced data-hiding technologies. The basic idea of our approach is to exploit all available information about hidden-data statistics to perform detection within a stochastic framework. The StegoWall system addresses four main applications: robust watermarking, secret communications, integrity control and tamper proofing, and internet/network security.
This paper presents a new attack on watermarked images, called the watermark template attack. In contrast to the Stirmark benchmark, this attack does not severely reduce the quality of the image and therefore maintains the commercial value of the watermarked image. In contrast to previous approaches, the aim of the attack is not to change the statistics of the embedded watermark to fool the detection process, but to exploit specific concepts that have recently been developed for more robust watermarking schemes. The attack estimates the corresponding template points in the FFT domain and then removes them using local interpolation. We demonstrate the effectiveness of the attack on different test cases watermarked with commercially available watermarking products. The presented approach is not limited to the FFT domain; other transform domains may be exploited by very similar variants of the described attack.
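The template-removal step can be illustrated as peak suppression in a magnitude spectrum. The sketch below assumes the FFT magnitude has already been computed and that template points show up as isolated local peaks; the 3x3 neighbourhood and the threshold ratio are illustrative choices, not the attack's actual parameters.

```python
def remove_peaks(mag, thresh_ratio=4.0):
    """Given a 2-D magnitude spectrum `mag` (list of lists), replace any
    coefficient exceeding `thresh_ratio` times the median of its 3x3
    neighbourhood with that median -- a local-interpolation sketch of
    template-point removal. Border coefficients are left untouched."""
    h, w = len(mag), len(mag[0])
    out = [row[:] for row in mag]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = sorted(mag[y + dy][x + dx]
                           for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                           if (dy, dx) != (0, 0))
            med = (neigh[3] + neigh[4]) / 2  # median of the 8 neighbours
            if med > 0 and mag[y][x] > thresh_ratio * med:
                out[y][x] = med
    return out
```

Because only isolated outliers are replaced by locally interpolated values, the surrounding spectrum (and hence the visible image content) is left essentially intact, which is why the attack preserves image quality.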
The reliable detection of objects of interest on an inhomogeneous background from image data is a typical detection and recognition problem in many practical applications. In this paper, an algorithm for local object detection is described in the context of change detection based on the difference between two images of the same scene. The proposed detection method, which uses a multi-scale relevance function, is a model-based approach that takes into account the planar shape model of the objects of interest and a regression model of the intensity function over objects and background. The image relevance function is a local image operator whose local extrema indicate the locations of objects or of their salient parts, termed primitive patterns. The image fragment centered at a maximum point of the relevance function represents a region of attention. A structure-adaptive binarization with a variable threshold is performed within each region of attention. Comparative testing of the proposed algorithm against known techniques has shown better performance of the relevance-function approach at approximately the same detection delay.
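A single-scale version of this pipeline can be sketched as follows: form the difference image, smooth it at a chosen scale, and take local maxima as candidate regions of attention. The box window and the fixed threshold are simplifying assumptions; the paper's method is multi-scale and model-based.

```python
def relevance_map(img_a, img_b, scale=1):
    """Absolute difference of two equally sized grey-level images (lists of
    lists), box-smoothed over a (2*scale+1)^2 window -- a single-scale
    stand-in for the relevance function."""
    h, w = len(img_a), len(img_a[0])
    diff = [[abs(img_a[y][x] - img_b[y][x]) for x in range(w)] for y in range(h)]
    rel = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [diff[yy][xx]
                    for yy in range(max(0, y - scale), min(h, y + scale + 1))
                    for xx in range(max(0, x - scale), min(w, x + scale + 1))]
            rel[y][x] = sum(vals) / len(vals)
    return rel

def local_maxima(rel, thresh):
    """Interior points above `thresh` dominating all 8 neighbours:
    the centres of candidate regions of attention."""
    h, w = len(rel), len(rel[0])
    return [(y, x) for y in range(1, h - 1) for x in range(1, w - 1)
            if rel[y][x] > thresh and
            all(rel[y][x] >= rel[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))]
```

In the full method, the window placed around each maximum would then be binarized with a structure-adaptive variable threshold.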
A new, fast, and effective approach to impulsive noise suppression in digitized aerophotographic images is presented in this paper. The filtering scheme consists of a global histogram-analysis stage that detects impulse-corrupted pixels, and a local adaptive interpolation stage that re-estimates the intensity of the identified pixels from a spatially modified order-statistic value computed in their neighborhood. The proposed technique is characterized by its simplicity and by better impulsive-noise suppression while preserving fine details and edges of the image, especially where the probability of impulse occurrence is high. The advantages of the developed approach over known adaptive identification filters are demonstrated by simulation results.
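The two-stage detect-then-interpolate structure can be sketched as below. Here the histogram-analysis stage is reduced to flagging pixels at the intensity extremes, and the order-statistic interpolation to a median over the uncorrupted neighbours; both simplifications are assumptions for illustration.

```python
import statistics

def suppress_impulses(img, low=0, high=255):
    """Treat pixels at the histogram extremes (`low`, `high`) as impulse
    candidates and replace each with the median of the uncorrupted pixels
    in its 3x3 neighbourhood. Border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if img[y][x] in (low, high):
                clean = [img[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0)
                         and img[y + dy][x + dx] not in (low, high)]
                if clean:
                    out[y][x] = statistics.median(clean)
    return out
```

Because only flagged pixels are touched, uncorrupted fine details and edges pass through unchanged, which is the key property the abstract claims.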
In this article a structure-adaptive approach to the evaluation of local image properties for adaptive filtering is described. The adaptive procedure is based on the selection of the most homogeneous neighborhood region from several candidate structuring regions by the principle of maximum posterior probability. An optimal estimate of the pixel value is then computed from the pixels of the selected neighborhood region and the symmetric structuring region. Trimmed-mean filters are used for the robust evaluation of local properties when estimating object and background intensities, under the assumption that the additive noise has a mixed conditional distribution, e.g., a normal distribution with outliers. A time-efficient scheme for fast implementation of this method is proposed as well.
This paper introduces an approach to the regularization of iterative restoration methods based on median filtering. A comparative analysis of low-pass and median regularization is performed. Median filtering is shown to be the more efficient regularizer in the case of noise with a mixed distribution. The nonlinearity of the iterative method is provided by the non-negativity constraint, which makes it possible to solve the problem of band-limited extrapolation. In contrast to Tikhonov regularization, median regularization does not require choosing a regularization parameter. However, the window size must be chosen according to the noise level and can be considered a parameter for adaptive regularization, allowing edges to be preserved in accordance with the masking effect of the human visual system.
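The interplay of the three ingredients (iterative inversion, median regularization, non-negativity) can be sketched in one dimension. Everything below is assumed for illustration: the degradation operator is taken to be a 3-point moving average, the inversion step is a simple Landweber-type update, and the window size is fixed at 3.

```python
def blur(x):
    """3-point moving-average blur with replicated borders
    (the assumed degradation operator H in this sketch)."""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def median3(x):
    """Window-3 median filter with replicated borders (the regularizer)."""
    n = len(x)
    return [sorted([x[max(i - 1, 0)], x[i], x[min(i + 1, n - 1)]])[1]
            for i in range(n)]

def restore(y, iters=20, step=0.5):
    """Landweber-type iteration x <- x + step*(y - H x), with median
    filtering as the regularizer and projection onto non-negativity at
    every step -- a 1-D sketch of median-regularized restoration."""
    x = y[:]
    for _ in range(iters):
        r = blur(x)
        x = [xi + step * (yi - ri) for xi, yi, ri in zip(x, y, r)]
        x = [max(0.0, xi) for xi in median3(x)]  # nonlinearity
    return x
```

Note that, as the abstract points out, no regularization parameter appears: the only tuning knob is the median window size, which here is hard-coded to 3.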
This paper is concerned with algorithms for the removal of additive noise from images. The proposed (α,β)-trimmed mean filtering is suitable for application to real images corrupted by Gaussian, uniform, and impulsive noise. The developed technique is a generalization of the α-trimmed mean filter and has the same basic properties as rank-order filters. The performance of the proposed technique was compared with that of the average, median, and midpoint filters and evaluated on noisy images by the restoration error. Illustrative examples are given.
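The core statistic is easy to state: sort the filter window and discard a different number of samples from each end before averaging. The sketch below is a minimal reading of that definition; the window geometry and parameter choices in the paper may differ.

```python
def ab_trimmed_mean(window, alpha, beta):
    """Mean of the sorted window after discarding the `alpha` smallest and
    `beta` largest samples -- the (α,β)-trimmed mean. alpha == beta recovers
    the usual α-trimmed mean; alpha == beta == 0 gives the plain average."""
    s = sorted(window)
    kept = s[alpha:len(s) - beta]
    return sum(kept) / len(kept)

def filter_1d(signal, alpha=1, beta=1, radius=2):
    """Slide a (2*radius+1)-sample window over a 1-D signal,
    replicating borders, and apply the trimmed mean at each position."""
    n = len(signal)
    return [ab_trimmed_mean([signal[min(max(i + k, 0), n - 1)]
                             for k in range(-radius, radius + 1)], alpha, beta)
            for i in range(n)]
```

The asymmetric trim is what makes the filter flexible: a one-sided trim (e.g. alpha = 0, beta = 1) suppresses positive impulses only, while symmetric trimming behaves like the classical α-trimmed mean between the average and the median.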