In this paper we analyze the performance limits of multimodal biometric identification systems in the multiple
hypothesis testing formulation. For the sake of tractability, we approximate the performance of the actual system
by a set of pairwise binary tests. We point out that the attainable error exponent that can be achieved for such
an approximation is limited by the worst pairwise Chernoff distance between alternative hypothesis prior models.
We consider impact of the inter-modal dependencies on the attainable performance measure and demonstrate
that, contrarily to the binary multimodal hypothesis testing framework, an expected performance gain from
fusion of independent modalities does not any more play the role of lower bound on the gain one can expect
from multimodal fusion.
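The bottleneck role of the worst pairwise Chernoff distance can be illustrated with a minimal sketch. The setup below is purely illustrative and not the paper's model: each hypothesis is a univariate Gaussian with known mean and common variance, for which the Chernoff information has the closed form (μ0 − μ1)²/(8σ²); the attainable exponent of the pairwise-test approximation is then the minimum over all hypothesis pairs.

```python
import numpy as np

def chernoff_gaussian(mu0, mu1, sigma):
    """Chernoff information between N(mu0, sigma^2) and N(mu1, sigma^2).
    For equal-variance Gaussians the optimizing s is 1/2, giving the
    closed form (mu0 - mu1)^2 / (8 sigma^2)."""
    return (mu0 - mu1) ** 2 / (8.0 * sigma ** 2)

def worst_pairwise_exponent(means, sigma):
    """Error exponent of the pairwise-test approximation: it is limited
    by the closest (worst-separated) pair of hypotheses."""
    dists = [chernoff_gaussian(a, b, sigma)
             for i, a in enumerate(means) for b in means[i + 1:]]
    return min(dists)
```

For means [0, 1, 5] the pair (0, 1) dominates, so adding a well-separated third hypothesis does not improve the exponent.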
This paper introduces an identification framework for random microstructures of material surfaces. These microstructures
serve as unique fingerprints that can be used to track and trace an item as well as for
anti-counterfeiting. We first consider the architecture for mobile phone-based item identification and then introduce
a practical identification algorithm enabling fast searching in large databases. The proposed algorithm is
based on reference list decoding. The link to digital communications and robust perceptual hashing is shown. We
consider a practical construction of reference list decoding that jointly addresses computational complexity, security,
memory storage and performance requirements. The efficiency of the proposed algorithm is demonstrated on
experimental data obtained from natural paper surfaces.
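The flavor of reference-list-decoding-based search can be conveyed with a toy sketch. Everything here is an assumption for illustration, not the paper's construction: microstructure features are binarized against fixed thresholds into a robust perceptual hash, and identification returns the list of enrolled items whose stored hashes lie within a Hamming radius of the noisy probe.

```python
import numpy as np

def robust_hash(features, thresholds):
    """Toy robust perceptual hash: binarize microstructure features
    against fixed thresholds (hypothetical feature extraction)."""
    return (features > thresholds).astype(np.uint8)

def reference_list_decode(probe_hash, database, t):
    """Return indices of enrolled hashes within Hamming distance t of
    the probe, i.e. the candidate list around the noisy observation."""
    dists = np.count_nonzero(database != probe_hash, axis=1)
    return np.flatnonzero(dists <= t)
```

A probe acquired from the same surface lands close to its enrolled hash, so the list is short; an unrelated probe sits near the expected distance of one half of the hash length and matches nothing.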
We consider the problem of authentication of biometric
identification documents via mobile devices such as mobile phones
or personal digital assistants (PDAs). We assume that the biometric
identification document holds biometric data (e.g., face or fingerprint)
in the form of an image and personal data in the form of text,
both being printed directly onto the identification document. The proposed
solution makes use of digital data hiding in order to cross-store
the biometric data inside the personal data and vice versa.
Moreover, a theoretical framework is presented that should enable
analysis and guide the design of future authentication systems
based on this approach. In particular, we advocate the separation
approach, which uses robust visual hashing techniques in order to
match the information rates of biometric and personal data to the
rates offered by current image and text data hiding technologies. We
also describe practical schemes for robust visual hashing and digital
data hiding that can be used as building blocks for the proposed
authentication system. The obtained experimental results show that
the proposed system constitutes a viable and practical solution.
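A robust visual hash of the kind used to compress the biometric image down to a rate embeddable in text can be sketched as follows. This is a generic average-and-threshold construction, an assumption for illustration rather than the scheme of the paper: the image is averaged over a coarse grid and each cell is thresholded against the median, so the digest survives mild distortions such as print-scan noise.

```python
import numpy as np

def visual_hash(img, grid=8):
    """Toy robust visual hash: average the image over a grid x grid
    layout of blocks and threshold each block mean against the median,
    yielding a grid*grid-bit digest."""
    h, w = img.shape
    cells = img[:h - h % grid, :w - w % grid]
    cells = cells.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return (cells > np.median(cells)).astype(np.uint8).ravel()
```

Because only coarse block averages are kept, small pixel-level perturbations leave the digest unchanged, which is exactly the property needed before embedding it via data hiding.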
In this paper, we deal with the problem of authentication and tamper-proofing of text documents that can be distributed in electronic or printed forms. We advocate the combination of robust text hashing and text data-hiding technologies as an efficient solution to this problem. First, we consider the problem of text data-hiding in the scope of the Gel'fand-Pinsker data-hiding framework. For illustration, two modern text data-hiding methods, namely color index modulation (CIM) and location index modulation (LIM), are explained. Second, we study two approaches to robust text hashing that are well suited for the considered problem. In particular, both approaches are compatible with CIM and LIM. The first approach makes use of optical character recognition (OCR) and a classical cryptographic message authentication code (MAC). The second approach is new and can be used in some scenarios where OCR does not produce consistent results. The experimental work compares both approaches and shows their robustness against typical intentional/unintentional document distortions including electronic format conversion, printing, scanning, photocopying, and faxing.
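The first approach above (OCR plus a cryptographic MAC) can be sketched in a few lines. The canonicalization step is a simplifying assumption standing in for real OCR post-processing: whitespace is normalized so that benign re-flow of the recovered text does not break authentication, while any change to the actual characters does.

```python
import hashlib
import hmac

def text_mac(key, text):
    """MAC over the canonicalized (e.g. OCR-recovered) text; whitespace
    is normalized so harmless re-flow yields the same digest."""
    canonical = " ".join(text.split()).encode("utf-8")
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def authenticate(key, recovered_text, embedded_mac):
    """Compare the MAC recomputed from the OCR output with the MAC
    carried by the text data-hiding layer (CIM or LIM in the paper)."""
    return hmac.compare_digest(text_mac(key, recovered_text), embedded_mac)
```

The MAC itself would be embedded into the document via CIM or LIM, so verification needs only the printed page and the secret key.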
In this paper, we propose a new theoretical framework for the data-hiding problem of digital and printed text documents. We explain how this problem can be seen as an instance of the well-known Gel'fand-Pinsker problem. The main idea for this interpretation is to consider a text character as a data structure consisting of multiple quantifiable features such as shape, position, orientation, size, color, etc. We also introduce color quantization, a new semi-fragile text data-hiding method that is fully automatable, has a high information embedding rate, and can be applied to both digital and printed text documents. The main idea of this method is to quantize the color or luminance intensity of each character in such a manner that the human visual system cannot distinguish between the original and quantized characters, while a specialized reader machine can easily detect the difference. We also describe halftone quantization, a related method that applies mainly to printed text documents. Since these methods may not be completely robust to printing and scanning, an outer coding layer is proposed to solve this issue. Finally, we describe a practical implementation of the color quantization method and present experimental results for comparison with other existing methods.
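The per-character luminance quantization can be sketched as binary quantization index modulation. The step size and scalar-luminance model are assumptions for illustration, not the paper's exact parameters: each embedded bit selects one of two interleaved quantizer lattices, the character's luminance is snapped to the nearest level of the selected lattice, and the reader recovers the bit by deciding which lattice the observed luminance is closer to.

```python
import numpy as np

def embed_bit(luma, bit, step=4):
    """Quantization index modulation on a character's luminance: snap
    to the nearest level of the sub-lattice selected by `bit`. The step
    is assumed small enough to be invisible to the human eye."""
    offset = step / 2 if bit else 0
    return step * np.round((luma - offset) / step) + offset

def extract_bit(luma, step=4):
    """Decide which sub-lattice the observed luminance is closer to."""
    d0 = abs(luma - embed_bit(luma, 0, step))
    d1 = abs(luma - embed_bit(luma, 1, step))
    return int(d1 < d0)
```

Extraction tolerates noise up to a quarter of the step size per character, which is why the abstract's outer coding layer is needed to absorb the residual errors introduced by printing and scanning.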
In this paper, we consider the problem of document authentication in electronic and printed forms. We formulate this problem from an information-theoretic perspective and present joint source-channel coding theorems that establish the performance limits of such protocols. We analyze the security of document authentication methods and present optimal attacking strategies with corresponding complexity estimates that, contrary to existing studies, crucially rely on the information leaked by the authentication protocol. Finally, we present the results of an experimental validation of the developed concept that confirms the practical efficiency of the proposed framework.
In this paper, we deal with the design of high rate multilevel two-dimensional (2D) bar codes for the print-and-scan channel. Firstly, we derive an upper bound on the maximum achievable rate of these codes by studying an inter-symbol interference (ISI) free, perfectly synchronized, and noiseless print-and-scan channel, in which the printer device uses halftoning in order to simulate multiple gray levels. Secondly, we present a new model of the print-and-scan channel specifically adapted to the multilevel 2D bar code application. This model, inspired by our experimental work, assumes no ISI and perfect synchronization, but independence between the channel input and the noise is not required. For completeness, we briefly review three state-of-the-art coded modulation techniques for the additive white Gaussian noise channel in the high signal-to-noise ratio regime, namely, multilevel coding with multistage decoding (MLC/MSD), multilevel coding with parallel independent decoding, and bit-interleaved coded modulation. We then derive the information capacity of our print-and-scan channel model and extend the theory of MLC/MSD to this channel. Finally, we present experimental results confirming the validity of our channel model, and showing that multilevel 2D bar codes using MLC/MSD can reliably achieve the high rate storage requirements of many multimedia security and data management applications.
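The multistage idea behind MLC/MSD can be conveyed with an uncoded toy sketch. The four-level alphabet, Gaussian noise, and set-partition labeling below are illustrative assumptions, not the paper's channel model: bit 0 selects a coset of spacing 2, bit 1 selects the level within the coset, stage 1 decides bit 0 by marginalizing the likelihood over bit 1, and stage 2 decides bit 1 given the stage-1 decision.

```python
import numpy as np

def modulate(bits0, bits1):
    """Set-partition labeling of four gray levels: level = b0 + 2*b1,
    so b0 picks a coset of spacing 2 and b1 picks the level inside it."""
    return np.asarray(bits0) + 2 * np.asarray(bits1)

def msd(y, sigma=0.3):
    """Toy multistage decoding (uncoded): stage 1 decides bit 0 by
    summing the Gaussian likelihoods over both bit-1 choices; stage 2
    decides bit 1 given the stage-1 decision."""
    def lik(level):
        return np.exp(-(y - level) ** 2 / (2 * sigma ** 2))
    b0 = (lik(1) + lik(3) > lik(0) + lik(2)).astype(int)
    b1 = (lik(2 + b0) > lik(0 + b0)).astype(int)
    return b0, b1
```

In the actual scheme each bit level carries its own error-correcting code and later stages decode conditioned on earlier decoded codewords; the sketch keeps only the stage-by-stage decision structure.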