In this paper, the problem of multimedia object identification in channels with asymmetric desynchronizations
is studied. First, we analyze the rates achievable by such protocols within a digital communication
framework. Second, we investigate the impact of the fingerprint length on the error performance of these
protocols, relaxing the capacity-achieving argument and formulating the identification problem as multi-class
classification.
KEYWORDS: Signal to noise ratio, Databases, Binary data, System identification, Lead, Polarization, Multimedia, Systems modeling, Telecommunications, Reliability
In this paper, we consider an information-theoretic formulation of content identification under a search complexity
constraint. The proposed framework is based on soft fingerprinting, i.e., the joint consideration of the sign and
magnitude of fingerprint coefficients. The fingerprint magnitude is analyzed within the scope of communications with
side information, which results in a channel decomposition where the fingerprint bits are assigned to
several channels with distinct characteristics. We demonstrate that, under certain conditions,
the channels with low identification capacity can be neglected without considerable rate loss. This forms the basis for
the analysis of fast identification techniques that trade off theoretical performance, in terms of achievable rate, against
search complexity.
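The sign/magnitude decomposition behind soft fingerprinting can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the threshold value are assumptions made here for clarity:

```python
import numpy as np

def soft_fingerprint(coeffs, threshold=0.5):
    """Split projected coefficients into a binary fingerprint (sign) and
    side information about each bit's reliability (magnitude)."""
    bits = (coeffs >= 0).astype(np.uint8)   # sign channel: the fingerprint bits
    reliable = np.abs(coeffs) >= threshold  # magnitude channel: bit reliability
    return bits, reliable

rng = np.random.default_rng(0)
coeffs = rng.normal(size=16)                # stand-in for projected features
bits, reliable = soft_fingerprint(coeffs)
# Low-magnitude ("weak") bits correspond to channels with low identification
# capacity and can be dropped with little rate loss, enabling faster search.
strong_bits = bits[reliable]
```

Dropping the unreliable bits shrinks the portion of the fingerprint that must be matched, which is the complexity/rate trade-off the abstract refers to.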
KEYWORDS: Signal to noise ratio, Binary data, Lead, Databases, Data hiding, Computer programming, Reliability, Data modeling, Berkelium, Computer security
In many problems such as biometrics, multimedia search, retrieval, and recommendation systems requiring privacy-preserving
similarity computations and identification, binary features are stored in the public domain or
outsourced to third parties, which may raise privacy concerns about the original data. To prevent this
privacy leak, privacy protection is applied. In most cases, privacy protection is applied uniformly to all binary
features, resulting in data degradation and a corresponding loss of performance. To avoid this undesirable effect,
we propose a new privacy amplification technique that is based on data hiding principles and benefits from side
information about bit reliability, a.k.a. soft fingerprinting. In this paper, we investigate the identification-rate vs.
privacy-leak trade-off. The analysis is performed both for the case of a perfect match of the side information shared
between the encoder and decoder and for the case of partial side information.
KEYWORDS: Data communications, Binary data, Data modeling, Reliability, Telecommunications, Signal to noise ratio, Systems modeling, Databases, Data processing, System identification
This paper presents recent advances in the identification problem, taking into account the accuracy, complexity,
and privacy leak of different decoding algorithms. Using a model of different actors from the literature, we show
that it is possible to use more accurate decoding algorithms that exploit reliability information without increasing the
privacy leak relative to algorithms that use only binary information. Existing algorithms from the literature have
been modified to take advantage of reliability information, and we show that a proposed branch-and-bound
algorithm can outperform existing work, including the enhanced variants.
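The pruning idea underlying bound-based decoding with reliability information can be sketched in a few lines. This is a toy illustration, not the paper's algorithm: it scans database entries accumulating a reliability-weighted Hamming distance and abandons any entry whose partial cost already exceeds the best complete cost found so far:

```python
import numpy as np

def weighted_search(query_bits, weights, database):
    """Return the index and cost of the database entry closest to the
    query under a reliability-weighted Hamming distance, pruning entries
    whose partial cost already exceeds the current best (a simple bound)."""
    best_idx, best_cost = -1, np.inf
    for idx, entry in enumerate(database):
        cost = 0.0
        for q, w, e in zip(query_bits, weights, entry):
            if q != e:
                cost += w            # mismatches on reliable bits cost more
            if cost >= best_cost:    # bound: this entry can no longer win
                break
        else:                        # loop ran to completion: new best entry
            best_idx, best_cost = idx, cost
    return best_idx, best_cost

database = [[0, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 1]]
query, weights = [1, 1, 0, 1], [1.0, 1.0, 1.0, 0.1]
idx, cost = weighted_search(query, weights, database)   # -> (1, 0.1)
```

Because the query disagrees with entry 1 only on the least reliable bit, entry 1 wins despite a nonzero Hamming distance; a purely binary decoder would weigh all mismatches equally.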
KEYWORDS: System identification, Reliability, Error analysis, Databases, Binary data, Signal to noise ratio, Computing systems, Forensic science, Information security, Biometrics
In this paper, we consider a low-complexity identification system for highly distorted images. The performance of the
proposed identification system is analyzed in terms of the average probability of error. The expected improvement in
performance is obtained by combining the random projection transform with the concept of bit reliability. Simulations based on
synthetic and real data confirm the efficiency of the proposed approach.
In this paper, we consider item authentication using unclonable forensic features of item surface microstructure images (a.k.a. fingerprints). The advocated authentication approach is based on source coding combined with random projections. The source coding ensures source reconstruction at the decoder based on the authentication data. The random projections are used to cope with the security, privacy, robustness, and complexity issues. Finally, authentication is accomplished as a binary hypothesis test in both the direct and random-projection domains. An asymptotic approximation of the performance is derived and compared with the exact solutions.
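The hypothesis-testing step in the random-projection domain can be illustrated with a correlation detector. This is a hedged sketch under simple assumptions (Gaussian features, a fixed threshold `tau`), not the paper's decision rule:

```python
import numpy as np

def authenticate(enrolled, probe, projection, tau=0.5):
    """Binary hypothesis test in the random-projection domain:
    H1 (genuine item) is accepted iff the normalized correlation of the
    projected fingerprints exceeds the threshold tau."""
    a, b = projection @ enrolled, projection @ probe
    rho = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return rho >= tau, rho

rng = np.random.default_rng(1)
x = rng.normal(size=256)                      # enrolled microstructure features
genuine = x + 0.3 * rng.normal(size=256)      # same item, acquisition noise
fake = rng.normal(size=256)                   # unrelated surface
P = rng.normal(size=(64, 256)) / np.sqrt(64)  # random projection matrix
ok_g, rho_g = authenticate(x, genuine, P)
ok_f, rho_f = authenticate(x, fake, P)
```

The projection both reduces the stored template size and hides the raw fingerprint, which is the security/complexity motivation given above.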
In this paper, we consider some basic concepts behind the design of existing robust perceptual hashing techniques for content identification. We show the limits of robust hashing from a communications perspective and propose an approach that overcomes these shortcomings in certain setups. The analysis is based on both the achievable rate and the probability of error. We exploit the fact that most robust hashing algorithms are
based on dimensionality reduction using random projections and quantization. Accordingly, we derive the corresponding achievable rate and probability of error based on random projections and compare them with the results for the direct domain. The effect of dimensionality reduction is studied, and the corresponding approximations are provided based on the Johnson-Lindenstrauss lemma. Side-information-assisted robust perceptual hashing is proposed as a solution to the above shortcomings.
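The random-projection-plus-quantization construction mentioned above can be sketched with sign quantization. This is a generic illustration of the family of schemes being analyzed (dimensions and noise levels are arbitrary choices made here), not a specific algorithm from the paper:

```python
import numpy as np

def robust_hash(x, W):
    """Robust perceptual hash skeleton: random projection followed by
    one-bit (sign) quantization of each projected coefficient."""
    return (W @ x >= 0).astype(np.uint8)

rng = np.random.default_rng(2)
W = rng.normal(size=(128, 512))      # random projection: 512 features -> 128 bits
x = rng.normal(size=512)             # original content features
y = x + 0.1 * rng.normal(size=512)   # slightly distorted copy of x
z = rng.normal(size=512)             # unrelated content
hx, hy, hz = robust_hash(x, W), robust_hash(y, W), robust_hash(z, W)
# The Hamming distance between hashes tracks the angle between the inputs,
# the distance-preservation property related to the Johnson-Lindenstrauss lemma.
d_near = int(np.count_nonzero(hx != hy))
d_far = int(np.count_nonzero(hx != hz))
```

A distorted copy yields a hash close to the original while unrelated content lands near the maximal distance, which is what makes the hash usable for identification despite channel distortions.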
This paper introduces an identification framework for random microstructures of material surfaces. These microstructures
represent a kind of unique fingerprint that can be used to track and trace an item as well as for
anti-counterfeiting. We first consider an architecture for mobile-phone-based item identification and then introduce
a practical identification algorithm enabling fast search in large databases. The proposed algorithm is
based on reference list decoding. The link to digital communications and robust perceptual hashing is shown. We
consider a practical construction of reference list decoding that balances computational complexity, security,
memory storage, and performance requirements. The efficiency of the proposed algorithm is demonstrated on
experimental data obtained from natural paper surfaces.
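The list-decoding idea behind fast database search can be sketched as a two-stage scheme: score all entries cheaply on a small probe of bits, keep a short candidate list, and verify only the candidates against the full fingerprint. This is a toy sketch of the general principle; the parameters and function name are hypothetical, not the paper's construction:

```python
import numpy as np

def rld_identify(query, db, n_probe_bits=16, list_size=4):
    """Two-stage search in the spirit of reference list decoding:
    cheap probe-bit scoring, then full verification of a short list."""
    scores = [int(np.count_nonzero(e[:n_probe_bits] == query[:n_probe_bits]))
              for e in db]
    candidates = np.argsort(scores)[::-1][:list_size]   # best probe matches
    full = [int(np.count_nonzero(db[i] == query)) for i in candidates]
    return int(candidates[int(np.argmax(full))])

rng = np.random.default_rng(3)
db = rng.integers(0, 2, size=(50, 64))   # 50 enrolled 64-bit fingerprints
query = db[7].copy()
query[20:25] ^= 1                        # distort a few bits outside the probe
```

Only `list_size` full-length comparisons are performed instead of one per database entry, which is where the complexity saving for large databases comes from.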
KEYWORDS: Signal to noise ratio, Databases, Data storage, Data communications, Composites, Optical spheres, Computer security, Information security, Error analysis, Computer simulations
In this paper, we advocate a new approach to item identification based on physical unclonable features. Being
unique characteristics of an item, these features represent a kind of unstructured random codebook that links
the identification problem to digital communications via composite hypothesis testing. Despite the obvious
similarity, this problem differs significantly in that a security constraint prohibits the disclosure of the entire
codebook at the identification stage. In addition, complexity, memory storage, and universality constraints
should be taken into account for databases with several hundred million entries. Therefore, we seek a
trade-off between performance, security, memory storage, and universality. A practical suboptimal
method based on our reference list decoding (RLD) framework is considered. Simulation results are presented
to demonstrate and support the theoretical findings.
In this paper, we consider the problem of the security analysis of robust perceptual hashing in authentication
applications. The main goal of our analysis is to estimate the trial effort required by an attacker, operating
under the Kerckhoffs security principle, to reveal the secret key. For this purpose, we propose to use the
Shannon equivocation, which provides an estimate of the complexity of a key search performed on the basis of all
available prior information, and present its application to the security evaluation of particular robust perceptual
hashing algorithms.
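The Shannon equivocation used in this analysis is simply the conditional entropy of the key given the attacker's observations. A minimal sketch with a hypothetical toy key space (the numbers below are illustrative, not from the paper):

```python
import numpy as np

def equivocation(posterior):
    """Shannon equivocation H(K | observations): entropy, in bits, of the
    attacker's posterior distribution over the secret key."""
    p = np.asarray(posterior, dtype=float)
    p = p[p > 0]                         # 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())

# Before any disclosure: 8 equally likely keys -> 3 bits of equivocation.
h_prior = equivocation(np.full(8, 1 / 8))   # 3.0
# Observed hashes rule out half the keys -> the effective search halves.
h_post = equivocation(np.full(4, 1 / 4))    # 2.0
```

An equivocation of h bits means the attacker still faces on the order of 2**h equally plausible keys, which is the "trial effort" estimate referred to above.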
KEYWORDS: Biometrics, Data hiding, Computer security, Data storage, Mobile devices, Visualization, Digital watermarking, Printing, Cell phones, Binary data
We consider the problem of authentication of biometric
identification documents via mobile devices such as mobile phones
or personal digital assistants (PDAs). We assume that the biometric
identification document holds biometric data (e.g., face or fingerprint)
in the form of an image and personal data in the form of text,
both being printed directly onto the identification document. The proposed
solution makes use of digital data hiding in order to cross-store
the biometric data inside the personal data and vice versa.
Moreover, a theoretical framework is presented that should enable
analysis and guide the design of future authentication systems
based on this approach. In particular, we advocate the separation
approach, which uses robust visual hashing techniques in order to
match the information rates of biometric and personal data to the
rates offered by current image and text data hiding technologies. We
also describe practical schemes for robust visual hashing and digital
data hiding that can be used as building blocks for the proposed
authentication system. The obtained experimental results show that
the proposed system constitutes a viable and practical solution.