YASS is a steganographic algorithm for digital images that hides messages robustly in a key-dependent transform
domain so that the stego image can subsequently be compressed and distributed as JPEG. Since the
state-of-the-art blind steganalysis methods of 2007, when YASS was proposed, were unable to reliably detect
it, in this paper we steganalyze YASS using several recently proposed general-purpose steganalysis feature
sets. The focus is on blind attacks that do not capitalize on any weakness of a specific implementation of the
embedding algorithm. We demonstrate experimentally that twelve different settings of YASS can be reliably
detected even for small embedding rates and in small images. Since none of the steganalysis feature sets is in
any way targeted to the embedding of YASS, future modifications of YASS will likely be detectable by them as well.
The square root law holds that the acceptable embedding rate is sublinear in the cover size, specifically O(√n), in
order to prevent detection as the warden's data and thus detector power increases.
One way to transcend this law, at least in the i.i.d. case, is to restrict the cover to a chosen subset whose
distribution is close to that of altered data. Embedding is then performed on this subset; this replaces the
problem of finding a small enough subset to evade detection with the problem of finding a large enough subset
that possesses a desired type distribution.
We show that one can find such a subset of size asymptotically proportional to n rather than
the square root of n. This
works in the case of both replacement and tampering: Even if the distribution of tampered data depends on
the distribution of cover data, one can find a fixed point in the probability simplex such that cover data of that
distribution yields stego data of the same distribution.
Although transmission of the chosen subset is not allowed, this is no impediment: wet paper codes can be used,
or in the worst case a maximal desirable subset can be computed from the cover by both sender and receiver
without communication of side information.
Steganalysis is used to detect hidden content in innocuous-looking images. Many successful steganalysis algorithms
suffer from a "curse of dimensionality": a large number of feature values relative to the size of the training set.
High dimensionality of the feature space can reduce classification accuracy, obscure features important for
classification, and increase computational complexity. This
paper presents a filter-type feature selection algorithm that selects reduced feature sets using the Mahalanobis
distance measure, and develops classifiers from the sets. The experiment is applied to a well-known JPEG
steganalyzer, and shows that using our approach, reduced-feature steganalyzers can be obtained that perform as
well as the original steganalyzer. The steganalyzer is that of Pevný et al. (SPIE, 2007), which combines DCT-based
feature values and calibrated Markov features. Five embedding algorithms are used. Our results demonstrate
that as few as 10-60 features at various levels of embedding can be used to create a classifier that gives comparable
results to the full suite of 274 features.
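The filter criterion can be sketched as follows; this is a minimal illustration of forward feature selection driven by the Mahalanobis distance between class means, with the function names and the greedy strategy as our own assumptions rather than the authors' exact procedure:

```python
import numpy as np

def mahalanobis_separation(X_cover, X_stego, idx):
    """Mahalanobis distance between the cover and stego class means,
    restricted to the feature subset `idx`, using pooled covariance."""
    a, b = X_cover[:, idx], X_stego[:, idx]
    d = a.mean(axis=0) - b.mean(axis=0)
    S = (np.cov(a, rowvar=False) + np.cov(b, rowvar=False)) / 2.0
    S = np.atleast_2d(S) + 1e-9 * np.eye(len(idx))   # regularize
    return float(np.sqrt(d @ np.linalg.solve(S, d)))

def greedy_select(X_cover, X_stego, k):
    """Forward selection: repeatedly add the single feature that most
    increases the class separation of the current subset."""
    chosen = []
    for _ in range(k):
        rest = [j for j in range(X_cover.shape[1]) if j not in chosen]
        best = max(rest, key=lambda j: mahalanobis_separation(X_cover, X_stego, chosen + [j]))
        chosen.append(best)
    return chosen
```

Being a filter method, the criterion is computed from the training data alone, without retraining a classifier for every candidate subset.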
In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome
coding and trellis-coded quantization, and contrast its performance with appropriate rate-distortion bounds.
We assume that each cover element can be assigned a positive scalar expressing the impact
of making an embedding change at that element (single-letter distortion). The problem is to embed a given
payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of
matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past.
Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance
arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal
binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory
requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners,
we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive
experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
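The idea of minimizing embedding changes via syndrome coding can be illustrated with a toy instance - matrix embedding with the [7,4] Hamming parity-check matrix, which conveys 3 message bits per 7 cover bits while changing at most one bit. This is a simplification for intuition only, not the trellis/Viterbi construction the paper proposes:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j is the binary
# representation of j+1, so every nonzero 3-bit syndrome is some column.
H = np.array([[int(b) for b in format(c, '03b')] for c in range(1, 8)]).T  # (3, 7)

def embed(cover_bits, message_bits):
    """Flip at most one of 7 cover bits so that H @ stego = message (mod 2)."""
    s = (H @ cover_bits + message_bits) % 2  # syndrome mismatch
    stego = cover_bits.copy()
    if s.any():
        # the mismatch equals exactly one column of H; flip that position
        col = int(''.join(map(str, s)), 2) - 1
        stego[col] ^= 1
    return stego

def extract(stego_bits):
    """The receiver recovers the message as the syndrome of the stego bits."""
    return (H @ stego_bits) % 2
```

The trellis-coded construction in the paper generalizes this principle to arbitrary single-letter distortion profiles with near-optimal efficiency.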
While historically we may have been overly trusting of photographs, in recent years there has been a backlash
of sorts and the authenticity of photographs is now routinely questioned. Because these judgments are often
made by eye, we wondered how reliable the human visual system is in detecting discrepancies that might arise
from photo tampering. We show that the visual system is remarkably inept at detecting simple geometric
inconsistencies in shadows, reflections, and perspective distortions. We also describe computational methods
that can be applied to detect the inconsistencies that seem to elude the human visual system.
The analysis of lateral chromatic aberration forms another ingredient in the well-equipped toolbox of an image
forensic investigator. Previous work proposed its application to forgery detection1 and image source identification.2
This paper takes a closer look at the current state-of-the-art method for analysing lateral chromatic aberration
and presents a new approach to estimating it in a runtime-efficient way. Employing a set of 11 different camera
models comprising 43 devices, the characteristics of lateral chromatic aberration are investigated on a large scale.
The reported results point to general difficulties that have to be considered in real-world investigations.
Sensor fingerprint is a unique noise-like pattern caused by slightly varying pixel dimensions and inhomogeneity of the
silicon wafer from which the sensor is made. The fingerprint can be used to prove that an image came from a specific
digital camera. The presence of a camera fingerprint in an image is usually established using a detector that evaluates
cross-correlation between the fingerprint and image noise. The complexity of the detector is thus proportional to the
number of pixels in the image. Although computing the detector statistic for a few-megapixel image takes several
seconds on a single-processor PC, the processing time becomes impractically large if a sizeable database of camera
fingerprints needs to be searched through. In this paper, we present a fast searching algorithm that utilizes special
"fingerprint digests" and sparse data structures to address several tasks that forensic analysts will find useful when
deploying camera identification from fingerprints in practice. In particular, we develop fast algorithms for finding if a
given fingerprint already resides in the database and for determining whether a given image was taken by a camera
whose fingerprint is in the database.
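The idea of a fingerprint digest can be illustrated roughly as follows, on synthetic data; the actual digest construction and sparse search structures in the paper are more involved:

```python
import numpy as np

def make_digest(fingerprint, k=500):
    """Keep only the k largest-magnitude fingerprint values together with
    their pixel indices -- a compact stand-in for the full fingerprint."""
    idx = np.argsort(np.abs(fingerprint))[-k:]
    return idx, fingerprint[idx]

def digest_correlation(noise_residual, digest):
    """Normalized correlation between an image's noise residual and a
    fingerprint digest, evaluated only at the digest's k pixel locations,
    so the cost per database entry is O(k) instead of O(n)."""
    idx, vals = digest
    a = noise_residual[idx] - noise_residual[idx].mean()
    b = vals - vals.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```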
Several promising techniques have been recently proposed to bind an image or video to its source acquisition
device. These techniques have been intensively studied to address performance issues, but the computational
efficiency aspect has not been given due consideration. Considering very large databases, in this paper, we
focus on the efficiency of the sensor fingerprint based source device identification technique.1 We propose a
novel scheme based on tree-structured vector quantization that offers logarithmic improvement in search
complexity as compared to the conventional approach. To demonstrate the effectiveness of the proposed approach,
several experiments are conducted. Our results show that with the proposed scheme a major improvement in
search time can be achieved.
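The logarithmic search idea can be illustrated with a toy binary tree over fingerprints; the split rule and the greedy (approximate) descent below are stand-ins for illustration, not the vector-quantization design of the paper:

```python
import numpy as np

class Node:
    def __init__(self, items):
        self.items = items                  # fingerprints under this node
        self.centroid = items.mean(axis=0)  # representative for routing
        self.left = self.right = None

def build_tree(items, leaf_size=1):
    """Recursively split the fingerprint set in two (here: by the median
    of the highest-variance dimension) so that a query descends one branch
    per level, giving O(log N) node visits instead of O(N) comparisons."""
    node = Node(items)
    if len(items) > leaf_size:
        d = int(np.argmax(items.var(axis=0)))
        order = np.argsort(items[:, d])
        half = len(items) // 2
        node.left = build_tree(items[order[:half]], leaf_size)
        node.right = build_tree(items[order[half:]], leaf_size)
    return node

def search(node, query):
    """Greedy descent toward the child whose centroid correlates better
    with the query; approximate, as in any tree-structured quantizer."""
    while node.left is not None:
        go_left = float(query @ node.left.centroid) >= float(query @ node.right.centroid)
        node = node.left if go_left else node.right
    return node.items[0]
```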
In this paper we concentrate on robust image watermarking (i.e. capable of resisting common signal processing
operations and intentional attacks to destroy the watermark) based on image features. Kutter et al.7 argued
that well-chosen image features survive admissible image distortions and hence can benefit the watermarking
process. These image features are used as location references for the region in which the watermark is embedded.
To realize the latter, we make use of previous work16 in which a ring-shaped region, centered around an image
feature, is determined for watermark embedding. We propose to choose a specific sequence of image features
according to strict criteria, requiring that the chosen features lie far enough apart that the ring-shaped
embedding regions do not overlap. Nevertheless, such a setup remains prone to insertion,
deletion and substitution errors. Therefore we applied a two-step coding scheme similar to the one employed by
Coumou and Sharma4 for speech watermarking. Our contribution here lies in extending Coumou and Sharma's
one dimensional scheme to the two dimensional setup that is associated with our watermarking technique.
The two-step coding scheme concatenates an outer Reed-Solomon error-correction code with an inner, blind, synchronization mechanism.
Semi-fragile video watermarking is a technology for detecting manipulations. It provides robustness against
content-preserving manipulations as well as sensitivity to
content-changing manipulations. To achieve this,
robust content-describing features are applied. We use SIFT keypoint detection as the feature for our semi-fragile
video watermarking scheme introduced in this work. SIFT (Scale-Invariant Feature Transform)
detects points invariant to image scale and rotation and can be used for object matching after changing the 3D
viewpoint, addition of noise and modifications in illumination. With the detected feature points we generate
an authentication message, which is embedded with a robust video watermark. In the verification process we
introduce a temporal filtering approach to reduce the distortions caused by content-preserving manipulations.
We present experimental results demonstrating the robustness and sensitivity of our scheme.
We investigate the use of reversible pre-embedding transformations to enhance reversible watermarking schemes for
images. We are motivated by the observation that a (non-reversible) sorting transformation dramatically increases the
quality of the embedding when combined with a reversible watermark based on a generalized integer transform. In one
example we obtain a PSNR gain of 23 dB using the pre-sorting approach over the regular embedding method for the
same payload size. This may provide opportunities for increasing the embedding capacity by trading off the quality for a
larger payload size. We test several reversible sorting approaches, but these do not provide any gain in the
watermarking capacity or quality.
In this paper, we consider a forensic multimodal authentication framework based on binary hypothesis testing in
random projections domain. We formulate a generic authentication problem taking into account several possible
counterfeiting strategies. The authentication performance analysis is carried out within the Neyman-Pearson
framework as well as for an average probability of error, for both the direct and random projections domains.
The worst-case attack/acquisition channel, leading to the worst performance loss in terms of Bhattacharyya
distance reduction, is presented. The obtained theoretical findings are confirmed by computer simulations.
We describe how to exploit the formation and storage of an embedded image thumbnail for image authentication.
The creation of a thumbnail is modeled with a series of filtering operations, contrast adjustment, and compression.
We automatically estimate these model parameters and show that these parameters differ significantly between
camera manufacturers and photo-editing software. We also describe how this signature can be combined with
encoding information from the underlying full resolution image to further refine the signature's distinctiveness.
Wide availability of cheap high-quality printing techniques makes document forgery an easy task for most people
using standard computer and printing hardware. To prevent the use of color laser printers or color copiers
for counterfeiting e.g. money or other valuable documents, many of these machines print Counterfeit Protection System
(CPS) codes on the page. These small yellow dots encode information about the specific printer and allow the questioned
document examiner in cooperation with the manufacturers to track down the printer that was used to generate the document.
However, the access to the methods to decode the tracking dots pattern is restricted. The exact decoding of a tracking pattern
is often not necessary, as tracking the pattern down to the printer class may be enough. In this paper we present a method
that detects what CPS pattern class was used in a given document. This can be used to specify the printer class that the
document was printed on. Evaluation showed an accuracy of up to 91%.
Digital watermarking has become a widely used security technology in the domain of digital rights management
and copyright protection as well as in other applications. In this work, we show recent results regarding a
particular security attack: Embedding a new message in a previously watermarked cover using the same key as
the original message.
This re-embedding can be the consequence of the absence of truly asymmetric watermarking solutions, especially
if the watermark is to be detected in public. In public detection scenarios, every detector needs the same
key the embedder used to watermark the cover. With knowledge of the embedding algorithm, everybody who is
able to detect the message can also maliciously embed a new message with the same key over the old one. This
scenario is relevant in the case that an attacker intends to counterfeit a copyright notice, transaction ID or to
change an embedded authentication code.
This work presents experimental results on mechanisms for identifying such multiple embeddings in a
spread-spectrum patchwork audio watermarking approach. We demonstrate that under certain circumstances such
multiple embedding can be detected by watermarking-forensics.
This paper deals with the security of the robust zero-bit watermarking technique "Broken Arrows" (BA),1
which was invented and tested for the international challenge
BOWS-2.2 The results of the first episode of the
challenge showed that BA is very robust and we proposed last year an enhancement called "Averaging Wavelet
Coefficients" (AWC),3 which further strengthens the robustness against the worst attack disclosed during this
BOWS-2's first episode.4 However, in the second and third episodes of the challenge, during which the pirates
could observe plenty of pictures watermarked with the same secret key, security flaws have been revealed and
discussed.5 Here we propose counterattacks to these security flaws, investigating BA and its variant AWC. We
propose two counterattack directions: to use the embedding technique AWC instead of BA, and to regulate
the system parameters to lighten the watermarking embedding footprint. We also discuss these directions in
the context of traitor tracing.6 Experimental results show that following these recommendations is sufficient to
counter these attacks.
Current image re-sampling detectors can reliably detect re-sampling in JPEG images only up to a Quality Factor (QF) of
95 or higher. At lower QFs, periodic JPEG blocking artifacts interfere with periodic patterns of re-sampling. We add a
controlled amount of noise to the image before the re-sampling detection step. Adding noise suppresses the JPEG
artifacts while the periodic patterns due to re-sampling are partially retained. JPEG images of QF range 75-90 are
considered. Gaussian/Uniform noise in the range of 28-24 dB is added to the image and the images thus formed are
passed to the re-sampling detector. The detector outputs are averaged to get a final output from which re-sampling can
be detected even at lower QFs.
We consider two re-sampling detectors - one proposed by Popescu and Farid, which works well on uncompressed
and mildly compressed JPEG images, and the other by Gallagher, which is robust on JPEG images but can detect only
scaled images. For multiple re-sampling operations (rotation, scaling, etc.) we show that the order of re-sampling matters.
If the final operation is up-scaling, it can still be detected even at very low QFs.
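The noise-dithering wrapper described above can be sketched generically. Here `detector` is a placeholder for any re-sampling detector (e.g. the Popescu-Farid one), and the SNR-based noise scaling is our own simplification of the 28-24 dB setting:

```python
import numpy as np

def averaged_detection(image, detector, snr_db=26.0, runs=10, seed=0):
    """Run a re-sampling detector on several independently noise-dithered
    copies of the image and average the outputs.  The added noise suppresses
    periodic JPEG blocking artifacts, while periodicities introduced by
    re-sampling partially survive and accumulate in the average."""
    rng = np.random.default_rng(seed)
    signal_power = float(np.mean(np.asarray(image, dtype=float) ** 2))
    noise_sigma = np.sqrt(signal_power / 10 ** (snr_db / 10.0))
    outputs = [detector(image + rng.normal(0.0, noise_sigma, np.shape(image)))
               for _ in range(runs)]
    return np.mean(outputs, axis=0)
```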
Re-quantization commonly occurs when digital multimedia content is being tampered with. Detecting re-quantization
is therefore an important element in assessing the authenticity of digital multimedia content.
In this paper, we introduce three features based on the observation that re-quantization (i) induces periodic
artifacts and (ii) introduces discontinuities in the signal histogram. After validating the discriminative potential
of these features with synthetic signals, we propose a system to detect JPEG re-compression. Both linear (FLD)
and non-linear (SVM) classifications are investigated. Experimental results clearly demonstrate the ability of the
proposed features to detect JPEG re-compression, as well as their competitiveness compared to prior approaches
to achieve the same goal.
MP3 is nowadays the most popular audio format in daily life; for example, music downloaded from the Internet and
files saved in digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to a higher
bitrate, since high-bitrate files are of higher commercial value. Also, audio recordings from digital recorders can
easily be doctored with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression. The
methods are essential for finding out fake-quality MP3 and audio forensics. The proposed methods use support vector
machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified
discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods.
To the best of our knowledge, this work is the first to detect double compression of audio signals.
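The first-digit feature extraction can be sketched as follows; this is a generic Benford-style leading-digit histogram, and the exact binning and per-band grouping used in the paper may differ:

```python
import numpy as np

def first_digit_features(coeffs):
    """Normalized histogram of the leading digits 1..9 of the nonzero
    |MDCT coefficients| -- the kind of feature vector fed to an SVM
    in first-digit-based double-compression detectors."""
    mags = np.abs(np.asarray(coeffs, dtype=float))
    mags = mags[mags > 0]
    # leading digit of x: floor(x / 10**floor(log10 x))
    digits = np.floor(mags / 10.0 ** np.floor(np.log10(mags))).astype(int)
    hist = np.array([(digits == d).sum() for d in range(1, 10)], dtype=float)
    return hist / hist.sum()
```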
This paper provides an information theoretical description of biometric systems at the system level. A number of
basic models to characterize performance of biometric systems are presented. All models compare performance of
an automatic biometric recognition system against performance of an ideal biometric system that knows correct
decisions. The correct decision can be visualized as an input to a new decision system, and the decision by an
automatic recognition system is the output of this decision system. The problem of performance evaluation for
a biometric recognition system is formulated as (1) the problem of finding the maximum information that the
output of the system has about the input, and (2) the problem of finding the maximum distortion that the output
can experience with respect to the input of the system to guarantee a bounded average probability of recognition
error. The first formulation leads to the evaluation of the capacity of binary asymmetric and M-ary channels. The
second formulation falls under the scope of rate-distortion theory. We further describe the problem of physical
signature authentication used to authenticate a biometric acquisition device and state the problem of secured
biometric authentication as the problem of joint biometric and physical signature authentication. One novelty
of this work is in restating the problem of secured biometric authentication as the problem of finding capacity
and rate-distortion curve for a secured biometric authentication system. Another novelty is in application of
transductive methods from statistical learning theory to estimate the conditional error probabilities of the system.
This set of parameters is used to optimize the system performance.
Security of biometric templates stored in a system is important because a stolen template can compromise
system security as well as user privacy. Therefore, a number of secure biometrics schemes have been proposed
that facilitate matching of feature templates without the need for a stored biometric sample. However, most of
these schemes suffer from poor matching performance owing to the difficulty of designing biometric features that
remain robust over repeated biometric measurements. This paper describes a scheme to extract binary features
from fingerprints using minutia points and fingerprint ridges. The features are amenable to direct matching
based on binary Hamming distance, but are especially suitable for use in secure biometric cryptosystems that
use standard error correcting codes. Given all binary features, a method for retaining only the most discriminable
features is presented which improves the Genuine Accept Rate (GAR) from 82% to 90% at a False Accept Rate
(FAR) of 0.1% on a well-known public database. Additionally, incorporating singular points such as a core or
delta feature is shown to improve the matching tradeoff.
One of the critical steps in designing a secure biometric system is protecting the templates of the users that
are stored either in a central database or on smart cards. If a biometric template is compromised, it leads to
serious security and privacy threats because unlike passwords, it is not possible for a legitimate user to revoke
his biometric identifiers and switch to another set of uncompromised identifiers. One methodology for biometric
template protection is the template transformation approach, where the template, consisting of the features
extracted from the biometric trait, is transformed using parameters derived from a user specific password or
key. Only the transformed template is stored and matching is performed directly in the transformed domain.
In this paper, we formally investigate the security strength of template transformation techniques and define
six metrics that facilitate a holistic security evaluation. Furthermore, we analyze the security of two well-known
template transformation techniques, namely Biohashing and cancelable fingerprint templates, based on
the proposed metrics. Our analysis indicates that both these schemes are vulnerable to intrusion and linkage
attacks because it is relatively easy to obtain either a close approximation of the original template (Biohashing)
or a pre-image of the transformed template (cancelable fingerprints). We argue that the security strength
of template transformation techniques must also consider the computational complexity of obtaining a
complete pre-image of the transformed template, in addition to the complexity of recovering the original biometric template.
In 1999 Juels and Wattenberg introduced the fuzzy commitment scheme. Fuzzy commitment is a particular realization of a
binary biometric secrecy system with a chosen secret key. Three cases of biometric sources are considered:
memoryless and totally-symmetric biometric sources, memoryless and input-symmetric biometric sources, and
general memoryless biometric sources. It is shown that fuzzy commitment is only optimal for memoryless
totally-symmetric biometric sources, and only at the maximum secret-key rate. Moreover, it is demonstrated
that for memoryless biometric sources which are not input-symmetric, the fuzzy commitment scheme leaks
information on both the secret key and the biometric data. Finally, a number of coding techniques are
investigated for the case of totally-symmetric memoryless biometric data statistics.
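The fuzzy commitment construction itself is easy to sketch; in this toy version a 3x repetition code stands in for the stronger codes used in practice (e.g. BCH or LDPC):

```python
import numpy as np

R = 3  # toy repetition factor; real systems use BCH or LDPC codes

def ecc_encode(key_bits):
    return np.repeat(key_bits, R)

def ecc_decode(bits):
    # majority vote within each group of R bits
    return (bits.reshape(-1, R).sum(axis=1) > R // 2).astype(int)

def commit(key_bits, biometric_bits):
    """Helper data c(k) XOR x: neither the key nor the biometric is stored."""
    return ecc_encode(key_bits) ^ biometric_bits

def open_commitment(helper, probe_bits):
    """A fresh noisy probe x' yields c(k) XOR (x XOR x'); the ECC decoder
    absorbs the measurement noise as long as few bits differ."""
    return ecc_decode(helper ^ probe_bits)
```

The information leakage analyzed in the paper concerns exactly what the stored helper data reveals about the key and about x when the source statistics are not input-symmetric.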
Biohashing algorithms map biometric features randomly onto binary strings with user-specific tokenized random
numbers. In order to protect biometric data, these binary strings, the Biohashes, are not allowed to reveal much
information about the original biometric features. In the paper we analyse two Biohashing algorithms using scalar
randomization and random projection respectively. With scalar randomization, multiple bits can be extracted
from a single element in a feature vector. The average information rate of the Biohashes is about 0.72. However,
Biohashes expose statistical information about the biometric feature, which can be used to estimate the original
feature. Using the random projection method, a feature vector in n-dimensional space can be converted into binary
strings of length m (m ≤ n). Any feature vector can be converted into 2^m different Biohashes. The random
projection can roughly preserve Hamming distance between Biohashes. Moreover, the direction information
about the original vector can be retrieved with Biohashes and the corresponding random vectors used in the
projection. Although Biohashing can efficiently randomize biometric features, combining more Biohashes of the
same user can leak essential information about the original feature.
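The random-projection variant analysed above can be sketched in a few lines; seeding the projection matrix from the token and thresholding at zero are common simplifications, not the exact algorithm studied in the paper:

```python
import numpy as np

def biohash(feature_vec, token_seed, m):
    """Project the feature vector onto m random directions generated from
    the user's tokenized random number and threshold at zero, giving an
    m-bit Biohash.  The projections roughly preserve angular geometry,
    which is the source of the leakage discussed above."""
    rng = np.random.default_rng(token_seed)
    R = rng.standard_normal((m, len(feature_vec)))  # token-dependent projections
    return (R @ feature_vec > 0).astype(int)
```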
A robust fingerprint minutiae hash generation algorithm is proposed in this paper to extract a binary secure hash bit
string from each fingerprint minutia and its vicinity. First, ordering of minutiae points and rotation and translation
geometric alignment of each minutiae vicinity are achieved; second, the ordered and aligned points are diversified by
offsetting their coordinates and angles in a random way; and finally, an ordered binary minutia hash bit string is
extracted by quantizing the coordinates and angle values of the points in the diversified minutiae vicinity. The generated
hashes from all minutiae vicinities in the original template form a protected template, which can be used to represent the
original minutia template for identity verification. Experiments show desirable comparison performance (average Equal
Error Rate 0.0233 using the first two samples of each finger in FVC2002DB2_A) for the proposed algorithm. The
proposed biometric reference requires less template storage than its unprotected counterpart. A
security analysis is also given for the proposed algorithm.
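The quantization step can be sketched as below; the cell sizes and bit layout are illustrative assumptions, not the paper's parameters:

```python
def minutia_vicinity_hash(points, xy_step=16, ang_step=45):
    """Quantize ordered, aligned (x, y, angle) minutia points of one
    vicinity into a fixed-order bit string: coordinates to xy_step-pixel
    cells (4 bits each) and angles to ang_step-degree sectors (3 bits).
    Small measurement noise that stays within a cell leaves the hash
    unchanged, which is what makes the hash robust."""
    bits = []
    for x, y, ang in points:
        qx = int(x // xy_step) & 0xF
        qy = int(y // xy_step) & 0xF
        qa = int(ang // ang_step) & 0x7
        bits.append(f"{qx:04b}{qy:04b}{qa:03b}")
    return "".join(bits)
```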
In camera identification using sensor noise, the camera that took a given image can be determined with high certainty
by establishing the presence of the camera's sensor fingerprint in the image. In this paper, we develop methods to reveal
counter-forensic activities in which an attacker estimates the camera fingerprint from a set of images and pastes it onto
an image from a different camera with the intent to introduce a false alarm and, in doing so, frame an innocent victim.
We start by classifying different scenarios based on the sophistication of the attacker's activity and the means available
to her and to the victim, who wishes to defend herself. The key observation is that at least some of the images that were
used by the attacker to estimate the fake fingerprint will likely be available to the victim as well. We describe the
so-called "triangle test" that helps the victim reveal the attacker's malicious activity with high certainty under a wide range of
conditions. This test is then extended to the case when none of the images that the attacker used to create the fake
fingerprint are available to the victim but the victim has at least two forged images to analyze. We demonstrate the test's
performance experimentally and investigate its limitations. The conclusion that can be made from this study is
that planting a sensor fingerprint in an image without leaving a trace is significantly more difficult than previously thought.
Several methods exist for printer identification from a printed document. We have developed a system that
performs printer identification using intrinsic signatures of the printers. Because an intrinsic signature is tied
directly to the electromechanical properties of the printer, it is difficult to forge or remove. There are many
instances where the existence of the intrinsic signature in the printed document is undesirable. In this work we
explore texture-based attacks on intrinsic printer identification from text documents. An updated intrinsic printer
identification system is presented that merges both texture and banding features. It is shown that this system
is scalable and robust against several types of attacks that one may use in an attempt to obscure the intrinsic signature.
We present a steganographic scheme based on the contourlet transform which uses the contrast sensitivity function
(CSF) to control the strength of insertion of the hidden information in a perceptually uniform color space.
The CIELAB color space is used, as it is well suited for steganographic applications: any change in the
CIELAB color space has a corresponding effect on the human visual system, which is very important for
steganographic schemes that must be undetectable by the human visual system (HVS). The perceptual decomposition of the
contourlet transform gives it a natural advantage over other decompositions as it can be molded with respect
to the human perception of different frequencies in an image. The evaluation of the imperceptibility of the
steganographic scheme with respect to the color perception of the HVS is done using standard methods such as
the structural similarity (SSIM) index and CIEDE2000. The robustness of the inserted watermark is tested against common image processing attacks.
The image encryption process is combined with reversible data hiding in this paper, where the data to be hidden
are modulated by different secret keys selected for encryption. To extract the hidden data from the cipher-text,
the different tentative decrypted results are tested against a typical random distribution in both the spatial and
frequency domains, and the goodness-of-fit degrees are compared to extract one hidden bit. The encryption-based
data hiding process is inherently reversible. Experiments demonstrate the proposed scheme's effectiveness on
natural and textural images, in both gray-level and color form.
In the paper we present a watermarking scheme developed to meet the specific requirements of audio annotation
watermarking robust against DA/AD conversion (watermark detection after playback by loudspeaker and recording with
a microphone). Additionally, the described approach tries to achieve comparably low detection complexity, so that
in the near future it could be embedded in low-end devices (e.g. mobile phones or other portable devices). We assume
in the field of annotation watermarking that there is no specific motivation for attackers to attack the developed scheme.
The basic idea for the watermark generation and embedding scheme is to combine traditional frequency domain spread
spectrum watermarking with psychoacoustic modeling to guarantee transparency and alphabet substitution to improve
the robustness. The synchronization and extraction scheme is designed to be much less computationally complex than the
embedder. The performance of the scheme is evaluated in terms of transparency, robustness, complexity and
capacity. The tests reveal that 44% of the 375 tested audio files pass the simulation test for robustness, while the most
appropriate category even shows 100% robustness. Additionally, the introduced prototype shows an average transparency
of -1.69 in SDG, while at the same time having a capacity satisfactory for the chosen application scenario.
Though the current state of the art in image forensics permits acquiring very interesting information about
image history, all the instruments developed so far focus on the analysis of single images. It is the aim of this
paper to propose a new approach that moves the forensics analysis further, by considering groups of images
instead of single images. The idea is to discover dependencies among a group of images representing similar or
equal contents in order to construct a graph describing image relationships. Given the pronounced effect that
images posted on the Web have on opinions and bias in the networked age we live in, such an analysis could be
extremely useful for understanding the role of pictures in the opinion forming process. We propose a theoretical
framework for the analysis of image dependencies and describe a simple system putting the theoretical principles
in practice. The performance of the proposed system is evaluated on a few practical examples involving both
images created and processed in a controlled way, and images downloaded from the web.
Digital multimedia such as images and videos are prevalent on today's Internet and have a significant
social impact, as evidenced by the proliferation of social networking sites with user-generated
content. Because images and videos are easy to generate and modify, it is critical to establish
trustworthiness for online multimedia information. In this paper, we propose novel approaches to
perform multimedia forensics using compact side information to reconstruct the processing history of
a document. We refer to this as FASHION, standing for Forensic hASH for informatION assurance.
Based on the Radon transform and scale space theory, the proposed forensic hash is compact and
can effectively estimate the parameters of geometric transforms and detect local tampering that an
image may have undergone. Forensic hash is designed to answer a broader range of questions regarding
the processing history of multimedia data than the simple binary decision from traditional robust
image hashing, and also offers more efficient and accurate forensic analysis than multimedia forensic
techniques that do not use any side information.
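To illustrate why Radon-style projection profiles help recover geometric transforms, here is a toy numpy sketch (not the FASHION hash itself) using only the 0° and 90° projections, which swap under a 90° rotation:

```python
import numpy as np

def radon_0_90(img):
    """Radon projections at 0 and 90 degrees, i.e. column and row sums.
    Angular projection profiles like these move predictably under
    rotation, which is the structure a Radon-based hash exploits to
    estimate the geometric transform an image has undergone."""
    return img.sum(axis=0), img.sum(axis=1)

rng = np.random.default_rng(3)
img = rng.random((16, 16))
p0, p90 = radon_0_90(img)
q0, q90 = radon_0_90(np.rot90(img))  # rotate the image by 90 degrees
# the two profiles swap (one of them reversed) under the rotation:
# q0 == p90 and q90 == p0[::-1]
```

A full hash would use many angles and scale-space smoothing, but the matching principle is the same: the angular profile of the query is a permuted/shifted version of the reference profile.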
Content-aware resizing methods have recently been developed, among which, seam-carving has achieved the most
widespread use. Seam-carving's versatility enables deliberate object removal and benign image resizing, in which
perceptually important content is preserved. Both types of modifications compromise the utility and validity of the
modified images as evidence in legal and journalistic applications. It is therefore desirable that image forensic techniques
detect the presence of seam-carving. In this paper we address detection of seam-carving for forensic purposes. As in
other forensic applications, we pose the problem of seam-carving detection as the problem of classifying a test image
into one of two classes: a) seam-carved or b) non-seam-carved. We adopt a pattern recognition approach in which a set of
features is extracted from the test image and then a Support Vector Machine based classifier, trained over a set of
images, is utilized to estimate which of the two classes the test image belongs to. Based on our study of the seam-carving
algorithm, we propose a set of intuitively motivated features for the detection of seam-carving. Our methodology for
detection of seam-carving is then evaluated over a test database of images. We demonstrate that the proposed method
provides the capability for detecting seam-carving with high accuracy. For images which have been reduced 30% by
benign seam-carving, our method provides a classification accuracy of 91%.
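For reference, the core of the seam-carving algorithm that such features probe is a dynamic program over a pixel energy map. A compact sketch (illustrative of the algorithm being detected, not of the detector itself):

```python
import numpy as np

def min_vertical_seam(energy):
    """Cost and path of the cheapest 8-connected vertical seam, the
    quantity seam-carving minimizes when it removes one pixel per row."""
    h, w = energy.shape
    cost = energy.astype(np.float64).copy()
    for r in range(1, h):
        prev = cost[r - 1]
        left = np.concatenate(([np.inf], prev[:-1]))
        right = np.concatenate((prev[1:], [np.inf]))
        # each pixel may continue from the three pixels above it
        cost[r] += np.minimum(prev, np.minimum(left, right))
    seam = [int(np.argmin(cost[-1]))]          # backtrack from the bottom row
    for r in range(h - 2, -1, -1):
        c = seam[-1]
        lo = max(c - 1, 0)
        seam.append(lo + int(np.argmin(cost[r, lo:c + 2])))
    return float(cost[-1].min()), seam[::-1]

energy = np.array([[1, 9, 9],
                   [9, 1, 9],
                   [9, 9, 1]])
cost, seam = min_vertical_seam(energy)  # the seam follows the cheap diagonal
```

Because removed seams always thread through low-energy regions, repeated carving systematically distorts local energy statistics, which is what classification features can latch onto.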
In digital image forensics, it is generally accepted that intentional manipulations of the image content are
most critical and hence numerous forensic methods focus on the detection of such 'malicious' post-processing.
However, it is also beneficial to know as much as possible about the general processing history of an image,
including content-preserving operations, since they can affect the reliability of forensic methods in various ways.
In this paper, we present a simple yet effective technique to detect median filtering in digital images, a widely
used denoising and smoothing operator. As a great variety of forensic methods relies on some kind of linearity
assumption, the detection of non-linear median filtering is of particular interest. The effectiveness of our method
is backed with experimental evidence on a large image database.
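One tell-tale artifact of median filtering is "streaking": runs of identical neighbouring pixels, visible as an excess of zero first-order differences. The sketch below illustrates this effect on synthetic data; it is a simplified statistic for intuition, not the paper's detector:

```python
import numpy as np

def streak_statistic(img):
    """Fraction of zero horizontal first-order differences. Median
    filtering produces constant runs ("streaks"), inflating this
    fraction relative to unfiltered content."""
    d = np.diff(img.astype(np.int64), axis=1)
    return float(np.mean(d == 0))

def median3x3(img):
    """Naive 3x3 median filter with edge replication."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    windows = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(windows, axis=0).astype(np.int64)

rng = np.random.default_rng(0)
noise = rng.integers(0, 256, size=(64, 64))
filtered = median3x3(noise)
# streak_statistic(filtered) is far larger than streak_statistic(noise),
# because overlapping 3x3 windows share 6 of their 9 pixels and thus
# often share the same median
```

Real detectors must of course cope with natural-image correlations and JPEG compression, which is where the large-database evaluation comes in.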
This paper proposes an efficient method to determine the concrete configuration of the color filter array (CFA)
from demosaiced images. This is useful to decrease the degrees of freedom when checking for the existence or
consistency of CFA artifacts in typical digital camera images. We see applications in a wide range of multimedia
security scenarios whenever inter-pixel correlation plays an important role. Our method is based on a CFA
synthesis procedure that finds the most likely raw sensor output for a given full-color image. We present
approximate solutions that require only one linear filtering operation per image. The effectiveness of our method
is demonstrated by experimental results from a large database of images.
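To make the idea concrete, here is a simplified numpy sketch (not the paper's method) that distinguishes the two possible Bayer green lattices: hypothesize one lattice, re-interpolate the off-lattice positions, and compare residuals.

```python
import numpy as np

def green_lattice_residual(g, parity):
    """Hypothesize that green was sensed where (i + j) % 2 == parity,
    re-interpolate the remaining positions from their four direct
    neighbours (all of which lie on the hypothesized lattice), and
    return the mean absolute residual there. The true lattice yields
    the smaller residual, because the camera itself interpolated those
    positions from the same neighbours."""
    h, w = g.shape
    i, j = np.mgrid[0:h, 0:w]
    off = (i + j) % 2 != parity                # positions assumed interpolated
    gp = np.pad(g.astype(np.float64), 1, mode="reflect")
    cross = (gp[:-2, 1:-1] + gp[2:, 1:-1] + gp[1:-1, :-2] + gp[1:-1, 2:]) / 4.0
    return float(np.mean(np.abs(g[off] - cross[off])))

# simulate bilinear demosaicing of the green channel on lattice parity 0
rng = np.random.default_rng(1)
raw = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
i, j = np.mgrid[0:32, 0:32]
g = raw.copy()
gp = np.pad(raw, 1, mode="reflect")
interp = (gp[:-2, 1:-1] + gp[2:, 1:-1] + gp[1:-1, :-2] + gp[1:-1, 2:]) / 4.0
g[(i + j) % 2 == 1] = interp[(i + j) % 2 == 1]
# green_lattice_residual(g, 0) is near zero; parity 1 gives a large residual
```

Real demosaicing algorithms are more sophisticated than bilinear interpolation, but the inter-pixel correlation they impose on the interpolated lattice is the signal such CFA analysis builds on.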
The popularity of video sharing platforms such as YouTube has prompted the need for the development of efficient
techniques for multimedia identification. Content fingerprinting is a promising solution for this problem, whereby
a short "fingerprint" that captures robust and unique characteristics of a signal is computed from each multimedia
document. This fingerprint is then compared with a database to identify the multimedia. Several fingerprinting
techniques have been proposed in the literature and have been evaluated using experiments. To complement
these experimental evaluations and gain a deeper understanding, this paper proposes a framework for theoretical
modeling and analysis of content fingerprinting schemes. Analyses of some key modules for fingerprint encoding
and matching are also presented under this framework.
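As one example of the kind of analysis such a framework enables, the per-comparison false-accept probability of a binary fingerprint matcher with Hamming threshold tau reduces to a binomial tail. This is an illustrative model only, assuming unrelated content yields i.i.d. equiprobable bits:

```python
import math

def false_accept_prob(L, tau):
    """Probability that an unrelated L-bit fingerprint (i.i.d. fair
    bits) lands within Hamming distance tau of the query, i.e. the
    per-comparison false-accept rate of a threshold matcher."""
    return sum(math.comb(L, k) for k in range(tau + 1)) / 2 ** L
```

With L = 64 and tau = 10 this comes to roughly 1e-8; analysis then trades this number off against the probability that a genuinely matching, distorted copy exceeds the threshold.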
Error correction codes of suitable redundancy are used for ensuring perfect data recovery in noisy channels. For
iterative decoding based methods, the decoder needs to be initialized with proper confidence values, called the
log likelihood ratios (LLRs), for all the embedding locations. If these confidence values or LLRs are accurately
initialized, the decoder converges at a lower redundancy factor, thus leading to a higher effective hiding rate.
Here, we present an LLR allocation method based on the image statistics, the hiding parameters and the noisy
channel characteristics. It is seen that this image-dependent LLR allocation scheme results in a higher data rate
than using a constant LLR across all images. The data-hiding channel parameters are learned from the image
histogram in the discrete cosine transform (DCT) domain using a linear regression framework. We also show
how the effective data-rate can be increased by suitably increasing the erasure rate at the decoder.
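For intuition, if each embedding location is modeled as a binary symmetric channel with crossover probability p, its LLR magnitude is log((1 - p) / p), signed by the observed bit; assigning p per location rather than one global value is what makes the allocation image-dependent. A toy model, not the paper's DCT-domain estimator:

```python
import numpy as np

def bsc_llr(y, p):
    """LLR log P(y | b = 0) / P(y | b = 1) for observed bits y sent
    over binary symmetric channels with crossover probabilities p.
    Reliable locations (small p) receive large-magnitude LLRs, so the
    iterative decoder trusts them more."""
    mag = np.log((1.0 - p) / p)
    return np.where(y == 0, mag, -mag)

# the same observed bit with very different confidence levels
llrs = bsc_llr(np.array([0, 0]), np.array([0.01, 0.3]))
```

Setting p too optimistically overweights noisy locations and stalls convergence, which is why accurate, image-adaptive initialization lowers the redundancy the code needs.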
A number of methods have been proposed over the last decade for embedding information within deoxyribonucleic
acid (DNA). Since a DNA sequence is conceptually equivalent to a unidimensional digital signal, DNA data
embedding (diversely called DNA watermarking or DNA steganography) can be seen either as a traditional
communications problem or as an instance of communications with side information at the encoder, similar to
data hiding. These two cases correspond to the use of noncoding or coding DNA hosts, which, respectively, denote
DNA segments that cannot or can be translated into proteins. A limitation of existing DNA data embedding
methods is that none of them has been designed according to optimal coding principles. Nor is it possible
to evaluate how close to optimality these methods are without determining the Shannon capacity of DNA data
embedding. This is the main topic studied in this paper, where we consider that DNA sequences may be subject
to substitution, insertion, and deletion mutations.
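For the substitution-only case, a simple baseline is the capacity of a memoryless 4-ary symmetric channel over the nucleotide alphabet. This is a deliberate simplification of the mutation models considered; insertion and deletion mutations require a more involved analysis:

```python
import math

def qsc_capacity(p):
    """Capacity in bits per nucleotide of a 4-ary symmetric channel in
    which each base substitutes to each of the 3 other bases with
    probability p / 3, so C = 2 + (1 - p) log2(1 - p) + p log2(p / 3)."""
    if p == 0.0:
        return 2.0
    return 2.0 + (1.0 - p) * math.log2(1.0 - p) + p * math.log2(p / 3.0)
```

The capacity falls from 2 bits per nucleotide at p = 0 to zero at p = 3/4, where the output base is uniform regardless of the input.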
In this paper, we consider a low-complexity identification system for highly distorted images. The performance of the
proposed identification system is analyzed based on the average probability of error. An expected improvement in
performance is obtained by combining the random projection transform with the concept of bit reliability. Simulations based on
synthetic and real data confirm the efficiency of the proposed approach.
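A minimal sketch of how random projections and bit reliability can interact in such a system; the hash length, dimensions, and noise level below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(42)

def rp_hash(x, W):
    """Binary hash from the signs of random projections W @ x. The
    projection magnitude serves as a per-bit reliability, since bits
    near the decision threshold flip easily under distortion."""
    proj = W @ x
    return proj > 0, np.abs(proj)

W = rng.standard_normal((32, 128))          # 32-bit hash of a 128-dim feature vector
x = rng.standard_normal(128)
noisy = x + 0.3 * rng.standard_normal(128)  # heavily distorted copy

b_ref, rel = rp_hash(x, W)
b_qry, _ = rp_hash(noisy, W)
# reliability-weighted disagreement instead of a plain Hamming distance
score = float(np.sum(rel[b_ref != b_qry]) / np.sum(rel))
```

Down-weighting fragile, low-magnitude bits is the mechanism by which bit reliability can lower the average probability of error for distorted queries.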