Generally, we expect images to be an honest reflection of reality. However, this assumption is undermined by modern image-editing technology, which allows digital content to be easily manipulated and distorted. Our understanding of the implications of using manipulated data lags behind. In this paper we propose to exploit crowdsourcing tools in order to analyze the impact of different types of manipulation on users’ perception of deception. Our goal is to gain insight into how different types of manipulation affect users’ perceptions, and how the context in which a modified image is used influences the perceived deceptiveness of that image. Through an extensive crowdsourcing user study, we aim to demonstrate that the problem of predicting user-perceived deception can be approached by automatic methods. Analysis of results collected on the Amazon Mechanical Turk platform highlights how deception is related both to the level of modification applied to the image and to the context within which modified pictures are used. To the best of our knowledge, this work represents the first attempt to address the image-editing debate using automatic approaches and going beyond the investigation of forgeries.
Source identification for digital content is one of the main branches of digital image forensics. It relies on the
extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently
characterizes the digital device that generated the content. This noise is estimated as the difference between
the content and its denoised version, obtained by applying a denoising filter. This paper proposes a performance
comparison of different denoising filters for source identification purposes. In particular, results achieved with
a sophisticated 3D filter are presented and discussed with respect to the state-of-the-art denoising filters
previously employed in this context.
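The residual-based fingerprint extraction described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: a plain Gaussian filter stands in for the sophisticated denoisers (wavelet-based or 3D filters such as BM3D) that the comparison actually targets, and the maximum-likelihood-style aggregation over multiple images is a standard simplification.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image, sigma=1.0):
    """Estimate the noise residual W = I - F(I), where F is a denoising
    filter. A Gaussian filter is an illustrative stand-in for the
    denoisers compared in the paper."""
    image = image.astype(np.float64)
    return image - gaussian_filter(image, sigma=sigma)

def estimate_prnu(images, sigma=1.0):
    """Aggregate residuals from several images taken with the same
    camera into a PRNU estimate: K ~ sum(W_i * I_i) / sum(I_i^2)."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img in images:
        img = img.astype(np.float64)
        num += noise_residual(img, sigma) * img
        den += img ** 2
    return num / np.maximum(den, 1e-8)

def correlate(a, b):
    """Normalized correlation used to match a residual against a
    reference camera fingerprint."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Source identification then reduces to thresholding the correlation between a query image's residual and each candidate camera's fingerprint.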
Nowadays, sophisticated computer-graphics editors have led to a significant increase in the photorealism of images.
As a result, computer-generated (CG) images can be convincing and hard to distinguish from real ones at
first glance. Here, we propose an image forensics technique able to automatically detect local forgeries, i.e.,
objects generated via computer-graphics software and inserted in natural images, and vice versa. We develop a novel
hybrid classifier based on wavelet-based features and sophisticated pattern-noise statistics. Experimental results
show the effectiveness of the proposed approach.
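The hybrid-feature idea can be illustrated with a toy sketch: statistics of one-level Haar wavelet subbands combined with crude noise-residual statistics, fed to a nearest-centroid classifier. Every choice here (the subband statistics, the use of the HH band as a residual proxy, the classifier) is an illustrative assumption, not the paper's actual feature set or classifier.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: returns LL, LH, HL, HH subbands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4, (a + b - c - d) / 4,
            (a - b + c - d) / 4, (a - b - c + d) / 4)

def kurtosis(z):
    """Excess kurtosis; heavy-tailed subbands are typical of natural
    sensor noise."""
    z = z - z.mean()
    var = np.mean(z ** 2)
    return np.mean(z ** 4) / (var ** 2 + 1e-12) - 3

def features(img):
    """Hybrid vector: wavelet subband spreads plus residual statistics
    (the HH subband serves as a crude noise residual)."""
    _, lh, hl, hh = haar2d(img.astype(np.float64))
    return np.array([lh.std(), hl.std(), hh.std(),
                     kurtosis(lh), kurtosis(hl), kurtosis(hh)])

def fit_centroids(X, y):
    """Nearest-centroid classifier: one mean feature vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```

For local forgery detection, such features would be computed on a sliding window so that each region can be labeled natural or CG independently.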
In this paper we propose to evaluate both the robustness and the security of digital image watermarking techniques by
considering the perceptual quality of attacked, unmarked images in terms of weighted PSNR. The proposed tool is based on
genetic algorithms and enables researchers to evaluate the robustness of the watermarking methods they develop.
Given a combination of selected attacks, the proposed framework searches for a fine-grained parameterization of those
attacks ensuring that the perceptual degradation of the unmarked image remains below a given threshold. Correspondingly,
a novel metric for robustness assessment is introduced. On the other hand, the tool also proves useful in scenarios where
an attacker tries to remove the watermark to circumvent copyright protection. Security assessment is provided by a
stochastic search for the minimum degradation that must be introduced in order to obtain an unmarked version of the
image as close as possible to the given one. Experimental results show the effectiveness of the proposed approach.
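The search the abstract describes can be sketched with a toy one-dimensional setup: a spread-spectrum watermark, a two-parameter attack (global scaling plus additive noise), and a minimal genetic algorithm with truncation selection and Gaussian mutation. Plain PSNR stands in for the weighted PSNR of the paper, and all thresholds, population sizes, and attack choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
x = rng.normal(0, 1, n)            # host signal
w = rng.choice([-1.0, 1.0], n)     # spread-spectrum watermark (key)
alpha = 0.1
y = x + alpha * w                  # watermarked signal
TAU = 0.05                         # correlation detection threshold

def detected(z):
    """The watermark is declared present if the normalized correlation
    with the key exceeds TAU."""
    return z @ w / n > TAU

def psnr(ref, z):
    """Plain PSNR, a stand-in for the weighted PSNR of the paper."""
    mse = np.mean((ref - z) ** 2)
    return 10 * np.log10(np.ptp(ref) ** 2 / (mse + 1e-12))

def attack(params, seed=1):
    """Toy attack: global scaling by g plus additive Gaussian noise
    of spread s (fixed seed keeps the search deterministic)."""
    g, s = params
    return g * y + np.random.default_rng(seed).normal(0, abs(s), n)

def fitness(params):
    z = attack(params)
    if detected(z):
        return -1e9                # watermark survived: attack failed
    return psnr(y, z)              # defeated: prefer least degradation

# Minimal GA: truncation selection + Gaussian mutation, no crossover.
pop = rng.uniform([0.0, 0.0], [1.2, 1.0], size=(20, 2))
for _ in range(30):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-5:]]
    children = elite[rng.integers(0, 5, 15)] + rng.normal(0, 0.05, (15, 2))
    pop = np.vstack([elite, children])
best = max(pop, key=fitness)
```

The fitness of the best individual, i.e., the quality of the least-degraded successful attack, plays the role of the robustness metric: the harder the watermark is to remove without visible damage, the lower this score.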
Here we introduce a novel watermarking paradigm designed to be both asymmetric, i.e., involving a private key
for embedding and a public key for detection, and commutative with a suitable encryption scheme, allowing
both to cipher watermarked data and to mark encrypted data without interfering with the detection process.
In order to demonstrate the effectiveness of the above principles, we present an explicit example in which the
watermarking part, based on elementary linear algebra, and the encryption part, exploiting a secret random
permutation, are integrated in a commutative scheme.
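The two ingredients can be demonstrated numerically in a toy sketch. Additive embedding is linear, a sample permutation is a linear, inner-product-preserving map, so the two operations commute and detection works identically in the encrypted domain. The "public key" below (the watermark plus a masking vector) only illustrates the idea of asymmetry; it is not the linear-algebra construction of the paper, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n, alpha = 4096, 0.5

# Private embedding key: the watermark vector itself.
w = rng.normal(0, 1, n)
# Public detection key: correlated with w but masked by random noise,
# so publishing v does not hand out w exactly (illustrative only).
v = w + rng.normal(0, 1, n)

# Secret encryption key: a random permutation of sample positions.
perm = rng.permutation(n)

def embed(z, key, alpha=alpha):
    """Additive watermark embedding (private operation)."""
    return z + alpha * key

def encrypt(z):
    """Permutation cipher: reorder the samples with the secret key."""
    return z[perm]

def detect(z, key, tau=0.25):
    """Public detection: threshold the normalized correlation with the
    public key. Permutations preserve inner products, so the same test
    works on encrypted data with the permuted key."""
    return z @ key / len(z) > tau

x = rng.normal(0, 1, n)   # host signal
```

Commutativity here means encrypt(embed(x, w)) coincides with embed(encrypt(x), encrypt(w)): one can cipher marked data or mark ciphered data and obtain the same result, and detection succeeds in either domain.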