In this paper, we focus on the blind source cell-phone identification problem. It is known that various artifacts in the image processing pipeline, such as pixel defects, unevenness of the responses in the CCD sensor, dark current noise, and the proprietary interpolation algorithms used for the color filter array (CFA), leave telltale footprints. These artifacts, although often imperceptible, are statistically stable and can be considered a signature of the camera type, or even of the individual device. We explore a set of forensic features, namely binary similarity measures, image quality measures, and higher-order wavelet statistics, in conjunction with an SVM classifier to identify the originating cell-phone type. We provide identification results for cell-phone cameras from nine different brands. In addition to these initial results, we applied a set of geometrical operations to the original images in order to investigate how robust the proposed method is under such manipulations.
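As a rough illustration of the classification stage only, the sketch below trains a multi-class SVM on precomputed feature vectors. The forensic feature extraction itself (binary similarity measures, image quality measures, higher-order wavelet statistics) is replaced by synthetic per-model clusters, and all names and parameter values are illustrative assumptions rather than the settings used in the paper.

```python
# Minimal sketch: SVM-based source identification from forensic feature
# vectors. Feature extraction is mocked with synthetic data; in practice
# each vector would hold binary similarity measures, image quality measures
# and higher-order wavelet statistics computed from one image.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_models, imgs_per_model, n_features = 9, 100, 50  # 9 cell-phone models

# Placeholder features: one Gaussian cluster per camera model.
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(imgs_per_model, n_features))
               for m in range(n_models)])
y = np.repeat(np.arange(n_models), imgs_per_model)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("identification accuracy:", clf.score(X_te, y_te))
```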
Techniques and methodologies for validating the authenticity of digital images and testing them for the presence of doctoring and manipulation operations have recently attracted attention. We review three categories of forensic features and discuss the design of classifiers that distinguish doctored from original images. The performance of the classifiers is analyzed with respect to both selected controlled manipulations and uncontrolled manipulations. The tools for image manipulation detection are treated under feature fusion and decision fusion scenarios.
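To make the decision-fusion idea concrete, here is a minimal sketch in which three generic classifiers, each standing in for one forensic feature family, are combined by majority vote. The synthetic data, the choice of base classifiers, and the hard-voting rule are all illustrative assumptions; the paper's actual fusion scheme may differ.

```python
# Minimal sketch of decision fusion: three classifiers, each standing in for
# one forensic feature family, combined by a hard majority vote.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # 1 = "doctored", 0 = "original"

fused = VotingClassifier(
    estimators=[("svm", SVC()),
                ("lr", LogisticRegression(max_iter=1000)),
                ("tree", DecisionTreeClassifier(max_depth=5))],
    voting="hard")  # hard vote = simple decision-level fusion
fused.fit(X[:300], y[:300])
print("fused accuracy:", fused.score(X[300:], y[300:]))
```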
Classifying audio documents as bearing hidden information or not is a security issue addressed in the context of steganalysis. A cover audio object can be converted into a stego-audio object via steganographic methods. In this study we present a statistical method to detect the presence of hidden messages in audio signals. The basic idea is that the distributions of various statistical distance measures, calculated between cover audio signals and their denoised versions and between stego-audio signals and their denoised versions, are statistically different. The design of the audio steganalyzer relies on the choice of these audio quality measures and on the construction of a two-class classifier. Experimental results show that the proposed technique can detect the presence of hidden messages in digital audio data.
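A minimal sketch of the central idea follows: a statistical distance between an audio signal and its denoised version serves as a steganalysis feature, and it tends to grow once a message is embedded. The Wiener denoiser, the mean-squared distance, and the toy embedding below are illustrative assumptions; the paper uses a family of audio quality measures feeding a trained two-class classifier.

```python
# Minimal sketch of the denoising-based distance feature for audio
# steganalysis: the distance between a signal and its denoised version
# behaves differently for cover and stego audio.
import numpy as np
from scipy.signal import wiener

def denoise_distance(x):
    """Mean-squared distance between a signal and its Wiener-denoised version."""
    return float(np.mean((x - wiener(x, mysize=9)) ** 2))

rng = np.random.default_rng(2)
cover = np.sin(2 * np.pi * 50 * np.arange(8000) / 8000)      # clean tone
stego = cover + rng.choice([-5e-2, 5e-2], size=cover.shape)  # crude LSB-like embedding

print("cover distance:", denoise_distance(cover))
print("stego distance:", denoise_distance(stego))  # larger for the stego signal
```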
In this paper, we present techniques for steganalysis of images that have potentially been subjected to a watermarking algorithm. Our hypothesis is that a particular watermarking scheme leaves statistical evidence or structure that can be exploited for detection with the aid of a proper selection of image features and multivariate regression analysis. We use sophisticated image quality metrics as the feature set to distinguish between watermarked and unwatermarked images. To identify the specific quality measures that provide the best discriminative power, we use analysis of variance (ANOVA) techniques. Multivariate regression analysis is then applied to the selected quality metrics to build the optimal classifier, using images and their blurred versions. The idea behind blurring is that the distance between an unwatermarked image and its blurred version is smaller than the distance between a watermarked image and its blurred version. Simulation results with a specific feature set and a well-known, commercially available watermarking technique indicate that our approach is able to accurately distinguish between watermarked and unwatermarked images.
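The blur-distance hypothesis can be sketched as follows, with mean-squared error standing in for the paper's image quality metrics and a toy additive-noise watermark standing in for a real embedding; both substitutions are assumptions made purely for illustration.

```python
# Minimal sketch of the blur-distance idea: a watermarked image is expected
# to be farther from its blurred version than an unwatermarked one, because
# blurring removes the high-frequency watermark energy. MSE stands in for
# the paper's image quality metrics.
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_distance(img, sigma=1.0):
    return float(np.mean((img - gaussian_filter(img, sigma)) ** 2))

rng = np.random.default_rng(3)
original = gaussian_filter(rng.uniform(0, 255, (128, 128)), 3)   # smooth "image"
watermarked = original + rng.normal(scale=2.0, size=original.shape)

print("unwatermarked distance:", blur_distance(original))
print("watermarked distance:  ", blur_distance(watermarked))  # larger
```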
We present a technique that provides progressive transmission and near-lossless compression in a single framework. The proposed technique produces a bitstream that results in a progressive reconstruction of the image, just as one would obtain with a reversible wavelet codec. In addition, the proposed scheme provides near-lossless reconstruction with respect to a given bound after each layer of the successively refinable bitstream is decoded. We formulate the image data compression problem as one of asking the optimal questions to determine either the value of a pixel (for lossless compression) or the interval containing it (for near-lossless compression). New prediction methods based on the nature of the data at a given pass are presented, and links to existing methods are explored. The trade-off between noncausal prediction and data precision is discussed within the context of successive refinement. Context selection for prediction in the different passes is addressed. Finally, experimental results for both the lossless and near-lossless cases are presented and shown to be competitive with state-of-the-art compression schemes.
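As a sketch of the near-lossless component, the following toy codec quantizes prediction residuals into bins of width 2*delta + 1, which guarantees that every reconstructed sample lies within plus or minus delta of the original; setting delta = 0 reduces to lossless coding. The previous-sample predictor is an illustrative stand-in for the paper's context-based, pass-dependent predictors, and entropy coding of the quantized residuals is omitted.

```python
# Minimal sketch of near-lossless predictive coding with a guaranteed error
# bound delta: residuals are quantized to bins of width 2*delta + 1, so every
# reconstructed sample is within +/-delta of the original. A previous-sample
# predictor stands in for context-based prediction; entropy coding is omitted.
import numpy as np

def near_lossless_codec(row, delta):
    recon = np.empty_like(row)
    prev = 0  # predictor state (closed loop: uses reconstructed values)
    for i, x in enumerate(row):
        e = int(x) - prev                                 # prediction residual
        q = int(np.sign(e)) * ((abs(e) + delta) // (2 * delta + 1))
        recon[i] = prev + q * (2 * delta + 1)             # decoder's value
        prev = int(recon[i])
    return recon

row = np.array([100, 102, 101, 140, 139, 138, 90, 91], dtype=np.int64)
for delta in (0, 2):  # delta = 0 gives exact (lossless) reconstruction
    r = near_lossless_codec(row, delta)
    print(f"delta={delta}: max error = {np.max(np.abs(row - r))}")
```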
This paper presents a new distortion measure for multi-band image vector quantization. The distortion measure penalizes deviations in the ratios of the spectral components. We design a VQ coder for the proposed ratio distortion measure and give experimental results demonstrating that the new coder yields better component-ratio preservation than conventional techniques. For sample images, the proposed scheme outperforms SPIHT, JPEG, and conventional VQ in color ratio preservation.
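A minimal sketch of nearest-codeword search under a ratio-penalizing distortion is given below. The specific functional form (squared differences of pairwise log-ratios between bands, plus a small magnitude term) is an illustrative assumption, not the measure defined in the paper.

```python
# Minimal sketch of VQ encoding under a distortion that penalizes deviation
# in component (band) ratios. The exact form below, squared differences of
# pairwise log-ratios plus a small magnitude term, is an illustrative
# stand-in for the paper's measure.
import numpy as np
from itertools import combinations

def ratio_distortion(x, c, eps=1e-6, w=1e-3):
    d = 0.0
    for i, j in combinations(range(len(x)), 2):
        d += (np.log((x[i] + eps) / (x[j] + eps))
              - np.log((c[i] + eps) / (c[j] + eps))) ** 2
    return d + w * float(np.sum((x - c) ** 2))  # small magnitude penalty

def encode(vector, codebook):
    """Return the index of the codeword minimizing the ratio distortion."""
    return int(np.argmin([ratio_distortion(vector, c) for c in codebook]))

rng = np.random.default_rng(4)
codebook = rng.uniform(1, 255, size=(16, 3))   # 16 codewords, 3 bands (RGB)
pixel = np.array([120.0, 60.0, 30.0])          # band ratios 4:2:1
print("chosen codeword:", codebook[encode(pixel, codebook)])
```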