Current image re-sampling detectors can reliably detect re-sampling in JPEG images only at a Quality Factor (QF) of
95 or higher. At lower QFs, periodic JPEG blocking artifacts interfere with the periodic patterns introduced by re-sampling. We add a
controlled amount of noise to the image before the re-sampling detection step. Adding noise suppresses the JPEG
artifacts while the periodic patterns due to re-sampling are partially retained. JPEG images in the QF range 75-90 are
considered. Gaussian or uniform noise in the 24-28 dB range is added to the image, and the resulting images are
passed to the re-sampling detector. The detector outputs are averaged to obtain a final output from which re-sampling can
be detected even at lower QFs.
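The noise-then-average step can be sketched as follows. This is a minimal illustration, assuming the dB figure refers to the signal-to-noise ratio of the noisy image; the function names are ours, and `detector` stands in for any re-sampling detector that returns a numeric score or map:

```python
import numpy as np

def add_noise_at_snr(image, snr_db, kind="gaussian", rng=None):
    """Add zero-mean noise so that the signal-to-noise ratio equals snr_db.

    kind: "gaussian" or "uniform" (both zero-mean).
    """
    rng = np.random.default_rng() if rng is None else rng
    img = image.astype(np.float64)
    sig_power = np.mean(img ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10.0))
    if kind == "gaussian":
        noise = rng.normal(0.0, np.sqrt(noise_power), img.shape)
    else:
        # uniform on [-a, a] has variance a^2 / 3
        a = np.sqrt(3.0 * noise_power)
        noise = rng.uniform(-a, a, img.shape)
    return img + noise

def averaged_detector_output(image, detector, snr_db=26.0, trials=10, rng=None):
    """Run `detector` on several independently noised copies and average the
    outputs: the noise suppresses the deterministic JPEG blocking artifacts,
    while the re-sampling periodicity partially survives the averaging."""
    rng = np.random.default_rng(0) if rng is None else rng
    outs = [detector(add_noise_at_snr(image, snr_db, rng=rng)) for _ in range(trials)]
    return np.mean(outs, axis=0)
```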
We consider two re-sampling detectors: one proposed by Popescu and Farid, which works well on uncompressed
and mildly compressed JPEG images, and the other by Gallagher, which is robust on JPEG images but can detect only
scaled images. For multiple re-sampling operations (rotation, scaling, etc.), we show that the order of the operations matters.
If the final operation is up-scaling, it can still be detected even at very low QFs.
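The scaling-only detector can be illustrated with a second-difference periodicity test in the spirit of Gallagher's approach. This is our own simplified sketch, not the published algorithm: interpolation makes the magnitude of the second derivative periodic along each row, and peaks in the DFT of that signal reveal re-sampling.

```python
import numpy as np

def resampling_spectrum(image):
    """Sketch of a Gallagher-style scaling detector: take the second
    difference along rows, average its magnitude over rows to get a 1-D
    signal, and return the magnitude spectrum of that signal. Interpolated
    (scaled) images show strong periodic peaks in this spectrum."""
    d2 = np.diff(np.asarray(image, dtype=np.float64), n=2, axis=1)
    v = np.mean(np.abs(d2), axis=0)          # per-column average magnitude
    return np.abs(np.fft.rfft(v - v.mean())) # DC removed before the DFT
```

For a 2x linearly interpolated image, the second difference vanishes at every inserted sample, so the signal has period 2 and the spectrum peaks at the Nyquist bin.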
Error correction codes of suitable redundancy are used for ensuring perfect data recovery in noisy channels. For
iterative decoding based methods, the decoder needs to be initialized with proper confidence values, called the
log likelihood ratios (LLRs), for all the embedding locations. If these confidence values or LLRs are accurately
initialized, the decoder converges at a lower redundancy factor, thus leading to a higher effective hiding rate.
Here, we present an LLR allocation method based on the image statistics, the hiding parameters and the noisy
channel characteristics. It is seen that this image-dependent LLR allocation scheme results in a higher data rate
than using a constant LLR across all images. The data-hiding channel parameters are learned from the image
histogram in the discrete cosine transform (DCT) domain using a linear regression framework. We also show
how the effective data-rate can be increased by suitably increasing the erasure rate at the decoder.
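The LLR initialization itself can be sketched for the simplest channel model. This is a generic illustration under an assumed binary-symmetric channel with erasures, not the paper's image-dependent allocation; the function name and signature are ours:

```python
import numpy as np

def llr(y, p_flip):
    """Log-likelihood ratio log P(y | b = 0) / P(y | b = 1) for a binary
    channel with crossover probability p_flip; y is the received bit, or
    None for an erasure. Erasures carry no information, so their LLR is 0.
    In an image-dependent scheme, p_flip would vary per embedding location."""
    if y is None:
        return 0.0
    if y == 0:
        return np.log((1 - p_flip) / p_flip)
    return np.log(p_flip / (1 - p_flip))
```

A smaller `p_flip` (a more reliable location) yields a larger-magnitude LLR, which is what lets an accurately initialized decoder converge at a lower redundancy factor.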
In this paper we attempt to quantify the "active" steganographic capacity: the maximum rate at which data can be hidden, and correctly decoded, in a multimedia cover subject to noise/attack (hence "active"), perceptual distortion criteria, and statistical steganalysis. Though the capacity of data hiding and the rate of perfectly secure data hiding in noiseless channels have each been studied, only very recently have all the constraints been considered together. In this work, we seek to provide practical estimates of steganographic capacity in natural images, undergoing realistic attacks, using data hiding methods available today. We focus here on the capacity of an image data hiding channel characterized by the use of statistical restoration to satisfy the constraint of perfect security (under an i.i.d. assumption), as well as JPEG and JPEG-2000 attacks. Specifically, we provide experimental results of the statistically secure hiding capacity on a set of several hundred images, hiding in a pre-selected band of frequencies of the discrete cosine and wavelet transforms, where a perturbation of the quantized transform-domain terms by ±1 using the quantization index modulation scheme is considered to be perceptually transparent. Statistical security is with respect to the matching of the marginal statistics of the quantized transform-domain terms.
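The ±1 quantization index modulation step can be sketched as a parity constraint on the quantized coefficients. This is a common textbook formulation and our own minimal illustration, not the exact embedding rule used in the experiments:

```python
def qim_embed(coeffs, bits):
    """Parity-based +/-1 QIM on quantized transform coefficients: force
    each coefficient's parity to equal the message bit using at most a
    unit perturbation (the perceptually transparent +/-1 change)."""
    out = list(coeffs)
    for i, b in enumerate(bits):
        if out[i] % 2 != b:
            # move toward zero when possible, an illustrative distortion choice
            out[i] += -1 if out[i] > 0 else 1
    return out

def qim_decode(coeffs, n):
    """Recover the first n bits as coefficient parities."""
    return [c % 2 for c in coeffs[:n]]
```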
We present further extensions of <i>yet another steganographic scheme</i> (YASS), a method based on embedding data in randomized locations so as to resist blind steganalysis. YASS is a JPEG steganographic technique that hides data in the discrete cosine transform (DCT) coefficients of randomly chosen image blocks. Continuing to focus on JPEG image steganography, we present, in this paper, a further study on YASS with the goal of improving the embedding rate. The two main improvements presented in this paper are: (i) a method that randomizes the quantization matrix used on the transform-domain coefficients, and (ii) an iterative hiding method that exploits the fact that the JPEG "attack" that causes errors in the hidden bits is actually known to the encoder. We show that using both these approaches, the embedding rate can be increased while maintaining the same level of undetectability (as the original YASS scheme). Moreover, for the same embedding rate, the proposed steganographic schemes are more undetectable than the popular matrix-embedding-based F5 scheme, using features proposed by Pevny and Fridrich for blind steganalysis.
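The randomized-location idea at the heart of YASS can be sketched as follows: partition the image into big blocks larger than 8x8 and, inside each, pick a key-seeded random 8x8 sub-block to hide in, so that the hiding grid is desynchronized from the JPEG 8x8 grid that steganalyzers examine. The function name and default parameters here are illustrative:

```python
import numpy as np

def yass_block_origins(height, width, big=10, small=8, seed=1234):
    """Return the top-left corners of the randomly placed small x small
    hiding blocks, one per big x big block (big > small). The seed plays
    the role of the shared secret key: the decoder regenerates the same
    locations from the same seed."""
    rng = np.random.default_rng(seed)
    origins = []
    for by in range(0, height - big + 1, big):
        for bx in range(0, width - big + 1, big):
            dy = rng.integers(0, big - small + 1)
            dx = rng.integers(0, big - small + 1)
            origins.append((by + dy, bx + dx))
    return origins
```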
A video "fingerprint" is a feature extracted from the video that should represent the video compactly, allowing faster search without compromising the retrieval accuracy. Here, we use a keyframe set to represent a video, motivated by the video summarization approach. We experiment with different features to represent each keyframe with the goal of identifying duplicate and similar videos. Various image processing operations like blurring, gamma correction, JPEG compression, and Gaussian noise addition are applied on the individual video frames to generate duplicate videos. Random and bursty frame drop errors of 20%, 40% and 60% (over the entire video) are also applied to create more noisy "duplicate" videos. The similar videos consist of videos with similar content but with varying camera angles, cuts, and idiosyncrasies that occur during successive retakes of a video. Among the feature sets used for comparison, for duplicate video detection, Compact Fourier-Mellin Transform (CFMT) performs the best while for similar video retrieval, Scale Invariant Feature Transform (SIFT) features are found to be better than comparable-dimension features. We also address the problem of retrieval of full-length videos with shorter-length clip queries. For identical feature size, CFMT performs the best for video retrieval.
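The keyframe-based matching step can be sketched independently of the feature choice. This is a generic nearest-keyframe scheme of our own, assuming each video is represented as a set of fixed-length keyframe feature vectors (whether CFMT, SIFT aggregates, or anything else):

```python
import numpy as np

def video_distance(query_feats, ref_feats):
    """Distance between two videos given as keyframe feature matrices
    (one row per keyframe): for every query keyframe take the Euclidean
    distance to its nearest reference keyframe, then average. Because
    each query keyframe matches independently, this tolerates frame
    drops and lets a short clip query match a full-length video."""
    q = np.asarray(query_feats, dtype=float)
    r = np.asarray(ref_feats, dtype=float)
    # pairwise distances, shape (num_query_keyframes, num_ref_keyframes)
    d = np.linalg.norm(q[:, None, :] - r[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```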
We have investigated adaptive mechanisms for high-volume transform-domain data hiding in MPEG-2 video
which can be tuned to sustain varying levels of compression attacks. The data is hidden in the uncompressed domain
by scalar quantization index modulation (QIM) on a selected set of low-frequency discrete cosine transform
(DCT) coefficients. We propose an adaptive hiding scheme where the embedding rate is varied according to the
type of frame and the reference quantization parameter (decided according to MPEG-2 rate control scheme) for
that frame. For a 1.5 Mbps video and a frame-rate of 25 frames/sec, we are able to embed almost 7500 bits/sec.
Also, the adaptive scheme hides 20% more data and incurs significantly fewer frame errors (frames for which the
embedded data is not fully recovered) than the non-adaptive scheme. Our embedding scheme incurs insertions
and deletions at the decoder which may cause de-synchronization and decoding failure. This problem is solved
by the use of powerful turbo-like codes and erasures at the encoder. The channel capacity estimate gives an idea
of the minimum code redundancy factor required for reliable decoding of hidden data transmitted through the
channel. To that end, we have modeled the MPEG-2 video channel using the transition probability matrices
given by the data hiding procedure, using which we compute the (hiding scheme dependent) channel capacity.
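Given a transition probability matrix, the capacity computation referred to above can be carried out with the standard Blahut-Arimoto iteration. This is a generic sketch of that computation for any discrete memoryless channel, not the paper's specific MPEG-2 channel model:

```python
import numpy as np

def channel_capacity(P, iters=300):
    """Capacity in bits per channel use of a discrete memoryless channel
    with transition matrix P (rows = inputs, columns = outputs, each row
    summing to 1), computed by the Blahut-Arimoto iteration."""
    P = np.asarray(P, dtype=float)
    m = P.shape[0]
    p = np.full(m, 1.0 / m)                    # input distribution, uniform start

    def kl_rows(q):
        # D(P[x, :] || q) in bits for every input x; 0 * log 0 terms -> 0
        with np.errstate(divide="ignore", invalid="ignore"):
            t = P * np.log2(P / q)
        return np.nansum(t, axis=1)

    for _ in range(iters):
        d = kl_rows(p @ P)                     # q is the induced output distribution
        p = p * np.exp2(d)                     # reweight inputs by their divergence
        p /= p.sum()
    return float(np.sum(p * kl_rows(p @ P)))   # C = sum_x p(x) D(P[x] || q)
```

For a binary symmetric channel with crossover probability 0.1 this converges to 1 - H(0.1), about 0.531 bits per use; the reciprocal of such a capacity estimate bounds the code redundancy factor needed for reliable recovery.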