Steganographic embedding is generally guided by two performance constraints at the encoder. Firstly, as is typical in watermarking, all transmission codewords must conform to an average power constraint. Secondly, for the embedding to be statistically undetectable (secure), the density of the watermarked signal must equal the density of the host signal. If this is not the case, statistical steganalysis can achieve a probability of detection error less than 1/2, and the communication may be terminated. Recent work has shown that some common watermarking algorithms can be modified so that both constraints are met. In particular, spread spectrum (SS) communication can be secured by a specific scaling of the host before embedding, while a side-informed scheme called stochastic quantization index modulation (SQIM) maintains security through an additive stochastic element during embedding. In this work, the performance of both techniques is analysed under the AWGN channel assumption. It will be seen that the robustness of both schemes is lessened by the steganographic constraints when compared to the standard algorithms on which they are based. Specifically, the probability of decoding error of the SS technique increases when security is required, and the achievable rate of SQIM is shown to be lower than that of dither modulation (on which the scheme is based) for a finite alphabet size.
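As a rough illustration of the secure spread-spectrum idea above, the following sketch embeds one bit in a unit-variance Gaussian host. The host scaling nu = sqrt(1 - alpha^2) (which preserves the host variance and hence, for a Gaussian host and carrier, its density), the key-derived carrier, and all parameter values are illustrative assumptions, not the exact construction analysed here.

```python
import numpy as np

def carrier_from_key(key, n):
    """Unit-power pseudo-random carrier derived from the secret key."""
    r = np.random.default_rng(key).standard_normal(n)
    return r * np.sqrt(n) / np.linalg.norm(r)

def ss_embed(host, bit, key, alpha=0.5):
    """Spread-spectrum embedding with host scaling nu = sqrt(1 - alpha^2):
    for a unit-variance Gaussian host, the marked signal keeps the host's
    variance, and hence its density (the security constraint)."""
    nu = np.sqrt(1.0 - alpha ** 2)
    c = carrier_from_key(key, host.size)
    return nu * host + alpha * (2 * bit - 1) * c

def ss_decode(received, key):
    """Correlation decoder: sign of the projection onto the keyed carrier."""
    c = carrier_from_key(key, received.size)
    return int(received @ c > 0)

rng = np.random.default_rng(0)
host = rng.standard_normal(1024)                   # unit-variance Gaussian host
stego = ss_embed(host, bit=1, key=42, alpha=0.5)
noisy = stego + 0.3 * rng.standard_normal(1024)    # AWGN channel
print(ss_decode(noisy, key=42))                    # -> 1
```

The standard (non-secure) SS scheme would leave the host unscaled (nu = 1); the scaling is precisely what trades robustness for security, as the abstract notes.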
Digital steganography is the art of hiding information in multimedia
content, such that it remains perceptually and statistically unchanged. The detection of such covert communication is referred to as steganalysis. To date, steganalysis research has focused primarily on either the extraction of features from a document that are sensitive to the embedding, or the inference of some statistical difference between marked and unmarked objects. In this work, we evaluate the statistical limits of such techniques by developing asymptotically optimal maximum likelihood tests for a number of side-informed embedding schemes. The required probability density functions (pdfs) are derived for Dither Modulation (DM) and Distortion-Compensated Dither Modulation (DC-DM/SCS) from a steganalyst's point of view. For both embedding techniques, the pdfs are derived in both the presence and the absence of a secret dither key. The resulting tests are then compared to a robust blind steganalytic test based on feature extraction. The performance of the tests is evaluated using an integral measure and receiver operating characteristic (ROC) curves.
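For concreteness, the dither modulation embedding to which the derived pdfs refer can be sketched as follows. This is a minimal one-dimensional DM sketch (two cosets of a step-delta uniform quantizer, shifted by the secret dither), with illustrative parameter values, not the exact formulation used in the analysis; DC-DM would additionally blend the quantized value back with the host, s = x + alpha*(q(x) - x).

```python
import numpy as np

def dm_embed(x, bits, delta, dither):
    """Dither modulation: quantize each host sample onto one of two
    cosets of a step-delta uniform quantizer, selected by the bit."""
    d = dither + bits * delta / 2.0     # bit 1 shifts the lattice by delta/2
    return delta * np.round((x - d) / delta) + d

def dm_decode(y, delta, dither):
    """Minimum-distance decoder: pick the coset whose nearest
    reconstruction point is closer to the received sample."""
    zeros = np.zeros(y.size, dtype=int)
    ones = np.ones(y.size, dtype=int)
    d0 = np.abs(y - dm_embed(y, zeros, delta, dither))
    d1 = np.abs(y - dm_embed(y, ones, delta, dither))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(1)
x = rng.standard_normal(8)              # host samples
bits = rng.integers(0, 2, 8)            # message bits
dither = rng.uniform(-0.5, 0.5)         # secret dither key
y = dm_embed(x, bits, delta=1.0, dither=dither)
print(dm_decode(y, delta=1.0, dither=dither))   # recovers bits noise-free
```

The steganalyst's task considered in this work amounts to deciding, from samples like y alone, whether such a quantization structure is present, with or without knowledge of the dither.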
Steganalysis is the art of detecting and/or decoding secret messages embedded in multimedia content. The topic
has received considerable attention in recent years due to the malicious use of multimedia documents for covert
communication. Steganalysis algorithms can be classified as either blind or non-blind depending on whether or
not the method assumes knowledge of the embedding algorithm. In general, blind methods involve the extraction
of a feature vector that is sensitive to embedding and is subsequently used to train a classifier. This classifier can
then be used to determine the presence of a stego-object, subject to an acceptable probability of false alarm. In
this work, the performance of three classifiers, namely Fisher linear discriminant (FLD), neural network (NN)
and support vector machines (SVM), is compared using a recently proposed feature extraction technique. It
is shown that the NN and SVM classifiers exhibit similar performance, exceeding that of the FLD. However,
steganographers may be able to circumvent such steganalysis algorithms by ensuring that the embedding leaves
the feature vector statistically unchanged. This motivates the use of classification algorithms that operate on
the entire document. Such a strategy is applied using SVM classification for DCT, FFT and DWT representations
of an image, and its performance is compared to that of a feature extraction technique.
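Of the three classifiers compared, the Fisher linear discriminant has a simple closed form. The following is a minimal sketch on synthetic feature vectors; the toy features and the mean shift between classes are illustrative assumptions, not the proposed feature extraction technique.

```python
import numpy as np

def fld_train(X0, X1):
    """Fisher linear discriminant: the projection w = Sw^{-1}(mu1 - mu0)
    maximizes between-class over within-class scatter."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, mu1 - mu0)
    thresh = 0.5 * (mu0 + mu1) @ w      # threshold at the projected midpoint
    return w, thresh

def fld_classify(X, w, thresh):
    return (X @ w > thresh).astype(int)  # 1 = stego, 0 = cover

rng = np.random.default_rng(2)
cover = rng.standard_normal((200, 5))        # toy cover-image features
stego = rng.standard_normal((200, 5)) + 0.8  # toy stego features, shifted mean
w, t = fld_train(cover, stego)
pred = fld_classify(np.vstack([cover, stego]), w, t)
acc = (pred == np.r_[np.zeros(200), np.ones(200)]).mean()
print(f"training accuracy: {acc:.2f}")
```

The NN and SVM classifiers compared above replace this closed-form linear projection with learned, potentially nonlinear, decision boundaries, which is why they can outperform the FLD.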