Digital steganography is the art of hiding information in multimedia
content, such that it remains perceptually and statistically unchanged. The detection of such covert communication is referred to as steganalysis. To date, steganalysis research has focused primarily on either the extraction of features from a document that are sensitive to the embedding, or the inference of some statistical difference between marked and unmarked objects. In this work, we evaluate the statistical limits of such techniques by developing asymptotically optimal maximum likelihood tests for a number of side-informed embedding schemes. The required probability density functions (pdfs) are derived for Dither Modulation (DM) and Distortion-Compensated Dither Modulation (DC-DM/SCS) from a steganalyst's point of view. For both embedding techniques, the pdfs are derived in the presence and absence of a secret dither key. The resulting tests are then compared to a robust blind steganalytic test based on feature extraction. The performance of the tests is evaluated using an integral measure and receiver operating characteristic (ROC) curves.
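As a minimal sketch of the DM embedding analysed above (not the derived pdfs or optimal tests), the following shows quantization-based embedding with a per-sample secret dither key; the step size and host-signal statistics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 4.0  # quantization step (illustrative)

def dm_embed(x, bits, dither):
    # Dither Modulation: quantize each host sample to the coset
    # selected by its message bit, offset by the secret dither.
    d = dither + bits * delta / 2.0
    return np.round((x - d) / delta) * delta + d

def dm_extract(y, dither):
    # Decode by choosing whichever of the two shifted quantizers
    # lies nearer to the received sample.
    r0 = np.abs(y - (np.round((y - dither) / delta) * delta + dither))
    d1 = dither + delta / 2.0
    r1 = np.abs(y - (np.round((y - d1) / delta) * delta + d1))
    return (r1 < r0).astype(int)

x = rng.normal(0, 10, 1000)           # host signal
bits = rng.integers(0, 2, 1000)       # message bits
dither = rng.uniform(0, delta, 1000)  # secret dither key
y = dm_embed(x, bits, dither)
assert np.array_equal(dm_extract(y, dither), bits)  # noiseless recovery
```

Without knowledge of `dither`, the marked samples no longer sit on a fixed lattice, which is why the derived pdfs differ between the keyed and unkeyed cases.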
The vulnerability of quantization-based data hiding schemes to amplitude scaling has required the formulation of countermeasures to this relatively simple attack. Parameter estimation is one approach, in which the applied scaling is estimated from the received signal at the decoder. As scaling of the watermarked signal creates a mismatch with respect to the quantization step assumed by the decoder, this estimate can be used to correct the mismatch prior to decoding. In this work we first review previous approaches that use parameter estimation to combat the scaling attack on DC-DM. We then present a method for maximum likelihood estimation of the scaling factor for this quantization-based scheme. Using iteratively decodable codes in conjunction with DC-DM, the estimation method exploits the reliabilities provided by the near-optimal decoding process to iteratively refine the estimate of the applied scaling. Estimation is performed in cooperation with the decoding process, whose complexity is managed using the expectation-maximization algorithm; with sufficiently low-rate codes, reliable estimation is possible at very low watermark-to-noise power ratios.
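As a toy illustration of how amplitude scaling misaligns the decoder's lattice, the sketch below embeds with DC-DM and recovers the scale factor by a simple grid search over a lattice-residual cost. The step size `delta`, compensation factor `alpha`, the scale value 1.3, and the search grid are all illustrative assumptions, and the grid search is only a crude stand-in for the ML/EM estimator described above.

```python
import numpy as np

rng = np.random.default_rng(1)
delta, alpha = 4.0, 0.7  # step and distortion-compensation factor (illustrative)

def dcdm_embed(x, bits):
    # DC-DM: move the host only a fraction alpha of the way
    # towards the bit-dependent quantization lattice.
    d = bits * delta / 2.0
    q = np.round((x - d) / delta) * delta + d
    return x + alpha * (q - x)

def lattice_residual(y, scale):
    # Mean squared distance of the rescaled signal to the union
    # of both bit lattices (spacing delta / 2): small only when
    # the assumed scale matches the one actually applied.
    z = y / scale
    q = np.round(z / (delta / 2)) * (delta / 2)
    return np.mean((z - q) ** 2)

x = rng.normal(0, 20, 5000)
bits = rng.integers(0, 2, 5000)
y = 1.3 * dcdm_embed(x, bits)  # channel applies an unknown scaling of 1.3

# Crude grid search standing in for the iterative ML/EM estimator.
grid = np.linspace(0.5, 2.0, 301)
g_hat = grid[np.argmin([lattice_residual(y, g) for g in grid])]
```

Dividing the received signal by `g_hat` before decoding corrects the quantization-step mismatch, which is the role the estimate plays in the scheme above.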
The application of error correction coding to side-informed watermarking utilizing polynomial detectors is investigated.
The overall system is viewed as a code concatenation in which the outer code is a powerful channel
code and the inner code is a low rate repetition code. For the inner code we adopt our previously proposed
side-informed embedding scheme in which the watermark direction is set to the gradient of the detection function
in order to reduce the effect of host-signal interference. Turbo codes are employed as the outer code due
to their near-capacity performance. The overall rate of the concatenation is kept constant while the parameters
of the constituent codes are varied. For the inner code, both the degree of non-linearity of the detector and
the repetition rate are varied. For a given embedding and attack strength, we empirically determine the best rate
combinations for the constituent codes. The performance of the scheme is evaluated in terms of bit error rate when
subjected to various attacks such as additive/multiplicative noise and scaling by a constant factor. We compare
the performance of the proposed scheme to the Spread Transform Scalar Costa Scheme using the same rates
when subjected to the same attacks.
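To make the inner-code half of the concatenation concrete, the following sketch uses a rate-1/5 repetition inner code with plain dither modulation and soft combining at the decoder, measuring bit error rate under an additive-noise attack. The linear DM embedder is a simplified stand-in for the gradient-directed polynomial-detector scheme, no outer turbo code is included, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
delta, rep = 4.0, 5  # quantization step and inner repetition rate (illustrative)

def embed(x, bits):
    # Inner code: each message bit is embedded into `rep` host
    # samples with binary dither modulation.
    b = np.repeat(bits, rep)
    d = b * delta / 2.0
    return np.round((x - d) / delta) * delta + d

def decode(y):
    # Soft inner decoding: per-sample squared-distance difference
    # between the two cosets, summed over each repetition block.
    r0 = y - np.round(y / delta) * delta
    d1 = delta / 2.0
    r1 = y - (np.round((y - d1) / delta) * delta + d1)
    metric = (r0 ** 2 - r1 ** 2).reshape(-1, rep).sum(axis=1)
    return (metric > 0).astype(int)

bits = rng.integers(0, 2, 2000)
x = rng.normal(0, 10, 2000 * rep)
y = embed(x, bits) + rng.normal(0, 0.8, 2000 * rep)  # additive-noise attack
ber = np.mean(decode(y) != bits)
```

In the full scheme the soft metrics would be passed to the outer turbo decoder rather than hard-thresholded, and trading repetition rate against outer-code rate at a fixed overall rate is exactly the design space explored above.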