In deep steganography, the model size is usually tied to the grid resolution of the underlying data, and a separate neural network must be trained as a message extractor. We propose image steganography based on generative implicit neural representation, which breaks through the limitation of image resolution by using a continuous function to represent image data and allows various kinds of multimedia data to serve as the cover for steganography, theoretically extending the class of carriers. By fixing a neural network as the message extractor and shifting the training of the network to the training of the image itself, the scheme reduces training cost and avoids exposing the steganographic behavior through transmission of the message extractor. Experiments show that the scheme is efficient: optimization takes only 3 s for an image with a resolution of 64×64 at a hiding capacity of 1 bpp, and the accuracy of message extraction reaches 100%.
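The core mechanism described above can be sketched as a toy, assuming a fixed randomly seeded extractor shared via its seed; this is an illustration, not the paper's code, and the fidelity term that would also keep the stego image close to the cover is omitted for brevity:

```python
import math
import random

# Toy sketch (illustrative names and sizes): the extractor is a FIXED random
# network shared via a seed, and gradient descent runs on the image pixels
# alone until the extractor reads out the secret bits. The image-fidelity
# loss that keeps the stego image close to the cover is omitted here.
random.seed(42)                       # shared seed acts as the extractor key
N_PIX, N_BITS = 16, 4                 # tiny toy "image" and message sizes

W = [[random.gauss(0, 1) for _ in range(N_PIX)] for _ in range(N_BITS)]

def sigmoid(z):
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def extract(x):
    # fixed extractor: one random linear layer + sigmoid per message bit
    return [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W]

secret = [1, 0, 1, 1]
x = [0.5] * N_PIX                     # pixels are the only trainable values

lr = 0.1
for _ in range(2000):                 # gradient descent on the image itself
    p = extract(x)
    for i in range(N_PIX):
        # gradient of binary cross-entropy with respect to pixel i
        g = sum((p[j] - secret[j]) * W[j][i] for j in range(N_BITS))
        x[i] -= lr * g

recovered = [round(p) for p in extract(x)]   # receiver re-runs the extractor
```

Because the extractor is never trained or transmitted, only its seed (and the stego image) needs to reach the receiver, which is the property the abstract highlights.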
Multi-image hiding, which embeds multiple secret images into a single cover image and recovers them with high quality, has gradually become a research hotspot in image steganography. However, because a large amount of data must be embedded in the limited space of a cover image, issues such as contour shadowing and color distortion often arise, posing significant challenges for multi-image hiding. We propose StegaINR4MIH, an implicit neural representation steganography framework that hides multiple images within a single implicit representation function. In contrast to traditional methods that use multiple encoders for multi-image embedding, our approach leverages the redundancy of the implicit representation function's parameters, applying magnitude-based weight selection and secret weight substitution to a pre-trained cover image function to effectively hide and independently extract multiple secret images. We conduct experiments on images from three different datasets: CelebA-HQ, COCO, and DIV2K. When hiding two secret images, the PSNR values of both the secret images and the stego images exceed 42 dB; when hiding five secret images, they exceed 39 dB. Extensive experiments demonstrate the superior performance of the proposed method in terms of visual quality and undetectability.
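A minimal sketch of the magnitude-based weight selection and secret weight substitution idea, with toy values rather than the actual StegaINR4MIH implementation: low-magnitude parameters of the pre-trained cover function are treated as redundant, and their index set, shared as a key, lets the receiver read the substituted secret weights back out:

```python
# Toy sketch (illustrative values, not the paper's implementation): the
# smallest-magnitude weights of a pre-trained cover function are assumed
# redundant and repurposed to carry the parameters of a secret function.
cover_w = [0.91, -0.02, 0.47, 0.005, -0.63, 0.01, 0.88, -0.004]
k = 3                                     # number of weights to repurpose

# Sender: magnitude-based weight selection; the index set acts as the key.
idx = sorted(sorted(range(len(cover_w)), key=lambda i: abs(cover_w[i]))[:k])

secret_w = [0.2, -0.5, 0.7]               # weights of a hidden secret function
stego_w = list(cover_w)
for i, s in zip(idx, secret_w):           # secret weight substitution
    stego_w[i] = s

# Receiver: with the shared index key, reads the secret weights back out;
# the untouched weights still (approximately) realize the cover function.
recovered = [stego_w[i] for i in idx]
```

In the real framework the substituted parameters form whole sub-networks representing secret images, and a separate key per image allows independent extraction; the lookup above only shows the selection/substitution skeleton.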
KEYWORDS: Education and training, Steganography, Data modeling, 3D modeling, Neural networks, Cameras, Receivers, Performance modeling, 3D image processing, Overfitting
The implicit neural representation of visual data (such as images, videos, and 3D models) has become a hotspot in computer vision research. This work proposes a cover-selection steganography scheme for neural radiance fields (NeRFs). The message sender first trains a NeRF model and selects an arbitrary viewpoint in 3D space as the viewpoint key Kv to generate a unique secret viewpoint image. A message extractor is then trained by overfitting to establish a one-to-one mapping between the secret viewpoint image and the secret message. To address the problem of securely transmitting the message extractor in traditional steganography, the extractor is concealed within a hybrid model that performs a standard classification task. The receiver holds a shared extractor key Ke, which is used to recover the message extractor from the hybrid model. The secret viewpoint image is then rendered by the NeRF using the viewpoint key Kv, and the secret message is extracted by feeding this image into the message extractor. Experimental results demonstrate that the trained message extractor achieves high-speed, high-capacity steganography with 100% message extraction accuracy. Additionally, the vast viewpoint key space of NeRF ensures the concealment of the scheme.
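The key-based pipeline can be caricatured at the protocol level; in this hedged toy a hash stands in for NeRF rendering and a lookup table stands in for the overfitted extractor, so all names and values are assumptions:

```python
import hashlib

# Protocol-level toy (not NeRF code): a hash stands in for rendering the
# scene at a given viewpoint, and a lookup table stands in for the extractor
# overfitted to map exactly the secret view to the secret bits.
def render(kv):
    return hashlib.sha256(f"view:{kv}".encode()).digest()

secret = [0, 1, 1, 0, 1, 0, 0, 1]
Kv = (12.5, 30.0, 4.0)            # illustrative viewpoint key (pose params)

# "Overfitting" yields a one-to-one mapping from the secret view to the bits.
extractor = {render(Kv): secret}

# Receiver: regenerates the secret view with Kv and extracts the message.
extracted = extractor.get(render(Kv))
wrong_view = extractor.get(render((0.0, 0.0, 1.0)))   # wrong key: no message
```

The point of the sketch is the security argument: without Kv, an attacker faces the continuous viewpoint space of the NeRF, and any non-secret view yields nothing.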
KEYWORDS: Video, Digital watermarking, Data hiding, Steganography, Education and training, Video acceleration, Receivers, Convolution, Visualization, Video processing
Generative steganography is a research hotspot in information hiding: secret information is concealed by generating sufficiently "real" stego media. In recent years, generative steganography has made significant progress for images, but video steganography is still at an exploratory stage. Combining deep convolutional generative adversarial nets (DCGAN) with a digital Cardan grille, we propose a semi-generative video steganography scheme. A dual-stream video generation network based on DCGAN is designed to generate three components of a video: foreground, background, and mask; from random noise, the network produces different videos. The digital Cardan grille serves as the key for embedding and extraction. The sender can generate a digital Cardan grille in the mask through two different methods, reasonably allocate the embedding capacity among the RGB channels, and use video pixels as the carrier to embed information in a semi-generative way. The receiver determines the embedding positions from the Cardan grille and extracts the secret information hidden in the pixels. Experimental results show that the stego video generated by this scheme has good visual quality, with a Fréchet inception distance score of 92. The embedding capacity is higher than that of existing generative steganography schemes, reaching up to 0.12 bpp. Using syndrome-trellis coding, the proposed scheme can transmit secret messages more efficiently and securely.
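The grille-keyed embedding and extraction can be sketched as follows, on a toy single-channel frame; the real scheme works on DCGAN-generated RGB frames and allocates capacity per channel, so the values here are purely illustrative:

```python
# Toy sketch (illustrative values): a binary mask shared as the key marks
# which pixels carry message bits; embedding writes the bits into the least
# significant bit of exactly those pixels.
frame = [[120,  33,  87, 200],
         [ 15, 240,  99,  64],
         [ 77, 180,   5, 150]]          # one toy 3x4 channel of a frame
grille = [[0, 1, 0, 0],
          [1, 0, 0, 1],
          [0, 0, 1, 0]]                 # the "digital Cardan grille" key
bits = [1, 0, 1, 1]

def embed(frame, grille, bits):
    out, it = [row[:] for row in frame], iter(bits)
    for r, row in enumerate(grille):
        for c, g in enumerate(row):
            if g:                       # grille hole: overwrite the LSB
                out[r][c] = (out[r][c] & ~1) | next(it)
    return out

def extract(frame, grille):
    return [frame[r][c] & 1
            for r, row in enumerate(grille)
            for c, g in enumerate(row) if g]

stego = embed(frame, grille, bits)
```

Without the grille, the stego pixels are indistinguishable from ordinary generated content; with it, the receiver reads the bits back position by position.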
KEYWORDS: Video, Convolution, Digital watermarking, Video compression, Visualization, Steganography, Network architectures, Data modeling, Data hiding, Video processing
In robust video steganography, a message is embedded into a video so that it survives video distortions while producing a stego video imperceptibly different from the cover video. Traditional techniques achieve robustness against particular distortions but are complicated in computation and design and depend on specific compression standards. Deep-learning-based methods, by contrast, can achieve impressive visual quality and robustness to attacks. We propose a framework with a channel-space attention mechanism for robust video steganography. The framework is composed of depthwise separable convolution layers that learn channel-space segments for embedding and extraction. The secret messages are distributed across channel-space scales to increase imperceptibility and robustness to distortions. This end-to-end solution is trained with the three-player game approach, in which three networks compete: two handle the embedding and extraction operations, while the third simulates a steganalyst's attacks and detection as an adversarial network. Comparisons with recent work show that our method is more robust against compression and video distortion attacks. Peak signal-to-noise ratio and the structural similarity index were used to evaluate visual quality and demonstrate the imperceptibility of our method.
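As a note on the building block named above: a depthwise separable convolution factors a standard convolution into per-channel spatial filtering plus 1×1 channel mixing, which is what makes stacking many such embedding/extraction layers cheap. A quick parameter count with illustrative sizes (biases ignored, not tied to the paper's architecture):

```python
# Hedged sketch: parameter counts for a standard convolution versus a
# depthwise separable one (depthwise k x k per channel + pointwise 1x1 mix).
def standard_conv_params(c_in, c_out, k):
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    return c_in * k * k + c_in * c_out   # depthwise part + pointwise part

std = standard_conv_params(64, 64, 3)        # 64*64*9  = 36864 parameters
sep = depthwise_separable_params(64, 64, 3)  # 576+4096 = 4672 parameters
```

For these sizes the separable form uses roughly an eighth of the parameters, which is why such layers are a common choice when a network must process every frame of a video.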
Robust steganography enables secret information to be transmitted stealthily and accurately over lossy channels such as social networks and wireless channels. With the development of deep learning, robust steganography can be built on the generative models of deep neural networks. Two new robust steganographic frameworks based on generative models are proposed, together with two algorithms built on these frameworks to verify their effectiveness. Experiments show that the two proposed frameworks are more flexible than existing robust steganographic frameworks. Furthermore, compared with existing deep-learning-based robust steganography, the proposed generative robust steganography algorithm achieves a higher secret-information embedding capacity and higher stego-image quality.