Since the advent of social media, users have stored considerable amounts of visual content online and shared it across various virtual communities. While social media enables efficient information circulation, disastrous consequences are possible if image content is tampered with by malicious actors. In particular, machine learning (ML) based tools such as DeepFake apps are developing rapidly; they can exploit images on social media platforms to mimic a potential victim without their knowledge or consent. Such content-manipulation attacks can spread misinformation rapidly, misleading friends and family members and potentially causing chaos in the public domain. Robust image authentication is therefore critical for detecting and filtering out manipulated images. In this paper, we introduce a system that accurately AUthenticates SOcial MEdia images (AUSOME) uploaded to online platforms by leveraging spectral analysis and ML. Images generated by DALL-E 2 are compared with genuine images from the Stanford image dataset, using the Discrete Fourier Transform (DFT) and the Discrete Cosine Transform (DCT) to perform a spectral comparison. Based on the differences in their frequency responses, an ML model is proposed to classify social media images as genuine or AI-generated. The AUSOME system's detection accuracy is evaluated in real-world scenarios. The experimental results are encouraging and verify the potential of the AUSOME scheme for social media image authentication.
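The DFT-based spectral comparison described above can be illustrated with a minimal sketch. The azimuthally averaged log power spectrum below is a feature commonly used to separate AI-generated from camera images in this line of work; it is not necessarily the exact feature the AUSOME system extracts, and the function name, bin count, and test image are illustrative assumptions.

```python
import numpy as np

def radial_power_spectrum(img, n_bins=20):
    """Azimuthally averaged log power spectrum of a grayscale image.

    Generative models often leave characteristic high-frequency
    artifacts, so averaging spectral power over rings of increasing
    radius yields a compact feature vector for a classifier.
    """
    # 2D DFT, shifted so the zero-frequency term sits at the center.
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.log1p(np.abs(f) ** 2)

    # Distance of every pixel from the spectrum's center.
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.sqrt((y - h // 2) ** 2 + (x - w // 2) ** 2)

    # Mean log power within each radial ring.
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    return np.array([
        power[(r >= edges[i]) & (r < edges[i + 1])].mean()
        for i in range(n_bins)
    ])

# Illustrative input: a random 64x64 "image" in [0, 1).
rng = np.random.default_rng(0)
img = rng.random((64, 64))
feat = radial_power_spectrum(img)
```

The resulting `feat` vector (one value per frequency ring) could then be fed to the ML classifier that labels an image as genuine or AI-generated.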
Deep neural networks (DNNs) have been studied intensively in recent years, leading to many practical applications. However, there are also concerns about their security problems and vulnerabilities. Studies of adversarial attacks have shown that relatively small perturbations can degrade DNN performance and manipulate its outcome. These findings have motivated advanced techniques for generating image-level perturbations that, once embedded in a clean image, are imperceptible to human eyes yet fool a well-trained deep learning (DL) convolutional neural network (CNN) classifier. This work introduces a new Critical-Pixel Iterative (CriPI) algorithm based on a thorough study of the characteristics of critical pixels. The proposed CriPI algorithm identifies the critical pixels and generates one-pixel attack perturbations with much higher efficiency. Compared to a benchmark one-pixel attack algorithm, CriPI reduces the attack time from seven minutes to one minute with similar success rates.
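The one-pixel attack idea can be sketched as follows. This is a toy greedy search, not the authors' CriPI algorithm, whose details are not given here: the saliency proxy, candidate count, and stand-in scoring function are all illustrative assumptions.

```python
import numpy as np

def one_pixel_attack(img, score_fn, n_candidates=10):
    """Greedy one-pixel attack sketch.

    Ranks pixels by a simple saliency proxy (deviation from the image
    mean), then tries pushing each top candidate to an extreme value,
    keeping the single-pixel change that lowers the score the most.
    """
    saliency = np.abs(img - img.mean())
    candidates = np.argsort(saliency.ravel())[::-1][:n_candidates]

    best_img, best_score = img, score_fn(img)
    for idx in candidates:
        y, x = np.unravel_index(idx, img.shape)
        for v in (0.0, 1.0):  # try both extreme pixel values
            trial = img.copy()
            trial[y, x] = v
            s = score_fn(trial)
            if s < best_score:
                best_img, best_score = trial, s
    return best_img, best_score

# Stand-in "classifier confidence": mean brightness of the image.
# A real attack would query a trained CNN's confidence instead.
rng = np.random.default_rng(1)
img = rng.random((8, 8))
adv, s = one_pixel_attack(img, lambda im: float(im.mean()))
```

The returned `adv` differs from `img` in at most one pixel. A practical attack replaces the exhaustive per-pixel trial with a smarter search, which is where an efficient critical-pixel selection such as CriPI yields its speedup over benchmark methods.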