In this paper, a novel black-box adversarial computer vision attack is proposed. The introduced attack is based on removing from images certain components described by their Tchebichef discrete orthogonal moments, rather than perturbing them. The contribution of this work is the addition of one more clue supporting the critical hypothesis that computer vision systems fail because they base their decisions not only on robust features but also on non-robust ones. In this context, non-robust image features described in terms of Tchebichef moments are excluded from the original images, and the approximate reconstructed versions are used as adversarial examples to attack several popular deep learning models. The experiments justify the effectiveness of the proposed adversarial attack in terms of imperceptibility and the recognition error rate of the deep learning classifiers. Notably, the top-1 accuracy of the attacked models was degraded by 9.48%–70.89% for adversarial images with PSNR values between 65 dB and 57 dB. The corresponding degradation of the models' top-5 accuracy was between 6.9% and 55.14% for images of the same quality. Moreover, in most cases the proposed attack appears stronger than the traditionally applied Fast Gradient Sign Method (FGSM). These results reveal that the proposed attack is able to exploit the vulnerability of deep learning models, degrading their generalization abilities.
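To make the moment-based idea concrete, the following is a minimal illustrative sketch (not the authors' exact implementation) of a low-order moment reconstruction: the image is projected onto a low-order discrete orthogonal polynomial basis and reconstructed from those moments only, discarding the higher-order components that the paper treats as non-robust. The grid size, the order cutoff `max_order`, and the QR construction of the basis (which yields, up to sign, the normalized discrete Tchebichef polynomials) are assumptions made for illustration.

```python
# Illustrative sketch of a truncated moment reconstruction, assuming a
# grayscale image in [0, 1]; not the paper's exact algorithm or code.
import numpy as np
from numpy.polynomial.chebyshev import chebvander


def discrete_orthonormal_basis(n_points: int, max_order: int) -> np.ndarray:
    """Orthonormal polynomial basis of orders 0..max_order on {0,...,n_points-1}.

    QR factorization of a Chebyshev Vandermonde matrix gives columns that
    span the same low-order polynomial subspaces as the normalized discrete
    Tchebichef polynomials (identical up to sign).
    """
    x = np.linspace(-1.0, 1.0, n_points)   # rescaled grid for numerical conditioning
    v = chebvander(x, max_order)            # columns: T_0(x), ..., T_max_order(x)
    q, _ = np.linalg.qr(v)                   # orthonormal columns, shape (n_points, max_order + 1)
    return q


def moment_reconstruction(image: np.ndarray, max_order: int) -> np.ndarray:
    """Reconstruct a grayscale image from its 2-D moments up to `max_order`.

    The 2-D moments are separable: M = B_y^T I B_x, and the truncated
    reconstruction (the candidate adversarial image) is I_hat = B_y M B_x^T.
    """
    h, w = image.shape
    by = discrete_orthonormal_basis(h, max_order)
    bx = discrete_orthonormal_basis(w, max_order)
    moments = by.T @ image @ bx              # low-order moment matrix
    return by @ moments @ bx.T               # approximate reconstruction


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((224, 224))              # stand-in for a real input image
    adv = moment_reconstruction(img, max_order=60)
    psnr = 10 * np.log10(1.0 / np.mean((img - adv) ** 2))
    print(f"PSNR of reconstructed candidate: {psnr:.2f} dB")
```

In such a scheme, the order cutoff plays the role of the attack-strength parameter: keeping more moments yields higher-PSNR (more imperceptible) reconstructions, while keeping fewer removes more of the fine-grained content on which a classifier may rely.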
The recent evolution of machine vision systems has raised a number of security concerns. Adversarial computer vision is the field that deals with these matters, producing either adversarial attack proposals or defensive strategies and techniques against them. This article reviews the computer vision security threats and defensive techniques that have been proposed by researchers to date, and it intends to serve as a guide for any researcher interested in working in the field of adversarial computer vision. Initially, a short history of the subject and the main interests of researchers in this field are presented. Then, the most important proposed attacks based on adversarial examples, and their integrity, are analyzed, and an updated taxonomy of adversarial computer vision attacks is proposed. Finally, the defensive strategies and techniques that have been proposed are also discussed.