Recently, adversarial examples have become one of the most serious risks in deep learning, affecting real-world applications such as robotics, cyber-security, and computer vision. In image classification, adversarial attacks have been shown to fool classifiers with small, imperceptible perturbations added to the input. In this paper, we present an efficient defense mechanism, which we call DVAE-SR, that combines a variational autoencoder with super-resolution to remove adversarial perturbations from an input image before it is fed to the CNN classifier. DVAE-SR can successfully defend against both white-box and black-box attacks without retraining the CNN classifier, and it recovers higher accuracy than Defense-GAN and Defense-VAE.
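To make the purification pipeline concrete, the following is a minimal sketch of how such a VAE-plus-super-resolution preprocessing stage could sit in front of a frozen classifier. The module names (`DenoisingVAE`-style `vae`, `sr_net`) and their interfaces are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class DVAESRDefense(nn.Module):
    """Purify an input image before it reaches a frozen CNN classifier."""

    def __init__(self, vae: nn.Module, sr_net: nn.Module, classifier: nn.Module):
        super().__init__()
        self.vae = vae                # denoising variational autoencoder
        self.sr_net = sr_net          # super-resolution network
        self.classifier = classifier  # pretrained CNN; no retraining needed
        self.classifier.eval()        # the classifier stays frozen

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 1. Reconstruct the image through the VAE; the low-dimensional
        #    bottleneck tends to discard the adversarial perturbation.
        x_recon, _, _ = self.vae(x)   # assumed to return (recon, mu, logvar)
        # 2. Restore fine detail lost in reconstruction via super-resolution.
        x_clean = self.sr_net(x_recon)
        # 3. Classify the purified image with the unmodified CNN.
        return self.classifier(x_clean)
```

Because the defense acts purely as a preprocessing stage, any pretrained classifier can be wrapped this way without gradient access or retraining, which is what allows the same mechanism to apply under both white-box and black-box threat models.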