Utilizing full neuronal states for adversarial robustness (12 November 2019)
Defending deep neural networks (DNNs) against adversarial examples is of great interest to sensing technologies and the machine learning community, since small, imperceptible perturbations of the input data can lead DNNs to make egregious errors during inference, such as misclassifying an image of a dog as a cat with high confidence. Ensuring robustness to such perturbations is essential for the security of practical systems in which DNNs are deployed. Whereas many defenses against adversarial attacks have been explored, few make use of the full state of the entire network, instead considering only the output layer and gradient information. We develop several motivated techniques that exploit the full network state, improving adversarial robustness. We provide principled motivation for our techniques via an analysis of attractor dynamics, known to occur in the highly recurrent human brain, and validate our improvements empirically on standard datasets under white-box attacks.
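To make the threat model concrete: a white-box attacker with gradient access can craft a small, bounded perturbation that increases the classifier's loss. The sketch below is a minimal illustration in the spirit of the fast gradient sign method on a toy linear softmax classifier, not the authors' defense or their experimental setup; all names and values are hypothetical.

```python
import numpy as np

# Toy white-box setup (hypothetical): a 2-class linear classifier
# whose weights the attacker can read, in the spirit of FGSM.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 10))   # classifier weights (known to attacker)
x = rng.normal(size=10)        # clean input
y = 0                          # true label

def logits(v):
    return W @ v

def loss_grad(v, label):
    # Gradient of softmax cross-entropy w.r.t. the input.
    z = logits(v)
    p = np.exp(z - z.max())
    p /= p.sum()
    p[label] -= 1.0
    return W.T @ p

# One-step perturbation: imperceptibly small (bounded by eps per
# coordinate) yet chosen to maximally increase the first-order loss.
eps = 0.25
x_adv = x + eps * np.sign(loss_grad(x, y))

print("clean prediction:", int(np.argmax(logits(x))))
print("adversarial prediction:", int(np.argmax(logits(x_adv))))
print("max per-coordinate perturbation:", float(np.abs(x_adv - x).max()))
```

Because the loss of a linear-softmax model is convex in the input, this single signed-gradient step can never decrease the loss; on deep networks the same step is only a first-order heuristic, which is why iterated and stronger white-box attacks are the standard evaluation.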
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Alex Gain and Hava T. Siegelmann, "Utilizing full neuronal states for adversarial robustness", Proc. SPIE 11197, SPIE Future Sensing Technologies, 1119712 (12 November 2019).
