Utilizing full neuronal states for adversarial robustness
12 November 2019
Small, imperceptible perturbations of the input data can lead deep neural networks (DNNs) to make egregious errors during inference, such as misclassifying an image of a dog as a cat with high probability. Defending against such adversarial examples is therefore of great interest to sensing technologies and to the machine learning community, to ensure the security of practical systems in which DNNs are deployed. Whereas many approaches have been explored for defending against adversarial attacks, few make use of the full state of the network, opting instead to consider only the output layer and gradient information. We develop several techniques that make use of the full network state, improving adversarial robustness. We provide principled motivation for our techniques via an analysis of attractor dynamics, which are known to occur in the highly recurrent human brain, and we validate our improvements with empirical results on standard datasets under white-box attacks.
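For readers unfamiliar with the setting, the sketch below is a minimal, illustrative example of the two ingredients the abstract refers to: a standard white-box attack (FGSM is used here purely as a common example) and a readout of the full network state, i.e., the activations of every layer rather than the output layer alone. The toy model, layer sizes, and the idea of measuring activation drift are assumptions for illustration only, not the specific techniques developed in the paper.

```python
# Illustrative sketch only: a generic white-box FGSM attack plus access to the
# "full network state" (all intermediate activations). This is NOT the paper's
# method; it merely shows what using more than the output layer could look like.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallMLP(nn.Module):
    """Toy classifier that exposes its hidden activations as the full state."""

    def __init__(self, in_dim=784, hidden=256, classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, classes)

    def forward(self, x):
        h1 = torch.relu(self.fc1(x))
        h2 = torch.relu(self.fc2(h1))
        logits = self.out(h2)
        # Return the logits together with every intermediate activation,
        # rather than only the output layer.
        return logits, [h1, h2]


def fgsm(model, x, y, eps=0.1):
    """Standard white-box FGSM: step the input along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    logits, _ = model(x_adv)
    loss = F.cross_entropy(logits, y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()


if __name__ == "__main__":
    model = SmallMLP()
    x = torch.rand(8, 784)              # dummy batch standing in for flattened images
    y = torch.randint(0, 10, (8,))
    x_adv = fgsm(model, x, y)

    # Compare the full state on clean vs. adversarial inputs. A defense that uses
    # the whole state could, for instance, monitor or penalize this drift instead
    # of relying only on the output logits.
    _, clean_state = model(x)
    _, adv_state = model(x_adv)
    drift = sum((a - c).norm() for a, c in zip(adv_state, clean_state))
    print("activation drift under FGSM:", drift.item())
```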
Alex Gain, Hava T. Siegelmann, "Utilizing full neuronal states for adversarial robustness," Proc. SPIE 11197, SPIE Future Sensing Technologies, 1119712 (12 November 2019); https://doi.org/10.1117/12.2542804