From Event: SPIE Defense + Commercial Sensing, 2019
Deep Convolutional Neural Networks (DCNNs) have proven to be an exceptional tool for object recognition in various computer vision applications. However, recent findings have shown that such state-of-the-art models can be easily deceived by inserting slight, imperceptible perturbations at key pixels in the input image. In this paper, we focus on deceiving Automatic Target Recognition (ATR) classifiers. These classifiers are built to recognize specified targets in a scene and simultaneously identify their class types. In our work, we explore the vulnerabilities of DCNN-based target classifiers. We demonstrate significant progress in developing infrared adversarial targets by adding small perturbations to the input image such that the perturbation cannot be easily detected. The algorithm is built to support both targeted and non-targeted adversarial attacks. Our findings reveal promising results that highlight the serious implications of adversarial attacks.
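For readers unfamiliar with how small, hard-to-detect perturbations can flip a classifier's decision, the sketch below illustrates a generic single-step, gradient-sign (FGSM-style) attack with both targeted and non-targeted modes. This is not the algorithm proposed in the paper; the function, its parameters (model, image, label, epsilon, target_label), and the use of PyTorch are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01,
                 targeted=False, target_label=None):
    """Generate a small adversarial perturbation with one gradient step.

    Generic FGSM-style sketch, not the paper's method. All names here
    are illustrative placeholders.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)

    if targeted:
        # Targeted attack: step the image toward the chosen target class
        # by descending the loss computed against target_label.
        loss = F.cross_entropy(logits, target_label)
        grad_sign = -torch.autograd.grad(loss, image)[0].sign()
    else:
        # Non-targeted attack: step away from the true class
        # by ascending the loss computed against the true label.
        loss = F.cross_entropy(logits, label)
        grad_sign = torch.autograd.grad(loss, image)[0].sign()

    # Keep the perturbation magnitude small (bounded by epsilon) so the
    # modified image remains visually close to the original.
    adversarial = (image + epsilon * grad_sign).clamp(0.0, 1.0)
    return adversarial.detach()
```

In this kind of attack, a small epsilon keeps the perturbed image nearly indistinguishable from the original while still shifting the classifier's prediction, which is the general vulnerability the abstract describes for DCNN-based ATR classifiers.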
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Uche M. Osahor and Nasser M. Nasrabadi, "Design of adversarial targets: fooling deep ATR systems," Proc. SPIE 10988, Automatic Target Recognition XXIX, 109880F (Presented at SPIE Defense + Commercial Sensing: April 16, 2019; Published: 14 May 2019); https://doi.org/10.1117/12.2518945.