Design of adversarial targets: fooling deep ATR systems (14 May 2019)
Abstract
Deep Convolutional Neural Networks (DCNNs) have proven to be an exceptional tool for object recognition in various computer vision applications. However, recent findings have shown that such state-of-the-art models can be easily deceived by adding slight, imperceptible perturbations to key pixels in the input image. In this paper, we focus on deceiving Automatic Target Recognition (ATR) classifiers. These classifiers are built to recognize specified targets in a scene and simultaneously identify their class types. In our work, we explore the vulnerabilities of DCNN-based target classifiers. We demonstrate significant progress in developing infrared adversarial targets by adding small perturbations to the input image such that the perturbation cannot be easily detected. The algorithm is built to support both targeted and non-targeted adversarial attacks. Our findings reveal promising results that reflect the serious implications of adversarial attacks.
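The abstract does not specify the perturbation algorithm used in the paper. As an illustration only, the sketch below shows a standard FGSM-style signed-gradient perturbation (Goodfellow et al.) in PyTorch, covering both the non-targeted and targeted cases mentioned above. All names here (generate_adversarial, epsilon, target_class) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a minimal FGSM-style adversarial perturbation in PyTorch.
# The paper's actual attack algorithm is not described in the abstract; this only
# demonstrates the general idea of targeted vs. non-targeted perturbations.
import torch
import torch.nn.functional as F

def generate_adversarial(model, image, label, epsilon=0.01, target_class=None):
    """Return a perturbed copy of `image` with an L-infinity bound of `epsilon`.

    If `target_class` is None the attack is non-targeted (push the prediction
    away from the true `label`); otherwise it is targeted (pull the prediction
    toward `target_class`).
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)

    if target_class is None:
        # Non-targeted: ascend the loss w.r.t. the true label.
        loss = F.cross_entropy(logits, label)
        sign = 1.0
    else:
        # Targeted: descend the loss w.r.t. the desired (wrong) class.
        loss = F.cross_entropy(logits, target_class)
        sign = -1.0

    loss.backward()
    # One signed-gradient step, clipped back to a valid pixel range.
    perturbation = sign * epsilon * image.grad.sign()
    adversarial = torch.clamp(image + perturbation, 0.0, 1.0)
    return adversarial.detach()
```

In the infrared ATR setting described above, `image` would be an IR image chip and `model` the DCNN-based target classifier; `epsilon` controls how perceptible the resulting perturbation is.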
© 2019 Society of Photo-Optical Instrumentation Engineers (SPIE).
Uche M. Osahor and Nasser M. Nasrabadi "Design of adversarial targets: fooling deep ATR systems", Proc. SPIE 10988, Automatic Target Recognition XXIX, 109880F (14 May 2019); https://doi.org/10.1117/12.2518945
CITATIONS
Cited by 5 scholarly publications (Lens.org).
KEYWORDS
Automatic target recognition, Target detection, Classification systems, Detection and tracking algorithms, Target recognition, Image classification, Image segmentation
