In our work, we make two primary contributions to the field of adversarial example generation for convolutional-neural-network-based perception systems. First, we extend recent work on physically realizable adversarial examples to make them more robust to translation, rotation, and scale in real-world scenarios. Second, rather than considering only the simpler problem of classification, we demonstrate attacks against object detection networks, forcing them to mislocalize as well as misclassify. We evaluate our method on multiple object detection frameworks, including Faster R-CNN, YOLO v3, and our own single-shot detection architecture.
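Robustness to translation, rotation, and scale is typically achieved with an expectation-over-transformation style objective: rather than optimizing the adversarial patch against a single rendering, the attack loss is averaged over many randomly sampled physical transformations. The sketch below illustrates this idea only; it is not the paper's implementation, and the parameter ranges, the `place_patch` renderer, and the `detector_loss` callback are all hypothetical assumptions.

```python
import numpy as np

def random_transform_params(rng, max_rot=np.pi / 8,
                            scale_range=(0.8, 1.2), max_shift=5):
    # Hypothetical ranges for the sampled physical transformation.
    return (rng.uniform(-max_rot, max_rot),
            rng.uniform(*scale_range),
            rng.integers(-max_shift, max_shift + 1, size=2))

def place_patch(scene, patch, angle, scale, shift):
    """Render `patch` into a copy of `scene` under a rotation, scale,
    and translation, via inverse-mapped nearest-neighbour sampling."""
    out = scene.copy()
    H, W = scene.shape[:2]
    ph, pw = patch.shape[:2]
    cy, cx = H / 2 + shift[0], W / 2 + shift[1]
    cos, sin = np.cos(angle), np.sin(angle)
    for y in range(H):
        for x in range(W):
            # Inverse transform: scene coordinates -> patch coordinates.
            dy, dx = (y - cy) / scale, (x - cx) / scale
            py = cos * dy + sin * dx + ph / 2
            px = -sin * dy + cos * dx + pw / 2
            iy, ix = int(round(py)), int(round(px))
            if 0 <= iy < ph and 0 <= ix < pw:
                out[y, x] = patch[iy, ix]
    return out

def expected_loss(patch, scene, detector_loss, rng, n_samples=8):
    """Average a (hypothetical) detector attack loss over sampled
    transformations; the patch is then optimized against this average,
    which is what makes it survive pose changes in the real world."""
    total = 0.0
    for _ in range(n_samples):
        angle, scale, shift = random_transform_params(rng)
        total += detector_loss(place_patch(scene, patch, angle, scale, shift))
    return total / n_samples
```

In a full attack, `detector_loss` would combine the detector's localization and classification terms, and the patch pixels would be updated by gradient descent on `expected_loss`; the snippet only shows the transform-averaging step.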
David R. Chambers and H. Abe Garza, "Physically realizable adversarial examples for convolutional object detection algorithms," Proc. SPIE 10988, Automatic Target Recognition XXIX, 109880R (Presented at SPIE Defense + Commercial Sensing: April 17, 2019; Published: 14 May 2019); https://doi.org/10.1117/12.2520166.