DPATCH: An Adversarial Patch Attack on Object Detectors
Paper summary

Liu et al. propose DPatch, an adversarial patch attack against state-of-the-art object detectors. Similar to existing adversarial patches, where a patch with fixed pixels is placed in an image in order to evade (or change) classification, the authors compute their DPatch using an optimization procedure: during optimization, the patch is placed at random locations on all images of a dataset, e.g., PASCAL VOC 2007, and its pixels are updated to maximize the loss of the detector (in either a targeted or an untargeted setting). In experiments, this approach is able to fool several different detectors using small $40\times40$ pixel patches, as illustrated in Figure 1.

https://i.imgur.com/ma6hGNO.jpg

Figure 1: Illustration of the use case of DPatch.

Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
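The random-placement optimization loop described above can be sketched as follows. This is a minimal, hedged illustration, not the authors' implementation: the `grad_of_loss` callable stands in for backpropagating the detector's loss through a real object detector (e.g., Faster R-CNN or YOLO), and all hyperparameters (`steps`, `lr`) are placeholder choices.

```python
import numpy as np

def train_dpatch(images, grad_of_loss, patch_size=40, steps=100, lr=0.01, rng=None):
    """Untargeted DPatch-style optimization (sketch).

    images: array of shape (N, H, W, C) with values in [0, 1]
    grad_of_loss: callable(image) -> gradient of the detector loss w.r.t. the
        image; a placeholder for backprop through an actual detector.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    N, H, W, C = images.shape
    # Initialize the patch with random pixels.
    patch = rng.random((patch_size, patch_size, C))
    for _ in range(steps):
        img = images[rng.integers(N)].copy()
        # Place the patch at a random location, as in the training procedure.
        y = rng.integers(H - patch_size + 1)
        x = rng.integers(W - patch_size + 1)
        img[y:y + patch_size, x:x + patch_size] = patch
        # Only the gradient entries under the patch affect its pixels.
        g = grad_of_loss(img)[y:y + patch_size, x:x + patch_size]
        # Gradient *ascent*: the untargeted attack maximizes the loss.
        patch = np.clip(patch + lr * g, 0.0, 1.0)
    return patch
```

For a targeted attack one would instead descend on the loss toward the chosen target class/box. A toy usage, with a linear "detector" whose gradient is constant, would be `train_dpatch(images, lambda img: w)` for a weight array `w` of the same shape as an image.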
DPATCH: An Adversarial Patch Attack on Object Detectors
Liu, Xin and Yang, Huanrui and Liu, Ziwei and Song, Linghao and Chen, Yiran and Li, Hai
AAAI Conference on Artificial Intelligence - 2019 via Local Bibsonomy
Keywords: dblp


Summary by David Stutz 8 months ago