On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks
Paper summary
Demontis et al. study the transferability of adversarial examples and data poisoning attacks through the lens of the target model's input gradients. In particular, they experimentally validate the following hypotheses: First, susceptibility to these attacks depends on the size of the model's input gradients; the larger the gradient, the smaller the perturbation needed to increase the loss. Second, the size of the gradient depends on the strength of regularization: stronger regularization tends to yield smaller input gradients. Third, the cosine similarity between the target model's gradients and the gradients of the surrogate model (the model used to craft the transferable attack) influences vulnerability: the better the alignment, the better the attack transfers. These insights hold for both evasion and poisoning attacks and are motivated by a simple first-order (linear) Taylor expansion of the target model's loss. Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
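To make the first-order argument concrete, here is a minimal sketch (the notation is assumed for illustration, not taken verbatim from the paper): let $\ell(x, y; \theta)$ be the target model's loss and $\hat{\ell}(x, y; \hat{\theta})$ the surrogate's loss. For a small perturbation crafted on the surrogate, e.g. $\delta = \epsilon \, \nabla_x \hat{\ell} / \|\nabla_x \hat{\ell}\|$, a linear Taylor expansion of the target's loss gives

$\ell(x + \delta, y; \theta) \approx \ell(x, y; \theta) + \delta^\top \nabla_x \ell(x, y; \theta) = \ell(x, y; \theta) + \epsilon \, \|\nabla_x \ell(x, y; \theta)\| \cos\alpha,$

where $\alpha$ is the angle between the target's and the surrogate's input gradients. The induced loss increase thus grows with the target's gradient norm (which regularization keeps small) and with the gradient alignment, which is exactly the set of factors listed above.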
On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks
Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
arXiv e-Print archive, 2018
Keywords: cs.LG, cs.CR, stat.ML, 68T10, 68T45

