Attacking the Madry Defense Model with $L_1$-based Adversarial Examples
Paper summary

Sharma and Chen provide an experimental comparison of different state-of-the-art attacks against the adversarial training defense by Madry et al. [1]. They consider several attacks, including the Carlini-Wagner attacks [2] and the elastic-net attacks [3], as well as projected gradient descent [1]. Their experimental finding that the defense by Madry et al. can be broken by increasing the allowed perturbation size (i.e., epsilon) should not be surprising: a network trained adversarially will only defend reliably against attacks drawn from the threat model used during training.

[1] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
[2] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy (SP), 39–57, 2017.
[3] P.-Y. Chen, Y. Sharma, H. Zhang, J. Yi, and C.-J. Hsieh. EAD: Elastic-net attacks to deep neural networks via adversarial examples. arXiv preprint arXiv:1709.04114, 2017.

Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
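To make the point about the perturbation budget concrete, here is a minimal sketch of an $L_\infty$ projected gradient descent attack in the spirit of Madry et al. [1], written against an assumed PyTorch classifier. The function name, step size, and default values are illustrative, not the paper's settings; the idea is simply that evaluating with an epsilon larger than the one used for adversarial training (0.3 on MNIST in [1]) is the regime in which the defense is reported to break.

```python
import torch

def pgd_attack(model, x, y, epsilon=0.3, alpha=0.01, steps=40):
    """L_infinity PGD attack sketch (following Madry et al. [1]).

    model: an assumed torch.nn.Module classifier returning logits;
    x, y: a batch of inputs in [0, 1] and their integer labels;
    epsilon, alpha, steps: perturbation budget, step size, iterations
    (illustrative defaults, not the paper's exact settings).
    """
    x = x.detach()
    loss_fn = torch.nn.CrossEntropyLoss()
    # Random start inside the epsilon-ball, as used for adversarial training.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Gradient ascent step on the loss, then projection back onto
        # the epsilon-ball around x and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Running such an attack against an adversarially trained model with the training-time epsilon versus a larger one is the kind of comparison the paper's finding rests on.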
Attacking the Madry Defense Model with $L_1$-based Adversarial Examples
Yash Sharma and Pin-Yu Chen
arXiv e-Print archive, 2017
Keywords: stat.ML, cs.CR, cs.LG

Summary by David Stutz