Adversarial Examples: Attacks and Defenses for Deep Learning
Paper summary

Yuan et al. present a comprehensive survey of attacks, defenses, and studies regarding the robustness and security of deep neural networks. Published on arXiv in December 2017, it covers the most recent attacks and defenses. For example, Table 1 lists all known attacks; Yuan et al. categorize them by the level of knowledge required (e.g. white-box vs. black-box), whether they are targeted or non-targeted, the optimization procedure used (e.g. iterative), and the perturbation measure employed. As a result, Table 1 gives a solid overview of state-of-the-art attacks. Similarly, Table 2 gives an overview of the applications reported so far. Only for defenses is a comparable overview table missing; still, the authors discuss (to my knowledge) all relevant defense strategies and comment on the performance reported in the literature.

https://i.imgur.com/3KpoYWr.png
Table 1: An overview of state-of-the-art attacks on deep neural networks.

https://i.imgur.com/4eq6Tzm.png
Table 2: An overview of applications of some of the attacks in Table 1.
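As a concrete illustration of the attacks in Table 1, below is a minimal sketch of the fast gradient sign method (FGSM), one of the white-box, one-shot attacks the survey covers. This is my own toy sketch, not code from the paper; the model, input, and epsilon are placeholders.

```python
# A minimal FGSM sketch in PyTorch; the toy network, random input, and
# epsilon are placeholders of my own, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon):
    # Non-targeted FGSM: take one step of size epsilon along the sign of
    # the loss gradient, bounding the perturbation in the L-infinity norm.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # A *targeted* variant would instead step down the gradient of the
    # loss computed against the desired target label.
    return x_adv.clamp(0.0, 1.0).detach()  # stay in a valid image range

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
x = torch.rand(1, 1, 28, 28)   # random stand-in for an image
label = torch.tensor([3])
x_adv = fgsm_attack(model, x, label, epsilon=0.1)
print((x_adv - x).abs().max().item())  # perturbation size, <= epsilon
```

Likewise, a rough sketch of adversarial training, one of the defense strategies the authors discuss: each training batch is augmented with adversarial examples generated on the fly, here reusing the hypothetical fgsm_attack above.

```python
# Adversarial training sketch, reusing the placeholder model, data, and
# fgsm_attack from the previous snippet.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(10):  # toy loop over the single placeholder example
    x_adv = fgsm_attack(model, x, label, epsilon=0.1)  # fresh adversarial batch
    batch = torch.cat([x, x_adv])                      # clean + adversarial
    targets = torch.cat([label, label])
    optimizer.zero_grad()  # also clears gradients left over from the attack
    loss = F.cross_entropy(model(batch), targets)
    loss.backward()
    optimizer.step()
```

Note that such a defense mainly hardens the model against the specific attack used during training; the survey's discussion of defenses makes clear that no strategy is robust across the board.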
Xiaoyong Yuan, Pan He, Qile Zhu, Rajendra Rana Bhat, Xiaolin Li
arXiv e-print archive, 2017
Keywords: cs.LG, cs.CR, cs.CV, stat.ML


Summary by David Stutz