Adversarial Machine Learning at Scale
Paper summary

Kurakin et al. present large-scale experiments applying adversarial training to ImageNet models to increase robustness; in particular, they claim to be the first to use adversarial training at ImageNet scale. Their experiments support the following conclusions:

- Adversarial training can also be seen as a regularizer. This, however, is not surprising, as training on noisy samples is also known to act as regularization.
- Label leaking describes the observation that an adversarially trained model can defend against (i.e., correctly classify) adversarial examples that were crafted using the true label, while failing against adversarial examples crafted without knowledge of the true label. This means that crafting adversarial examples without guidance from the true label may yield a stronger attack; see the sketch after this list.
- Model complexity seems to have an impact on robustness after adversarial training. However, from the experiments, it is hard to deduce what this connection looks like exactly.

Also see this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
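The label-leaking point is concrete enough to sketch in code. Below is a minimal one-step FGSM crafting routine (PyTorch assumed; `model`, `eps`, and all other names are illustrative, not taken from the paper) contrasting the two variants: guiding the perturbation with the true label versus with the model's own prediction, the latter being the variant suggested to avoid label leaking.

```python
# Minimal sketch of one-step FGSM crafting, assuming PyTorch;
# `model` maps image batches to logits. All names are illustrative.
import torch
import torch.nn.functional as F

def fgsm(model, x, eps, y_true=None):
    """Craft adversarial examples with a single signed-gradient step.

    If `y_true` is given, the attack is guided by the true label (the
    variant prone to label leaking under adversarial training). If it
    is omitted, the model's own most-likely prediction is used instead,
    so no ground-truth information leaks into the perturbation.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    y = y_true if y_true is not None else logits.argmax(dim=1)
    loss = F.cross_entropy(logits, y)
    (grad,) = torch.autograd.grad(loss, x)
    # Step in the direction that increases the loss; keep pixels in [0, 1].
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()
```

During adversarial training, where part of each minibatch is replaced by such examples, crafting them with the predicted-label variant above removes the shortcut by which the network could otherwise exploit the leaked true label.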
Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio
arXiv e-print, 2016
Keywords: cs.CV, cs.CR, cs.LG, stat.ML


Summary by David Stutz