Exploring the Hyperparameter Landscape of Adversarial Robustness
Duesterwald et al. study how the hyperparameters of adversarial training influence the resulting robustness and accuracy. As shown in Figure 1, the two hyperparameters considered, the ratio of adversarial examples per batch and the allowed perturbation $\epsilon$, control the trade-off between adversarial robustness and clean accuracy. Even for larger $\epsilon$, at least on MNIST and SVHN, using only a few adversarial examples per batch increases robustness significantly while incurring only a small loss in accuracy.

https://i.imgur.com/nMZNpFB.jpg

Figure 1: Robustness (red) and accuracy (blue) depending on the two hyperparameters $\epsilon$ and the ratio of adversarial examples per batch. Robustness is measured as adversarial accuracy.

Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
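To make the two hyperparameters concrete, below is a minimal PyTorch sketch (not the authors' implementation) of adversarial training in which only a fraction `adv_ratio` of each batch is replaced by adversarial examples with $L_\infty$ budget `epsilon`. FGSM is used here as a placeholder attack, inputs are assumed to lie in $[0, 1]$, and the default values of `epsilon` and `adv_ratio` are illustrative only.

```python
# Sketch of adversarial training with the two hyperparameters from the paper:
# `epsilon` (perturbation budget) and `adv_ratio` (fraction of adversarial
# examples per batch). Attack choice and default values are assumptions.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon):
    """Generate FGSM adversarial examples with L-infinity budget `epsilon`."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Step in the gradient-sign direction and keep inputs in [0, 1].
    return (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()


def adversarial_training_step(model, optimizer, x, y, epsilon=0.1, adv_ratio=0.3):
    """One training step on a batch mixing clean and adversarial examples."""
    n_adv = int(adv_ratio * x.size(0))
    if n_adv > 0:
        x = x.clone()
        x[:n_adv] = fgsm_attack(model, x[:n_adv], y[:n_adv], epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Sweeping `epsilon` and `adv_ratio` in such a loop and measuring clean and adversarial accuracy reproduces the kind of trade-off surface shown in Figure 1.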
Exploring the Hyperparameter Landscape of Adversarial Robustness
Duesterwald, Evelyn and Murthi, Anupama and Venkataraman, Ganesh and Sinn, Mathieu and Vijaykeerthy, Deepak
2019
Keywords: adversarial, robustness


Summary by David Stutz