Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization
Paper summary

Shaham et al. provide an interpretation of adversarial training in the context of robust optimization. In particular, adversarial training is posed as a min-max problem (similar to other related work, as I found):

$\min_\theta \sum_i \max_{r \in U_i} J(\theta, x_i + r, y_i)$

where $U_i$ is the so-called uncertainty set corresponding to sample $x_i$ – in the context of adversarial examples, this might be an $\epsilon$-ball around the sample quantifying the maximum perturbation allowed; $(x_i, y_i)$ are the training samples, $\theta$ the parameters and $J$ the training objective. In practice, when the overall minimization problem is tackled using gradient descent, the inner maximization problem cannot be solved exactly (as this would be inefficient). Instead, Shaham et al. propose to alternately take single gradient steps for the minimization and the maximization problems – in the spirit of generative adversarial network training.

Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
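To make the alternating scheme concrete, here is a minimal PyTorch sketch of one such min-max step, assuming an $\ell_\infty$-ball uncertainty set $U_i$ (so a single ascent step on $r$ reduces to a signed-gradient step); the function name, step sizes, and loss are illustrative choices, not prescribed by the paper:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y,
                              epsilon=0.1, step_size=0.01):
    # Inner maximization: a single gradient *ascent* step on the
    # perturbation r, projected into the l_inf ball of radius epsilon
    # (one concrete instantiation of the uncertainty set U_i).
    r = torch.zeros_like(x, requires_grad=True)
    loss = F.cross_entropy(model(x + r), y)
    loss.backward()
    with torch.no_grad():
        r = (step_size * r.grad.sign()).clamp(-epsilon, epsilon)

    # Outer minimization: a single gradient *descent* step on the
    # parameters theta, evaluated at the perturbed input x + r.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x + r), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```

Looping this over minibatches yields the alternating scheme described above; for a different uncertainty set, e.g. an $\ell_2$-ball, the ascent step would use the normalized gradient instead of its sign.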
Uri Shaham and Yutaro Yamada and Sahand Negahban
arXiv e-Print archive, 2015
Keywords: stat.ML, cs.LG, cs.NE

Summary by David Stutz