Adversarial Vulnerability of Neural Networks Increases With Input Dimension
Simon-Gabriel et al. study the robustness of neural networks with respect to the input dimensionality. Their main hypothesis is that the vulnerability of neural networks to adversarial perturbations increases with the input dimension, and they support it with both a theoretical analysis and experiments.

The underlying notion of robustness is that small perturbations $\delta$ of the input $x$ should only cause small variations $\delta \mathcal{L}$ of the loss:

$$\delta \mathcal{L} = \max_{\|\delta\| \leq \epsilon} |\mathcal{L}(x + \delta) - \mathcal{L}(x)| \approx \max_{\|\delta\| \leq \epsilon} |\partial_x \mathcal{L} \cdot \delta| = \epsilon \||\partial_x \mathcal{L}\||$$

where the approximation is due to a first-order Taylor expansion and $\||\cdot\||$ is the dual norm of $\|\cdot\|$. For example, if perturbations are bounded in the $\ell_\infty$ norm, the relevant dual norm is the $\ell_1$ norm of the gradient. As a result, the vulnerability of a network can be quantified as $\epsilon \mathbb{E}_x \||\partial_x \mathcal{L}\||$. A natural regularizer to increase robustness (i.e., decrease vulnerability) is then $\epsilon \||\partial_x \mathcal{L}\||$, similar to the regularizer proposed in [1]; see the first sketch after this summary.

The remainder of the paper studies the dual norm $\||\partial_x \mathcal{L}\||$ as a function of the input dimension $d$. Specifically, the authors show that the gradient norm increases monotonically with the input dimension; I refer to the paper for the exact theorems and proofs. This result rests on the assumption of untrained networks that have merely been initialized. In experiments, however, they show that the conclusion may also hold in realistic settings, e.g., on ImageNet; a toy version of this measurement is sketched below as well.

[1] Matthias Hein, Maksym Andriushchenko: Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation. NIPS 2017: 2263-2273.

Also view this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
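Below is a minimal sketch of the gradient-norm regularizer described above, written in PyTorch under the assumption of $\ell_\infty$-bounded perturbations (so the dual norm is $\ell_1$). The names `model`, `epsilon`, and `lambda_reg` are illustrative and not taken from the paper's code.

```python
# Minimal sketch of gradient-norm regularization (assumed PyTorch setup).
# For l_inf-bounded attacks the dual norm is l_1, so the penalty approximates
# the worst-case first-order loss change epsilon * ||dL/dx||_1.
import torch
import torch.nn as nn

def regularized_loss(model, x, y, epsilon=0.01, lambda_reg=1.0):
    # Detach and re-enable gradients so we can differentiate w.r.t. the input.
    x = x.detach().clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    # Gradient of the loss w.r.t. the input; create_graph=True so the
    # penalty itself can be backpropagated through during training.
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    penalty = epsilon * grad_x.flatten(1).abs().sum(dim=1).mean()
    return loss + lambda_reg * penalty
```

Minimizing this objective directly penalizes the vulnerability term $\epsilon \mathbb{E}_x \||\partial_x \mathcal{L}\||$ alongside the task loss.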
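And here is a toy experiment in the spirit of the paper's analysis of initialized networks: measure $\mathbb{E}_x \||\partial_x \mathcal{L}\||_1$ for freshly initialized fully connected networks of growing input dimension $d$. The architecture, widths, and sample counts are arbitrary choices, not the paper's exact setup.

```python
# Toy measurement: mean l_1 norm of the input gradient at initialization,
# as a function of input dimension d (architecture and sizes are arbitrary).
import torch
import torch.nn as nn

def mean_grad_l1_norm(d, width=256, n_samples=64):
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(d, width), nn.ReLU(), nn.Linear(width, 10))
    x = torch.randn(n_samples, d, requires_grad=True)
    y = torch.randint(0, 10, (n_samples,))
    loss = nn.functional.cross_entropy(model(x), y)
    (grad_x,) = torch.autograd.grad(loss, x)
    # cross_entropy averages over the batch, so rescale by n_samples to
    # recover per-example input gradients before taking the l_1 norm.
    return (n_samples * grad_x).abs().sum(dim=1).mean().item()

for d in [64, 256, 1024, 4096]:
    print(d, mean_grad_l1_norm(d))
```

Under standard initializations, the printed norms should grow with $d$, consistent with the paper's claim that vulnerability increases with input dimension.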
Carl-Johann Simon-Gabriel and Yann Ollivier and Léon Bottou and Bernhard Schölkopf and David Lopez-Paz
arXiv e-Print archive - 2018 via Local arXiv
Keywords: stat.ML, cs.CV, cs.LG, 68T45, I.2.6


Summary by David Stutz 5 months ago