Exploring the Landscape of Spatial Robustness
Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, Aleksander Madry
arXiv e-Print archive, 2017
Keywords:
cs.LG, cs.CV, cs.NE, stat.ML
First published: 2017/12/07
Abstract: The study of adversarial robustness has so far largely focused on
perturbations bound in p-norms. However, state-of-the-art models turn out to be
also vulnerable to other, more natural classes of perturbations such as
translations and rotations. In this work, we thoroughly investigate the
vulnerability of neural network--based classifiers to rotations and
translations. While data augmentation offers relatively small robustness, we
use ideas from robust optimization and test-time input aggregation to
significantly improve robustness. Finally we find that, in contrast to the
p-norm case, first-order methods cannot reliably find worst-case perturbations.
This highlights spatial robustness as a fundamentally different setting
requiring additional study. Code available at
https://github.com/MadryLab/adversarial_spatial and
https://github.com/MadryLab/spatial-pytorch.
Engstrom et al. demonstrate that spatial transformations such as translations and rotations can be used to generate adversarial examples. Personally, however, I think the paper does not address the question of where adversarial perturbations “end” and generalization issues “start”. For larger translations and rotations, the problem is clearly one of generalization. Small ones, in contrast, can also be interpreted as adversarial perturbations, especially when they are computed with the intention of fooling the network. Still, the distinction is not clear ...
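Since the abstract notes that first-order methods cannot reliably find worst-case rotations/translations, the natural attack is an exhaustive grid search over the transformation parameters. Below is a minimal sketch of such a grid-search spatial attack in PyTorch. It assumes a standard image classifier `model` and a batch `(images, labels)`; the function name `spatial_grid_attack` and the specific grid values in the usage example are illustrative choices, not the authors' exact settings.

```python
# Minimal sketch: grid-search spatial attack (rotations + translations).
# Assumptions: `model` is a PyTorch classifier, `images` is (N, C, H, W),
# `labels` is (N,). Grid ranges below are illustrative, not the paper's.
import math
import torch
import torch.nn.functional as F

def spatial_grid_attack(model, images, labels, angles, translations):
    """Exhaustively search rotations/translations, keep the worst case per image."""
    n = images.shape[0]
    loss_fn = torch.nn.CrossEntropyLoss(reduction="none")
    worst_loss = torch.full((n,), -float("inf"), device=images.device)
    worst_images = images.clone()

    for angle in angles:                # rotation in degrees
        for tx in translations:         # horizontal shift (normalized coords)
            for ty in translations:     # vertical shift (normalized coords)
                rad = math.radians(angle)
                # 2x3 affine matrix: rotation followed by translation
                theta = torch.tensor(
                    [[math.cos(rad), -math.sin(rad), tx],
                     [math.sin(rad),  math.cos(rad), ty]],
                    dtype=images.dtype, device=images.device,
                ).repeat(n, 1, 1)
                grid = F.affine_grid(theta, images.shape, align_corners=False)
                transformed = F.grid_sample(images, grid, align_corners=False)

                loss = loss_fn(model(transformed), labels)
                improved = loss > worst_loss
                worst_loss = torch.where(improved, loss, worst_loss)
                worst_images[improved] = transformed[improved]

    return worst_images

# Hypothetical usage:
# adv = spatial_grid_attack(model, images, labels,
#                           angles=range(-30, 31, 6),
#                           translations=[-0.1, 0.0, 0.1])
```

The same loop restricted to a few random parameter samples per image would give a "worst-of-k" variant, which is the kind of cheap approximation one could also use inside adversarial training.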
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).