Robustness of Rotation-Equivariant Networks to Adversarial Perturbations. Dumont, Beranger and Maggio, Simona and Montalvo, Pablo. 2018.
Paper summary by davidstutz. Dumont et al. compare different adversarial transformation attacks (including rotations and translations) against standard as well as rotation-invariant convolutional neural networks. On MNIST, CIFAR-10, and ImageNet, they consider translations and rotations as well as the attack of Xiao et al. [1], which is based on spatial transformer networks. Additionally, they consider rotation-invariant convolutional neural networks; however, neither the attacks nor the networks are discussed in detail. The results are interesting because translation- and rotation-based attacks are significantly more successful on CIFAR-10 than on MNIST and ImageNet. The authors, however, do not give a satisfying explanation for this observation.
[1] C. Xiao, J.-Y. Zhu, B. Li, W. He, M. Liu, D. Song. Spatially Transformed Adversarial Examples. ICLR, 2018.
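Such a transformation attack can be sketched as a plain grid search over rotation angles and translation offsets, returning the first transformed input that flips the model's prediction. This is a minimal illustration only, not the authors' exact procedure; the function name `spatial_attack` and the parameter grids are hypothetical:

```python
import numpy as np
from scipy.ndimage import rotate, shift

def spatial_attack(model, x, label, angles, shifts):
    """Grid search for an adversarial rotation/translation (hypothetical sketch).

    model: callable mapping an image to a logit vector.
    x: 2D image array, label: correct class index.
    angles: rotation angles in degrees; shifts: pixel offsets per axis.
    Returns (x_adv, angle, (dy, dx)) on success, else None.
    """
    for angle in angles:
        for dy in shifts:
            for dx in shifts:
                # Rotate about the center, then translate; empty regions are zero-filled.
                x_t = rotate(x, angle, reshape=False, order=1)
                x_t = shift(x_t, (dy, dx), order=1)
                if np.argmax(model(x_t)) != label:
                    return x_t, angle, (dy, dx)
    return None  # no misclassifying transform in the searched grid
```

Because the search space is low-dimensional, exhaustive enumeration is feasible, in contrast to gradient-based pixel-space attacks.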
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).