Towards the first adversarially robust neural network model on MNIST
Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel. 2018
Paper summary by davidstutz
Schott et al. propose an analysis-by-synthesis approach for adversarially robust MNIST classification. In particular, as illustrated in Figure 1, class-conditional variational auto-encoders (i.e., one variational auto-encoder per class) are learned. The respective recognition models, i.e., the encoders, are discarded. For classification, the optimization problem
$l_y^*(x) = \max_z \log p(x|z) - \text{KL}\left(\mathcal{N}(z, \sigma I) \,\|\, \mathcal{N}(0, I)\right)$
is solved for each class $y$. Here, $p(x|z)$ denotes the learned class-conditional generative model (i.e., the decoder). The optimization yields a latent code $z$ corresponding to the best reconstruction of the input, and the resulting class-conditional likelihoods can be used for classification via Bayes' theorem. The obtained posteriors $p(y|x)$ are then scaled using a modified softmax (see paper) to obtain the final decision. (Additionally, input binarization is used as a defense.)
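To make the classification rule concrete, below is a minimal PyTorch sketch of the per-class latent optimization; it is not the authors' implementation. The decoder list `decoders`, the latent dimension, the fixed scale `SIGMA`, and the optimizer settings are illustrative assumptions, and the multiple random restarts, the modified softmax scaling, and the input binarization described in the paper are omitted.

```python
# Hedged sketch of the ABS-style classification rule (assumptions, not the authors' code).
# Assumes `decoders` is a list of 10 class-conditional VAE decoders mapping a latent
# vector z to Bernoulli means over the flattened 28x28 image.
import math
import torch

LATENT_DIM = 8      # assumed latent size
SIGMA = 1.0         # assumed fixed posterior scale used in the KL term
STEPS, LR = 100, 5e-2

def class_elbo(decoder, x, z, sigma=SIGMA):
    """log p(x|z) - KL(N(z, sigma^2 I) || N(0, I)) for a Bernoulli decoder."""
    x_hat = decoder(z)                                   # reconstruction means in (0, 1)
    log_px_z = (x * torch.log(x_hat + 1e-8)
                + (1 - x) * torch.log(1 - x_hat + 1e-8)).sum()
    kl = 0.5 * (sigma**2 * LATENT_DIM + (z**2).sum()
                - LATENT_DIM - 2 * LATENT_DIM * math.log(sigma))
    return log_px_z - kl

def abs_classify(x, decoders):
    """Return the per-class optimized values l_y^*(x) and the predicted label."""
    scores = []
    for decoder in decoders:
        # Single initialization for brevity; the paper uses multiple restarts.
        z = torch.zeros(LATENT_DIM, requires_grad=True)
        opt = torch.optim.Adam([z], lr=LR)
        for _ in range(STEPS):
            opt.zero_grad()
            loss = -class_elbo(decoder, x, z)            # maximize the per-class ELBO
            loss.backward()
            opt.step()
        scores.append(class_elbo(decoder, x, z.detach()))
    scores = torch.stack(scores)
    return scores, scores.argmax().item()
```

In this sketch the final decision is simply the arg-max over the optimized per-class values; the paper additionally converts them into scaled posteriors before deciding.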
https://i.imgur.com/ignvoHQ.png
Figure 1: The proposed analysis-by-synthesis approach to MNIST classification. The depicted generators are taken from the class-conditional variational auto-encoders.
In addition to the proposed defense, Schott et al. derive lower and upper bounds on the robustness of the classification procedure. These bounds follow from the optimization problem above; see the paper for details.
In experiments, they show that their defense outperforms state-of-the-art adversarial training and permits the estimation of tight robustness bounds. In addition, the method is robust against distal adversarial examples, and the adversarial examples it admits look more meaningful, see Figure 2.
https://i.imgur.com/uxGzzg1.png
Figure 2: Adversarial examples for the proposed “ABS” method, its binary variant and related work.
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
Towards the first adversarially robust neural network model on MNIST
Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel
arXiv e-Print archive - 2018 via Local arXiv
Keywords: cs.CV
First published: 2018/05/23
Abstract: Despite much effort, deep neural networks remain highly susceptible to tiny
input perturbations and even for MNIST, one of the most common toy datasets in
computer vision, no neural network model exists for which adversarial
perturbations are large and make semantic sense to humans. We show that even
the widely recognized and by far most successful defense by Madry et al. (1)
overfits on the L-infinity metric (it's highly susceptible to L2 and L0
perturbations), (2) classifies unrecognizable images with high certainty, (3)
performs not much better than simple input binarization and (4) features
adversarial perturbations that make little sense to humans. These results
suggest that MNIST is far from being solved in terms of adversarial robustness.
We present a novel robust classification model that performs analysis by
synthesis using learned class-conditional data distributions. We derive bounds
on the robustness and go to great length to empirically evaluate our model
using maximally effective adversarial attacks by (a) applying decision-based,
score-based, gradient-based and transfer-based attacks for several different Lp
norms, (b) by designing a new attack that exploits the structure of our
defended model and (c) by devising a novel decision-based attack that seeks to
minimize the number of perturbed pixels (L0). The results suggest that our
approach yields state-of-the-art robustness on MNIST against L0, L2 and
L-infinity perturbations and we demonstrate that most adversarial examples are
strongly perturbed towards the perceptual boundary between the original and the
adversarial class.