Explaining and Harnessing Adversarial Examples
Paper summary

#### Problem addressed:
A fast way of finding adversarial examples, and a hypothesis for why adversarial examples exist.

#### Summary:
This paper tries to explain why adversarial examples exist; adversarial examples were introduced in an earlier paper (Szegedy et al., https://arxiv.org/abs/1312.6199). Adversarial examples are counterintuitive because they are typically visually indistinguishable from the original examples, yet lead to very different predictions from the classifier. For example, let sample $x$ be associated with the true class $t$. A classifier (in particular, a well-trained DNN) can correctly predict $x$ with high confidence, but after a small perturbation $r$, the same network will assign $x+r$ to a different, incorrect class, also with high confidence. The paper argues that the existence of such adversarial examples is caused by the excessively linear behavior of models in high-dimensional spaces rather than by overfitting, and provides empirical support for this linear hypothesis: for a linear model with weights $w$, a perturbation $r = \epsilon\,\mathrm{sign}(w)$ changes the activation $w^\top x$ by $\epsilon \lVert w \rVert_1$, which grows with the input dimensionality even though each coordinate of $r$ stays tiny. The paper also introduces the "fast gradient sign" method, which reliably generates adversarial examples very quickly: take a small step in the direction of the sign of the gradient of the objective with respect to the input, $x_{adv} = x + \epsilon\,\mathrm{sign}(\nabla_x J(\theta, x, y))$. The authors further show that training on adversarial examples acts as a regularizer and helps the classifier generalize.

#### Novelty:
A fast method to reliably generate adversarial examples, and a linear hypothesis explaining why they exist.

#### Datasets:
MNIST

#### Resources:
Talk on the paper: https://www.youtube.com/watch?v=Pq4A2mPCB0Y

#### Presenter:
Yingbo Zhou
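A minimal sketch of the fast sign method, assuming a binary logistic-regression model so that the input gradient has a closed form (this is an illustration, not the authors' code; `fgsm_perturb`, `eps`, `w`, and `b` are hypothetical placeholders):

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps=0.25):
    """Fast gradient sign method for a binary logistic-regression model.

    Model: p(y=1|x) = sigmoid(w.x + b) with cross-entropy loss J.
    The input gradient is dJ/dx = (p - y) * w, so the adversarial
    example is x + eps * sign(dJ/dx): an L-infinity step of size eps.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted probability
    grad_x = (p - y) * w                           # closed-form input gradient
    return x + eps * np.sign(grad_x)

# Toy usage with a random "model" and a flattened 28x28 input (MNIST-sized).
rng = np.random.default_rng(0)
w = rng.normal(size=784)
x = rng.uniform(0.0, 1.0, size=784)
x_adv = fgsm_perturb(x, y=1.0, w=w, b=0.0)
print(np.max(np.abs(x_adv - x)))  # each pixel moves by at most eps
```

For a deep network the only change is that $\nabla_x J$ is computed by backpropagating to the input rather than via a closed form.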
Ian J. Goodfellow and Jonathon Shlens and Christian Szegedy
arXiv e-Print archive, 2014
Keywords: stat.ML, cs.LG
