The Limitations of Adversarial Training and the Blind-Spot Attack
Paper summary

Zhang et al. search for "blind spots" in the data distribution and show that test examples lying in these blind spots can easily be turned into adversarial examples. On MNIST, the data distribution is approximated using kernel density estimation, where the distance metric is computed in a dimensionality-reduced feature space of an adversarially trained model; t-SNE is used for the dimensionality reduction. Blind spots are found by slightly scaling and shifting pixel values, which changes the gray value of the background. Starting from these blind-spot examples, adversarial examples can easily be found on MNIST and Fashion-MNIST. Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
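A minimal sketch of the density-estimation step, assuming a hypothetical feature extractor `model_features` that maps images to activations of an adversarially trained model. Since t-SNE has no out-of-sample transform, train and test features are embedded jointly; the bandwidth value is illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import KernelDensity

def blind_spot_scores(train_images, test_images, model_features, bandwidth=0.5):
    """Return a log-density score per test image; low scores suggest blind spots."""
    # Embed train and test features jointly so they share one t-SNE space.
    feats = np.concatenate([model_features(train_images),
                            model_features(test_images)], axis=0)
    embedded = TSNE(n_components=2).fit_transform(feats)
    n_train = len(train_images)
    train_emb, test_emb = embedded[:n_train], embedded[n_train:]

    # Kernel density estimate of the training distribution in the reduced
    # space; test points in low-density regions are candidate blind spots.
    kde = KernelDensity(kernel='gaussian', bandwidth=bandwidth).fit(train_emb)
    return kde.score_samples(test_emb)
```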
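And a sketch of the scale-and-shift transformation used to construct blind-spot examples, in the spirit of the paper's x' = αx + β; the specific α and β values here are illustrative defaults, not the paper's.

```python
import numpy as np

def blind_spot_transform(x, alpha=0.9, beta=0.05):
    """x: images with values in [0, 1]. Slightly scaled and shifted copies
    remain visually similar but move the background gray value, landing in
    low-density regions of the training distribution."""
    return np.clip(alpha * x + beta, 0.0, 1.0)
```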
Huan Zhang, Hongge Chen, Zhao Song, Duane S. Boning, Inderjit S. Dhillon, Cho-Jui Hsieh. "The Limitations of Adversarial Training and the Blind-Spot Attack." arXiv, 2019.



