Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks
Paper summary
Saralajew et al. evaluate learning vector quantization (LVQ) approaches regarding their robustness against adversarial examples. In particular, they consider generalized LVQ (GLVQ), where examples are classified based on their distance to the closest prototype of the same class and the closest prototype of another class (see the sketch below). The prototypes are learned during training; I refer to the paper for details. Robustness is compared to adversarial training and evaluated against several attacks, including FGSM, DeepFool and the Boundary attack, covering both white-box and black-box settings. In terms of $L_\infty$ robustness, LVQ usually performs worse than adversarial training. Still, its robustness seems to be higher than that of normally trained deep neural networks. One of the authors' main explanations is that LVQ follows a max-margin approach; this max-margin idea seems to favor robust models. Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
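
To make the classification rule concrete, here is a minimal NumPy sketch of GLVQ-style nearest-prototype classification and the relative distance that the GLVQ cost is based on. The function and variable names (`classify`, `glvq_score`, `prototypes`, `proto_labels`) and the toy data are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def classify(x, prototypes, proto_labels):
    """Assign x the label of its nearest prototype (Euclidean distance)."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    return proto_labels[np.argmin(dists)]

def glvq_score(x, y, prototypes, proto_labels):
    """Relative distance mu(x) = (d_plus - d_minus) / (d_plus + d_minus),
    where d_plus is the distance to the closest prototype of the correct
    class y and d_minus the distance to the closest prototype of any other
    class. mu(x) < 0 means x is classified correctly; more negative values
    correspond to a larger margin, which is the quantity GLVQ training
    pushes down."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    d_plus = dists[proto_labels == y].min()
    d_minus = dists[proto_labels != y].min()
    return (d_plus - d_minus) / (d_plus + d_minus)

# Toy usage: two classes with one prototype each (illustrative data).
prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
proto_labels = np.array([0, 1])
x = np.array([0.2, 0.1])
print(classify(x, prototypes, proto_labels))       # -> 0
print(glvq_score(x, 0, prototypes, proto_labels))  # negative, i.e. correct with some margin
```

The margin interpretation is what connects this score to the robustness argument: an adversarial perturbation has to move the input far enough that the closest wrong-class prototype becomes nearer than the closest correct-class prototype.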
Sascha Saralajew, Lars Holdijk, Maike Rees, Thomas Villmann. Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks. arXiv, 2019.
Keywords: cs.LG, cs.AI, cs.CV, stat.ML

Summary by David Stutz