Adversarial Robustness: Softmax versus Openmax
Paper summary: Rozsa et al. describe an adversarial attack against OpenMax [1] that directly targets the logits. Specifically, they assume a network that uses OpenMax instead of a SoftMax layer to compute the final class probabilities; OpenMax enables "open-set" recognition by allowing the network to reject input samples. By directly targeting the logits of the trained network, i.e. iteratively pushing them toward a target direction, the attack is independent of whether a SoftMax or an OpenMax layer is applied on top; the network can be fooled in both cases. Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
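The logit-targeting idea can be illustrated with a short gradient-based sketch. This is a minimal illustration under assumptions, not the paper's exact formulation: `model` is assumed to be a PyTorch classifier returning raw logits (before SoftMax or OpenMax), `target_logits` is a chosen target logit vector, and the step size and iteration count are placeholders.

```python
import torch

def logit_targeting_attack(model, x, target_logits, step_size=0.01, num_steps=50):
    """Iteratively perturb input x so the network's logits move toward target_logits.

    Hypothetical sketch: `model` maps an image batch to raw logits
    (pre-SoftMax/OpenMax); `target_logits` is the logit vector to approach.
    Step size and iteration count are illustrative, not taken from the paper.
    """
    x_adv = x.clone().detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)                               # activations before SoftMax/OpenMax
        loss = 0.5 * ((logits - target_logits) ** 2).sum()  # Euclidean distance to the target logits
        grad, = torch.autograd.grad(loss, x_adv)
        # Step against the gradient to reduce the distance; clamp to keep a valid image.
        x_adv = (x_adv - step_size * grad.sign()).detach().clamp(0.0, 1.0)
    return x_adv
```

Because the loss is defined on the logits rather than on the SoftMax or OpenMax probabilities, the same perturbation fools both output layers.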
Adversarial Robustness: Softmax versus Openmax
Andras Rozsa and Manuel Günther and Terrance E. Boult
arXiv e-Print archive, 2017
Keywords: cs.CV

Summary by David Stutz