The Limitations of Model Uncertainty in Adversarial Settings
Grosse et al. show that Gaussian processes (GPs) allow rejecting some adversarial examples based on their confidence and uncertainty estimates; however, attacks that maximize confidence while minimizing uncertainty still succeed. Although many state-of-the-art adversarial examples yield confidence and uncertainty estimates that differ significantly from those of benign examples, GPs can still be fooled by specially crafted adversarial examples. To this end, the attack explicitly maximizes the confidence of the target class and additionally constrains the uncertainty to be no larger than that of the corresponding benign test example. In experiments, this attack successfully fools Gaussian processes while producing imperceptible perturbations. Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
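The attack idea can be sketched on a toy problem: search for a perturbation that increases the GP's confidence in the target class while keeping predictive uncertainty no larger than at the benign input. This is a minimal illustration, not the paper's actual optimization; the 2-D data, the RBF length scale, the random-search loop, and the L∞ budget of 0.3 are all assumptions made for the sketch.

```python
import numpy as np

# --- Toy GP "classifier": regression on +/-1 labels with an RBF kernel ---
rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(60, 2))          # 2-D training inputs
y = np.sign(X[:, 0])                          # class = sign of first axis

def rbf(A, B, ls=0.5):
    # squared-exponential kernel between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

K_inv = np.linalg.inv(rbf(X, X) + 1e-2 * np.eye(len(X)))
alpha = K_inv @ y

def predict(x):
    """Posterior mean (confidence proxy) and std (uncertainty) at point x."""
    k = rbf(x[None], X)[0]
    mean = k @ alpha
    var = 1.0 - k @ K_inv @ k                 # RBF prior variance is 1
    return mean, np.sqrt(max(var, 0.0))

# --- Attack: maximize target-class confidence, cap uncertainty at the
#     benign level, stay within an imperceptibility budget (hypothetical
#     random-search stand-in for a gradient-based optimizer) ---
def attack(x, target=1.0, step=0.05, budget=0.3, iters=300):
    _, std0 = predict(x)                      # benign uncertainty bound
    best, best_conf = x.copy(), target * predict(x)[0]
    for _ in range(iters):
        cand = best + rng.uniform(-step, step, size=x.shape)
        if np.abs(cand - x).max() > budget:   # enforce L_inf budget
            continue
        mean, std = predict(cand)
        # accept only if more confident AND no more uncertain than benign
        if target * mean > best_conf and std <= std0:
            best, best_conf = cand, target * mean
    return best

x_benign = np.array([-0.15, 0.0])             # a class -1 input
x_adv = attack(x_benign, target=1.0)
```

By construction, the returned `x_adv` is never more uncertain than the benign input and never less confident in the target class, which is exactly the property that defeats a confidence/uncertainty-based rejection rule.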
Grosse, Kathrin and Pfaff, David and Smith, Michael T. and Backes, Michael
arXiv e-Print archive - 2018 via Local Bibsonomy
Keywords: dblp
Summary by David Stutz 5 months ago