Certified Robustness to Adversarial Examples with Differential Privacy
Paper summary: Lecuyer et al. propose a defense against adversarial examples based on differential privacy. Their main insight is that a differentially private prediction algorithm is also robust to small input perturbations. In practice, this amounts to injecting noise into some layer of the network (or into the image directly) and using Monte Carlo estimation to compute the expected prediction. The approach is compared to adversarial training against the Carlini-Wagner attack. Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
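The noise-injection and Monte Carlo estimation idea can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' PixelDP implementation: the function names, the Gaussian noise placement (here directly on the input), and the toy linear model are all assumptions for the sake of the example.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def noisy_expected_prediction(model, x, sigma=0.5, n_samples=100, rng=None):
    """Monte Carlo estimate of E[softmax(model(x + noise))] under
    Gaussian noise. `model` maps an input array to class logits;
    this signature is illustrative, not taken from the paper's code."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.zeros_like(softmax(model(x)))
    for _ in range(n_samples):
        noise = rng.normal(0.0, sigma, size=x.shape)
        probs += softmax(model(x + noise))
    return probs / n_samples

# Toy linear "model" standing in for a network's logit layer.
W = np.array([[1.0, -1.0], [0.5, 2.0]])
model = lambda x: x @ W
x = np.array([1.0, 0.0])
p = noisy_expected_prediction(model, x, sigma=0.1, n_samples=200)
```

At prediction time one would return `argmax(p)`; the paper's certification additionally bounds how much `p` can change under a bounded input perturbation, which the averaging over noise samples makes stable.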
Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, Suman Jana
arXiv e-Print archive, 2018
Keywords: stat.ML, cs.AI, cs.CR, cs.LG

Summary by David Stutz 4 months ago