Enhanced Attacks on Defensively Distilled Deep Neural Networks
Paper summary

Liu et al. propose a white-box attack against defensive distillation. In particular, the proposed attack combines the objective of the Carlini-Wagner attack [1] with a slightly different reparameterization that enforces an $L_\infty$-constraint on the perturbation. In experiments, defensive distillation is shown not to be robust against this attack.

[1] Nicholas Carlini, David A. Wagner: Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy 2017: 39-57.

Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
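To make the idea concrete, the following is a minimal sketch of a Carlini-Wagner-style targeted attack in which the perturbation is reparameterized so that its $L_\infty$-norm is bounded by construction. The `tanh`-based reparameterization `delta = eps * tanh(w)` is an assumption chosen for illustration; the paper's exact reparameterization, optimizer, and hyperparameters may differ.

```python
import torch

def linf_cw_attack(model, x, target, eps=0.03, kappa=0.0,
                   steps=100, lr=0.01):
    """Sketch: C&W-style targeted attack with an L_inf-bounded
    perturbation. delta = eps * tanh(w) guarantees
    ||delta||_inf <= eps for any unconstrained w."""
    w = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        delta = eps * torch.tanh(w)           # ||delta||_inf <= eps
        x_adv = torch.clamp(x + delta, 0, 1)  # keep inputs in valid range
        logits = model(x_adv)

        # C&W objective: push the target logit above the runner-up logit,
        # with confidence margin kappa.
        target_logit = logits.gather(1, target.unsqueeze(1)).squeeze(1)
        others = logits.clone()
        others.scatter_(1, target.unsqueeze(1), float('-inf'))
        other_logit = others.max(dim=1).values
        loss = torch.clamp(other_logit - target_logit, min=-kappa).sum()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.clamp(x + eps * torch.tanh(w), 0, 1).detach()
```

Compared to the original Carlini-Wagner formulation, which penalizes the perturbation norm in the objective, a hard reparameterization like this removes the norm term entirely: any iterate is feasible, so no line search over the trade-off constant is needed for the constraint.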
Enhanced Attacks on Defensively Distilled Deep Neural Networks
Liu, Yujia and Zhang, Weiming and Li, Shaohua and Yu, Nenghai
arXiv e-Print archive - 2017 via Local Bibsonomy


Summary by David Stutz 2 weeks ago

