Enhanced Attacks on Defensively Distilled Deep Neural Networks
Liu, Yujia and Zhang, Weiming and Li, Shaohua and Yu, Nenghai
arXiv e-Print archive - 2017 via Local Bibsonomy
Keywords:
dblp
Liu et al. propose a white-box attack against defensive distillation. In particular, the proposed attack combines the objective of the Carlini-Wagner attack [1] with a slightly different reparameterization to enforce an $L_\infty$-constraint on the perturbation. In experiments, defensive distillation is shown not to be robust against this attack.
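The reparameterization idea can be sketched as follows: instead of optimizing the perturbation $\delta$ directly under an $L_\infty$ constraint, one optimizes an unconstrained variable $w$ and maps it through a squashing function, so every $w$ yields a feasible perturbation. A minimal NumPy sketch under these assumptions (the function names and the toy margin loss are illustrative, not the authors' exact formulation):

```python
import numpy as np

def linf_perturbation(w, eps):
    """Map an unconstrained variable w to a perturbation with ||delta||_inf <= eps.

    tanh maps any real value into (-1, 1), so the constraint holds by construction
    and the optimizer never has to project back onto the feasible set.
    """
    return eps * np.tanh(w)

def cw_style_margin(logits, target, kappa=0.0):
    """Carlini-Wagner style margin: small when the target logit dominates.

    Returns max(max_{i != target} z_i - z_target, -kappa); minimizing this
    pushes the target class logit above all others by at least kappa.
    """
    target_logit = logits[target]
    best_other = np.max(np.delete(logits, target))
    return max(best_other - target_logit, -kappa)

# Toy check: an arbitrary (even large) w still gives a feasible perturbation.
rng = np.random.default_rng(0)
w = rng.normal(scale=10.0, size=5)
delta = linf_perturbation(w, eps=0.03)
assert np.all(np.abs(delta) <= 0.03)
```

In an actual attack, `w` would be updated by gradient descent on the margin loss evaluated at `x + linf_perturbation(w, eps)`; the sketch only shows why the reparameterization makes the $L_\infty$ constraint hold for free.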
[1] Nicholas Carlini, David A. Wagner: Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy 2017: 39-57
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).