Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
Paper summary

Liu et al. propose fine-pruning, a combination of weight pruning and fine-tuning that defends against backdoor attacks on neural networks. Specifically, they consider a setting where training is outsourced to a machine learning service: the attacker has access to the network and training set, but any change to the network architecture would be easily detected, so the attacker instead injects backdoors through data poisoning. As a defense against such attacks, the authors propose to identify and prune weights that are not used for the actual task but only for backdoored inputs. This defense can then be combined with fine-tuning and, as shown in experiments, makes backdoor attacks less effective – even against an attacker who is aware of the defense. Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
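The pruning step rests on the observation that backdoor behavior tends to live in units that stay dormant on clean inputs. A minimal numpy sketch of that step, assuming per-unit mean activations recorded on clean validation data; the function name, shapes, and pruning fraction are illustrative, not the paper's exact procedure:

```python
import numpy as np

def prune_dormant_units(weights, clean_activations, prune_fraction=0.2):
    """Zero out the units of one layer that are least active on clean data.

    weights: (n_units, n_inputs) weight matrix of the layer
    clean_activations: (n_samples, n_units) activations recorded while
        running clean (non-backdoored) validation inputs through the net
    """
    mean_act = clean_activations.mean(axis=0)
    n_prune = int(prune_fraction * weights.shape[0])
    # Units that barely fire on clean inputs are candidates for
    # hosting the backdoor, so they are pruned first.
    dormant = np.argsort(mean_act)[:n_prune]
    pruned = weights.copy()
    pruned[dormant, :] = 0.0
    return pruned, dormant
```

In fine-pruning, this pruning pass would be followed by fine-tuning the remaining weights on clean data, which the sketch does not cover.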
Liu, Kang and Dolan-Gavitt, Brendan and Garg, Siddharth
Springer RAID - 2018 via Local Bibsonomy


Summary by David Stutz