Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
Naveed Akhtar and Ajmal Mian
arXiv e-Print archive - 2018 via Local arXiv
Keywords: cs.CV

Summary by David Stutz

Akhtar and Mian present a comprehensive survey of attacks on and defenses for deep neural networks, specifically in computer vision. Published on arXiv in January 2018, but probably written prior to August 2017, the survey covers recent attacks and defenses. For example, Table 1 presents an overview of attacks on deep neural networks, categorized by knowledge, target, and perturbation measure. The authors also provide a strength measure in the form of a 1-5 star "rating". Personally, however, I am skeptical of this rating: many of the attacks have not been studied extensively (across a wide variety of defense mechanisms, tasks, and datasets). In comparison to the related survey [1], this overview is slightly less detailed: the attacks, for example, are described in less mathematical detail, and the categorization in Table 1 is less comprehensive.

https://i.imgur.com/cdAcivj.png
Table 1: Overview of the discussed attacks on deep neural networks.
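To make the categorization in Table 1 concrete: "knowledge" distinguishes white-box attacks (full access to the model and its gradients) from black-box ones, and the perturbation measure is typically an L_p norm bound on the change to the input. As a minimal sketch (my own illustration, not code from the survey), the fast gradient sign method (FGSM), one of the white-box, L-infinity-bounded attacks the survey discusses, fits in a few lines of PyTorch; the model, inputs, and epsilon below are placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Minimal one-step FGSM sketch: shift each pixel by +/- epsilon in the
    direction that increases the classification loss, i.e., a white-box
    attack with an L-infinity perturbation measure."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # The sign of the input gradient is the L-infinity-optimal direction.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```

Calling, e.g., fgsm_attack(model, images, labels, epsilon=8/255) on a batch of images in [0, 1] yields the perturbed batch; the iterative attacks cataloged in the survey essentially repeat this step with a smaller step size.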
[1] Xiaoyong Yuan, Pan He, Qile Zhu, Rajendra Rana Bhat, Xiaolin Li: Adversarial Examples: Attacks and Defenses for Deep Learning. CoRR abs/1712.07107 (2017).

Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).