On the Suitability of Lp-Norms for Creating and Preventing Adversarial Examples
Paper summary

Sharif et al. study the effectiveness of $L_p$ norms for creating adversarial perturbations. Their main discussion revolves around whether $L_p$ norms are sufficient and/or necessary for perceptual similarity, and their main conclusion is that $L_p$ norms are neither necessary nor sufficient to ensure it. For example, an adversarial example might lie within a specific $L_p$ ball, yet humans might still judge it as not similar enough to the originally attacked sample; on the other hand, some imperceptible perturbations extend well beyond any reasonable $L_p$ ball. Such transformations include, for example, small rotations or translations. These findings are interesting because they indicate that our current model, or approximation, of perceptual similarity is not meaningful in all cases.

Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
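The asymmetry above can be illustrated numerically: additive noise bounded in $L_\infty$ stays within its ball by construction, while an imperceptible one-pixel translation of an image with a sharp edge produces a huge $L_\infty$ difference. A minimal sketch, using a synthetic step-edge image as a hypothetical input (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 32x32 grayscale image with a vertical step edge, values in [0, 1].
x = np.zeros((32, 32))
x[:, 16:] = 1.0

# 1) A typical L_inf-bounded additive perturbation with epsilon = 8/255.
eps = 8 / 255
delta = rng.uniform(-eps, eps, size=x.shape)
x_adv = np.clip(x + delta, 0.0, 1.0)
linf_adv = np.abs(x_adv - x).max()  # <= eps by construction

# 2) An imperceptible geometric change: translate the image by one pixel.
x_shift = np.roll(x, shift=1, axis=1)
linf_shift = np.abs(x_shift - x).max()  # 1.0 at the edge columns

print(f"L_inf of additive noise:  {linf_adv:.3f}")
print(f"L_inf of one-pixel shift: {linf_shift:.3f}")
```

The one-pixel shift is visually negligible, yet its $L_\infty$ distance (1.0) is over 30 times the perturbation budget, which is exactly the sense in which $L_p$ bounds are not necessary for perceptual similarity.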
Sharif, Mahmood and Bauer, Lujo and Reiter, Michael K.
Conference on Computer Vision and Pattern Recognition - 2018 via Local Bibsonomy
Keywords: dblp


Summary by David Stutz 3 months ago