Analyzing the Robustness of Nearest Neighbors to Adversarial Examples

Paper summary

Wang et al. discuss the robustness of $k$-nearest neighbors against adversarial perturbations, providing both a theoretical analysis and a robust 1-nearest neighbor variant. Specifically, for small $k$ it is shown that nearest neighbor is usually not robust. Here, robustness is judged in a distributional sense: for fixed, small $k$, the distance from a training sample to the nearest adversarial example tends to zero, even as the training set size increases. For $k \in \mathcal{O}(dn \log n)$, however, it is shown that $k$-nearest neighbor can be robust; the proof, showing where the $dn \log n$ term comes from, can be found in the paper. Finally, they propose a simple but robust $1$-nearest neighbor algorithm. The main idea is to remove samples from the training set that cause adversarial examples; in particular, a minimum distance between any two samples with different labels is enforced (see the sketch below). Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
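To make the pruning idea concrete, here is a minimal sketch assuming Euclidean distance and a hypothetical separation threshold `min_sep`; it greedily drops samples that lie too close to an already-kept sample of a different label. This greedy version only illustrates the enforced-separation idea and is not the authors' exact selection procedure.

```python
import numpy as np

def prune_training_set(X, y, min_sep):
    """Greedily keep samples so that any two kept samples with
    different labels are at least `min_sep` apart (Euclidean).
    Simplified sketch of the separation constraint, not the
    paper's exact algorithm."""
    keep = []
    for i in range(len(X)):
        conflict = any(
            y[i] != y[j] and np.linalg.norm(X[i] - X[j]) < min_sep
            for j in keep
        )
        if not conflict:
            keep.append(i)
    return X[keep], y[keep]

# Usage: prune the training set, then run ordinary 1-nearest neighbor
# on the pruned set (toy data and threshold chosen for illustration).
X = np.random.rand(200, 2)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
X_pruned, y_pruned = prune_training_set(X, y, min_sep=0.1)
```

The effect is that the 1-nearest neighbor decision boundary on the pruned set stays at least `min_sep`/2 away from every kept training sample of the opposite class, which is what gives the classifier a guaranteed robustness radius around those points.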
Analyzing the Robustness of Nearest Neighbors to Adversarial Examples
Wang, Yizhen and Jha, Somesh and Chaudhuri, Kamalika
arXiv e-Print archive - 2017 via Local Bibsonomy
Keywords: dblp


Summary by David Stutz 3 months ago