Secure Kernel Machines against Evasion Attacks
Paper summary

Russu et al. discuss the robustness of linear and non-linear kernel machines through regularization. In particular, they show that linear classifiers can easily be regularized to be robust: robustness against $L_\infty$-bounded adversarial examples can be achieved through $L_1$ regularization of the weights. More generally, $L_p$-bounded attacks are countered by $L_q$ regularization of the weights, with $\frac{1}{p} + \frac{1}{q} = 1$. These insights are generalized to the case of non-linear kernel machines; I refer to the paper for details.
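The $L_p$/$L_q$ duality can be illustrated numerically. The sketch below (my own, not from the paper) checks that for a linear classifier $f(x) = w \cdot x + b$, the worst-case score drop under a perturbation with $\|\delta\|_\infty \le \epsilon$ equals $\epsilon \|w\|_1$, which is why penalizing $\|w\|_1$ directly bounds the adversarial effect:

```python
import numpy as np

# Sketch: Hölder's inequality gives max_{||d||_inf <= eps} |w . d| = eps * ||w||_1,
# i.e. the dual norm pairing with 1/p + 1/q = 1 (here p = inf, q = 1).
rng = np.random.default_rng(0)
w = rng.normal(size=5)   # weights of a linear classifier
x = rng.normal(size=5)   # some input
eps = 0.1                # L_inf perturbation budget

# Optimal L_inf attack: push every coordinate by eps against the sign of w.
d = -eps * np.sign(w)
score_drop = (w @ x) - (w @ (x + d))

# The drop matches the dual-norm bound exactly.
assert np.isclose(score_drop, eps * np.abs(w).sum())
```

Shrinking $\|w\|_1$ via regularization thus shrinks the attacker's best-case impact for a fixed $\epsilon$; the same argument with general $(p, q)$ dual pairs motivates the paper's result.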

Summary by David Stutz 1 year ago
