Convex Learning with Invariances
Paper summary Teo et al. propose a convex, robust learning framework that allows integrating invariances into SVM training. In particular, they consider a set of valid transformations and define the cost of a training sample (i.e., a pair of data point and label) as the loss under the worst-case transformation – this definition is very similar to robust optimization or adversarial training. Then, a convex upper bound on this cost is derived. Assuming that the worst-case transformation can be found efficiently, two different algorithms for minimizing this loss are proposed and discussed. The framework is evaluated on MNIST. However, considering error rate, the approach does not outperform data augmentation significantly (here, data augmentation means adding "virtual", i.e., transformed, samples to the training set). However, the proposed method seems to find sparser solutions, requiring fewer support vectors. Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
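The core idea – scoring each training sample by its loss under the worst-case transformation – can be sketched for a linear SVM with a finite transformation set. This is an illustrative toy, not the authors' algorithm; the function name, the 2-D point, and the shift transformations are all made up for the example:

```python
import numpy as np

def worst_case_hinge_loss(w, x, y, transforms):
    """Hinge loss of a linear classifier w on sample (x, y),
    maximized over a finite set of candidate transformations.
    Mirrors the worst-case (robust) cost definition: the sample's
    cost is its loss under the most adversarial valid transform."""
    losses = [max(0.0, 1.0 - y * np.dot(w, t(x))) for t in transforms]
    return max(losses)

# Toy example (hypothetical data): horizontal shifts of a 2-D point.
w = np.array([1.0, -0.5])
x = np.array([2.0, 1.0])
y = 1
shifts = [np.array([dx, 0.0]) for dx in (-1.0, 0.0, 1.0)]
transforms = [lambda v, s=s: v + s for s in shifts]

print(worst_case_hinge_loss(w, x, y, transforms))  # worst case is the -1 shift
```

Minimizing the sum of such worst-case losses over the training set is what the paper bounds convexly; with an infinite or continuous transformation set, the inner maximization must itself be solved (efficiently) at each step, which is the assumption the two proposed algorithms rest on.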

Summary by David Stutz

