Adversarial examples in the physical world
Paper summary

Adversarial examples are datapoints designed to fool a classifier. For example, we can take an image that a neural network classifies correctly, then backprop through the model to find which changes to the input would make it be classified as something else. These changes can be so small that a human would hardly notice a difference.

[Image: https://i.imgur.com/pkK570X.png – Examples of adversarial images.]

In this paper, the authors show that much of this property survives even when the images reach the classifier from the physical world, i.e. after being printed and photographed with a cell phone camera. When images are fed to the classifier directly, accuracy drops from 85.3% on clean images to 36.3% once adversarial modifications are applied; when the images go through the print-and-photograph pipeline first, accuracy still drops from 79.8% to 36.4%. They also propose two modifications to the process of generating adversarial images: making it a more gradual, iterative process, and optimising for a specific adversarial class. Both are sketched in code below.
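To make the two proposed modifications concrete, here is a minimal sketch of the one-step fast gradient sign attack and the iterative least-likely-class variant described above. It assumes a PyTorch classifier that returns logits and images with pixel values in [0, 1]; the function names, step size alpha, and iteration count are illustrative choices, not values taken from the paper.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step fast gradient sign method: perturb the input in the
    direction that increases the loss for the true label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

def iterative_least_likely(model, x, eps, alpha=1/255, n_iter=10):
    """Iterative least-likely-class method: take repeated small
    signed-gradient steps that *decrease* the loss for the class the
    model considers least likely, keeping the total perturbation
    inside an eps-ball around the original image."""
    with torch.no_grad():
        y_ll = model(x).argmin(dim=1)  # least-likely class per image
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_ll)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # step towards the target class, i.e. down the loss surface
        x_adv = x_adv - alpha * grad.sign()
        # project back into the eps-ball and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv

The gradual, clipped steps are what make the perturbations survive quantisation to 8-bit pixels and the camera pipeline better than a single large step would.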
Adversarial examples in the physical world
Alexey Kurakin and Ian Goodfellow and Samy Bengio
arXiv e-Print archive, 2016
Keywords: cs.CV, cs.CR, cs.LG, stat.ML

Summary by Marek Rei