Adversarial Attacks on Neural Network Policies
Paper summary
Huang et al. study adversarial attacks on reinforcement learning policies. In contrast to supervised learning, a reward may not be available at every time step, so there is no obvious per-step objective to attack; such an objective is essential for crafting adversarial examples, which are typically computed by maximizing the training loss. To sidestep this problem, Huang et al. assume a well-trained policy that outputs a distribution over actions. Adversarial examples can then be computed by maximizing the cross-entropy loss, taking the policy's most likely action on the clean input as the ground-truth label.
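Below is a minimal sketch of this idea as a fast-gradient-sign (FGSM) attack in PyTorch. The small MLP policy, observation dimensions, and epsilon are illustrative placeholders, not the architectures or settings from the paper; the sketch only shows the core step of using the policy's own most likely action as the label for the cross-entropy loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    """Placeholder policy: maps an observation to action logits."""
    def __init__(self, obs_dim=16, num_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def fgsm_attack(policy, obs, epsilon=0.01):
    """One signed-gradient step that makes the policy less likely
    to take the action it prefers on the clean observation."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    # The most likely action under the clean observation serves as the label.
    target = logits.argmax(dim=-1).detach()
    loss = F.cross_entropy(logits, target)
    loss.backward()
    # Ascend the loss, with the perturbation bounded by epsilon per dimension.
    return (obs + epsilon * obs.grad.sign()).detach()

policy = Policy()
obs = torch.randn(1, 16)
adv_obs = fgsm_attack(policy, obs)
print("clean action:", policy(obs).argmax(-1).item(),
      "adversarial action:", policy(adv_obs).argmax(-1).item())
```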
Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, Pieter Abbeel
arXiv e-Print archive, 2017
Keywords: cs.LG, cs.CR, stat.ML


Summary by David Stutz