NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles
Paper summary: Lu et al. present experiments on adversarial examples in the real world, i.e., after printing them and photographing them again. Personally, I find it interesting that researchers are studying how networks can be fooled by physically perturbed images. For me, one of the main conclusions is that it is very hard to evaluate the robustness of networks against physical perturbations. Often it is unclear whether changed lighting conditions, distances, or viewpoints would have caused the network to fail on their own – in which case the adversarial perturbation did not cause the failure. This summary can also be found at [davidstutz.de](https://davidstutz.de/category/reading/).
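The attribution problem above can be made concrete with a small sketch: a failure should only be credited to the adversarial perturbation when the clean image is still detected under the *same* viewing condition. All names here (`detect`, the condition labels) are hypothetical illustrations, not code or data from the paper:

```python
# Hedged sketch: attributing detector failures to the perturbation
# rather than to the viewing condition itself.
# `detect` is a toy stand-in for a real object detector.

def detect(image_id: str, condition: str) -> bool:
    # Toy ground truth: the clean sign is already missed at long
    # distance, so the far-away failure is NOT adversarial.
    table = {
        ("clean", "near"): True,
        ("clean", "far"): False,
        ("adv", "near"): False,
        ("adv", "far"): False,
    }
    return table[(image_id, condition)]

def adversarial_failures(conditions):
    # A failure counts as adversarial only when the clean image IS
    # detected under the same condition but the perturbed one is not.
    return [c for c in conditions
            if detect("clean", c) and not detect("adv", c)]

print(adversarial_failures(["near", "far"]))  # → ['near']
```

Without such a controlled comparison, a drop in detection rate under new lighting or viewpoints can be mistaken for a successful physical attack.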
Jiajun Lu and Hussein Sibai and Evan Fabry and David Forsyth
arXiv e-Print archive - 2017 via Local arXiv
Keywords: cs.CV, cs.AI, cs.CR


Summary by David Stutz