Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
Paper summary

Su et al. present an extensive robustness study of 18 different ImageNet networks, including popular architectures such as AlexNet, VGG, Inception, and ResNet. Their main result shows a trade-off between robustness and accuracy; a possible explanation is that recent increases in accuracy are only possible when sacrificing network robustness. In particular, as shown in Figure 1, robustness scales linearly in the logarithm of the classification error (note that Figure 1 plots accuracy instead). Here, robustness is measured as the $L_2$ distortion that Carlini&Wagner attacks need to cause a misclassification. However, it can also be seen that the regressed line (red) relies mainly on the better robustness of AlexNet and VGG 16/19 compared to all other networks. Therefore, I find it questionable whether this trade-off generalizes to other tasks or to deep learning in general.

https://i.imgur.com/ss7EZwV.png

Figure 1: $L_2$ pixel distortion of Carlini&Wagner attacks – as an indicator of robustness – plotted against the top-1 accuracy on ImageNet for the 18 different architectures listed in the legend.

Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
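To make the robustness measure and the reported scaling concrete, here is a minimal sketch that computes the per-example $L_2$ distortion from precomputed Carlini&Wagner adversarials and fits distortion against the logarithm of the classification error. The function names and data layout are my own assumptions, not the authors' implementation:

```python
import numpy as np

def l2_distortion(originals, adversarials):
    """Per-example L2 distortion ||x_adv - x||_2, used here as a robustness proxy.

    originals, adversarials: arrays of shape (N, H, W, C) with pixel values,
    where adversarials are precomputed Carlini&Wagner adversarial examples.
    """
    diff = (adversarials - originals).reshape(len(originals), -1)
    return np.linalg.norm(diff, axis=1)

def fit_log_linear(top1_accuracy, mean_distortion):
    """Least-squares fit of distortion ~ a * log(error) + b with error = 1 - accuracy.

    top1_accuracy:   per-model top-1 accuracies in [0, 1] (one entry per network)
    mean_distortion: per-model mean CW L2 distortion (same order)
    Returns slope a and intercept b of the fit.
    """
    log_error = np.log(1.0 - np.asarray(top1_accuracy, dtype=float))
    a, b = np.polyfit(log_error, np.asarray(mean_distortion, dtype=float), deg=1)
    return a, b
```

Feeding the 18 models' top-1 accuracies and mean distortions into `fit_log_linear` would yield a fit analogous to the regressed line discussed above; the slope quantifies how quickly the required distortion shrinks as the classification error decreases.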
Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, Yupeng Gao. Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models. Computer Vision – ECCV 2018.
Summary by David Stutz