Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
Hendrycks and Dietterich propose ImageNet-C and ImageNet-P, two benchmarks for evaluating robustness to corruptions and perturbations. Both datasets come in several sizes, and each corruption is applied at multiple severity levels. The corruptions include many common, realistic noise types, such as several kinds of blur, random noise, brightness changes, and compression artifacts. ImageNet-P differs from ImageNet-C in that it generates sequences of perturbations: for each perturbation type, 30 consecutive frames are produced; as a consequence, fewer corruption types are used in total.

The remainder of the paper introduces various evaluation metrics; these are generally based on whether the model's prediction on the corrupted image changes, e.g. flips between consecutive perturbation frames. Finally, the authors also highlight some approaches for obtaining models that are more "robust" to these corruptions. The list includes a variant of histogram equalization used to normalize the input images, the use of multi-scale or feature-aggregation architectures and, surprisingly, adversarial logit pairing. Examples of ImageNet-C images can be found in Figure 1.

![Examples of images in ImageNet-C.](https://i.imgur.com/YRBOzrH.jpg)

Figure 1: Examples of images in ImageNet-C.

Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
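The two metric ideas above can be sketched in a few lines. The paper's Corruption Error sums a model's error rates over severity levels and normalizes by a baseline's (AlexNet in the paper); the ImageNet-P flip probability counts how often the top-1 prediction changes between consecutive frames. This is a minimal sketch, not the authors' code; the function names are ours.

```python
import numpy as np

def corruption_error(model_errors, baseline_errors):
    """Error rates summed over severity levels, normalized by a
    baseline model's errors (AlexNet in the paper). Averaging this
    quantity over all corruption types yields the mCE."""
    return sum(model_errors) / sum(baseline_errors)

def flip_probability(predictions):
    """Fraction of consecutive frames in a perturbation sequence on
    which the model's top-1 prediction changes (sketch of the
    ImageNet-P flip-probability idea)."""
    preds = np.asarray(predictions)
    flips = preds[1:] != preds[:-1]  # compare each frame to the previous one
    return float(flips.mean())

# A model half as error-prone as the baseline gets a CE of 0.5.
print(corruption_error([0.2, 0.4, 0.6], [0.4, 0.8, 1.2]))

# A 30-frame sequence whose prediction flips away and back once:
seq = [3] * 10 + [7] + [3] * 19
print(flip_probability(seq))  # 2 flips over 29 frame transitions
```

A perfectly stable model has flip probability 0, so lower is better for both metrics.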
Hendrycks, Dan and Dietterich, Thomas G.
International Conference on Learning Representations, 2019


Summary by David Stutz

