Bayesian Uncertainty Estimation for Batch Normalized Deep Networks
Teye et al. show that neural networks with batch normalization can provide uncertainty estimates through Monte Carlo sampling. In particular, instead of running batch normalization in test mode, where the statistics (mean and variance) of each batch normalization layer are fixed, these statistics are computed per batch, as in training mode. To this end, for a specific query image, random batches are sampled from the training set, and the mean and variance of the resulting predictions are computed as Monte Carlo estimates; the variance quantifies the prediction uncertainty. This procedure, the proposed Monte Carlo Batch Normalization (MCBN) method, is summarized in Algorithm 1. In the paper, this approach is further interpreted as approximate inference in Bayesian models.

![Algorithm 1](https://i.imgur.com/nRdOvzs.jpg)

Algorithm 1: Monte Carlo approach for using batch normalization for uncertainty estimation.
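Concretely, in my own notation: let $f(x^\ast; \omega_t)$ denote the network output for a query $x^\ast$ when the batch normalization statistics $\omega_t$ are those induced by the $t$-th sampled training batch. Over $T$ such stochastic forward passes, the predictive mean and variance are estimated as

$\hat{\mu} = \frac{1}{T} \sum_{t=1}^T f(x^\ast; \omega_t), \quad \hat{\sigma}^2 = \frac{1}{T} \sum_{t=1}^T f(x^\ast; \omega_t)^2 - \hat{\mu}^2$

(for regression, the paper additionally accounts for observation noise in the variance). Below is a minimal PyTorch sketch of the procedure; the function name `mcbn_predict` and its parameters are my own choices for illustration, not from the authors' code. It exploits the fact that with BN momentum set to 1, each layer's running statistics equal those of the most recently forwarded batch:

```python
import torch
from torch import nn


@torch.no_grad()
def mcbn_predict(model, x_query, train_loader, num_samples=50):
    # Hypothetical helper sketching MCBN. With momentum=1.0, each BN
    # layer's running statistics become exactly the statistics of the
    # most recently forwarded batch.
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.momentum = 1.0

    preds = []
    for _ in range(num_samples):
        # Draw a random training batch (the loader is assumed to shuffle).
        batch_x, _ = next(iter(train_loader))
        model.train()   # recompute BN statistics from this batch ...
        model(batch_x)  # ... via a forward pass whose output is discarded
        model.eval()    # the query is normalized with the sampled statistics
        preds.append(model(x_query))

    preds = torch.stack(preds)  # shape: (num_samples, n_query, ...)
    return preds.mean(dim=0), preds.var(dim=0)
```

Note that `model.train()` also toggles other stochastic layers such as dropout; for a network regularized only by batch normalization, this matches the setting considered in the paper.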
Teye, Mattias; Azizpour, Hossein; Smith, Kevin. "Bayesian Uncertainty Estimation for Batch Normalized Deep Networks." International Conference on Machine Learning (ICML), 2018.


Summary by David Stutz. Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).