Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling
Paper summary

This paper presents a new method, the "covariance-controlled adaptive Langevin thermostat" (CCAdL), for MCMC posterior sampling in Bayesian inference. In line with previous work on scalable MCMC, it is a stochastic-gradient sampling method: gradients of the log-posterior are estimated from minibatches of the data, which introduces parameter-dependent noise into the dynamics. The method aims to dissipate this noise, speeding up convergence of the Markov chain and producing effective samples more efficiently, while preserving the desired invariant distribution. Like existing stochastic-gradient MCMC methods, it is intended for large-scale machine learning settings, i.e. Bayesian inference with large numbers of observations. Experiments on three models (a normal-gamma model, Bayesian logistic regression, and a discriminative restricted Boltzmann machine) aim to show that the method outperforms Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) \cite{10.1016/0370-2693(87)91197-X} and the Stochastic Gradient Nosé-Hoover Thermostat (SGNHT), two similar existing methods.
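To make the dynamics concrete, here is a minimal sketch (in Python/NumPy) of the generic SGNHT-style update that the paper builds on: a Langevin momentum step whose friction is adapted by a thermostat variable so that unknown gradient noise is absorbed. This is an illustrative baseline, not the paper's exact CCAdL integrator; the function name `sgnht_step`, the noisy gradient `grad_log_post`, and the parameters `h` and `A` are assumptions for the sketch.

```python
import numpy as np

def sgnht_step(theta, p, xi, grad_log_post, h=1e-3, A=1.0, rng=None):
    """One step of a stochastic-gradient Nose-Hoover thermostat (SGNHT).

    Illustrative sketch: `grad_log_post` is assumed to return a noisy
    minibatch estimate of the log-posterior gradient, `h` is the step
    size, and `A` sets the injected-noise amplitude. CCAdL, by contrast,
    additionally estimates the covariance of the gradient noise and
    shapes the injected noise to compensate for it.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = theta.size
    # Momentum: noisy gradient force, friction from the thermostat
    # variable xi, and injected Gaussian noise of variance 2*A*h.
    p = (p + h * grad_log_post(theta) - h * xi * p
         + np.sqrt(2.0 * A * h) * rng.standard_normal(n))
    theta = theta + h * p
    # Thermostat: adapt xi so the empirical kinetic energy p @ p / n is
    # driven toward its target value of 1 (unit temperature).
    xi = xi + h * (p @ p / n - 1.0)
    return theta, p, xi
```

The thermostat variable `xi` is what lets this family of methods tolerate gradient noise without a Metropolis correction; the paper's contribution is to control the parameter-dependent part of that noise explicitly via a covariance estimate rather than relying on the thermostat alone.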
Xiaocheng Shang, Zhanxing Zhu, Benedict J. Leimkuhler, Amos J. Storkey
Neural Information Processing Systems (NIPS), 2015

