Bounding the Test Log-Likelihood of Generative Models
#### Problem addressed:
Evaluation and comparison of generative models.

#### Summary:
This paper improves upon an existing non-parametric estimator of test likelihood by sampling the hidden variables instead of the visible features. The authors present an estimator that is unbiased for the test likelihood and prove that it converges to the true log-likelihood as the number of samples grows. They also prove that the expected value of the log of this estimator is a lower bound on the true test log-likelihood, so the estimate is conservative (as sketched below). They further present a biased estimator based on a different sampling scheme, and empirically validate the estimators on MNIST across several generative models.

#### Novelty:
Sampling from the hidden space for non-parametric likelihood estimation.

#### Drawbacks:
The method applies only to models with hidden variables, and its application to deep networks is unclear. The procedure for sampling the hidden variables is not spelled out explicitly, and the method assumes that P(x|h) can be computed cheaply from the model.

#### Datasets:
MNIST

#### Resources:
Paper: http://arxiv.org/pdf/1311.6184v4.pdf

#### Presenter:
Bhargava U. Kota
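As a worked sketch of the bound described above (my notation, not necessarily the paper's: $x$ is a test example, and $h^{(1)}, \dots, h^{(S)}$ are samples of the hidden variables drawn from the model's marginal $P(h)$, e.g. via a Markov chain):

$$
\hat{P}(x) \;=\; \frac{1}{S} \sum_{s=1}^{S} P\!\left(x \mid h^{(s)}\right), \qquad h^{(s)} \sim P(h),
$$

which is unbiased for the likelihood, since $\mathbb{E}\big[\hat{P}(x)\big] = \sum_h P(h)\,P(x \mid h) = P(x)$. Jensen's inequality then gives

$$
\mathbb{E}\big[\log \hat{P}(x)\big] \;\le\; \log \mathbb{E}\big[\hat{P}(x)\big] \;=\; \log P(x),
$$

so the expected log of the estimator lower-bounds the true test log-likelihood, with the gap vanishing as $S \to \infty$.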
Yoshua Bengio, Li Yao and Kyunghyun Cho
arXiv e-Print archive, 2013
Keywords: cs.LG
