Iterative Neural Autoregressive Distribution Estimator NADE-k
Paper summary

#### Problem addressed:
Learning fully visible Bayesian networks (autoregressive density estimation)

#### Summary:
This paper is closely related to the order-agnostic NADE paper: it generalizes the order-agnostic NADE idea and extends the prediction to k iterations. The differences from the previous NADE work are: 1) instead of completely masking out the variables to be predicted, it provides the data mean for those variables; 2) the mask is not supplied to the network; 3) it employs a walk-back-like scheme in which the prediction is refined over k iterations (a minimal sketch of this refinement follows the summary).

#### Novelty:
It is a generalization of NADE models.

#### Drawbacks:
Training would be slow, and with a large k the challenge of training very deep networks remains.

#### Datasets:
Binarized MNIST, Caltech-101 Silhouettes

#### Additional remarks:

#### Resources:
Implementation is at https://github.com/yaoli/nade_k

#### Presenter:
Yingbo Zhou
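The k-step refinement described above can be sketched in a few lines of NumPy. This is a minimal, hypothetical single-hidden-layer variant based only on the summary's description (missing values initialized with the data mean, estimates refined for k iterations, no mask fed as an extra input); the names `nade_k_predict`, `W`, `c`, `V`, `b` are illustrative, and this is not the authors' actual Theano implementation, which is at the repository linked under Resources.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nade_k_predict(x, mask, data_mean, W, c, V, b, k):
    """Hypothetical minimal k-step refinement.

    x         : (D,) binary input vector
    mask      : (D,) 1 for observed dims, 0 for dims to be predicted
    data_mean : (D,) marginal mean of the training data
    k         : number of refinement iterations
    Returns predicted probabilities for all dims after k steps.
    """
    # Step 0: observed dims keep their values, missing dims start at the data mean
    v = mask * x + (1 - mask) * data_mean
    for _ in range(k):
        h = sigmoid(W @ v + c)       # hidden activations
        p = sigmoid(V @ h + b)       # re-estimated probabilities for every dim
        # clamp the observed dims, feed refined estimates back in for missing dims
        v = mask * x + (1 - mask) * p
    return p

# Toy usage: D=8 visible units, H=16 hidden units, k=3 refinement steps
D, H, k = 8, 16, 3
W = 0.01 * rng.standard_normal((H, D)); c = np.zeros(H)
V = 0.01 * rng.standard_normal((D, H)); b = np.zeros(D)
x = (rng.random(D) < 0.5).astype(float)
mask = (rng.random(D) < 0.5).astype(float)   # a random subset of dims is observed
data_mean = np.full(D, 0.5)
p = nade_k_predict(x, mask, data_mean, W, c, V, b, k)
# Training would minimize the cross-entropy of x on the missing dims:
nll = -np.sum((1 - mask) * (x * np.log(p) + (1 - x) * np.log(1 - p)))
```

Note that the mask here only clamps the observed values between iterations; it is never concatenated to the network input, matching point 2 of the summary.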
Raiko, Tapani and Li, Yao and Cho, KyungHyun and Bengio, Yoshua
Neural Information Processing Systems Conference - 2014 via Local Bibsonomy
Keywords: dblp

