Stochastic Ratio Matching of RBMs for Sparse High-Dimensional Inputs
Paper summary

This paper develops an algorithm that can successfully train RBMs on very high-dimensional but sparse binary input data, such as often arises in NLP problems. The proposed approach uses subsampling to speed up ratio matching training, adapting the method previously developed by Dauphin et al. (2011) for denoising autoencoders to RBMs. The authors present extensive experimental results verifying that their method learns a good generative model, provides unbiased gradient estimates, attains a two-order-of-magnitude speedup on large sparse problems relative to the standard implementation, and yields state-of-the-art performance on a number of NLP tasks. They also document the curious result that using a biased version of their estimator in fact leads to better performance on the classification tasks they tested.
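The subsampling idea behind the speedup can be sketched as follows: ratio matching sums a per-dimension term over every input bit, which is expensive when inputs have many dimensions; since most bits of a sparse input are zero, the terms for the non-zero bits can be computed exactly while the zero-bit contribution is estimated from a small uniform sample, reweighted so the estimate of the full sum stays unbiased. A minimal NumPy sketch, not the authors' code: the free energy and per-dimension term follow one common form of ratio matching for binary RBMs (conventions vary), and `k`, `rm_term`, and the variable names are illustrative.

```python
import numpy as np

def free_energy(x, W, b, c):
    """RBM free energy F(x) = -b.x - sum_j softplus(c_j + (x W)_j)."""
    return -x @ b - np.sum(np.logaddexp(0.0, c + x @ W))

def rm_term(x, d, W, b, c):
    """Per-dimension ratio-matching term g(p(x)/p(x_flip))^2 with
    g(u) = 1/(1+u), using p(x)/p(x_flip) = exp(F(x_flip) - F(x))."""
    x_flip = x.copy()
    x_flip[d] = 1.0 - x_flip[d]
    ratio = np.exp(free_energy(x_flip, W, b, c) - free_energy(x, W, b, c))
    return (1.0 / (1.0 + ratio)) ** 2

def stochastic_rm_loss(x, W, b, c, k, rng):
    """Estimate of sum_d rm_term(x, d): the non-zero dimensions are
    summed exactly; k zero dimensions are sampled uniformly without
    replacement and reweighted by n_zeros / k for unbiasedness."""
    nz = np.flatnonzero(x)
    zeros = np.flatnonzero(x == 0)
    loss = sum(rm_term(x, d, W, b, c) for d in nz)
    if len(zeros) and k > 0:
        sampled = rng.choice(zeros, size=min(k, len(zeros)), replace=False)
        loss += (len(zeros) / len(sampled)) * sum(
            rm_term(x, d, W, b, c) for d in sampled)
    return loss
```

When `k` equals the number of zero dimensions the estimate reduces to the exact ratio-matching sum; for sparse inputs a small `k` covers the dominant cost, which is where the reported speedup comes from.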
Dauphin, Yann and Bengio, Yoshua
Neural Information Processing Systems (NIPS), 2013