Joint Training Deep Boltzmann Machines for Classification
Paper summary
The authors introduce a new method for training deep Boltzmann machines (DBMs). Inspired by the mean-field inference procedure, they recast the model as a two-hidden-layer autoencoder with recurrent connections. Instead of reconstructing all pixels from all (possibly corrupted) pixels, they reconstruct one subset of pixels from its complement. DBMs are usually "pre-trained" layer-wise using RBMs, a conceivably suboptimal procedure. Here the authors propose a deterministic criterion that essentially turns the DBM into an RNN. This RNN is trained with a loss resembling that of denoising autoencoders: some inputs are dropped at random, and the task is to predict their values from the observed ones.
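A minimal sketch of the multi-prediction idea described above, not the authors' exact procedure: sample a random subset of visible units to predict, unroll a few mean-field-style recurrent updates through a two-hidden-layer model, and backpropagate a denoising-style loss on the masked subset. Layer sizes, the number of unrolled steps, and optimizer settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MPDBMSketch(nn.Module):
    """Two-hidden-layer DBM unrolled as a recurrent denoising autoencoder (illustrative)."""
    def __init__(self, n_vis=784, n_h1=500, n_h2=500, n_steps=5):
        super().__init__()
        self.W1 = nn.Parameter(0.01 * torch.randn(n_vis, n_h1))  # visible <-> h1 weights
        self.W2 = nn.Parameter(0.01 * torch.randn(n_h1, n_h2))   # h1 <-> h2 weights
        self.b_v = nn.Parameter(torch.zeros(n_vis))
        self.b_1 = nn.Parameter(torch.zeros(n_h1))
        self.b_2 = nn.Parameter(torch.zeros(n_h2))
        self.n_steps = n_steps

    def forward(self, v, mask):
        # mask == 1 marks the pixels to be predicted; their true values are hidden.
        v_hat = v * (1 - mask)                      # start from the observed subset only
        h1 = torch.zeros(v.size(0), self.W1.size(1), device=v.device)
        h2 = torch.zeros(v.size(0), self.W2.size(1), device=v.device)
        v_pred = v_hat
        for _ in range(self.n_steps):               # unrolled mean-field-style updates
            h1 = torch.sigmoid(v_hat @ self.W1 + h2 @ self.W2.t() + self.b_1)
            h2 = torch.sigmoid(h1 @ self.W2 + self.b_2)
            v_pred = torch.sigmoid(h1 @ self.W1.t() + self.b_v)
            # keep observed pixels clamped, fill in the masked ones with predictions
            v_hat = v * (1 - mask) + v_pred * mask
        return v_pred

model = MPDBMSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
v = torch.rand(32, 784)                             # toy batch standing in for images
mask = (torch.rand_like(v) < 0.5).float()           # random subset of pixels to predict
v_pred = model(v, mask)
# loss only on the masked subset, as in the denoising-style objective
loss = nn.functional.binary_cross_entropy(v_pred * mask, v * mask)
loss.backward()
opt.step()
```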
Joint Training Deep Boltzmann Machines for Classification
Goodfellow, Ian J. and Courville, Aaron C. and Bengio, Yoshua
arXiv e-Print archive, 2013