Variational Neural Machine Translation
Paper summary

They start with the attention-based neural machine translation model of Bahdanau et al. (2014) and add an extra variational component. The authors use two neural variational components to model a distribution over a latent variable z that captures the semantics of the sentence being translated. First, they model the posterior probability of z, conditioned on both the input and the output. They also model the prior of z, conditioned only on the input. During training, these two distributions are pushed together by minimising the Kullback-Leibler divergence between them; during testing, the prior is used. They report improvements on Chinese-English and English-German translation compared to the original encoder-decoder NMT framework.
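As a rough illustration of the variational component described above: both the posterior q(z|x,y) and the prior p(z|x) are typically parameterised as diagonal Gaussians, so the KL term has a closed form, and training samples z from the posterior via the reparameterisation trick while testing uses the prior. The sketch below (names and dimensions are my own assumptions, not from the paper) shows these two pieces in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ) for
    diagonal Gaussians, summed over latent dimensions."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def sample_z(mu, logvar):
    """Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Toy 4-dimensional latent space (hypothetical values, for illustration).
mu_q, logvar_q = np.array([0.1, -0.2, 0.0, 0.3]), np.zeros(4)  # q(z|x,y)
mu_p, logvar_p = np.zeros(4), np.zeros(4)                      # p(z|x)

kl = kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)  # added to the loss
z_train = sample_z(mu_q, logvar_q)  # training: sample from the posterior
z_test = mu_p                       # testing: only the prior is available
```

In the actual model these means and log-variances would be produced by neural networks from the encoder states, and z would condition the decoder; here they are fixed toy vectors.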

Summary by Marek Rei 3 years ago
