On the State of the Art of Evaluation in Neural Language Models
Paper summary: A comparison of three recurrent architectures for language modelling: LSTMs, Recurrent Highway Networks, and the NAS architecture. Each model undergoes a substantial hyperparameter search under the constraint that the total number of parameters is kept constant. The authors conclude that standard LSTMs, when properly tuned, still outperform the other architectures and achieve state-of-the-art perplexities on two datasets.
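The fixed-budget constraint can be sketched as follows: given a parameter budget, choose the largest hidden size whose language model still fits within it, so that architectures are compared at equal capacity. The vocabulary size, embedding size, 10M budget, and single-layer untied-softmax model below are illustrative assumptions, not the paper's exact settings.

```python
def lstm_params(n_in, n_hidden):
    """Parameters of a single LSTM layer: four gates, each with
    input weights, recurrent weights, and a bias vector."""
    return 4 * (n_hidden * n_in + n_hidden * n_hidden + n_hidden)

def lm_params(vocab, emb, hidden):
    """Total parameters of a one-layer LSTM language model:
    embedding table + LSTM layer + (untied) output projection."""
    return vocab * emb + lstm_params(emb, hidden) + hidden * vocab + vocab

# Illustrative settings (assumptions, not the paper's exact values).
BUDGET = 10_000_000
VOCAB, EMB = 10_000, 400

# Largest hidden size that still fits the budget.
hidden = max(h for h in range(1, 2048) if lm_params(VOCAB, EMB, h) <= BUDGET)
print(hidden, lm_params(VOCAB, EMB, hidden))
```

A hyperparameter search would then tune learning rate, dropout, etc. for each architecture at this fixed size, making the comparison about the architecture rather than raw parameter count.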
Gábor Melis, Chris Dyer and Phil Blunsom
arXiv e-Print archive, 2017
Keywords: cs.CL


Summary by Marek Rei