Learning to Compose Words into Sentences with Reinforcement Learning
Paper summary
The aim is to have the system discover a parsing strategy that benefits a downstream task.

https://i.imgur.com/q57gGCz.png

They construct a neural shift-reduce parser: as it moves through the sentence, it can either shift the next word onto the stack or reduce the top two elements of the stack by combining them. A Tree-LSTM composes the nodes recursively. The whole system is trained with reinforcement learning, using the downstream task's objective as the reward. The model learns parse rules that are beneficial for that specific task, either without any prior knowledge of parsing or after first being trained to act as a regular parser.
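To make the mechanism concrete, here is a minimal sketch of one shift-reduce episode with Tree-LSTM composition, assuming PyTorch; the names (TreeLSTMCell, policy, parse_episode, DIM) are illustrative and not taken from the authors' code.

```python
# Illustrative sketch of a shift-reduce episode with Tree-LSTM composition.
# Not the authors' implementation; module and variable names are assumptions.
import torch
import torch.nn as nn

DIM = 64

class TreeLSTMCell(nn.Module):
    """Binary Tree-LSTM composition: combines two child (h, c) pairs into a parent."""
    def __init__(self, dim):
        super().__init__()
        # One linear map yields the input gate, output gate, two forget gates,
        # and the cell candidate for the parent node.
        self.W = nn.Linear(2 * dim, 5 * dim)

    def forward(self, left, right):
        hl, cl = left
        hr, cr = right
        i, o, fl, fr, g = self.W(torch.cat([hl, hr], dim=-1)).chunk(5, dim=-1)
        c = torch.sigmoid(fl) * cl + torch.sigmoid(fr) * cr \
            + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

compose = TreeLSTMCell(DIM)
# Policy scores SHIFT (0) vs. REDUCE (1) from the top-of-stack and buffer-front states.
policy = nn.Linear(2 * DIM, 2)

def parse_episode(word_vectors):
    """Run one shift-reduce episode; return the sentence vector and action log-probs."""
    buffer = [(w, torch.zeros(DIM)) for w in word_vectors]  # (h, c) per word
    stack, log_probs = [], []
    while buffer or len(stack) > 1:
        can_shift, can_reduce = bool(buffer), len(stack) >= 2
        if can_shift and can_reduce:
            state = torch.cat([stack[-1][0], buffer[0][0]], dim=-1)
            dist = torch.distributions.Categorical(logits=policy(state))
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
        else:
            action = torch.tensor(0 if can_shift else 1)  # only one legal move
        if action.item() == 0:
            stack.append(buffer.pop(0))          # SHIFT next word onto the stack
        else:
            right, left = stack.pop(), stack.pop()
            stack.append(compose(left, right))   # REDUCE top two via Tree-LSTM
    return stack[0][0], log_probs

# Example: sentence_vec, log_probs = parse_episode([torch.randn(DIM) for _ in range(5)])
```

The returned sentence vector would feed the downstream model, whose objective supplies the reward; the REINFORCE gradient for the parsing policy is then estimated as that reward times the sum of the collected action log-probabilities.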
Learning to Compose Words into Sentences with Reinforcement Learning
Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, Wang Ling
arXiv e-Print archive, 2016


Summary by Marek Rei

