Optimizing Agent Behavior over Long Time Scales by Transporting Value
This builds on the earlier ["MERLIN"](https://arxiv.org/abs/1803.10760) paper. First the authors introduce the RMA agent, a simplified version of MERLIN that combines model-based RL with long-term memory. The agent gets long-term memory by choosing when to store its working memory (the LSTM's hidden state) in an external memory and when to read those entries back.

They then add credit assignment, similar in spirit to the RUDDER paper, to get the "Temporal Value Transport" (TVT) agent, which can plan over long horizons despite distractions. **The critical insight is that the agent's memory reads determine where credit is assigned**: if the agent acts on a memory written 512 steps ago, the action taken 512 steps ago receives a large share of the credit for the current reward.

They evaluate on several tasks, for example a maze with a distractor phase followed by a memory-retrieval phase. The agent starts in a maze with, say, a yellow wall, then has to collect apples; the apple collecting is a distraction that tests whether the agent can still recall the earlier observation. At the end of the maze it must remember that initial color (e.g. yellow) in order to choose the exit of the matching color. Performance graphs show that memory, and even more so memory plus credit assignment, help significantly on this and similar tasks.
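To make the credit-transport idea concrete, here is a minimal NumPy sketch (not the paper's implementation): when the memory read at step t attends strongly to a memory written at an earlier step t', the agent's value estimate at t is added to the reward at t' before returns are computed, so the earlier action gets credit for enabling the later reward. The function name `transport_value`, the `read_threshold`, and the `alpha` scaling are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

def transport_value(rewards, values, read_weights, read_threshold=0.1, alpha=0.9):
    """Sketch of TVT-style credit assignment via memory reads.

    rewards:      (T,) reward received at each step
    values:       (T,) the agent's value estimate at each step
    read_weights: (T, T) read_weights[t, t_prime] = attention the read at
                  step t places on the memory written at step t_prime
    Returns a shaped reward vector in which steps whose memories are
    strongly read later receive a bonus proportional to the reader's value.
    """
    T = len(rewards)
    shaped = rewards.astype(float).copy()
    for t in range(T):
        for t_prime in range(t):              # only transport backwards in time
            w = read_weights[t, t_prime]
            if w > read_threshold:            # a "significant" memory read
                shaped[t_prime] += alpha * w * values[t]
    return shaped

# Toy example: reward arrives only at the final step, but that step reads the
# memory written at step 0, so step 0 receives transported credit.
T = 6
rewards = np.zeros(T); rewards[-1] = 1.0
values = np.linspace(0.1, 1.0, T)             # pretend value estimates
read_weights = np.zeros((T, T)); read_weights[T - 1, 0] = 0.8
print(transport_value(rewards, values, read_weights))
```

In the toy run, step 0 ends up with a nonzero shaped reward even though the environment only rewarded the final step, which is exactly the long-horizon credit assignment the summary describes.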
Optimizing Agent Behavior over Long Time Scales by Transporting Value
Chia-Chun Hung and Timothy Lillicrap and Josh Abramson and Yan Wu and Mehdi Mirza and Federico Carnevale and Arun Ahuja and Greg Wayne
arXiv e-Print archive - 2018
Keywords: cs.AI, cs.LG


Summary by wassname