Compositional Obverter Communication Learning From Raw Visual Input
Paper summary: This paper proposes a new training method for multi-agent communication settings, using the following referential game: a speaker sees an image of a 3D-rendered object and describes it to a listener. The listener sees a different image and must decide whether it shows the same object the speaker described (i.e., the same color and shape). The game can only be completed successfully if a communication protocol emerges that can express the color and shape the speaker sees.

The main contribution of the paper is the training algorithm. The speaker constructs its message greedily, symbol by symbol, choosing each symbol to maximise its own (listener-side) belief that the message is consistent with the image it sees. The listener, given its image and the message, predicts a binary output and is trained with maximum likelihood against the correct answer. Only the listener updates its parameters, so the agents swap the speaker and listener roles every few rounds.

The authors show that a compositional communication protocol emerges and evaluate it with zero-shot tests. [Implementation of this paper in PyTorch](https://github.com/benbogin/obverter)
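
The obverter generation step can be sketched roughly as follows. This is a minimal illustration rather than the authors' code: the network architecture, vocabulary size, maximum message length, and confidence threshold are all assumptions made for the example (the linked PyTorch repository contains the full implementation).

```python
# Minimal sketch of obverter message generation (assumed architecture and
# hyperparameters, not the paper's exact ones).
import torch
import torch.nn as nn

VOCAB_SIZE = 5      # assumed number of distinct symbols
MAX_LEN = 20        # assumed maximum message length
THRESHOLD = 0.95    # stop once the agent itself is this confident

class Agent(nn.Module):
    """Scores how likely a message is to describe a given image embedding."""
    def __init__(self, img_dim=256, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim + img_dim, 1)

    def forward(self, img_feat, message):
        # img_feat: (B, img_dim) image features (e.g. from a CNN)
        # message:  (B, T) integer symbol ids
        _, h = self.rnn(self.embed(message))          # h: (1, B, hidden)
        logits = self.score(torch.cat([h[-1], img_feat], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)      # P(message matches image)

def obverter_generate(agent, img_feat):
    """Greedily build the message that maximises the speaker's own score."""
    message = []
    with torch.no_grad():
        for _ in range(MAX_LEN):
            best_sym, best_p = None, -1.0
            for sym in range(VOCAB_SIZE):             # try every possible next symbol
                candidate = torch.tensor([message + [sym]])
                p = agent(img_feat, candidate).item()
                if p > best_p:
                    best_sym, best_p = sym, p
            message.append(best_sym)
            if best_p > THRESHOLD:                    # confident enough: stop early
                break
    return message

if __name__ == "__main__":
    agent = Agent()
    print(obverter_generate(agent, torch.randn(1, 256)))
```

Because the message is produced by a discrete search over the speaker's own scoring function rather than by sampling from a decoder, no gradients need to flow through the communication channel; only the listening agent is trained, with a binary cross-entropy/maximum-likelihood loss on the same/different label.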
Edward Choi, Angeliki Lazaridou, and Nando de Freitas
arXiv e-Print archive, 2018
Keywords: cs.AI, cs.CL, cs.LG, cs.NE

Summary by Ben Bogin