SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
GANs for images have made impressive progress in recent years, reaching ever-higher levels of subjective realism. It’s also interesting to think about domains where the GAN architecture is less of a good fit. One such domain is natural language. As opposed to images, which are made of continuous pixel values, sentences are fundamentally sequences of discrete values: that is, words. In an image GAN, when the discriminator assesses the realness of an image, the gradient of that assessment can be backpropagated all the way down to the pixel level. The discriminator can say “move that pixel just a bit, and this other pixel just a bit, and then I’ll find the image more realistic.” There is no comparable smooth, continuous space of words, and even if you use continuous embeddings of words, applying a small change to an embedding vector almost certainly wouldn’t land you on another word; you’d just be somewhere in the middle of nowhere in word space. In short: the discrete nature of language sequences doesn’t let gradients propagate backwards through to the generator.

The authors of this paper propose a solution: instead of treating their GAN as one big differentiable system, they frame the problem of “generate a sequence that will seem realistic to the discriminator” as a reinforcement learning problem. After all, this property - your reward being generated *somewhere* in the environment, not something analytic, not something you can backprop through - is one of the key constraints of reinforcement learning. Here, the more real the discriminator finds your sequence, the higher the reward. The approach this paper takes is a policy network: a parametrized network that produces a distribution over actions. You can’t update your model to deterministically increase reward, but you can shift probability around within your policy so that the expected reward of following that policy is higher.

This key kernel of an idea - GANs for language, but using a policy-gradient framework to get around not having a backprop-able loss/reward - gets you most of the way to understanding what these authors did, but it’s still useful to mechanically walk through the specifics.

![](https://i.imgur.com/CIFuGCG.png)

- At each step, the “state” is the existing words in the sequence, and the agent’s “action” is the choice of its next word.
- The Discriminator can only be applied to completed sequences, since it’s difficult to judge whether an incoherent half-sentence is realistic language. So, to estimate the reward of an action at a state, the agent uses Monte Carlo search: it “rolls out” many possible futures by sampling completions from the policy, and takes the average Discriminator judgment over those completed sequences as the expected reward of that action.
- The Generator is an LSTM that produces a softmax over words, which can be interpreted as a policy if it’s sampled from.
- One of the nice benefits of this approach is that it can work in cases where we don’t have a hand-crafted quality-assessment metric, the way we have BLEU score for translation.
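To make the training loop described above concrete, here is a minimal PyTorch sketch of the idea: an LSTM generator treated as a policy, a toy stand-in discriminator that scores whole sequences (the paper's actual discriminator is a CNN), Monte Carlo rollouts to estimate a per-step reward, and a REINFORCE-style policy-gradient update. All names, layer sizes, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of SeqGAN-style policy-gradient training.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, SEQ_LEN, N_ROLLOUTS = 1000, 32, 64, 20, 16
START = 0  # assumed start-of-sequence token id

class Generator(nn.Module):
    """LSTM policy: the 'state' is the tokens so far, the 'action' is the next token."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
        self.head = nn.Linear(HID, VOCAB)

    def step(self, tok, state):
        out, state = self.lstm(self.emb(tok).unsqueeze(1), state)
        return F.log_softmax(self.head(out.squeeze(1)), dim=-1), state

    def sample(self, batch, prefix=None):
        """Sample sequences from the policy; if `prefix` is given, complete it."""
        tok = torch.full((batch,), START, dtype=torch.long)
        state, toks, logps = None, [], []
        if prefix is not None:              # feed the prefix through the LSTM first
            for t in range(prefix.size(1)):
                _, state = self.step(tok, state)
                tok = prefix[:, t]
                toks.append(tok)
        while len(toks) < SEQ_LEN:          # then keep sampling until full length
            logp, state = self.step(tok, state)
            tok = torch.distributions.Categorical(logits=logp).sample()
            toks.append(tok)
            logps.append(logp.gather(1, tok.unsqueeze(1)).squeeze(1))
        return torch.stack(toks, dim=1), logps

class Discriminator(nn.Module):
    """Toy stand-in for the paper's CNN discriminator: scores whole sequences."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.score = nn.Linear(EMB, 1)

    def forward(self, seqs):                # returns P(sequence is real)
        return torch.sigmoid(self.score(self.emb(seqs).mean(dim=1))).squeeze(1)

def rollout_reward(gen, disc, prefix):
    """Monte Carlo rollout: complete the partial sequence many times by sampling
    from the current policy, and average the discriminator's judgments."""
    with torch.no_grad():
        scores = [disc(gen.sample(prefix.size(0), prefix=prefix)[0])
                  for _ in range(N_ROLLOUTS)]
        return torch.stack(scores).mean(dim=0)

def generator_pg_step(gen, disc, optimizer, batch=8):
    """One REINFORCE update: raise the log-probability of tokens whose rollouts
    the discriminator finds more realistic."""
    seqs, logps = gen.sample(batch)
    loss = 0.0
    for t in range(SEQ_LEN):
        q = rollout_reward(gen, disc, seqs[:, : t + 1])  # expected reward of action at step t
        loss = loss - (logps[t] * q).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

gen, disc = Generator(), Discriminator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
print(generator_pg_step(gen, disc, opt))     # one adversarial generator update
```

In the paper this generator update alternates with ordinary supervised training of the discriminator on real vs. generated sequences (plus MLE pretraining of the generator); both of those steps are standard and omitted from the sketch.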
SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
Lantao Yu and Weinan Zhang and Jun Wang and Yong Yu
arXiv e-Print archive - 2016 via Local arXiv
Keywords: cs.LG, cs.AI


Summary by Jon Gauthier 2 years ago
That link for the image doesn't work for me (permissions on dropbox?). You can embed images just by writing a url that ends in a .png or .jpg. Or you can wrap the url in the ![](url) markdown syntax to render them. Like this: ![](https://i.imgur.com/hDvHRwT.png)

Fixed! Thanks.

Hi! I can't work out what the oracle model is. Could you explain?

Re your point 2: the discriminator is a CNN, not an RNN. Do you think a CNN could also be used as a generator?

Summary by Denny Britz 1 year ago
Summary by CodyWild 1 month ago
Shameless plug: don't use SeqGAN, just reduce the softmax temperature of an MLE-trained model: https://arxiv.org/abs/1811.02549. Better and easier!


