Language Generation with Recurrent Generative Adversarial Networks without Pre-training
Press, Ofir and Bar, Amir and Bogin, Ben and Berant, Jonathan and Wolf, Lior, 2017
Paper summary (ofirpress): This paper shows how to train a character-level RNN to generate text using only the GAN objective (neither reinforcement learning nor the maximum-likelihood objective is used).
The baseline WGAN is made up of:
* A recurrent **generator** that embeds the previously emitted token, feeds it into a GRU, and transforms the resulting state into a distribution over the character vocabulary (which represents the model's belief about the next output token).
* A recurrent **discriminator** that embeds each input token and then feeds them into a GRU. A linear transformation is used on the final hidden state in order to give a "score" to the input (a correctly-trained discriminator should give a high score to real sequences of text and a low score to fake ones).
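The two components above can be sketched in a few lines of numpy. This is an illustrative toy (the sizes, initialization, and function names are assumptions, not the authors' code): the generator embeds the previous token and advances a GRU to produce a next-character distribution, while the discriminator runs a GRU over an embedded sequence and maps the final hidden state to a scalar WGAN score.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMB, HID = 10, 8, 16  # toy sizes, chosen only for illustration

def gru_cell(params, x, h):
    """One standard GRU step: update gate z, reset gate r, candidate state."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = 1 / (1 + np.exp(-(Wz @ x + Uz @ h)))
    r = 1 / (1 + np.exp(-(Wr @ x + Ur @ h)))
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde

def init_gru():
    # Alternating input-to-hidden and hidden-to-hidden weight matrices.
    return [rng.normal(scale=0.1, size=(HID, EMB)) if i % 2 == 0
            else rng.normal(scale=0.1, size=(HID, HID)) for i in range(6)]

emb_g = rng.normal(scale=0.1, size=(VOCAB, EMB))   # generator token embedding
out_g = rng.normal(scale=0.1, size=(VOCAB, HID))   # hidden state -> vocab logits
gru_g = init_gru()

def generator_step(prev_token, h):
    """Embed the previously emitted token, advance the GRU, and return a
    softmax distribution over the character vocabulary plus the new state."""
    h = gru_cell(gru_g, emb_g[prev_token], h)
    logits = out_g @ h
    probs = np.exp(logits - logits.max())
    return probs / probs.sum(), h

emb_d = rng.normal(scale=0.1, size=(VOCAB, EMB))   # discriminator embedding
score_d = rng.normal(scale=0.1, size=HID)          # final state -> scalar score
gru_d = init_gru()

def discriminator_score(tokens):
    """Feed embedded tokens through a GRU; a linear map on the final hidden
    state gives the unbounded "score" used by the WGAN critic."""
    h = np.zeros(HID)
    for t in tokens:
        h = gru_cell(gru_d, emb_d[t], h)
    return float(score_d @ h)
```

In training, the generator samples (or soft-samples) the next token from `probs` and feeds it back in as `prev_token`, and the discriminator's score is pushed up on real sequences and down on generated ones under the WGAN objective.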
The paper shows that if you try to train this baseline model to generate sequences of length 32, it just won't work (only gibberish is generated).
In order to get the model to work, the baseline model is augmented in three different ways:
1. **Curriculum Learning**: At first the generator has to generate sequences of length 1, and the discriminator only trains on real and generated sequences of length 1. After a while, the model moves on to sequences of length 2, and then 3, and so on, until we reach length 32.
2. **Teacher Helping**: In GANs the problem is usually that the generator is too weak. To help it, this paper proposes a method in which at stage $i$ of the curriculum, when the generator should generate sequences of length $i$, we feed it a real sequence of length $i-1$ and ask it to generate just one more character.
3. **Variable Lengths**: In each stage $i$ in the curriculum learning process, we generate and discriminate sequences of length $k$, for each $ 1 \leq k \leq i$ in each batch (instead of just generating and discriminating sequences of length exactly $i$).
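The three augmentations above can be combined into a simple batch-scheduling sketch. This is a hypothetical illustration of the schedule only (the function names, batch size, and the 50/50 teacher-helping split are assumptions, not taken from the paper): each batch entry is a (target length, real-prefix length) pair, where a nonzero prefix length means the generator is handed a real prefix and completes only the last character.

```python
import random

random.seed(0)
MAX_LEN = 32  # final sequence length reached by the curriculum

def batch_plan(stage, batch_size=8, teacher_prob=0.5):
    """Plan one batch at curriculum stage `stage`.

    Returns (target_len, prefix_len) pairs: `prefix_len` characters come
    from a real sequence (teacher helping) and the remainder is generated.
    """
    plan = []
    for _ in range(batch_size):
        # Variable lengths: sample a length k with 1 <= k <= stage,
        # instead of always using exactly the stage length.
        k = random.randint(1, stage)
        if k > 1 and random.random() < teacher_prob:
            # Teacher helping: real prefix of length k-1, generate 1 char.
            plan.append((k, k - 1))
        else:
            # Generate the whole sequence from scratch.
            plan.append((k, 0))
    return plan

# Curriculum learning: advance the stage from length 1 up to MAX_LEN.
for stage in range(1, MAX_LEN + 1):
    plan = batch_plan(stage)
    # ... run generator/discriminator updates on `plan` here ...
```

The point of the sketch is that all three tricks are scheduling decisions layered on top of the unchanged WGAN objective, not changes to the model itself.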
arXiv e-Print archive - 2017 via Local Bibsonomy