Softmax GAN
## Paper summary

_Objective:_ Replace the usual GAN loss with a softmax cross-entropy loss to stabilize GAN training.

_Dataset:_ CelebA.

## Inner working:

Linked to recent work such as WGAN or Loss-Sensitive GAN that focuses on objective functions with non-vanishing gradients, to avoid the situation where the discriminator `D` becomes too good and the gradient vanishes.

Consider a batch `B` made of `N` real samples `B_real` and `M` generated samples `B_fake`, and let the discriminator outputs define a softmax distribution over the batch, $p(x) = e^{-D(x)} / Z_B$ with $Z_B = \sum_{x \in B} e^{-D(x)}$. They first introduce two targets, one for the discriminator `D` (all the probability mass shared equally among the real samples) and one for the generator `G` (the mass shared equally among all samples of the batch):

$$t_D(x) = \begin{cases} \frac{1}{N} & x \in B_{real} \\ 0 & x \in B_{fake} \end{cases} \qquad\qquad t_G(x) = \frac{1}{N+M} \quad \forall x \in B$$

And then the two new losses, the cross-entropies between the softmax distribution $p$ and these targets:

$$L_D = \frac{1}{N} \sum_{x \in B_{real}} D(x) + \ln Z_B$$

$$L_G = \frac{1}{N+M} \sum_{x \in B} D(x) + \ln Z_B$$

## Architecture:

They use the DCGAN architecture and simply change the loss, removing batch normalization and the other empirical techniques usually needed to stabilize training. They show that Softmax GAN remains robust in training even without these tricks.
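With these definitions, both losses reduce to a mean of discriminator outputs plus the shared log-partition term $\ln Z_B$. A minimal NumPy sketch of the batch-level loss computation (the function name and array conventions are illustrative, not from the paper):

```python
import numpy as np

def softmax_gan_losses(d_real, d_fake):
    """Softmax GAN losses for one batch (sketch; names are illustrative).

    d_real -- discriminator outputs D(x) on the N real samples
    d_fake -- discriminator outputs D(x) on the M generated samples
    """
    batch = np.concatenate([d_real, d_fake])   # the whole batch B
    # ln Z_B = ln sum_{x in B} exp(-D(x)); in practice use a stable logsumexp
    log_z = np.log(np.sum(np.exp(-batch)))
    # L_D: cross-entropy to the target putting mass 1/N on each real sample
    loss_d = d_real.mean() + log_z
    # L_G: cross-entropy to the target putting mass 1/(N+M) on every sample
    loss_g = batch.mean() + log_z
    return loss_d, loss_g
```

Note that `loss_d` only averages over the real samples while `loss_g` averages over the whole batch; the generated samples still influence `loss_d` through the partition term `log_z`.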
Softmax GAN
Min Lin
arXiv e-Print archive - 2017 via Local arXiv
Keywords: cs.LG, cs.NE


Summary by Léo Paillier