Neural Architecture Search with Reinforcement Learning
## Paper summary

Find a network topology via reinforcement learning. They use REINFORCE [Williams 1992].

## Ideas

* The structure and connectivity of a neural network can be represented by a variable-length string.
* The RNN controller in Neural Architecture Search is auto-regressive: it predicts hyperparameters one at a time, conditioned on its previous predictions.
* A policy gradient method is used to maximize the expected accuracy of the sampled architectures.
* In their experiments, the process of generating an architecture stops once the number of layers exceeds a certain value.

## Evaluation

* Computer Vision - **CIFAR-10**: 3.65% error (state of the art are DenseNets with 3.46% error)
* Language - **Penn Treebank**: a test set perplexity of 62.4 (3.6 perplexity better than the previous state of the art)

They ran a control experiment, "Comparison against Random Search", in which they showed that their method is much better than random exploration of the search space. However, the paper lacks details on how exactly the random search was implemented.

## Related Work

* [Designing Neural Network Architectures using Reinforcement Learning]
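The controller idea above can be sketched in a toy REINFORCE loop. This is a hypothetical illustration, not the paper's implementation: the "architecture" is just one filter size per layer, and the reward function is a stand-in for the child network's validation accuracy (which the paper obtains by actually training each sampled network). The names `CHOICES`, `reward`, and `reinforce_step` are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

CHOICES = [1, 3, 5, 7]      # candidate filter sizes (toy action space)
NUM_LAYERS = 3              # fixed maximum depth for this sketch
logits = np.zeros((NUM_LAYERS, len(CHOICES)))  # controller parameters

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_architecture():
    """Sample one hyperparameter per layer, mimicking the
    auto-regressive controller's step-by-step predictions."""
    return [rng.choice(len(CHOICES), p=softmax(logits[layer]))
            for layer in range(NUM_LAYERS)]

def reward(arch):
    """Stand-in reward: pretend 3x3 filters everywhere is best.
    In the paper this would be the trained child network's accuracy."""
    return sum(1.0 for a in arch if CHOICES[a] == 3) / NUM_LAYERS

def reinforce_step(lr=0.5, baseline=0.5):
    """One REINFORCE update: advantage-weighted log-likelihood gradient."""
    arch = sample_architecture()
    advantage = reward(arch) - baseline   # baseline reduces variance
    for layer, action in enumerate(arch):
        p = softmax(logits[layer])
        grad = -p
        grad[action] += 1.0               # d log pi(action) / d logits
        logits[layer] += lr * advantage * grad

for _ in range(500):
    reinforce_step()

best = [CHOICES[int(np.argmax(l))] for l in logits]
print(best)  # the controller should concentrate on filter size 3
```

The key design point this illustrates: because accuracy is a non-differentiable reward, the controller is trained through the policy gradient rather than backpropagation through the child network.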
Barret Zoph and Quoc V. Le
arXiv e-Print archive - 2016 via Local arXiv
Keywords: cs.LG, cs.AI, cs.NE


Summary from Martin Thoma