Learning both Weights and Connections for Efficient Neural Networks
## Paper summary

This paper prunes a trained neural network to reduce the FLOPs and memory needed to use it. The method reduces the parameters of AlexNet to 1/9 and of VGG-16 to 1/13 of the original size.

## Recipe

1. Train a network.
2. Prune the network: for each weight $w$, if $|w| <$ threshold, then $w \leftarrow 0$.
3. Retrain the pruned network (pruned weights stay at zero).

## See also

* [Optimal Brain Damage](http://www.shortscience.org/paper?bibtexKey=conf/nips/CunDS89)
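The train-prune-retrain recipe can be sketched as magnitude-based pruning with a binary mask. This is a minimal NumPy illustration, not the paper's implementation: the threshold value, the `prune_weights` helper, and the mask handling are assumptions for the example.

```python
import numpy as np

def prune_weights(weights, threshold):
    """Zero out weights whose magnitude is below the threshold.

    Returns the pruned weights and a binary mask; during retraining,
    the mask is applied so pruned connections stay at zero.
    """
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Toy layer after initial training (values are made up for illustration).
w = np.array([0.8, -0.05, 0.3, 0.01, -0.6])
pruned, mask = prune_weights(w, threshold=0.1)
# pruned is [0.8, 0.0, 0.3, 0.0, -0.6]; a retraining step would multiply
# the weight gradients by `mask` before each update.
```

In a framework such as PyTorch, the same idea amounts to multiplying each layer's weight tensor (and its gradient) by the stored mask after every optimizer step.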
Song Han and Jeff Pool and John Tran and William J. Dally
arXiv e-Print archive - 2015 via Local arXiv
Keywords: cs.NE, cs.CV, cs.LG
