SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size
Paper summary

While preserving AlexNet-level accuracy:
- Architectural improvements alone decrease the parameter count 51x (240 MB to 4.8 MB).
- Applying Deep Compression shrinks the model a further 10x (4.8 MB to 0.47 MB).
- Adding a simple bypass (shortcut connections) even improves accuracy by about 2%.

The paper presents three insightful architectural design strategies:
1. Replace 3x3 filters with 1x1 filters to decrease model size.
2. Decrease the number of input channels to the remaining 3x3 filters, also to decrease size.
3. Downsample late in the network, so that convolution layers have larger activation maps, which leads to higher accuracy.

It also gives great insights into CNN design-space exploration by parameterizing the microarchitecture:
- the squeeze ratio (SR), to find a good balance between model size and accuracy;
- the percentage of 3x3 filters, to find how many are actually needed.
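The squeeze ratio can be made concrete with a quick parameter count for one Fire module (a minimal sketch; the function name and the 128-channel example figures are illustrative, not taken from the paper's code):

```python
def fire_params(in_channels: int, s: int, e1: int, e3: int) -> int:
    """Weight count of a SqueezeNet Fire module (biases ignored):
    a squeeze layer of s 1x1 filters feeding an expand layer of
    e1 1x1 filters and e3 3x3 filters."""
    squeeze = in_channels * s        # 1x1 squeeze convolutions
    expand = s * e1 + 9 * s * e3     # 1x1 and 3x3 expand convolutions
    return squeeze + expand

# Squeeze ratio SR = s / (e1 + e3). With 128 input channels,
# e1 = e3 = 64 and SR = 0.125 (so s = 16):
print(fire_params(128, 16, 64, 64))  # 12288 weights
```

For comparison, a plain 3x3 convolution mapping 128 channels to 128 channels would need 9 * 128 * 128 = 147456 weights, roughly 12x more, which is the kind of saving strategies 1 and 2 above aim for.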
Iandola, Forrest N. and Moskewicz, Matthew W. and Ashraf, Khalid and Han, Song and Dally, William J. and Keutzer, Kurt
arXiv e-Print archive - 2016 via Local Bibsonomy

