Visualizing the Loss Landscape of Neural Nets
Paper summary
- Presents a simple visualization method based on "filter normalization."
- Observed that __when networks become sufficiently deep, neural loss landscapes quickly transition from nearly convex to highly chaotic__; this transition coincides with a dramatic drop in generalization error, and ultimately a lack of trainability.
- Observed that __skip connections promote flat minimizers and prevent the transition to chaotic behavior__; this helps explain why skip connections are necessary for training extremely deep networks.
- Quantitatively measures non-convexity.
- Studies the visualization of SGD optimization trajectories.
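The filter-normalization idea can be sketched in a few lines: draw a random direction with the same shape as a layer's weights, then rescale each filter of the direction to match the norm of the corresponding filter in the trained weights before plotting the loss along it. This is a minimal NumPy sketch under my own naming (`filter_normalized_direction`, `loss_surface_1d`, and the toy loss are illustrative, not the authors' code):

```python
import numpy as np

def filter_normalized_direction(weights, rng):
    """Random direction with each filter (first axis) rescaled so its norm
    matches the norm of the corresponding filter in `weights`."""
    d = rng.standard_normal(weights.shape)
    for i in range(weights.shape[0]):
        norm_d = np.linalg.norm(d[i])
        norm_w = np.linalg.norm(weights[i])
        d[i] *= norm_w / (norm_d + 1e-10)  # avoid division by zero
    return d

def loss_surface_1d(loss_fn, weights, alphas, rng):
    """Evaluate loss_fn(weights + alpha * d) along one normalized direction."""
    d = filter_normalized_direction(weights, rng)
    return [loss_fn(weights + a * d) for a in alphas]
```

The 2D surface plots in the paper do the same thing with two independent filter-normalized directions; the normalization removes the scale invariance of ReLU networks that otherwise makes landscapes from different models incomparable.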
Hao Li and Zheng Xu and Gavin Taylor and Christoph Studer and Tom Goldstein
arXiv e-Print archive - 2017
Keywords: cs.LG, cs.CV, stat.ML