Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex
Paper summary TL;DR: The authors argue that the human visual cortex does not contain ultra-deep feedforward networks like ResNets (hundreds or thousands of layers), but that it does contain abundant recurrent connections. They then explore ResNets with weights shared across layers and show that these are equivalent to unrolled recurrent networks with skip connections. Empirically, weight-shared ResNets perform almost as well as standard ResNets while needing drastically fewer parameters. The authors therefore argue that the success of ultra-deep networks may stem from their ability to approximate recurrent computations.
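The core equivalence the summary describes can be sketched in a few lines: applying the same residual update at every layer is literally the same computation as unrolling a recurrent update with an identity skip connection. The residual branch `f` below (one linear layer plus ReLU) and the dimensions are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared weights for the residual branch f (hypothetical single-layer block).
W = rng.standard_normal((8, 8)) * 0.1

def f(h, W):
    # Residual branch: one linear layer followed by ReLU (illustrative choice).
    return np.maximum(0.0, h @ W)

def resnet_shared(x, W, depth):
    # Weight-shared ResNet: h <- h + f(h; W), with the SAME W at every layer.
    h = x
    for _ in range(depth):
        h = h + f(h, W)
    return h

def rnn_with_skip(x, W, steps):
    # Recurrent view: the identical update, read as a state unrolled over time,
    # where the identity skip connection plays the role of the residual shortcut.
    h = x
    for _ in range(steps):
        h = h + f(h, W)
    return h

x = rng.standard_normal((1, 8))
# Depth in the shared ResNet corresponds to the number of recurrent steps.
print(np.allclose(resnet_shared(x, W, 5), rnn_with_skip(x, W, 5)))  # True
```

A standard (non-shared) ResNet would instead draw a fresh `W` per layer; the paper's point is that collapsing those per-layer weights into one shared `W` costs little accuracy while shrinking the parameter count by the depth factor.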
Liao, Qianli and Poggio, Tomaso A.
arXiv e-print archive, 2016

