RandomOut: Using a convolutional gradient norm to win The Filter Lottery
Paper summary

The paper introduces a heuristic that aims to revive "dead" units in neural networks with ReLU activations. In such networks, less useful units may be abandoned during training because they no longer receive any gradient, which wastes capacity. The proposed heuristic detects when this happens and reinitializes the affected units, so they get another shot at learning something useful. Concretely, the paper proposes to reset convolutional filters that are apparently not being trained well, by randomly reinitializing them, using a criterion based on the gradients propagated to these filters.
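The detect-and-reinitialize idea described above can be sketched as follows. This is a minimal NumPy illustration under assumed conventions: filters and their gradients are arrays of shape (num_filters, in_channels, k, k), the criterion is the L2 norm of each filter's gradient, and the threshold and reinitialization scale are illustrative choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomout_step(filters, grads, threshold=1e-6):
    """Reinitialize filters whose convolutional gradient norm is near zero.

    Sketch with assumed shapes: `filters` and `grads` both have shape
    (num_filters, in_channels, k, k). The threshold and the reinit
    distribution are illustrative, not taken from the paper.
    """
    reinitialized = []
    # Per-filter gradient norm: flatten each filter's gradient and take L2.
    norms = np.linalg.norm(grads.reshape(grads.shape[0], -1), axis=1)
    for i, g in enumerate(norms):
        if g < threshold:
            # "Dead" filter: draw fresh random weights so it can retrain.
            filters[i] = rng.normal(0.0, 0.05, size=filters[i].shape)
            reinitialized.append(i)
    return reinitialized
```

In a training loop, such a check would run periodically after the backward pass, replacing only the filters whose gradient norm indicates they are no longer learning.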
Cohen, Joseph Paul and Lo, Henry Z. and Ding, Wei
arXiv e-Print archive - 2016 via Bibsonomy