RandomOut: Using a convolutional gradient norm to win The Filter Lottery
Cohen, Joseph Paul; Lo, Henry Z.; Ding, Wei (2016)
Paper summary

The paper introduces a heuristic which aims to revive "dead" units in neural networks with the ReLU activation. In such networks, units that are less useful may be abandoned during training because they no longer receive any gradient. This wastes capacity. The proposed heuristic is to detect when this happens and to reinitialize the units in question, so they get another shot at learning something useful.
The paper proposes an approach to detect convolutional filters that are apparently not training well and to randomly reinitialize them. The criterion for deciding which filters to reset is based on the norm of the gradients propagated to those filters.
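The heuristic can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, the flat-threshold criterion, and the uniform reinitializer are assumptions made for the example; the paper's actual criterion is its convolutional gradient norm with its own threshold and initialization scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomout_step(weights, grads, threshold=1e-6, init_scale=0.05):
    """Reinitialize conv filters whose gradient norm is below a threshold.

    Sketch of the RandomOut-style heuristic (shapes/names are assumptions):
    `weights` and `grads` have shape (out_filters, in_channels, kh, kw).
    A filter whose gradient L2 norm falls under `threshold` is considered
    "dead" (it receives essentially no learning signal) and is re-drawn
    from a small uniform initializer so it gets another shot at training.
    Returns the updated weights and the number of filters reset.
    """
    # Per-filter gradient norm: flatten each filter's gradient to a vector.
    norms = np.linalg.norm(grads.reshape(grads.shape[0], -1), axis=1)
    dead = norms < threshold
    new_w = weights.copy()
    # Re-draw only the dead filters; live filters keep their weights.
    new_w[dead] = rng.uniform(-init_scale, init_scale,
                              size=(int(dead.sum()),) + weights.shape[1:])
    return new_w, int(dead.sum())
```

For example, with four filters where only the first receives any gradient, the other three are redrawn while the first is left untouched:

```python
w = np.ones((4, 3, 3, 3))
g = np.zeros_like(w)
g[0] = 1.0  # only filter 0 gets gradient
new_w, n_reset = randomout_step(w, g)
# n_reset == 3; new_w[0] is unchanged, filters 1-3 are reinitialized
```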