Group Normalization
# Paper summary

Batch Normalization doesn't work well with small batch sizes, which are often required for memory-intensive tasks such as detection or segmentation, or for memory-intensive data such as 3D images, videos, or high-resolution images. Group Normalization is a simple alternative whose computation is independent of the batch size.

It works like BN, normalizing $\hat{x}_i = (x_i - \mu_i)/\sigma_i$ and then applying a learned scale and shift $y_i = \gamma \hat{x}_i + \beta$, except that the mean $\mu_i$ and std $\sigma_i$ are computed over a different set of features. A group is defined as a set of channels, and the mean and std are computed over that set of channels for a single sample. The $\gamma$ and $\beta$ are learned per channel and applied as usual.

By default there are 32 groups, but they show GN works well as long as there is more than one group and fewer groups than channels.

In terms of experiments, they try ImageNet classification, detection and segmentation on COCO, and video classification on Kinetics. The conclusion is that **GN gives the same performance regardless of batch size, and that performance matches BN with large batches.** The most impressive result is a 10% accuracy gain over BN on ImageNet with a batch size of 2.

# Comments

- This paper got an honorable mention at ECCV 2018.
- I don't understand how it works at the entrance of the network, where there are only 1 or 3 channels. Are we just not supposed to put GN there?
- Also, the number of channels tends to increase through the network, but the number of groups stays fixed. Should it scale with the number of channels?
- They tested GN on many tasks, but mostly on ResNet. There was only one experiment on VGG-16, where they found no big difference from BN. For now I'm not convinced GN is useful outside of ResNet.
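To make the grouped statistics concrete, here is a minimal NumPy sketch of the normalization described above; the function name, shapes, and defaults are my own choices, not the paper's reference code. Channels are split into groups, the mean and std are computed per sample over each group's channels and spatial positions, and the per-channel $\gamma$ and $\beta$ are applied afterwards.

```python
import numpy as np

def group_norm(x, gamma, beta, num_groups=32, eps=1e-5):
    """Group Normalization sketch for an NCHW tensor.

    x:     array of shape (N, C, H, W)
    gamma: per-channel scale, shape (C,)
    beta:  per-channel shift, shape (C,)
    """
    N, C, H, W = x.shape
    assert C % num_groups == 0, "channels must divide evenly into groups"
    # Split channels into groups: (N, G, C // G, H, W)
    xg = x.reshape(N, num_groups, C // num_groups, H, W)
    # Statistics per sample and per group, over channels and spatial dims
    mean = xg.mean(axis=(2, 3, 4), keepdims=True)
    var = xg.var(axis=(2, 3, 4), keepdims=True)
    xg = (xg - mean) / np.sqrt(var + eps)
    # Restore shape and apply the per-channel affine transform
    xn = xg.reshape(N, C, H, W)
    return gamma.reshape(1, C, 1, 1) * xn + beta.reshape(1, C, 1, 1)
```

Note that the batch dimension `N` never enters the statistics, which is exactly why the result is independent of batch size.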
Group Normalization
Yuxin Wu and Kaiming He
arXiv e-Print archive - 2018 via Local arXiv
Keywords: cs.CV, cs.LG


Summary by CodyWild 1 year ago