Rotation equivariant vector field networks
Paper summary

This work deals with rotation equivariant convolutional filters. The idea is that when you rotate an image, you should not need to relearn new filters to deal with the rotation. First we can look at how convolutions typically handle rotation, and how we would expect a rotation invariant solution to behave:

[Figure comparing the response of standard convolutions to a rotated input versus the expected rotation invariant response.]

The method computes all possible rotations of the filter, which results in a list of activations where each element represents a different rotation. From this list the maximum is taken, which results in a two dimensional output for every pixel (rotation, magnitude). Because this happens at the pixel level, the result is a vector field over the image.

[Figure from the paper illustrating the angle selection method, which determined the rotation of a building in an image.]

We can also think of this approach as attention \cite{1409.0473}: the network attends over the possible rotations, obtaining a score for each candidate rotation value, and passes on the best one. The network can learn to adjust the rotation value to whatever the later layers need.

------------------------

Results on Rotated MNIST show an impressive improvement in training speed and generalization error:
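The max-over-rotations step can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration, not the authors' implementation: the function name is made up, the filter is rotated by bilinear interpolation, and the per-pixel (magnitude, angle) pair is taken as the strongest response over a fixed set of rotations.

```python
import numpy as np
from scipy import ndimage, signal

def rotation_equivariant_conv(image, filt, n_rotations=8):
    """Correlate `image` with every rotated copy of `filt` and keep,
    per pixel, the strongest response and the angle that produced it."""
    responses = []
    for k in range(n_rotations):
        angle_deg = 360.0 * k / n_rotations
        # rotate the filter in place (bilinear interpolation, shape kept)
        rf = ndimage.rotate(filt, angle_deg, reshape=False, order=1)
        responses.append(signal.correlate2d(image, rf, mode="same"))
    stack = np.stack(responses)            # (n_rotations, H, W)
    magnitude = stack.max(axis=0)          # strongest response per pixel
    angle = stack.argmax(axis=0) * (2 * np.pi / n_rotations)
    return magnitude, angle                # a 2D vector field over the image

rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))
filt = rng.standard_normal((5, 5))
mag, ang = rotation_equivariant_conv(image, filt)
print(mag.shape, ang.shape)  # (16, 16) (16, 16)
```

The (magnitude, angle) pair at each pixel is exactly the vector field described above, ready to be consumed by later layers.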
Diego Marcos and Michele Volpi and Nikos Komodakis and Devis Tuia
arXiv e-Print archive - 2016 via Local arXiv
Keywords: cs.CV


Summary by Joseph Paul Cohen 3 years ago
