Conditional Random Fields as Recurrent Neural Networks
Zheng, Shuai; Jayasumana, Sadeep; Romera-Paredes, Bernardino; Vineet, Vibhav; Su, Zhizhong; Du, Dalong; Huang, Chang; Torr, Philip H. S. (2015)
Paper summary by cubs
#### Problem addressed:
Image Segmentation, Pixel labelling, Object recognition
#### Summary:
The authors approximate the CRF inference procedure using the mean-field approximation, with a specific set of unary and pairwise potentials. Each step of mean-field inference is modelled as a convolutional layer with appropriate filter sizes and channel counts. Since mean-field inference requires multiple iterations to converge, the whole procedure can be unrolled as a CNN-RNN. The unary potentials and initial pixel labels are learnt by an FCN. The authors train the FCN and CNN-RNN both separately and jointly, and find that joint training gives the better performance of the two on the VOC2012 dataset.
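The unrolled loop described above can be sketched in NumPy/SciPy. This is a simplified illustration, not the paper's implementation: the paper's message passing uses learned Gaussian (bilateral and spatial) filters via the permutohedral lattice, whereas here a plain spatial Gaussian blur stands in for that step, and `compat` plays the role of the learned label-compatibility transform.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def mean_field_iterations(unary, compat, n_iters=5, sigma=3.0):
    """Unrolled mean-field loop for a dense CRF (CNN-RNN style sketch).

    unary:  (H, W, L) array of unary scores (e.g. FCN log-probabilities)
    compat: (L, L) label-compatibility matrix (Potts-like penalty)
    sigma:  stand-in for the paper's learned Gaussian filter widths
    """
    q = softmax(unary)  # initial labelling comes from the unaries
    for _ in range(n_iters):
        # Message passing: filter each label channel of Q
        # (the paper uses bilateral + spatial Gaussian kernels here).
        msg = np.stack(
            [gaussian_filter(q[..., l], sigma=sigma) for l in range(q.shape[-1])],
            axis=-1,
        )
        # Compatibility transform: a 1x1 convolution across label channels.
        pairwise = msg @ compat
        # Add unaries and renormalize; one softmax per pixel = one RNN step.
        q = softmax(unary - pairwise)
    return q
```

Each iteration reuses the same filters and compatibility matrix, which is exactly what lets the loop be treated as a recurrent network with shared parameters.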
#### Novelty:
Formulating mean-field CRF inference as a combination of CNN and RNN; a joint training procedure for a fully convolutional network (FCN) + CRF-as-RNN to perform pixel-labelling tasks.
#### Drawbacks:
Does not scale well with the number of classes. The success of joint training is justified only empirically, with no theoretical analysis.
#### Datasets:
VOC2012, COCO
#### Additional remarks:
Presentation video available on cedar server
#### Resources:
http://www.robots.ox.ac.uk/~szheng/papers/CRFasRNN.pdf
#### Presenter:
Bhargava U. Kota