Actor and Action Video Segmentation from a Sentence
Paper summary

This paper performs pixel-wise segmentation of an object of interest that is specified by a natural-language sentence. The model is composed of three main components: a **textual encoder**, a **video encoder**, and a **decoder**.

![Model overview](https://i.imgur.com/gjbHNqs.png)

- **Textual encoder**: a pre-trained word2vec model followed by a 1D CNN, producing a sentence representation $T$.
- **Video encoder**: a 3D CNN that extracts a visual representation of the video (it can be combined with optical flow to capture motion information).
- **Decoder**: given the sentence representation $T$, a separate dynamic filter $f^r = \tanh(W^r_f T + b^r_f)$ is generated to match each feature map of the decoder and is convolved with the visual features as $S^r_t = f^r * V^r_t$, for each resolution $r$ at timestep $t$ (see the sketch after this list). The decoder itself is a sequence of transposed-convolution layers that upsample the response map to the same size as the input video frame.
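To make the dynamic-filter step concrete, here is a minimal PyTorch sketch for a single resolution $r$ and timestep $t$. It assumes 1×1 dynamic filters (so the convolution $S^r_t = f^r * V^r_t$ reduces to a per-channel scaling); the class and variable names (`DynamicFilterDecoder`, `filter_gen`, `sentence_dim`) and all shapes are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class DynamicFilterDecoder(nn.Module):
    """Sentence-conditioned filtering of visual features at one resolution."""

    def __init__(self, sentence_dim: int, channels: int):
        super().__init__()
        # W_f^r, b_f^r: map the sentence representation T to one filter
        # weight per visual channel (1x1 filters, an assumption made here
        # to keep the sketch simple).
        self.filter_gen = nn.Linear(sentence_dim, channels)

    def forward(self, T: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
        # T: (batch, sentence_dim)   sentence representation
        # V: (batch, channels, H, W) visual features V_t^r at one timestep
        f = torch.tanh(self.filter_gen(T))  # f^r = tanh(W_f^r T + b_f^r)
        # With 1x1 dynamic filters, convolving f^r with V_t^r reduces to
        # scaling each feature map by its sentence-specific weight.
        return V * f.unsqueeze(-1).unsqueeze(-1)

# Hypothetical shapes, for illustration only.
decoder_step = DynamicFilterDecoder(sentence_dim=256, channels=512)
T = torch.randn(2, 256)          # batch of 2 sentence embeddings
V = torch.randn(2, 512, 16, 16)  # feature maps from the video encoder
S = decoder_step(T, V)           # (2, 512, 16, 16) response maps S_t^r
```

In the full model this step would be repeated at each decoder resolution $r$, with the transposed-convolution layers upsampling the responses until they match the input frame size.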
Kirill Gavrilyuk and Amir Ghodrati and Zhenyang Li and Cees G. M. Snoek
arXiv e-Print archive - 2018
Keywords: cs.CV

Summary by Oleksandr Bailo