Hypercolumns for Object Segmentation and Fine-grained Localization
Bharath Hariharan, Pablo Arbeláez, Ross Girshick and Jitendra Malik, 2014
Paper summary by joecohen
So the hypercolumn is just a big vector built from a network's activations:
`"We concatenate features from some or all of the feature
maps in the network into one long vector for every location
which we call the hypercolumn at that location. As an
example, using pool2 (256 channels), conv4 (384 channels)
and fc7 (4096 channels) from the architecture of [28] would
lead to a 4736 dimensional vector."`
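The 4736 in the quote is just the sum of the channel counts of the chosen layers; a quick arithmetic check (the layer names come from the quote):

```python
# Channel counts per layer, as listed in the quoted passage.
channels = {"pool2": 256, "conv4": 384, "fc7": 4096}

# Concatenating one value per channel per layer gives the hypercolumn length.
hypercolumn_dim = sum(channels.values())
print(hypercolumn_dim)  # 4736
```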
So how exactly do we construct the vector?
![](https://i.imgur.com/hDvHRwT.png)
Each activation map contributes one value per channel to the resulting hypercolumn. The value is read at the corresponding pixel location in each map, as if all the maps had been scaled up to the size of the original image.
The paper gives the formula below for this interpolation. Here $\mathbf{f}_i$ is the feature vector at pixel $i$ in the upscaled map and the $\mathbf{F}_{k}$ are the feature vectors at the grid points of the coarse activation map. The weights $\alpha_{ik}$ depend on the position of pixel $i$ relative to grid point $k$; with bilinear interpolation only the four nearest grid points get non-zero weight.
$$\mathbf{f}_i = \sum_k \alpha_{ik} \mathbf{F}_{k}$$
The fully connected activations, which are the same for every location, are then appended to complete the vector.
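The construction above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the function names, shapes, and the choice of bilinear interpolation for $\alpha_{ik}$ are my assumptions.

```python
import numpy as np

def bilinear_value(F, y, x):
    """Sample feature map F of shape (H, W, C) at fractional location (y, x):
    f_i = sum_k alpha_ik * F_k, where the four nearest grid points k
    receive weights alpha_ik determined by their distance to (y, x)."""
    H, W, _ = F.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * F[y0, x0]
            + (1 - dy) * dx * F[y0, x1]
            + dy * (1 - dx) * F[y1, x0]
            + dy * dx * F[y1, x1])

def hypercolumn(pixel_yx, image_hw, conv_maps, fc_feats):
    """Build the hypercolumn for one pixel: interpolate each conv map
    at the pixel's location (maps treated as if scaled to image size),
    then append the fc activations, which are shared by all pixels."""
    y, x = pixel_yx
    H, W = image_hw
    parts = []
    for F in conv_maps:  # each F has shape (h, w, c)
        h, w, _ = F.shape
        parts.append(bilinear_value(F, y * h / H, x * w / W))
    parts.append(fc_feats)  # e.g. the 4096-d fc7 vector
    return np.concatenate(parts)
```

With pool2 (256 channels), conv4 (384 channels) and fc7 (4096) as in the quote, this returns a 4736-dimensional vector per pixel.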
So this gives us a representation for each pixel, but is it a good one? The later layers have large receptive fields that contain the input pixel, so beyond the first few layers the spatial constraint is weak: those features say much about what is present but little about precisely where. The earlier layers supply the precise localization.
First published: 2014/11/21
Abstract: Recognition algorithms based on convolutional networks (CNNs) typically use
the output of the last layer as feature representation. However, the
information in this layer may be too coarse to allow precise localization. On
the contrary, earlier layers may be precise in localization but will not
capture semantics. To get the best of both worlds, we define the hypercolumn at
a pixel as the vector of activations of all CNN units above that pixel. Using
hypercolumns as pixel descriptors, we show results on three fine-grained
localization tasks: simultaneous detection and segmentation [22], where we
improve state-of-the-art from 49.7 [22] mean AP^r to 60.0, keypoint
localization, where we get a 3.3 point boost over [20] and part labeling, where
we show a 6.6 point gain over a strong baseline.