Deep High-Resolution Representation Learning for Human Pose Estimation
Paper summary

This paper presents a top-down pose estimation method (i.e., it requires a separate person detector), focused on maintaining high-resolution representations (features) to make keypoint detection easier.

During training, the method uses annotated person bounding boxes to extract ground-truth crops and keypoints. Data augmentations include random rotation, random scaling, flipping, and half-body augmentation (feeding the upper or lower half of the body separately). Heatmap learning follows the standard approach for this task: an L2 loss between predicted heatmaps and ground-truth heatmaps, the latter generated by placing a 2D Gaussian with std = 1 at each keypoint location.

During inference, a pre-trained object detector provides the bounding boxes. The final heatmap is obtained by averaging the heatmaps predicted from the original and flipped images. The pixel location of each keypoint is the $argmax$ of its heatmap, shifted by a quarter pixel in the direction of the second-highest heatmap value.

While this pipeline is common practice for pose estimation methods, the paper achieves better results by proposing a network design that extracts better representations. This is done by running several parallel sub-networks at different resolutions (each one half the size of the previous) while repeatedly fusing the branches with each other. The fusion process varies depending on the scale of the sub-network and its location relative to the others:
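The heatmap target and the decoding step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's code; function names and the tolerance-free quarter-offset rule are my own simplification:

```python
import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=1.0):
    """Ground-truth heatmap: a 2D Gaussian (std = 1) centered on the keypoint."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def decode_keypoint(heatmap):
    """Argmax location, shifted a quarter pixel toward the neighbor with the
    higher response (the quarter-offset refinement from the summary)."""
    h, w = heatmap.shape
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    px, py = float(x), float(y)
    if 0 < x < w - 1:
        px += 0.25 * np.sign(heatmap[y, x + 1] - heatmap[y, x - 1])
    if 0 < y < h - 1:
        py += 0.25 * np.sign(heatmap[y + 1, x] - heatmap[y - 1, x])
    return px, py

hm = gaussian_heatmap(64, 48, cx=20.4, cy=30.0)
print(decode_keypoint(hm))  # → (20.25, 30.0)
```

The quarter offset partially recovers sub-pixel accuracy lost when the continuous keypoint location is snapped to the heatmap grid.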
Ke Sun, Bin Xiao, Dong Liu, Jingdong Wang
Conference on Computer Vision and Pattern Recognition (CVPR), 2019
Keywords: dblp
