Multi-Context Attention for Human Pose Estimation

Cited by: 407
Authors
Chu, Xiao [1 ]
Yang, Wei [1 ]
Ouyang, Wanli [1 ,4 ]
Ma, Cheng [2 ]
Yuille, Alan L. [3 ]
Wang, Xiaogang [1 ]
Affiliations
[1] Chinese Univ Hong Kong, Hong Kong, Hong Kong, Peoples R China
[2] Tsinghua Univ, Beijing, Peoples R China
[3] Johns Hopkins Univ, Baltimore, MD USA
[4] Univ Sydney, Sydney, NSW, Australia
Source
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017) | 2017
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/CVPR.2017.601
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In this paper, we propose to incorporate convolutional neural networks with a multi-context attention mechanism into an end-to-end framework for human pose estimation. We adopt stacked hourglass networks to generate attention maps from features at multiple resolutions with various semantics. A Conditional Random Field (CRF) is utilized to model the correlations among neighboring regions in the attention map. We further combine the holistic attention model, which focuses on the global consistency of the full human body, with the body-part attention model, which focuses on detailed descriptions of individual body parts. Hence our model is able to focus on different granularities, from local salient regions to globally semantically consistent spaces. Additionally, we design novel Hourglass Residual Units (HRUs) to increase the receptive field of the network. These units extend residual units with a side branch incorporating filters with a larger receptive field, so features at various scales are learned and combined within the HRUs. The effectiveness of the proposed multi-context attention mechanism and the hourglass residual units is evaluated on two widely used human pose estimation benchmarks. Our approach outperforms all existing methods on both benchmarks over all body parts. Code has been made publicly available.
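The abstract describes the Hourglass Residual Unit (HRU) as a residual unit extended with a side branch whose filters have a larger receptive field. The following is a minimal sketch of that idea, assuming a PyTorch-style module; the class name HourglassResidualUnit, the channel arrangement, and the exact side-branch composition (max pooling, a 3x3 convolution, and nearest-neighbor upsampling) are illustrative assumptions rather than the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HourglassResidualUnit(nn.Module):
    """Sketch of a residual unit with an extra multi-scale side branch (HRU-like)."""

    def __init__(self, channels):
        super().__init__()
        # Standard residual branch: two 3x3 convolutions at the input resolution.
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # Side branch: downsample, convolve, then upsample back, so each output
        # location aggregates a larger region of the input (larger receptive field).
        self.side = nn.Sequential(
            nn.MaxPool2d(kernel_size=2),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Upsample the side branch back to the input resolution before merging.
        side = F.interpolate(self.side(x), size=x.shape[-2:], mode="nearest")
        # Identity + residual + side branch: features at two scales are combined.
        return F.relu(x + self.residual(x) + side)

# Example usage: such a unit could stand in for a plain residual block inside
# each stage of a stacked hourglass network.
block = HourglassResidualUnit(channels=256)
out = block(torch.randn(1, 256, 64, 64))  # output shape: (1, 256, 64, 64)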
Pages: 5669-5678
Page count: 10