Pairwise Body-Part Attention for Recognizing Human-Object Interactions

Cited by: 91
Authors
Fang, Hao-Shu [1 ]
Cao, Jinkun [1 ]
Tai, Yu-Wing [2 ]
Lu, Cewu [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Tencent YouTu Lab, Shanghai, Peoples R China
Source
COMPUTER VISION - ECCV 2018, PT X | 2018 / Vol. 11214
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Human-object interactions; Body-part correlations; Attention model; ACTION RECOGNITION;
DOI
10.1007/978-3-030-01249-6_4
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In human-object interaction (HOI) recognition, conventional methods treat the human body as a whole and pay uniform attention to the entire body region. They ignore the fact that a human normally interacts with an object using only some parts of the body. In this paper, we argue that different body parts should receive different attention in HOI recognition, and that the correlations between body parts should also be considered, because body parts always work collaboratively. We propose a new pairwise body-part attention model that learns to focus on crucial parts and their correlations for HOI recognition. The model introduces a novel attention-based feature selection method and a feature representation scheme that captures pairwise correlations between body parts. Our approach achieves a 10% relative improvement (36.1 mAP -> 39.9 mAP) over the state-of-the-art results in HOI recognition on the HICO dataset. We will make our model and source code publicly available.
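The core idea of the abstract (score every pair of body parts and pool their joint features by attention) can be sketched roughly as follows. This is a minimal illustration, not the paper's architecture: the part count, feature dimension, and the randomly initialized linear scorer are hypothetical stand-ins for the learned CNN features and attention module described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 10, 64  # number of body parts and per-part feature dim (illustrative choices)
parts = rng.standard_normal((K, D))  # stand-in for pooled per-part CNN features

# Build pairwise representations: concatenate features of each unordered part pair.
pairs = [(i, j) for i in range(K) for j in range(i + 1, K)]
pair_feats = np.stack(
    [np.concatenate([parts[i], parts[j]]) for i, j in pairs]
)  # shape (P, 2D), P = K*(K-1)/2

# Attention score per pair from a (randomly initialized, stand-in) linear scorer,
# normalized with a softmax so the weights sum to 1.
w = rng.standard_normal(2 * D)
scores = pair_feats @ w
att = np.exp(scores - scores.max())
att /= att.sum()

# Attention-weighted pooling of pairwise features -> one interaction representation,
# which a classifier head would then map to HOI labels.
hoi_feat = att @ pair_feats

assert pair_feats.shape == (K * (K - 1) // 2, 2 * D)
assert np.isclose(att.sum(), 1.0)
assert hoi_feat.shape == (2 * D,)
```

In this sketch, pairs with high attention weight dominate the pooled feature, which mirrors the claim that a few collaborating body parts (e.g. hand and head) carry most of the evidence for an interaction.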
Pages: 52-68
Page count: 17