Dual-Path Transformer for 3D Human Pose Estimation

Cited by: 9
Authors
Zhou, Lu [1 ]
Chen, Yingying [1 ]
Wang, Jinqiao [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Fdn Model Res Ctr, Beijing 100190, Peoples R China
[2] Wuhan AI Res, Wuhan 430073, Peoples R China
[3] Peng Cheng Lab, Shenzhen 518066, Peoples R China
Keywords
Transformers; Three-dimensional displays; Pose estimation; Task analysis; Solid modeling; Feature extraction; Benchmark testing; 3D human pose estimation; transformer; motion; distillation
DOI
10.1109/TCSVT.2023.3318557
CLC Number
TM [Electrotechnics]; TN [Electronics and Communication Technology]
Discipline Code
0808; 0809
Abstract
Video-based 3D human pose estimation has achieved great progress; however, it remains difficult to learn a precise 2D-to-3D projection in some hard cases. Multi-level human knowledge and motion information are two key elements for overcoming the challenges caused by various factors: the former encodes human structure information spatially, while the latter captures motion changes temporally. Inspired by this, we propose DualFormer, a dual-path transformer network that encodes multiple human contexts and motion details to perform spatial-temporal modeling. First, motion information depicting the movement of the human body is embedded to provide an explicit motion prior for the transformer module. Second, a dual-path transformer framework is proposed to model long-range dependencies of both the joint sequence and the limb sequence. Parallel context embedding is performed first, and a cross transformer block is then appended to promote interaction between the dual paths, which greatly improves feature robustness. In particular, predictions at multiple levels can be acquired simultaneously. Lastly, we employ a weighted distillation technique to accelerate the convergence of the dual-path framework. We conduct extensive experiments on three benchmarks: Human3.6M, MPI-INF-3DHP, and HumanEva-I. We report MPJPE, P-MPJPE, PCK, and AUC to evaluate the effectiveness of the proposed approach, which achieves competitive results compared with state-of-the-art methods. In particular, MPJPE on Human3.6M is reduced to 42.8 mm, 1.5 mm lower than PoseFormer, which demonstrates the efficacy of the proposed approach.
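MPJPE (Mean Per-Joint Position Error), the headline metric in the abstract, is a standard quantity: the Euclidean distance between predicted and ground-truth 3D joint positions, averaged over joints and frames. A minimal sketch of its computation (array names and shapes are illustrative, not the authors' code):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error, in the units of the inputs (e.g. mm).

    pred, gt: arrays of shape (num_frames, num_joints, 3) holding 3D joint
    coordinates. Returns the Euclidean distance between corresponding
    joints, averaged over all joints and frames.
    """
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy check: every predicted joint offset from ground truth by 3 mm
# along the x-axis, so the error is exactly 3.0 mm.
gt = np.zeros((2, 17, 3))          # 2 frames, 17 joints (Human3.6M skeleton)
pred = gt.copy()
pred[..., 0] += 3.0
print(mpjpe(pred, gt))  # 3.0
```

P-MPJPE, also reported in the paper, additionally aligns the prediction to the ground truth with a rigid Procrustes transform before computing the same average distance.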
Pages: 3260-3270
Number of pages: 11
Related Papers
66 entries in total
[11]  
Gui LY, 2018, IEEE INT C INT ROBOT, P562, DOI 10.1109/IROS.2018.8594452
[12]   Online Knowledge Distillation via Collaborative Learning [J].
Guo, Qiushan ;
Wang, Xinjiang ;
Wu, Yichao ;
Yu, Zhipeng ;
Liang, Ding ;
Hu, Xiaolin ;
Luo, Ping .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020), 2020, :11017-11026
[13]  
He KM, 2017, IEEE I CONF COMP VIS, P2980, DOI [10.1109/ICCV.2017.322, 10.1109/TPAMI.2018.2844175]
[14]  
Hinton G, 2015, arXiv, arXiv:1503.02531, DOI 10.48550/arXiv.1503.02531
[15]   Exploiting Temporal Information for 3D Human Pose Estimation [J].
Hossain, Mir Rayat Imtiaz ;
Little, James J. .
COMPUTER VISION - ECCV 2018, PT X, 2018, 11214 :69-86
[16]   Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments [J].
Ionescu, Catalin ;
Papava, Dragos ;
Olaru, Vlad ;
Sminchisescu, Cristian .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2014, 36 (07) :1325-1339
[17]   Propagating LSTM: 3D Pose Estimation Based on Joint Interdependency [J].
Lee, Kyoungoh ;
Lee, Inwoong ;
Lee, Sanghoon .
COMPUTER VISION - ECCV 2018, PT VII, 2018, 11211 :123-141
[18]   Pose Recognition with Cascade Transformers [J].
Li, Ke ;
Wang, Shijie ;
Zhang, Xiang ;
Xu, Yifan ;
Xu, Weijian ;
Tu, Zhuowen .
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, :1944-1953
[19]   Cascaded Deep Monocular 3D Human Pose Estimation with Evolutionary Training Data [J].
Li, Shichao ;
Ke, Lei ;
Pratama, Kevin ;
Tai, Yu-Wing ;
Tang, Chi-Keung ;
Cheng, Kwang-Ting .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, :6172-6182
[20]   Exploiting Temporal Contexts With Strided Transformer for 3D Human Pose Estimation [J].
Li, Wenhao ;
Liu, Hong ;
Ding, Runwei ;
Liu, Mengyuan ;
Wang, Pichao ;
Yang, Wenming .
IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 :1282-1293