A cross-feature interaction network for 3D human pose estimation

Times Cited: 0
Authors
Peng, Jihua [1 ]
Zhou, Yanghong [3 ]
Mok, P. Y. [1 ,2 ,4 ,5 ]
Affiliations
[1] Hong Kong Polytech Univ, Sch Fash & Text, Hong Kong, Peoples R China
[2] Lab Artificial Intelligence Design, Hong Kong, Peoples R China
[3] Hong Kong Polytech Univ, Res Ctr Text Future Fash, Hong Kong, Peoples R China
[4] Hong Kong Polytech Univ, Res Inst Sports Sci & Technol, Hong Kong, Peoples R China
[5] Hong Kong Univ Sci & Technol, Div Integrat Syst & Design, Hong Kong, Peoples R China
Keywords
3D human pose estimation; graph convolutional network (GCN); self-attention; cross-attention
DOI
10.1016/j.patrec.2025.01.016
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The task of estimating 3D human poses from single monocular images is challenging because, unlike video sequences, single images can hardly provide any temporal information for the prediction. Most existing methods attempt to predict 3D poses by modeling the spatial dependencies inherent in the anatomical structure of the human skeleton, yet these methods fail to capture the complex local and global relationships that exist among various joints. To solve this problem, we propose a novel Cross-Feature Interaction Network to effectively model spatial correlations between body joints. Specifically, we exploit graph convolutional networks (GCNs) to learn the local features between neighboring joints and the self-attention structure to learn the global features among all joints. We then design a cross-feature interaction (CFI) module to facilitate cross-feature communications among the three different features, namely the local features, global features, and initial 2D pose features, aggregating them to form enhanced spatial representations of human pose. Furthermore, a novel graph-enhanced module (GraMLP) with parallel GCN and multi-layer perceptron is introduced to inject the skeletal knowledge of the human body into the final representation of 3D pose. Extensive experiments on two datasets (Human3.6M (Ionescu et al., 2013) and MPI-INF-3DHP (Mehta et al., 2017)) show the superior performance of our method in comparison to existing state-of-the-art (SOTA) models. The code and data are shared at https://github.com/JihuaPeng/CFI-3DHPE.
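For intuition only, the sketch below shows one way a cross-feature fusion of local (GCN-derived), global (self-attention) and initial 2D-pose features could be wired up with cross-attention in PyTorch. It is an illustrative assumption, not the authors' CFI module; the official implementation is at https://github.com/JihuaPeng/CFI-3DHPE. All names here (CrossFeatureFusion, dim, heads, the 17-joint example) are hypothetical.

import torch
import torch.nn as nn


class CrossFeatureFusion(nn.Module):
    """Toy fusion of local, global and 2D-pose features via cross-attention.

    Each input tensor has shape (batch, num_joints, dim).
    """

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        # Cross-attention in both directions: local queries attend to global
        # features, and global queries attend to local features.
        self.local_to_global = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_to_local = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Aggregate the two cross-attended streams with the 2D-pose features.
        self.fuse = nn.Sequential(
            nn.Linear(3 * dim, dim),
            nn.GELU(),
            nn.LayerNorm(dim),
        )

    def forward(self, local_feat, global_feat, pose2d_feat):
        l2g, _ = self.local_to_global(local_feat, global_feat, global_feat)
        g2l, _ = self.global_to_local(global_feat, local_feat, local_feat)
        fused = torch.cat([l2g, g2l, pose2d_feat], dim=-1)
        return self.fuse(fused)


if __name__ == "__main__":
    B, J, D = 2, 17, 64  # 17 joints, as in Human3.6M
    block = CrossFeatureFusion(dim=D)
    out = block(torch.randn(B, J, D), torch.randn(B, J, D), torch.randn(B, J, D))
    print(out.shape)  # torch.Size([2, 17, 64])

The design choice illustrated is simply that each feature stream can query the others before a per-joint aggregation; the paper's actual CFI module and GraMLP may differ in structure and detail.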
Pages: 175-181
Page count: 7