LEAPSE: Learning Environment Affordances for 3D Human Pose and Shape Estimation

Cited by: 1
Authors
Tian, Fangzheng [1 ]
Kim, Sungchan [1 ,2 ]
Affiliations
[1] Jeonbuk Natl Univ, Dept Comp Sci & Artificial Intelligence, Jeonju Si 54896, South Korea
[2] Jeonbuk Natl Univ, Ctr Adv Image Informat Technol, Jeonju Si 54896, South Korea
Keywords
Three-dimensional displays; Affordances; Shape; Estimation; Image reconstruction; Transformers; Task analysis; 3D human pose and shape estimation; environment affordances; non-parametric;
DOI
10.1109/TIP.2024.3393716
CLC Classification Code
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We live in a 3D world where people interact with each other and with the environment around them. Learning 3D posed humans therefore requires perceiving and interpreting these interactions. This paper proposes LEAPSE, a novel method that learns salient instance affordances to estimate a posed body from a single RGB image in a non-parametric manner. Existing methods mostly ignore the environment and estimate the human body independently of its surroundings. We capture the influences of both contact and non-contact instances on a posed body as a representation of the "environment affordances". The proposed method learns the global relationships between 3D joints, body mesh vertices, and salient instances, treating the latter as environment affordances acting on the human body. LEAPSE achieved state-of-the-art results on the 3DPW dataset, which contains many affordance instances, and also performed strongly on the Human3.6M dataset. We further demonstrate the benefit of our method by showing that the performance of existing weaker models improves significantly when they are combined with our environment affordance module.
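The abstract describes a transformer-based model (see the Keywords field) that learns global relationships between 3D joints, body mesh vertices, and salient environment instances, and regresses the body non-parametrically. The snippet below is a minimal, hypothetical PyTorch sketch of that general idea only; the module name, token counts, feature dimension, and the use of pooled detector features for instance tokens are all assumptions, not details taken from the paper.

```python
# Hypothetical sketch (not the authors' code): one transformer encoder mixes
# 3D-joint tokens, coarse mesh-vertex tokens, and detected-instance
# ("affordance") tokens, then regresses 3D coordinates non-parametrically.
import torch
import torch.nn as nn

class AffordanceFusionSketch(nn.Module):
    def __init__(self, dim=256, n_joints=24, n_verts=431, n_layers=4):
        super().__init__()
        # learnable queries for body joints and a coarse mesh (sizes are assumptions)
        self.joint_tokens = nn.Parameter(torch.randn(n_joints, dim))
        self.vert_tokens = nn.Parameter(torch.randn(n_verts, dim))
        # project per-instance features (e.g. pooled detector features) to tokens
        self.inst_proj = nn.Linear(2048, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # non-parametric heads: per-token 3D coordinates instead of SMPL parameters
        self.joint_head = nn.Linear(dim, 3)
        self.vert_head = nn.Linear(dim, 3)

    def forward(self, inst_feats):
        # inst_feats: (B, n_inst, 2048) features of salient environment instances
        b = inst_feats.size(0)
        inst = self.inst_proj(inst_feats)                        # (B, n_inst, dim)
        joints = self.joint_tokens.unsqueeze(0).expand(b, -1, -1)
        verts = self.vert_tokens.unsqueeze(0).expand(b, -1, -1)
        # global self-attention over body and environment tokens together
        tokens = torch.cat([joints, verts, inst], dim=1)
        out = self.encoder(tokens)
        nj, nv = joints.size(1), verts.size(1)
        joints3d = self.joint_head(out[:, :nj])                  # (B, n_joints, 3)
        verts3d = self.vert_head(out[:, nj:nj + nv])             # (B, n_verts, 3)
        return joints3d, verts3d
```

Regressing per-vertex coordinates directly, rather than SMPL parameters, is what the keywords label "non-parametric"; the instance tokens are the channel through which environment context can influence the predicted pose in this sketch.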
Pages: 3285-3300
Page count: 16
Related Papers
50 records in total (first 10 shown)
  • [1] HYRE: Hybrid Regressor for 3D Human Pose and Shape Estimation
    Li, Wenhao
    Liu, Mengyuan
    Liu, Hong
    Ren, Bin
    Li, Xia
    You, Yingxuan
    Sebe, Nicu
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34 : 235 - 246
  • [2] Personalized Graph Generation for Monocular 3D Human Pose and Shape Estimation
    Hu, Junxing
    Zhang, Hongwen
    Wang, Yunlong
    Ren, Min
    Sun, Zhenan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (04) : 2399 - 2413
  • [3] HEMlets PoSh: Learning Part-Centric Heatmap Triplets for 3D Human Pose and Shape Estimation
    Zhou, Kun
    Han, Xiaoguang
    Jiang, Nianjuan
    Jia, Kui
    Lu, Jiangbo
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (06) : 3000 - 3014
  • [4] Learning 3D Human Shape and Pose From Dense Body Parts
    Zhang, Hongwen
    Cao, Jie
    Lu, Guo
    Ouyang, Wanli
    Sun, Zhenan
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (05) : 2610 - 2627
  • [5] Temporal Representation Learning on Monocular Videos for 3D Human Pose Estimation
    Honari, Sina
    Constantin, Victor
    Rhodin, Helge
    Salzmann, Mathieu
    Fua, Pascal
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (05) : 6415 - 6427
  • [6] Locally Connected Network for Monocular 3D Human Pose Estimation
    Ci, Hai
    Ma, Xiaoxuan
    Wang, Chunyu
    Wang, Yizhou
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (03) : 1429 - 1442
  • [7] Dual-Path Transformer for 3D Human Pose Estimation
    Zhou, Lu
    Chen, Yingying
    Wang, Jinqiao
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (05) : 3260 - 3270
  • [8] Joint Path Alignment Framework for 3D Human Pose and Shape Estimation From Video
    Hong, Ji Woo
    Yoon, Sunjae
    Kim, Junyeong
    Yoo, Chang D.
    IEEE ACCESS, 2023, 11 : 43267 - 43275
  • [9] PolarMesh: A Star-Convex 3D Shape Approximation for Object Pose Estimation
    Li, Fu
    Shugurov, Ivan
    Busam, Benjamin
    Li, Minglong
    Yang, Shaowu
    Ilic, Slobodan
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (02) : 4416 - 4423
  • [10] SPMHand: Segmentation-Guided Progressive Multi-Path 3D Hand Pose and Shape Estimation
    Lu, Haofan
    Gou, Shuiping
    Li, Ruimin
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 6822 - 6833