Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs

Cited by: 182
Authors
von Marcard, T. [1 ]
Rosenhahn, B. [1 ]
Black, M. J. [2 ]
Pons-Moll, G. [2 ]
Affiliations
[1] Leibniz Univ Hannover, Inst Informat Verarbeitung TNT, Hannover, Germany
[2] Max Planck Inst Intelligent Syst, Tübingen, Germany
Keywords
MOTION CAPTURE; ANIMATION;
DOI
10.1111/cgf.13131
CLC number
TP31 [Computer software];
Subject classification codes
081202 ; 0835 ;
Abstract
We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker Sparse Inertial Poser (SIP) enables motion capture using only 6 sensors (attached to the wrists, lower legs, back and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.
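The joint optimization described in the abstract — fitting pose parameters to orientation and acceleration measurements over multiple frames at once — can be illustrated with a toy sketch. This is not the paper's SMPL-based formulation; it uses a hypothetical scalar joint angle per frame, a stand-in "forward kinematics" (`pos`), and a second-order finite difference as the acceleration model, purely to show how both measurement types enter one residual vector:

```python
import numpy as np
from scipy.optimize import least_squares

T = 10                                          # number of frames
true_theta = np.linspace(0.0, 1.0, T)           # ground-truth joint angle per frame
rng = np.random.default_rng(0)

def pos(theta):
    """Toy forward kinematics: position of a point driven by the angle."""
    return np.sin(theta)

# Simulated sensor readings: noisy orientations, plus accelerations
# approximated by second-order finite differences of the positions.
meas_ori = true_theta + 0.01 * rng.standard_normal(T)
meas_acc = np.diff(pos(true_theta), 2)

def residuals(theta):
    r_ori = theta - meas_ori                    # orientation term, one per frame
    r_acc = np.diff(pos(theta), 2) - meas_acc   # acceleration term, couples frames
    return np.concatenate([r_ori, 3.0 * r_acc]) # weighted joint objective

# Solve for all frames simultaneously, as in a multi-frame fit.
sol = least_squares(residuals, np.zeros(T))
```

Because the acceleration residual couples each frame to its neighbors, the solver recovers the whole trajectory jointly rather than frame by frame — the same structural idea, in miniature, as optimizing over multiple frames in SIP.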
Pages: 349-360
Page count: 12
Related papers
50 records
  • [1] Reconstructing 3D human pose and shape from a single image and sparse IMUs
    Liao, Xianhua
    Zhuang, Jiayan
    Liu, Ze
    Dong, Jiayan
    Song, Kangkang
    Xiao, Jiangjian
    PEERJ COMPUTER SCIENCE, 2023, 9
  • [2] Deep Inertial Poser: Learning to Reconstruct Human Pose from Sparse Inertial Measurements in Real Time
    Huang, Yinghao
    Kaufmann, Manuel
    Aksan, Emre
    Black, Michael J.
    Hilliges, Otmar
    Pons-Moll, Gerard
    SIGGRAPH ASIA'18: SIGGRAPH ASIA 2018 TECHNICAL PAPERS, 2018,
  • [3] Human Pose Estimation from Video and IMUs
    von Marcard, Timo
    Pons-Moll, Gerard
    Rosenhahn, Bodo
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2016, 38 (08) : 1533 - 1547
  • [4] AvatarPose: Avatar-Guided 3D Pose Estimation of Close Human Interaction from Sparse Multi-view Videos
    Lu, Feichi
    Dong, Zijian
    Song, Jie
    Hilliges, Otmar
    COMPUTER VISION - ECCV 2024, PT LIII, 2025, 15111 : 215 - 233
  • [5] LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse Inertial and LiDAR Sensors
    Ren, Yiming
    Zhao, Chengfeng
    He, Yannan
    Cong, Peishan
    Liang, Han
    Yu, Jingyi
    Xu, Lan
    Ma, Yuexin
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2023, 29 (05) : 2337 - 2347
  • [6] A3GC-IP: Attention-oriented adjacency adaptive recurrent graph convolutions for human pose estimation from sparse inertial measurements
    Puchert, Patrik
    Ropinski, Timo
    COMPUTERS & GRAPHICS-UK, 2023, 117 : 96 - 104
  • [7] MobilePoser: Real-Time Full-Body Pose Estimation and 3D Human Translation from IMUs in Mobile Consumer Devices
    Xu, Vasco
    Gao, Chenfeng
    Hoffmann, Henry
    Ahuja, Karan
    PROCEEDINGS OF THE 37TH ANNUAL ACM SYMPOSIUM ON USER INTERFACE SOFTWARE AND TECHNOLOGY, UIST 2024, 2024,
  • [8] 3D Pictorial Structures for Human Pose Estimation with Supervoxels
    Schick, Alexander
    Stiefelhagen, Rainer
    2015 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2015, : 140 - 147
  • [9] 3D Pictorial Structures Revisited: Multiple Human Pose Estimation
    Belagiannis, Vasileios
    Amin, Sikandar
    Andriluka, Mykhaylo
    Schiele, Bernt
    Navab, Nassir
    Ilic, Slobodan
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2016, 38 (10) : 1929 - 1942
  • [10] Part Segmentation of Visual Hull for 3D Human Pose Estimation
    Kanaujia, Atul
    Kittens, Nicholas
    Ramanathan, Narayanan
    2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2013, : 542 - 549