MANUS: Markerless Grasp Capture using Articulated 3D Gaussians

Times Cited: 0
Authors
Pokhariya, Chandradeep [1 ,2 ]
Shah, Ishaan Nikhil [1 ]
Xing, Angela [2 ]
Li, Zekun [2 ]
Chen, Kefan [2 ]
Sharma, Avinash [1 ]
Sridhar, Srinath [2 ]
Institutions
[1] IIIT Hyderabad, Hyderabad, India
[2] Brown Univ, Providence, RI, USA
DOI
10.1109/CVPR52733.2024.00214
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Understanding how we grasp objects with our hands has important applications in areas like robotics and mixed reality. However, this challenging problem requires accurate modeling of the contact between hands and objects. To capture grasps, existing methods use skeletons, meshes, or parametric models that do not represent hand shape accurately, resulting in inaccurate contacts. We present MANUS, a method for Markerless Hand-Object Grasp Capture using Articulated 3D Gaussians. We build a novel articulated 3D Gaussian representation that extends 3D Gaussian splatting [29] for high-fidelity representation of articulating hands. Because our representation uses Gaussian primitives optimized with multi-view, pixel-aligned losses, it enables us to efficiently and accurately estimate contacts between the hand and the object. For the most accurate results, our method requires tens of camera views that current datasets do not provide. We therefore build MANUS-Grasps, a new dataset that contains hand-object grasps viewed from 50+ cameras across 30+ scenes and 3 subjects, comprising over 7M frames. In addition to extensive qualitative results, we also show that our method outperforms others on a quantitative contact evaluation method that uses paint transfer from the object to the hand.
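
The abstract's central claim is that, once the hand is modeled as optimized 3D Gaussian primitives, hand-object contact can be estimated directly from the primitives' geometry. The following is a minimal, illustrative sketch of that idea, not the authors' implementation: it approximates contact by thresholding distances between hand Gaussian centers and sampled object surface points. The function name, array shapes, and the 5 mm threshold are assumptions made for illustration only.

# Minimal sketch (assumption, not the MANUS code): approximate hand-object
# contact by thresholding distances from hand Gaussian centers to object points.
import numpy as np

def estimate_contact(hand_centers: np.ndarray,
                     object_points: np.ndarray,
                     threshold: float = 0.005) -> np.ndarray:
    """Return a boolean contact mask over hand Gaussians.

    hand_centers:  (N, 3) centers of the articulated hand Gaussians, in meters.
    object_points: (M, 3) points sampled on the object surface, in meters.
    threshold:     distance (meters) below which a Gaussian counts as in contact.
    """
    # Pairwise distances between every hand Gaussian center and every object point.
    diffs = hand_centers[:, None, :] - object_points[None, :, :]   # (N, M, 3)
    dists = np.linalg.norm(diffs, axis=-1)                         # (N, M)
    # A hand Gaussian is "in contact" if any object point lies within the threshold.
    return dists.min(axis=1) < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hand = rng.uniform(-0.05, 0.05, size=(200, 3))   # toy hand Gaussian centers
    obj = rng.uniform(-0.05, 0.05, size=(500, 3))    # toy object surface samples
    mask = estimate_contact(hand, obj)
    print(f"{mask.sum()} of {mask.size} hand Gaussians marked as in contact")

A real pipeline in the spirit of the paper would operate on Gaussian parameters fitted from many calibrated views rather than random toy points, and could weight contact by each Gaussian's scale and opacity; the thresholded-distance form above is only meant to make the "contacts from Gaussian primitives" idea concrete.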
Pages: 2197-2208
Page Count: 12