FreeMotion: MoCap-Free Human Motion Synthesis with Multimodal Large Language Models

Cited: 0
Authors
Zhang, Zhikai [1 ,3 ]
Li, Yitang [1 ,3 ]
Huang, Haofeng [1 ]
Lin, Mingxian [3 ]
Yi, Li [1 ,2 ,3 ]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Shanghai AI Lab, Shanghai, Peoples R China
[3] Shanghai Qi Zhi Inst, Shanghai, Peoples R China
Source
COMPUTER VISION - ECCV 2024, PT XXIII | 2025 / Vol. 15081
Keywords
Human motion synthesis; Multimodal large language models; Physics-based character animation;
D O I
10.1007/978-3-031-73337-6_23
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Human motion synthesis is a fundamental task in computer animation. Despite recent progress driven by deep learning and motion capture data, existing methods are typically limited to specific motion categories, environments, and styles. This poor generalizability can be partially attributed to the difficulty and expense of collecting large-scale, high-quality motion data. At the same time, foundation models trained on internet-scale image and text data have demonstrated surprising world knowledge and reasoning ability across various downstream tasks. These foundation models may therefore aid human motion synthesis, a direction that some recent works have explored only superficially. However, these methods do not fully unveil the foundation models' potential for this task and support only a few simple actions and environments. In this paper, we explore, for the first time and without any motion data, open-set human motion synthesis across arbitrary motion tasks and environments, using natural language instructions as user control signals and MLLMs as the backbone. Our framework consists of two stages: 1) sequential keyframe generation, with MLLMs acting as keyframe designer and animator; and 2) motion filling between keyframes through interpolation and motion tracking. Our method achieves general human motion synthesis for many downstream tasks. The promising results demonstrate the value of mocap-free human motion synthesis aided by MLLMs and pave the way for future research.
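The abstract's second stage fills motion between sparse keyframes by interpolation. A minimal sketch of that step is shown below; this is a hypothetical illustration, not the authors' implementation, and it assumes poses are represented as flat joint-parameter vectors that can be interpolated linearly (the function name `fill_between_keyframes` is an assumption).

```python
import numpy as np

def fill_between_keyframes(kf_times, kf_poses, query_times):
    """Linearly interpolate flat pose vectors between sparse keyframes.

    kf_times    : (K,) increasing keyframe timestamps
    kf_poses    : (K, D) pose parameter vector at each keyframe
    query_times : (T,) timestamps at which to fill in poses
    Returns a (T, D) array of interpolated poses.
    """
    kf_times = np.asarray(kf_times, dtype=float)
    kf_poses = np.asarray(kf_poses, dtype=float)
    out = np.empty((len(query_times), kf_poses.shape[1]))
    # Interpolate each pose dimension independently over time.
    for d in range(kf_poses.shape[1]):
        out[:, d] = np.interp(query_times, kf_times, kf_poses[:, d])
    return out

# Two keyframes (e.g., rest pose at t=0, target pose at t=1),
# densified at three query times.
poses = fill_between_keyframes([0.0, 1.0], [[0.0, 0.0], [1.0, 2.0]], [0.0, 0.5, 1.0])
```

In practice, joint rotations would be interpolated on the rotation manifold (e.g., quaternion slerp) rather than linearly, and the paper additionally refines the interpolated motion with physics-based motion tracking.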
Pages: 403-421
Number of Pages: 19