FreeMotion: MoCap-Free Human Motion Synthesis with Multimodal Large Language Models

Cited by: 0
Authors
Zhang, Zhikai [1 ,3 ]
Li, Yitang [1 ,3 ]
Huang, Haofeng [1 ]
Lin, Mingxian [3 ]
Yi, Li [1 ,2 ,3 ]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Shanghai AI Lab, Shanghai, Peoples R China
[3] Shanghai Qi Zhi Inst, Shanghai, Peoples R China
Source
COMPUTER VISION - ECCV 2024, PT XXIII | 2025 / Vol. 15081
Keywords
Human motion synthesis; Multimodal large language models; Physics-based character animation
DOI
10.1007/978-3-031-73337-6_23
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Human motion synthesis is a fundamental task in computer animation. Despite recent progress in this field using deep learning and motion capture data, existing methods remain limited to specific motion categories, environments, and styles. This poor generalizability can be partly attributed to the difficulty and expense of collecting large-scale, high-quality motion data. At the same time, foundation models trained on internet-scale image and text data have demonstrated surprising world knowledge and reasoning ability across various downstream tasks. Leveraging these foundation models could benefit human motion synthesis, a direction that some recent works have explored only superficially. However, these methods do not fully exploit the foundation models' potential for this task and support only a few simple actions and environments. In this paper, we explore, for the first time and without any motion data, open-set human motion synthesis across arbitrary motion tasks and environments, using multimodal large language models (MLLMs) with natural-language instructions as the user control signal. Our framework consists of two stages: 1) sequential keyframe generation, with MLLMs serving as keyframe designer and animator; and 2) motion filling between keyframes through interpolation and motion tracking. Our method achieves general human motion synthesis for many downstream tasks. The promising results demonstrate the value of MLLM-aided, mocap-free human motion synthesis and pave the way for future research.
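To make the two-stage pipeline described in the abstract concrete, the following is a minimal Python sketch under stated assumptions: the function names, the 24-joint skeleton, and the plain linear interpolation are illustrative choices rather than the paper's implementation, and the MLLM call is stubbed with random poses so the sketch stays runnable. The paper's second stage additionally applies physics-based motion tracking, which is omitted here.

import numpy as np

NUM_JOINTS = 24  # assumed SMPL-like skeleton size; not specified in the abstract

def query_mllm_for_keyframes(instruction: str, num_keyframes: int) -> list[np.ndarray]:
    # Stage 1 (stub): an MLLM acts as keyframe designer and animator.
    # A real system would prompt a multimodal model with the natural-language
    # instruction and parse its reply into per-joint poses; random poses here
    # keep the sketch self-contained.
    rng = np.random.default_rng(0)
    return [rng.standard_normal((NUM_JOINTS, 3)) for _ in range(num_keyframes)]

def interpolate_keyframes(keyframes: list[np.ndarray], steps_between: int) -> list[np.ndarray]:
    # Stage 2 (interpolation half only): fill motion between consecutive
    # keyframes by linear blending; the paper pairs this with motion tracking.
    frames = []
    for a, b in zip(keyframes[:-1], keyframes[1:]):
        for t in np.linspace(0.0, 1.0, steps_between, endpoint=False):
            frames.append((1.0 - t) * a + t * b)
    frames.append(keyframes[-1])
    return frames

if __name__ == "__main__":
    keys = query_mllm_for_keyframes("a person jumps over a low box", num_keyframes=4)
    motion = interpolate_keyframes(keys, steps_between=10)
    print(f"{len(keys)} keyframes -> {len(motion)} frames")

In a full system, the stub would call a vision-language model and parse structured pose descriptions, and the interpolated trajectory would then be refined by a physics-based tracking controller, as the abstract's second stage describes.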
Pages: 403-421
Page count: 19