Real-time Expressive Avatar Animation Generation based on Monocular Videos

Cited: 5
|
Authors
Song, Wenfeng [1 ]
Wang, Xianfei [1 ]
Gao, Yang [2 ]
Hao, Aimin [3 ,4 ]
Hou, Xia [1 ]
Affiliations
[1] Beijing Informat Sci & Technol Univ, Comp Sch, Beijing, Peoples R China
[2] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing Adv Innovat Ctr Biomed Engn, Beijing, Peoples R China
[3] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing, Peoples R China
[4] Peng Cheng Lab, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
Computing methodologies-Computer graphics-Animation-Motion capture; Human-centered computing-Human computer interaction (HCI)-Interactive systems and tools;
DOI
10.1109/ISMAR-Adjunct57072.2022.00092
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Technologies for generating real-time animated avatars are highly useful in VR/AR animation and entertainment. Most existing approaches, however, depend on time-consuming, high-cost motion capture. This paper proposes an efficient, lightweight framework for dynamic avatar animation that generates facial expressions, gestures, and torso movements in real time, driven solely by monocular camera videos. Specifically, the framework computes the 3D posture and facial landmarks of the monocular videos using BlazePose key points. A novel adaptor mapping function then transforms the kinematic topology onto the rigid skeletons of avatars. Without dependence on high-cost motion capture instruments and without topology limitations, our approach produces avatar animations with a higher level of fidelity. Finally, animations including lip movements, facial expressions, and limb motions are generated in a unified framework, which allows our 3D virtual avatar to act just like a real person. We have conducted extensive experiments to demonstrate the efficacy of the approach for real-time avatar-related applications. Our project and software are publicly available for further research or practical use (https://github.com/xianfei/SysMocap/).
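The core retargeting step the abstract describes (mapping directions between BlazePose key points onto an avatar's rigid bones) can be sketched as a per-bone rotation estimate. This is only an illustrative sketch, not the paper's actual adaptor mapping function; the rest-pose bone direction and the shoulder/elbow coordinates below are hypothetical values:

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def rotation_between(a, b):
    """Quaternion (w, x, y, z) rotating unit vector a onto unit vector b."""
    ax, ay, az = a
    bx, by, bz = b
    dot = ax * bx + ay * by + az * bz
    # Cross product a x b gives the rotation axis (scaled by sin(theta)).
    cx = ay * bz - az * by
    cy = az * bx - ax * bz
    cz = ax * by - ay * bx
    # (1 + dot, cross) normalized is the half-angle quaternion.
    w = 1.0 + dot
    n = math.sqrt(w * w + cx * cx + cy * cy + cz * cz)
    return (w / n, cx / n, cy / n, cz / n)

# Hypothetical rest pose: the avatar's upper-arm bone points straight down.
rest_dir = normalize((0.0, -1.0, 0.0))
# Hypothetical BlazePose world landmarks: shoulder and elbow positions.
shoulder, elbow = (0.2, 1.4, 0.0), (0.5, 1.1, 0.0)
obs_dir = normalize(tuple(e - s for e, s in zip(elbow, shoulder)))
# Rotation to apply to the avatar's upper-arm bone for this frame.
q = rotation_between(rest_dir, obs_dir)
```

For the 45-degree arm pose above, `q` is a rotation of 45 degrees about the z axis. A full retargeter would apply this per bone down the kinematic chain, in each parent bone's local frame.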
Pages: 429-434
Page count: 6
Related Papers
50 records in total
  • [21] RC-SMPL : Real-time Cumulative SMPL-based Avatar Body Generation
    Song, Hail
    Yoon, Boram
    Cho, Woojin
    Woo, Woontack
    2023 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY, ISMAR, 2023, : 89 - 98
  • [22] Real-time light animation
    Sbert, M
    Szécsi, L
    Szirmay-Kalos, L
    COMPUTER GRAPHICS FORUM, 2004, 23 (03) : 291 - 299
  • [23] Advanced Real-time Animation
    Leeney, Mark
    Maloney, Darragh
    CGAMES'2006: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON COMPUTER GAMES: ARTIFICIAL INTELLIGENCE AND MOBILE SYSTEMS, 2006, : 151 - 156
  • [24] Real-time watercolor for animation
    Luft, T
    Deussen, O
    JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2006, 21 (02) : 159 - 165
  • [26] Real-time garment animation based on mixed model
    Mao, Tianlu
    Xia, Shihong
    Zhu, Xiaolong
    Wang, Zhaoqi
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2010, 47 (01): : 8 - 15
  • [27] Real-Time Motion Recognition Based on Skeleton Animation
    Hong, Chen
    Xiao, Shuangjiu
    Tan, Zehong
    Lv, Jianchao
    2012 5TH INTERNATIONAL CONGRESS ON IMAGE AND SIGNAL PROCESSING (CISP), 2012, : 1648 - 1652
  • [28] Real-Time Retargeting of Human Poses from Monocular Images and Videos to the NAO Robot
    Burga O.
    Villegas J.
    Ugarte W.
    Journal of Computing Science and Engineering, 2024, 18 (01) : 47 - 56
  • [29] A REAL-TIME 3D HEAD MESH MODELING AND EXPRESSIVE ARTICULATORY ANIMATION SYSTEM
    Yu, Jun
    Wang, Zeng-fu
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 2946 - 2950
  • [30] Real-time face pose tracking and facial expression synthesizing for the animation of 3D avatar
    Chun, Junchul
    Kwon, Ohryun
    Min, Kyongpil
    Park, Peom
    TECHNOLOGIES FOR E-LEARNING AND DIGITAL ENTERTAINMENT, PROCEEDINGS, 2007, 4469 : 191 - +