Probabilistic Speech-Driven 3D Facial Motion Synthesis: New Benchmarks, Methods, and Applications

Cited by: 1
Authors
Yang, Karren D. [1 ]
Ranjan, Anurag [1 ]
Chang, Jen-Hao Rick [1 ]
Vemulapalli, Raviteja [1 ]
Tuzel, Oncel [1 ]
Affiliations
[1] Apple, Cupertino, CA 95014 USA
Source
2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2024
DOI
10.1109/CVPR52733.2024.02577
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We consider the task of animating 3D facial geometry from a speech signal. Existing works are primarily deterministic, focusing on learning a one-to-one mapping from speech signal to 3D face meshes on small datasets with limited speakers. While these models can achieve high-quality lip articulation for speakers in the training set, they are unable to capture the full and diverse distribution of 3D facial motions that accompany speech in the real world. Importantly, the relationship between speech and facial motion is one-to-many, containing both inter-speaker and intra-speaker variations and necessitating a probabilistic approach. In this paper, we identify and address key challenges that have so far limited the development of probabilistic models: the lack of datasets and metrics suitable for training and evaluating them, as well as the difficulty of designing a model that generates diverse results while remaining faithful to a conditioning signal as strong as speech. We first propose large-scale benchmark datasets and metrics suitable for probabilistic modeling. Then, we demonstrate a probabilistic model that achieves both diversity and fidelity to speech, outperforming other methods across the proposed benchmarks. Finally, we showcase useful applications of probabilistic models trained on these large-scale datasets: we can generate diverse speech-driven 3D facial motion that matches unseen speaker styles extracted from reference clips; and our synthetic meshes can be used to improve the performance of downstream audio-visual models.
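To make the one-to-many framing concrete, below is a minimal sketch of a latent-variable speech-to-motion sampler. This is not the authors' model: the GRU architecture, the feature dimensions, and the 5023-vertex mesh size are assumptions chosen only to illustrate how sampling different latent codes for the same audio yields distinct, speech-conditioned motion sequences.

```python
# Illustrative sketch only (assumed architecture and dimensions, not the paper's method):
# a conditional generator mapping speech features plus a latent code to per-frame
# vertex offsets, so one audio clip can produce many plausible facial motions.
import torch
import torch.nn as nn

class SpeechToMotionSampler(nn.Module):
    def __init__(self, audio_dim=80, latent_dim=32, hidden=256, n_vertices=5023):
        super().__init__()
        # Latent code z carries the inter-/intra-speaker variation; audio carries content.
        self.rnn = nn.GRU(audio_dim + latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_vertices * 3)  # per-frame 3D vertex offsets

    def forward(self, audio_feats, z):
        # audio_feats: (B, T, audio_dim); z: (B, latent_dim), broadcast over time.
        T = audio_feats.shape[1]
        z_seq = z.unsqueeze(1).expand(-1, T, -1)
        h, _ = self.rnn(torch.cat([audio_feats, z_seq], dim=-1))
        return self.head(h)  # (B, T, n_vertices * 3)

model = SpeechToMotionSampler()
audio = torch.randn(1, 100, 80)  # e.g. 100 frames of mel-spectrogram features
# Same speech, three latent draws -> three different motion sequences,
# the one-to-many behavior the abstract argues deterministic models cannot capture.
samples = [model(audio, torch.randn(1, 32)) for _ in range(3)]
```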
Pages: 27284-27293
Number of pages: 10