EMOPortraits: Emotion-enhanced Multimodal One-shot Head Avatars

Cited by: 2
Authors
Drobyshev, Nikita [1 ]
Casademunt, Antoni Bigata [1 ]
Vougioukas, Konstantinos [1 ]
Landgraf, Zoe [1 ]
Petridis, Stavros [1 ]
Pantic, Maja [1 ]
Affiliations
[1] Imperial Coll London, London, England
Source
2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2024 | 2024
DOI
10.1109/CVPR52733.2024.00812
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104; 0812; 0835; 1405
Abstract
Head avatars animated by visual signals have gained popularity, particularly in cross-driving synthesis, where the driver differs from the animated character, a challenging but highly practical setting. The recently presented MegaPortraits model has demonstrated state-of-the-art results in this domain. We conduct a thorough examination and evaluation of this model, with a particular focus on its latent space for facial expression descriptors, and uncover several limitations in its ability to express intense face motions. To address these limitations, we propose substantial changes to both the training pipeline and the model architecture, yielding our EMOPortraits model, in which we: (1) enhance the model's capability to faithfully support intense, asymmetric facial expressions, setting a new state-of-the-art result in the emotion transfer task and surpassing previous methods in both metrics and quality; (2) incorporate a speech-driven mode into our model, achieving top-tier performance in audio-driven facial animation and making it possible to drive the source identity through diverse modalities, including a visual signal, audio, or a blend of both. Furthermore, we propose a novel multi-view video dataset featuring a wide range of intense and asymmetric facial expressions, filling the gap left by the absence of such data in existing datasets.
Pages: 8498-8507
Page count: 10