LPIPS-AttnWav2Lip: Generic audio-driven lip synchronization for talking head generation in the wild

Cited by: 3
Authors
Chen, Zhipeng [1 ]
Wang, Xinheng [1 ]
Xie, Lun [1 ]
Yuan, Haijie [2 ]
Pan, Hang [3 ]
Affiliations
[1] Univ Sci & Technol Beijing, Beijing 100083, Peoples R China
[2] Xiaoduo Intelligent Technol Beijing Co Ltd, Beijing 100094, Peoples R China
[3] Changzhi Univ, Dept Comp Sci, Changzhi 046011, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
Audio-driven generation; Lip synthesis; LPIPS loss; Multimodal fusion; Talking head generation;
DOI
10.1016/j.specom.2023.103028
Chinese Library Classification
O42 [Acoustics];
Discipline Codes
070206; 082403;
Abstract
Researchers have shown growing interest in audio-driven talking head generation. The primary challenge in talking head generation is achieving audio-visual coherence between the lips and the audio, known as lip synchronization. This paper proposes a generic method, LPIPS-AttnWav2Lip, for reconstructing face images of any speaker from audio. We use a U-Net architecture based on residual CBAM to better encode and fuse audio and visual modal information. Additionally, a semantic alignment module extends the receptive field of the generator network to capture the spatial and channel information of the visual features efficiently, and matches the statistical information of the visual features with the audio latent vector, thereby adjusting and injecting the audio content information into the visual features. To achieve exact lip synchronization and generate realistic, high-quality images, our approach adopts the LPIPS loss, which simulates human judgment of image quality and reduces instability during training. The proposed method achieves outstanding lip-synchronization accuracy and visual quality, as demonstrated by subjective and objective evaluation results.
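The structure of the LPIPS loss named in the title can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: real LPIPS (Zhang et al., 2018) compares channel-normalized deep features from a pretrained network (e.g. VGG or AlexNet) with learned linear weights, whereas here fixed random projections play the role of the feature stages purely to show the shape of the metric. All names below are hypothetical.

```python
import numpy as np


def _normalize(feat, eps=1e-10):
    # Unit-normalize each spatial position's feature vector along the
    # channel axis, as LPIPS does before comparing activations.
    norm = np.sqrt((feat ** 2).sum(axis=0, keepdims=True)) + eps
    return feat / norm


def lpips_like_distance(img_a, img_b, stages):
    """Sum, over feature stages, of the mean squared difference between
    channel-normalized feature maps. `stages` is a list of callables
    mapping an image of shape (C, H, W) to a feature map (C', H, W)."""
    dist = 0.0
    for stage in stages:
        fa = _normalize(stage(img_a))
        fb = _normalize(stage(img_b))
        dist += ((fa - fb) ** 2).mean()
    return dist


# Toy "feature extractor": fixed random 1x1 channel projections per
# stage (a pretrained CNN would supply these features in practice).
rng = np.random.default_rng(0)


def make_stage(c_in, c_out):
    w = rng.standard_normal((c_out, c_in))
    return lambda img: np.einsum("oc,chw->ohw", w, img)


stages = [make_stage(3, 8), make_stage(3, 16)]
a = rng.standard_normal((3, 32, 32))            # "generated" frame
b = a + 0.1 * rng.standard_normal((3, 32, 32))  # slightly perturbed frame
```

Because the per-stage differences are averaged over normalized features, the distance is zero only for identical inputs and grows smoothly with perceptual-style feature differences, which is what makes it a stable training signal.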
Pages: 8
Cited References
33 in total
[1]   Speaker-Independent Speech-Driven Visual Speech Synthesis using Domain-Adapted Acoustic Models [J].
Abdelaziz, Ahmed Hussen ;
Theobald, Barry-John ;
Binder, Justin ;
Fanelli, Gabriele ;
Dixon, Paul ;
Apostoloff, Nicholas ;
Weise, Thibaut ;
Kajareker, Sachin .
ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, :220-225
[2]  
Afouras T, 2018, arXiv, DOI arXiv:1809.00496
[3]   Audio-Visual Face Reenactment [J].
Agarwal, Madhav ;
Mukhopadhyay, Rudrabha ;
Namboodiri, Vinay ;
Jawahar, C. V. .
2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, :5167-5176
[4]   BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond [J].
Chan, Kelvin C. K. ;
Wang, Xintao ;
Yu, Ke ;
Dong, Chao ;
Loy, Chen Change .
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, :4945-4954
[5]   Hierarchical Cross-Modal Talking Face Generation with Dynamic Pixel-Wise Loss [J].
Chen, Lele ;
Maddox, Ross K. ;
Duan, Zhiyao ;
Xu, Chenliang .
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, :7824-7833
[6]   VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild [J].
Cheng, Kun ;
Cun, Xiaodong ;
Zhang, Yong ;
Xia, Menghan ;
Yin, Fei ;
Zhu, Mingrui ;
Wang, Xuan ;
Wang, Jue ;
Wang, Nannan .
PROCEEDINGS SIGGRAPH ASIA 2022, 2022,
[7]  
Chi L., 2020, Adv Neural Inf Process Syst, V33, P4479, DOI 10.5555/3495724.3496100
[8]   Out of Time: Automated Lip Sync in the Wild [J].
Chung, Joon Son ;
Zisserman, Andrew .
COMPUTER VISION - ACCV 2016 WORKSHOPS, PT II, 2017, 10117 :251-263
[9]   Lip Reading in the Wild [J].
Chung, Joon Son ;
Zisserman, Andrew .
COMPUTER VISION - ACCV 2016, PT II, 2017, 10112 :87-103
[10]   Capture, Learning, and Synthesis of 3D Speaking Styles [J].
Cudeiro, Daniel ;
Bolkart, Timo ;
Laidlaw, Cassidy ;
Ranjan, Anurag ;
Black, Michael J. .
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, :10093-10103