Large-Scale Unsupervised Audio Pre-Training for Video-to-Speech Synthesis

Cited by: 0
Authors
Kefalas, Triantafyllos [1 ]
Panagakis, Yannis [2 ,3 ]
Pantic, Maja [1 ]
Affiliations
[1] Imperial Coll London, Dept Comp, London SW7 2AZ, England
[2] Natl & Kapodistrian Univ Athens, Dept Informat & Telecommun, Athens 16122, Greece
[3] Archimedes Res Unit, Maroussi 15125, Greece
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Video-to-speech; speech synthesis; generative adversarial networks (GANs); conformer; pre-training; recognition
DOI
10.1109/TASLP.2024.3382500
Chinese Library Classification
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
Video-to-speech synthesis is the task of reconstructing the speech signal from a silent video of a speaker. Previous approaches train almost exclusively on audio-visual datasets, i.e., datasets in which every audio sample has a corresponding video sample. This precludes the use of abundant audio-only corpora that lack a corresponding visual modality, such as audiobooks, radio podcasts, and speech recognition datasets. In this paper, we propose to train encoder-decoder models on more than 3,500 hours of audio data at 24 kHz, and then use the pre-trained decoders to initialize the audio decoders for the video-to-speech synthesis task. The pre-training step uses only audio samples and requires neither labels nor corresponding samples from other modalities (visual, text). We demonstrate that this improves the reconstructed speech and that it is a previously unexplored way to improve the quality of the generator in a cross-modal task while requiring samples from only one of the modalities. We conduct experiments using both raw audio and mel spectrograms as target outputs and benchmark our models against existing work.
Pages: 2255-2268
Page count: 14
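
To make the two-stage recipe in the abstract concrete, here is a minimal PyTorch sketch of the idea: pre-train an audio encoder-decoder on audio alone, then copy the pre-trained decoder's weights into the audio decoder of a video-to-speech generator. All module names, layer shapes, the linear video front-end, and the plain L1 reconstruction loss are illustrative assumptions for this sketch, not the authors' architecture; the paper itself trains GAN-based (conformer) models with raw-audio or mel-spectrogram targets.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioEncoder(nn.Module):
    """Strided-conv encoder used only during audio-only pre-training (assumed design)."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=16, stride=8, padding=4),
            nn.LeakyReLU(0.2),
            nn.Conv1d(64, 128, kernel_size=16, stride=8, padding=4),
            nn.LeakyReLU(0.2),
            nn.Conv1d(128, latent_dim, kernel_size=7, padding=3),
        )

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, 1, samples) -> latents: (batch, latent_dim, samples // 64)
        return self.net(wav)


class AudioDecoder(nn.Module):
    """Upsampling decoder mapping latent frames back to a waveform (assumed design)."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(latent_dim, 128, kernel_size=16, stride=8, padding=4),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose1d(128, 64, kernel_size=16, stride=8, padding=4),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose1d(64, 1, kernel_size=7, padding=3),
            nn.Tanh(),  # waveform in [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, latent_dim, frames) -> waveform: (batch, 1, 64 * frames)
        return self.net(z)


class VideoToSpeechGenerator(nn.Module):
    """Hypothetical video front-end feeding the (pre-trained) audio decoder."""

    def __init__(self, latent_dim: int = 256, video_feat_dim: int = 512):
        super().__init__()
        # Placeholder video encoder: per-frame visual features -> decoder latents.
        self.video_encoder = nn.Linear(video_feat_dim, latent_dim)
        self.audio_decoder = AudioDecoder(latent_dim)

    def forward(self, video_feats: torch.Tensor) -> torch.Tensor:
        # video_feats: (batch, frames, video_feat_dim)
        z = self.video_encoder(video_feats).transpose(1, 2)  # (B, latent_dim, frames)
        return self.audio_decoder(z)


# --- Stage 1: unsupervised pre-training on audio alone (no labels, no video) ---
encoder, decoder = AudioEncoder(), AudioDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)

wav = torch.randn(4, 1, 64 * 300)  # dummy batch standing in for 24 kHz clips
opt.zero_grad()
recon = decoder(encoder(wav))
loss = F.l1_loss(recon, wav)       # stand-in reconstruction loss (paper: GAN losses)
loss.backward()
opt.step()

# --- Stage 2: initialize the V2S audio decoder from the pre-trained weights ---
v2s = VideoToSpeechGenerator()
v2s.audio_decoder.load_state_dict(decoder.state_dict())
# The V2S model is then trained on silent video; its decoder starts from weights
# learned on audio-only corpora rather than from random initialization.
```

The only constraint the weight transfer imposes is that the video encoder emit latents with the same shape and meaning the audio decoder was pre-trained on; this is what lets the decoder be trained on audio-only data and reused in the cross-modal task.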