An Overview of Deep-Learning-Based Audio-Visual Speech Enhancement and Separation

Cited by: 125
Authors
Michelsanti, Daniel [1]
Tan, Zheng-Hua [1]
Zhang, Shi-Xiong [2]
Xu, Yong [2]
Yu, Meng [2]
Yu, Dong [2]
Jensen, Jesper [1,3]
Affiliations
[1] Aalborg Univ, Dept Elect Syst, DK-9220 Aalborg, Denmark
[2] Tencent AI Lab, Bellevue, WA 98004 USA
[3] Oticon AS, DK-2765 Smorum, Denmark
Keywords
Speech enhancement; Acoustics; Visualization; Task analysis; Deep learning; Microphones; Videos; Audio-visual processing; deep learning; sound source separation; speech enhancement; speech separation; speech synthesis; OBJECTIVE QUALITY; BINAURAL HEARING; COCKTAIL PARTY; INTELLIGIBILITY; DIFFERENTIATION; RECOGNITION; EXTRACTION; PREDICTION; NETWORKS; TRACKING;
DOI
10.1109/TASLP.2021.3066303
CLC Number (Chinese Library Classification)
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
Speech enhancement and speech separation are two related tasks whose purpose is to extract one target speech signal or several target speech signals, respectively, from a mixture of sounds generated by several sources. Traditionally, these tasks have been tackled using signal processing and machine learning techniques applied to the available acoustic signals. Since the visual aspect of speech is essentially unaffected by the acoustic environment, visual information from the target speakers, such as lip movements and facial expressions, has also been exploited in speech enhancement and speech separation systems. In order to efficiently fuse acoustic and visual information, researchers have exploited the flexibility of data-driven approaches, specifically deep learning, achieving strong performance. The steady stream of newly proposed techniques for feature extraction and multimodal fusion has highlighted the need for an overview that comprehensively describes and discusses deep-learning-based audio-visual speech enhancement and separation. In this paper, we provide a systematic survey of this research topic, focusing on the main elements that characterise the systems in the literature: acoustic features; visual features; deep learning methods; fusion techniques; training targets and objective functions. In addition, we review deep-learning-based methods for speech reconstruction from silent videos and audio-visual sound source separation for non-speech signals, since these methods can be more or less directly applied to audio-visual speech enhancement and separation. Finally, we survey commonly employed audio-visual speech datasets, given their central role in the development of data-driven approaches, and evaluation methods, since they are generally used to compare different systems and determine their performance.
Pages: 1368-1396
Number of pages: 29
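
To make the generic pipeline described in the abstract concrete, below is a minimal PyTorch sketch of a mask-based audio-visual enhancement model using concatenation fusion. It does not reproduce any specific system from the survey; the class name AudioVisualEnhancer, the choice of LSTM encoders, and all dimensions (n_freq, visual_dim, hidden) are illustrative assumptions.

import torch
import torch.nn as nn

class AudioVisualEnhancer(nn.Module):
    """Encode audio and video separately, fuse, and predict a T-F mask."""
    def __init__(self, n_freq=257, visual_dim=512, hidden=256):
        super().__init__()
        # Acoustic branch: sequence model over magnitude-spectrogram frames.
        self.audio_enc = nn.LSTM(n_freq, hidden, batch_first=True)
        # Visual branch: sequence model over per-frame lip/face embeddings,
        # assumed precomputed by a visual front-end and upsampled to the
        # STFT frame rate.
        self.video_enc = nn.LSTM(visual_dim, hidden, batch_first=True)
        # Fusion by feature concatenation, one of several strategies
        # the survey discusses.
        self.fusion = nn.LSTM(2 * hidden, hidden, batch_first=True)
        # Training target: a ratio-style mask in [0, 1] per T-F bin.
        self.mask_out = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, noisy_mag, visual_emb):
        # noisy_mag:  (batch, time, n_freq) mixture magnitude spectrogram
        # visual_emb: (batch, time, visual_dim) frame-synchronous visual features
        a, _ = self.audio_enc(noisy_mag)
        v, _ = self.video_enc(visual_emb)
        f, _ = self.fusion(torch.cat([a, v], dim=-1))
        mask = self.mask_out(f)
        return mask * noisy_mag  # masked mixture = enhanced estimate

# Toy usage: 2 utterances, 100 STFT frames, random stand-in features.
model = AudioVisualEnhancer()
noisy_mag = torch.rand(2, 100, 257)
visual_emb = torch.rand(2, 100, 512)
clean_mag = torch.rand(2, 100, 257)  # would be the clean target in practice
enhanced = model(noisy_mag, visual_emb)
loss = nn.functional.mse_loss(enhanced, clean_mag)  # one common objective
loss.backward()

Concatenation fusion with a masked-magnitude MSE objective is only one point in the design space the survey maps out; attention-based fusion, time-domain models, and alternative training targets follow the same encode-fuse-estimate pattern.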