Deep learning-based expressive speech synthesis: a systematic review of approaches, challenges, and resources

Cited by: 1
Authors
Barakat, Huda [1 ]
Turk, Oytun [2 ]
Demiroglu, Cenk [2 ]
Affiliations
[1] Ozyegin Univ, Dept Comp Sci, TR-34794 Istanbul, Turkiye
[2] Ozyegin Univ, Dept Elect & Elect Engn, TR-34794 Istanbul, Turkiye
Keywords
Speech synthesis; Expressive speech; Emotional speech; Deep learning; EMOTIONAL EXPRESSIONS; STYLE; TEXT; MODEL; REPRESENTATIONS; NETWORK; QUALITY;
DOI
10.1186/s13636-024-00329-7
Chinese Library Classification (CLC)
O42 [Acoustics];
Subject classification codes
070206; 082403
Abstract
Speech synthesis has made significant strides thanks to the transition from machine learning to deep learning models. Contemporary text-to-speech (TTS) models can generate speech of exceptionally high quality that closely mimics human speech. Nevertheless, given the wide array of applications now employing TTS models, high-quality speech generation alone is no longer sufficient. Present-day TTS models must also excel at producing expressive speech that conveys various speaking styles and emotions, akin to human speech. Consequently, researchers have concentrated their efforts in recent years on developing more efficient models for expressive speech synthesis. This paper presents a systematic review of the literature on expressive speech synthesis models published within the last 5 years, with a particular emphasis on approaches based on deep learning. We offer a comprehensive classification scheme for these models and provide concise descriptions of models falling into each category. Additionally, we summarize the principal challenges encountered in this research domain and outline the strategies employed to tackle them as documented in the literature. In Section 8, we pinpoint research gaps in this field that necessitate further exploration. Our objective with this work is to give a comprehensive overview of this active research area and to offer guidance to interested researchers and future endeavors in the field.
Pages: 34