Deep learning-based expressive speech synthesis: a systematic review of approaches, challenges, and resources

Cited by: 1
Authors
Barakat, Huda [1 ]
Turk, Oytun [2 ]
Demiroglu, Cenk [2 ]
Affiliations
[1] Ozyegin Univ, Dept Comp Sci, TR-34794 Istanbul, Turkiye
[2] Ozyegin Univ, Dept Elect & Elect Engn, TR-34794 Istanbul, Turkiye
Keywords
Speech synthesis; Expressive speech; Emotional speech; Deep learning; EMOTIONAL EXPRESSIONS; STYLE; TEXT; MODEL; REPRESENTATIONS; NETWORK; QUALITY
DOI
10.1186/s13636-024-00329-7
Chinese Library Classification (CLC)
O42 [Acoustics]
Discipline classification codes
070206; 082403
Abstract
Speech synthesis has made significant strides thanks to the transition from machine learning to deep learning models. Contemporary text-to-speech (TTS) models can generate speech of exceptionally high quality that closely mimics human speech. Nevertheless, given the wide array of applications now employing TTS models, high-quality speech generation alone is no longer sufficient. Present-day TTS models must also excel at producing expressive speech that conveys various speaking styles and emotions, as human speech does. Consequently, researchers have concentrated their efforts in recent years on developing more efficient models for expressive speech synthesis. This paper presents a systematic review of the literature on expressive speech synthesis models published within the last 5 years, with a particular emphasis on approaches based on deep learning. We offer a comprehensive classification scheme for these models and provide concise descriptions of the models falling into each category. Additionally, we summarize the principal challenges encountered in this research domain and outline the strategies employed in the literature to tackle them. In Section 8, we pinpoint research gaps in this field that necessitate further exploration. Our objective is to give an all-encompassing overview of this active research area and to offer guidance to interested researchers and future endeavors in the field.
Pages: 34