GRAPHPB: GRAPHICAL REPRESENTATIONS OF PROSODY BOUNDARY IN SPEECH SYNTHESIS

Times Cited: 4
Authors
Sun, Aolan [1]
Wang, Jianzong [1]
Cheng, Ning [1]
Peng, Huayi [1]
Zeng, Zhen [1]
Kong, Lingwei [1]
Xiao, Jing [1]
Affiliations
[1] Ping An Technol Shenzhen Co Ltd, Shenzhen, Guangdong, Peoples R China
Source
2021 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP (SLT) | 2021
Keywords
graph neural network; neural text-to-speech; speech synthesis; prosody modelling
DOI
10.1109/SLT48900.2021.9383530
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper introduces GraphPB, a graphical representation of prosody boundaries for Chinese speech synthesis, which aims to capture the semantic and syntactic relationships of input sequences in the graph domain to improve prosody performance. The nodes of the graph are formed by prosodic words, and the edges by the other prosodic boundaries, namely prosodic phrase boundaries (PPH) and intonation phrase boundaries (IPH). Different graph neural networks (GNNs), such as the Gated Graph Neural Network (GGNN) and Graph Long Short-Term Memory (G-LSTM), are utilised as graph encoders to exploit the graphical prosody-boundary information. A graph-to-sequence model, composed of a graph encoder and an attentional decoder, is proposed, along with two techniques for embedding sequential information into the graph-to-sequence text-to-speech model. Experimental results show that the proposed approach can encode the phonetic and prosodic rhythm of an utterance. The mean opinion scores (MOS) of these GNN models are comparable to those of state-of-the-art sequence-to-sequence models, with better performance in terms of prosody. This provides an alternative approach to prosody modelling in end-to-end speech synthesis.
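The graph construction described in the abstract (nodes from prosodic words, edges labelled by the boundary type separating them) can be sketched as follows. This is a minimal illustration, not the paper's released code: the `#1`/`#2`/`#3` inline annotation convention (prosodic word, PPH, IPH) is a common Chinese TTS labelling scheme assumed here, and all function and variable names are hypothetical.

```python
import re


def build_graphpb(annotated: str):
    """Parse a boundary-annotated sentence into (nodes, edges).

    Assumed convention: '#1' marks a prosodic-word boundary,
    '#2' a prosodic-phrase boundary (PPH), '#3' an intonation-phrase
    boundary (IPH), e.g. '今天#1天气#2很好#3'.
    """
    # Split on the markers while keeping the boundary level via the
    # capturing group: ['今天', '1', '天气', '2', '很好', '3', ''].
    parts = re.split(r"#([123])", annotated)
    words = parts[0::2]   # prosodic words -> graph nodes
    levels = parts[1::2]  # boundary levels between adjacent words
    if words and words[-1] == "":
        # Annotation ended in a marker; drop the empty trailing token.
        words = words[:-1]
    label = {"1": "PW", "2": "PPH", "3": "IPH"}
    # Connect each pair of adjacent prosodic words with an edge typed
    # by the boundary between them.
    edges = [(i, i + 1, label[lv])
             for i, lv in enumerate(levels) if i + 1 < len(words)]
    return words, edges
```

The resulting typed edge list is the kind of structure a graph encoder such as a GGNN or G-LSTM could consume, with one relation type per boundary level.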
Pages: 438-445
Page count: 8