Phonetic and Prosodic Information Estimation from Texts for Genuine Japanese End-to-End Text-to-Speech

Cited by: 5
Authors
Kakegawa, Naoto [1 ]
Hara, Sunao [1 ]
Abe, Masanobu [1 ]
Ijima, Yusuke [2 ]
Affiliations
[1] Okayama Univ, Grad Sch Interdisciplinary Sci & Engn Hlth Syst, Okayama, Japan
[2] NTT Corp, Tokyo, Japan
Source
INTERSPEECH 2021 | 2021
Keywords
Text-to-speech; Grapheme-to-phoneme (G2P); Attention mechanism; Transformer; Sequence-to-sequence neural networks
DOI
10.21437/Interspeech.2021-914
CLC (Chinese Library Classification) Numbers
R36 [Pathology]; R76 [Otorhinolaryngology]
Subject Classification Codes
100104; 100213
Abstract
The biggest obstacle to developing end-to-end Japanese text-to-speech (TTS) systems is estimating phonetic and prosodic information (PPI) from Japanese texts, for the following reasons: (1) Kanji characters in the Japanese writing system have multiple possible pronunciations, (2) there are no separation marks between words, and (3) an accent nucleus must be assigned at the appropriate position. In this paper, we propose to solve these problems with neural machine translation (NMT) based on encoder-decoder models, and we compare NMT models built on recurrent neural networks with the Transformer architecture. The proposed model handles text on a token (character) basis, whereas conventional systems handle it on a word basis. To assess the potential of the proposed approach, NMT models were trained on pairs of sentences and their PPIs, generated by a conventional Japanese TTS system from 5 million sentences. Evaluation experiments were performed using PPIs manually annotated for 5,142 sentences. The results show that the Transformer architecture performs best, with 98.0% accuracy for phonetic information estimation and 95.0% accuracy for PPI estimation. Judging from these results, NMT models are promising for end-to-end Japanese TTS.
Pages: 126-130
Page count: 5
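
The approach summarized in the abstract treats PPI estimation as character-level translation: Japanese text goes into an encoder, and a sequence of phonetic and prosodic symbols comes out of a decoder. Below is a minimal, illustrative PyTorch sketch of such an encoder-decoder Transformer. The vocabulary sizes, special-token layout, symbol inventory, hyperparameters, and tensor shapes are all assumptions for demonstration, not the authors' actual configuration.

import torch
import torch.nn as nn

PAD, BOS, EOS = 0, 1, 2  # assumed special-token layout

class CharSeq2Seq(nn.Module):
    """Character-level encoder-decoder Transformer: Japanese text in, PPI tokens out."""

    def __init__(self, src_vocab, tgt_vocab, d_model=256, nhead=4, layers=3, max_len=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model, padding_idx=PAD)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model, padding_idx=PAD)
        self.pos = nn.Embedding(max_len, d_model)  # learned positional embeddings
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=layers, num_decoder_layers=layers,
            batch_first=True)
        self.out = nn.Linear(d_model, tgt_vocab)

    def _embed(self, emb, ids):
        positions = torch.arange(ids.size(1), device=ids.device)
        return emb(ids) + self.pos(positions)[None]

    def forward(self, src, tgt):
        # Causal mask: each PPI position may only attend to earlier output tokens.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1)).to(src.device)
        hidden = self.transformer(
            self._embed(self.src_emb, src), self._embed(self.tgt_emb, tgt),
            tgt_mask=tgt_mask,
            src_key_padding_mask=(src == PAD),
            tgt_key_padding_mask=(tgt == PAD))
        return self.out(hidden)  # (batch, tgt_len, tgt_vocab) logits

# Hypothetical shapes: src holds character ids of Japanese sentences; tgt holds
# PPI token ids (phones plus accent-nucleus and phrase-boundary symbols),
# BOS-shifted for teacher forcing during training.
model = CharSeq2Seq(src_vocab=3000, tgt_vocab=100)
src = torch.randint(3, 3000, (2, 20))
tgt = torch.randint(3, 100, (2, 30))
logits = model(src, tgt[:, :-1])                 # predict the next PPI token
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)),
    tgt[:, 1:].reshape(-1), ignore_index=PAD)

At inference such a model would decode autoregressively, starting from BOS and feeding each predicted PPI token back into the decoder (greedily or with beam search) until EOS is produced; the paper's RNN-based baseline would differ only in swapping the Transformer for a recurrent encoder-decoder with attention.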