TACOTRON-BASED ACOUSTIC MODEL USING PHONEME ALIGNMENT FOR PRACTICAL NEURAL TEXT-TO-SPEECH SYSTEMS

Cited by: 0
Authors
Okamoto, Takuma [1 ]
Toda, Tomoki [1 ,2 ]
Shiga, Yoshinori [1 ]
Kawai, Hisashi [1 ]
Affiliations
[1] Natl Inst Informat & Commun Technol, Tokyo, Japan
[2] Nagoya Univ, Informat Technol Ctr, Nagoya, Aichi, Japan
Source
2019 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU 2019) | 2019
Keywords
Speech synthesis; neural text-to-speech; duration model; forced alignment; sequence-to-sequence model; ATTENTION;
DOI
10.1109/asru46091.2019.9003956
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Although sequence-to-sequence (seq2seq) models with an attention mechanism in neural text-to-speech (TTS) systems, such as Tacotron 2, can jointly optimize the duration and acoustic models and realize higher-fidelity synthesis than conventional duration-acoustic pipeline models, they involve the risk that speech samples sometimes cannot be successfully synthesized because of attention prediction errors. Therefore, these seq2seq models cannot be directly introduced into practical TTS systems. The conventional pipeline models, on the other hand, are broadly used in practical TTS systems because their duration models rarely make crucial prediction errors. To realize high-quality practical TTS systems without attention prediction errors, this paper investigates Tacotron-based acoustic models that use phoneme alignment instead of attention. The phoneme durations are first obtained from HMM-based forced alignment, and the duration model is a simple bidirectional LSTM-based network. A seq2seq model with forced alignment instead of attention is then investigated, and an alternative model combining the Tacotron decoder with phoneme durations is proposed. The results of experiments with full-context label input and a WaveGlow vocoder indicate that the proposed model, unlike the seq2seq models, can realize a high-fidelity Japanese TTS system free of attention prediction errors with a real-time factor of 0.13 on a GPU.
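The following is a minimal PyTorch sketch of the idea described in the abstract, not the paper's actual implementation: a simple bidirectional LSTM predicts a per-phoneme duration in frames, and each phoneme encoding is repeated for its duration so that a Tacotron-style decoder can consume a frame-level sequence without attention. All layer sizes, feature dimensions, and names (DurationModel, expand_by_duration) are illustrative assumptions.

    import torch
    import torch.nn as nn

    class DurationModel(nn.Module):
        """Sketch of a simple bidirectional-LSTM duration model.

        Maps phoneme-level linguistic features (e.g. derived from
        full-context labels) to a per-phoneme duration in frames.
        Dimensions are assumptions, not the paper's settings.
        """
        def __init__(self, in_dim=256, hidden=256):
            super().__init__()
            self.lstm = nn.LSTM(in_dim, hidden, batch_first=True,
                                bidirectional=True)
            self.proj = nn.Linear(2 * hidden, 1)

        def forward(self, feats):               # feats: (B, N, in_dim)
            h, _ = self.lstm(feats)             # (B, N, 2*hidden)
            return self.proj(h).squeeze(-1)     # (B, N) frame counts

    def expand_by_duration(encoder_out, durations):
        """Replace attention with an explicit alignment: repeat each
        phoneme encoding for its number of frames (forced-aligned at
        training time, predicted at synthesis time), producing the
        frame-level input for a Tacotron-style decoder."""
        # encoder_out: (N, D); durations: (N,) integer frame counts
        return torch.repeat_interleave(encoder_out, durations, dim=0)

    # Usage sketch for a single utterance with hypothetical dimensions.
    enc = torch.randn(12, 512)                  # 12 phoneme encodings
    dur = torch.tensor([5, 3, 8, 4, 6, 2, 7, 5, 3, 9, 4, 6])
    frames = expand_by_duration(enc, dur)       # (sum(dur), 512)
    print(frames.shape)                         # torch.Size([62, 512])

Because the alignment is given explicitly rather than learned through attention, the decoder input length is fully determined before synthesis, which is what removes the risk of attention prediction errors at run time.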
Pages: 214-221
Number of pages: 8