Many-to-Many Voice Transformer Network

Times Cited: 20
Authors
Kameoka, Hirokazu [1 ]
Huang, Wen-Chin [2 ]
Tanaka, Kou [1 ]
Kaneko, Takuhiro [1 ]
Hojo, Nobukatsu [1 ]
Toda, Tomoki [2 ]
Affiliations
[1] NTT Corp, NTT Commun Sci Labs, Atsugi, Kanagawa 2430198, Japan
[2] Nagoya Univ, Nagoya, Aichi 4648601, Japan
Keywords
Training; Acoustics; Computational modeling; Decoding; Data models; Training data; Computer architecture; Attention; many-to-many VC; sequence-to-sequence learning; voice conversion (VC); transformer network; CONVOLUTIONAL SEQUENCE; CONVERSION; SPEECH;
DOI
10.1109/TASLP.2020.3047262
CLC Number
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
This paper proposes a voice conversion (VC) method based on a sequence-to-sequence (S2S) learning framework, which enables simultaneous conversion of the voice characteristics, pitch contour, and duration of input speech. We previously proposed an S2S-based VC method using a transformer network architecture called the voice transformer network (VTN). The original VTN was designed to learn only a mapping of speech feature sequences from one speaker to another. Here, the main idea we propose is an extension of the original VTN that can simultaneously learn mappings among multiple speakers. This extension, called the many-to-many VTN, enables us to fully use available training data collected from multiple speakers by capturing common latent features that can be shared across different speakers. It also allows us to introduce a training loss called the identity mapping loss to ensure that the input feature sequence will remain unchanged when the source and target speaker indices are the same. Using this particular loss for model training has been found to be extremely effective in improving the performance of the model at test time. We conducted speaker identity conversion experiments and found that our model achieved higher sound quality and speaker similarity than baseline methods. We also found that our model, with a slight modification to its architecture, can handle any-to-many conversion tasks reasonably well.
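The identity mapping loss described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation: the `SimpleManyToManyVC` module, its interface, and the loss weight are hypothetical stand-ins for the actual many-to-many VTN, and the toy model ignores the sequence-to-sequence alignment and duration modeling that the real network handles. It only shows the idea that, when the source and target speaker indices coincide, the converter is trained to reproduce its input feature sequence.

```python
# Minimal sketch of the identity mapping loss idea (assumed interface, not the paper's code).
import torch
import torch.nn as nn

class SimpleManyToManyVC(nn.Module):
    """Toy stand-in for a many-to-many converter: maps a mel-like feature
    sequence conditioned on learned source and target speaker embeddings."""
    def __init__(self, feat_dim=80, num_speakers=4, emb_dim=16):
        super().__init__()
        self.spk_emb = nn.Embedding(num_speakers, emb_dim)
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 2 * emb_dim, 256),
            nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, feats, src_id, tgt_id):
        # feats: (batch, time, feat_dim); src_id, tgt_id: (batch,)
        b, t, _ = feats.shape
        src = self.spk_emb(src_id).unsqueeze(1).expand(b, t, -1)
        tgt = self.spk_emb(tgt_id).unsqueeze(1).expand(b, t, -1)
        return self.net(torch.cat([feats, src, tgt], dim=-1))

def identity_mapping_loss(model, feats, spk_id):
    """Reconstruction penalty with source speaker == target speaker."""
    recon = model(feats, spk_id, spk_id)
    return nn.functional.l1_loss(recon, feats)

if __name__ == "__main__":
    model = SimpleManyToManyVC()
    src_feats = torch.randn(2, 100, 80)   # source-speaker features
    tgt_feats = torch.randn(2, 100, 80)   # time-aligned targets, only for this toy example
    src_id = torch.tensor([0, 1])
    tgt_id = torch.tensor([2, 3])

    conv_loss = nn.functional.l1_loss(model(src_feats, src_id, tgt_id), tgt_feats)
    id_loss = identity_mapping_loss(model, src_feats, src_id)
    total = conv_loss + 1.0 * id_loss      # weight on the identity term is an arbitrary guess
    total.backward()
    print(float(conv_loss), float(id_loss))
```

The identity term needs no parallel data: any utterance can be passed through the model with matching source and target indices, which is one way the abstract's claim of fully using data from multiple speakers can be read.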
Citation
Pages: 656-670
Page Count: 15