INVESTIGATING DISENTANGLEMENT IN A PHONEME-LEVEL SPEECH CODEC FOR PROSODY MODELING

Cited by: 0
Authors
Karapiperis, Sotirios [1 ]
Ellinas, Nikolaos [1 ]
Vioni, Alexandra [1 ]
Oh, Junkwang [2 ]
Jho, Gunu [2 ]
Hwang, Inchul [2 ]
Raptis, Spyros [1 ]
Affiliations
[1] Samsung Electronics, Innoetics, Maroussi, Greece
[2] Samsung Electronics, Mobile eXperience Business, Seoul, South Korea
Source
2024 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP (SLT), 2024
Keywords
Prosody Modeling; Speech Synthesis; Vector Quantization; RVQ-VAE
DOI
10.1109/SLT61566.2024.10832258
CLC Number
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
Most prevalent approaches to speech prosody modeling rely on learning global style representations or a continuous latent space that encodes and transfers the attributes of reference speech. However, recent work on neural codecs based on Residual Vector Quantization (RVQ) already shows great potential and offers distinct advantages. We investigate the prosody modeling capabilities of the discrete latent space of such an RVQ-VAE model, modifying it to operate at the phoneme level. We condition both the encoder and decoder of the model on linguistic representations and apply a global speaker embedding in order to factor out both phonetic and speaker information. We conduct an extensive set of investigations based on subjective experiments and objective measures, showing that the phoneme-level discrete latent representations obtained this way achieve a high degree of disentanglement, capturing fine-grained prosodic information that is robust and transferable. The latent space turns out to have an interpretable structure, with its principal components corresponding to pitch and energy.
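The core mechanism the abstract names is Residual Vector Quantization: a stack of codebooks in which each stage quantizes the residual error left by the previous one, so later stages capture progressively finer detail. The following is a minimal NumPy sketch of that encoding step, assuming illustrative codebook sizes, depth, and variable names that are not taken from the paper; it is not the authors' implementation.

import numpy as np

def rvq_encode(x, codebooks):
    """Quantize vector x with a stack of codebooks.

    x:         (d,) latent vector (e.g., a phoneme-level prosody embedding)
    codebooks: list of (n_codes, d) arrays, one per quantizer stage
    Returns the selected code indices and the reconstructed vector.
    """
    residual = x.copy()
    quantized = np.zeros_like(x)
    indices = []
    for cb in codebooks:
        # Pick the codeword closest to the current residual.
        dists = np.linalg.norm(cb - residual, axis=1)
        idx = int(np.argmin(dists))
        indices.append(idx)
        quantized += cb[idx]
        # Later stages quantize what earlier stages missed.
        residual -= cb[idx]
    return indices, quantized

# Toy usage: 3 quantizer stages, 256 codes each, 64-dim latents.
rng = np.random.default_rng(0)
codebooks = [rng.standard_normal((256, 64)) for _ in range(3)]
x = rng.standard_normal(64)
idx, x_hat = rvq_encode(x, codebooks)
print(idx, np.linalg.norm(x - x_hat))

In the paper's setting, each such latent corresponds to a single phoneme, with phonetic and speaker information factored out by the linguistic conditioning and global speaker embedding described in the abstract.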
Pages: 668-674
Number of pages: 7