Multi-Level Temporal-Channel Speaker Retrieval for Zero-Shot Voice Conversion

Cited: 1
Authors
Wang, Zhichao [1 ]
Xue, Liumeng [1 ]
Kong, Qiuqiang [2 ]
Xie, Lei [1 ]
Chen, Yuanzhe [2 ]
Tian, Qiao [2 ]
Wang, Yuping [2 ]
Affiliations
[1] Northwestern Polytechnical University, School of Computer Science, ASLP Lab, Xi'an 710072, China
[2] ByteDance SAMI Group, Shanghai 200233, China
Keywords
Voice conversion; zero-shot; temporal-channel retrieval; attention mechanism
DOI
10.1109/TASLP.2024.3407577
Chinese Library Classification
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
Zero-shot voice conversion (VC) converts source speech into the voice of any desired speaker using only one utterance of that speaker, without requiring additional model updates. Typical methods achieve zero-shot VC by using a speaker representation from a pre-trained speaker verification (SV) model or by learning the speaker representation during VC training. However, existing speaker modeling methods overlook how the richness of speaker information varies across the temporal and frequency-channel dimensions of speech. This insufficient speaker modeling hampers the VC model's ability to accurately represent unseen speakers, i.e., speakers not in the training dataset. In this study, we present a robust zero-shot VC model with multi-level temporal-channel retrieval, referred to as MTCR-VC. Specifically, to flexibly adapt to the dynamically varying speaker characteristics along the temporal and channel axes of speech, we propose a novel fine-grained speaker modeling method, called temporal-channel retrieval (TCR), which identifies when and where speaker information appears in speech. It retrieves a variable-length speaker representation from both the temporal and channel dimensions under the guidance of a pre-trained SV model. In addition, inspired by the hierarchical process of human speech production, the MTCR speaker module stacks several TCR blocks to extract speaker representations at multiple levels of granularity. Furthermore, we introduce a cycle-based training strategy that recurrently simulates zero-shot inference to achieve better speech disentanglement and reconstruction. To drive this process, we adopt perceptual constraints on three aspects: content, style, and speaker. Experiments demonstrate that MTCR-VC surpasses previous zero-shot VC methods in modeling speaker timbre while maintaining good speech naturalness.
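The abstract's core idea, retrieving speaker information from both the temporal and the channel axis under SV-model guidance, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the single-query attention form, and the element-wise channel gating are all simplifying assumptions made for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_channel_retrieval(feats, sv_query):
    """Hypothetical sketch of temporal-channel retrieval.

    feats:    (T, C) frame-level speech features
    sv_query: (C,)   query derived from a pre-trained SV embedding
    Returns a (C,) speaker vector pooled with attention over time
    ("when" speaker info appears) and gating over channels ("where").
    """
    T, C = feats.shape
    # Temporal attention: score each frame against the SV query,
    # so frames rich in speaker information get higher weight.
    t_scores = feats @ sv_query / np.sqrt(C)      # (T,)
    t_weights = softmax(t_scores)                  # (T,) sums to 1
    pooled = t_weights @ feats                     # (C,)

    # Channel gating: reweight channels by their agreement with
    # the SV query, emphasizing speaker-informative channels.
    c_weights = softmax(pooled * sv_query)         # (C,) sums to 1
    return pooled * c_weights * C                  # (C,) rescaled

# Example with random features standing in for real speech frames.
rng = np.random.default_rng(0)
feats = rng.standard_normal((50, 16))   # 50 frames, 16 channels
query = rng.standard_normal(16)         # stand-in SV embedding
spk_vec = temporal_channel_retrieval(feats, query)
```

In the actual MTCR-VC model, several such TCR blocks are stacked to extract speaker representations at multiple levels of granularity; this sketch only shows the single-block temporal-then-channel weighting pattern.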
Pages: 2926-2937
Page count: 12
Related Papers
50 records
  • [1] YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for Everyone
    Casanova, Edresson
    Weber, Julian
    Shulby, Christopher
    Candido Junior, Arnaldo
    Goelge, Eren
    Ponti, Moacir Antonelli
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [2] Zero-Shot Unseen Speaker Anonymization via Voice Conversion
    Chang, Hyung-Pil
    Yoo, In-Chul
    Jeong, Changhyeon
    Yook, Dongsuk
    IEEE ACCESS, 2022, 10 : 130190 - 130199
  • [3] ZERO-SHOT VOICE CONVERSION WITH ADJUSTED SPEAKER EMBEDDINGS AND SIMPLE ACOUSTIC FEATURES
    Tan, Zhiyuan
    Wei, Jianguo
    Xu, Junhai
    He, Yuqing
    Lu, Wenhuan
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 5964 - 5968
  • [4] DGC-VECTOR: A NEW SPEAKER EMBEDDING FOR ZERO-SHOT VOICE CONVERSION
    Xiao, Ruitong
    Zhang, Haitong
    Lin, Yue
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 6547 - 6551
  • [5] U-Style: Cascading U-Nets With Multi-Level Speaker and Style Modeling for Zero-Shot Voice Cloning
    Li, Tao
    Wang, Zhichao
    Zhu, Xinfa
    Cong, Jian
    Tian, Qiao
    Wang, Yuping
    Xie, Lei
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 4026 - 4035
  • [6] Zero-shot voice conversion based on feature disentanglement
    Guo, Na
    Wei, Jianguo
    Li, Yongwei
    Lu, Wenhuan
    Tao, Jianhua
    SPEECH COMMUNICATION, 2024, 165
  • [7] CA-VC: A Novel Zero-Shot Voice Conversion Method With Channel Attention
    Xiao, Ruitong
    Xing, Xiaofen
    Yang, Jichen
    Xu, Xiangmin
    2021 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2021, : 800 - 807
  • [8] DeID-VC: Speaker De-identification via Zero-shot Pseudo Voice Conversion
    Yuan, Ruibin
    Wu, Yuxuan
    Li, Jacob
    Kim, Jaxter
    INTERSPEECH 2022, 2022, : 2593 - 2597
  • [9] Improvement Speaker Similarity for Zero-Shot Any-to-Any Voice Conversion of Whispered and Regular Speech
    Avdeeva, Anastasia
    Gusev, Aleksei
    INTERSPEECH 2024, 2024, : 2735 - 2739
  • [10] Towards Improved Zero-shot Voice Conversion with Conditional DSVAE
    Lian, Jiachen
    Zhang, Chunlei
    Anumanchipalli, Gopala Krishna
    Yu, Dong
    INTERSPEECH 2022, 2022, : 2598 - 2602