Multi-Level Temporal-Channel Speaker Retrieval for Zero-Shot Voice Conversion

Cited by: 1
Authors
Wang, Zhichao [1 ]
Xue, Liumeng [1 ]
Kong, Qiuqiang [2 ]
Xie, Lei [1 ]
Chen, Yuanzhe [2 ]
Tian, Qiao [2 ]
Wang, Yuping [2 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, ASLP Lab, Xian 710072, Peoples R China
[2] ByteDance SAMI Grp, Shanghai 200233, Peoples R China
Keywords
Voice conversion; zero-shot; temporal-channel retrieval; attention mechanism
DOI
10.1109/TASLP.2024.3407577
Chinese Library Classification (CLC)
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
Zero-shot voice conversion (VC) converts source speech into the voice of an arbitrary target speaker using only a single utterance from that speaker, without any additional model updates. Typical methods achieve zero-shot VC by using a speaker representation from a pre-trained speaker verification (SV) model or by learning a speaker representation during VC training. However, existing speaker modeling methods overlook how the richness of speaker information varies across the temporal and frequency-channel dimensions of speech. This insufficient speaker modeling limits the VC model's ability to accurately represent unseen speakers that do not appear in the training data. In this study, we present a robust zero-shot VC model with multi-level temporal-channel retrieval, referred to as MTCR-VC. Specifically, to adapt flexibly to speaker characteristics that vary dynamically along the temporal and channel axes of speech, we propose a novel fine-grained speaker modeling method, called temporal-channel retrieval (TCR), which determines when and where speaker information appears in speech. It retrieves variable-length speaker representations from both the temporal and channel dimensions under the guidance of a pre-trained SV model. In addition, inspired by the hierarchical nature of human speech production, the MTCR speaker module stacks several TCR blocks to extract speaker representations at multiple levels of granularity. Furthermore, we introduce a cycle-based training strategy that recurrently simulates zero-shot inference to achieve better speech disentanglement and reconstruction. To drive this process, we adopt perceptual constraints on three aspects: content, style, and speaker. Experiments demonstrate that MTCR-VC surpasses previous zero-shot VC methods in modeling speaker timbre while maintaining good speech naturalness.
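To make the retrieval idea concrete, below is a minimal, illustrative sketch of a TCR-style block in PyTorch. It is not the authors' implementation: the class name TCRBlock, the learnable-query design, the mean/std channel summary, and the sv_guidance_loss helper are all assumptions chosen to mirror the abstract's description of attending over the temporal axis (when speaker cues appear) and the channel axis (where they appear), with a pre-trained SV embedding serving as guidance.

```python
# A minimal sketch of a temporal-channel retrieval (TCR) style block,
# assuming PyTorch. All names here (TCRBlock, num_queries, sv_guidance_loss)
# are hypothetical; the actual MTCR-VC architecture is defined in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TCRBlock(nn.Module):
    """Retrieve speaker cues along the temporal ("when") and channel
    ("where") axes of frame-level features x of shape (B, T, C)."""

    def __init__(self, feat_dim: int = 256, num_queries: int = 8, num_heads: int = 4):
        super().__init__()
        # Learnable queries that retrieve speaker information from the features.
        self.queries = nn.Parameter(torch.randn(num_queries, feat_dim))
        # Temporal retrieval: queries cross-attend over the T frame positions.
        self.temporal_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Channel retrieval: each channel is summarized by its mean/std over
        # time and lifted to feat_dim, so queries can attend over C channels.
        self.channel_proj = nn.Linear(2, feat_dim)
        self.channel_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.out_proj = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b = x.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)          # (B, Q, C)
        t_out, _ = self.temporal_attn(q, x, x)                   # "when": over T
        stats = torch.stack([x.mean(dim=1), x.std(dim=1)], -1)   # (B, C, 2)
        c_tokens = self.channel_proj(stats)                      # (B, C, feat_dim)
        c_out, _ = self.channel_attn(q, c_tokens, c_tokens)      # "where": over C
        return self.out_proj(torch.cat([t_out, c_out], dim=-1))  # (B, Q, feat_dim)


def sv_guidance_loss(spk_tokens: torch.Tensor, sv_emb: torch.Tensor) -> torch.Tensor:
    # Hypothetical guidance term: pool the retrieved tokens and pull them
    # toward the embedding of a pre-trained SV model (assumed already
    # projected to feat_dim).
    pooled = spk_tokens.mean(dim=1)                              # (B, feat_dim)
    return 1.0 - F.cosine_similarity(pooled, sv_emb, dim=-1).mean()
```

Stacking several such blocks over features from different encoder depths would approximate the multi-level design the abstract describes; the cycle-based training strategy and the content/style/speaker perceptual losses are orthogonal to this sketch.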
Pages: 2926-2937
Page count: 12
Related Papers
50 items in total (items 41-50 shown below)
  • [41] Generalized Zero-Shot Learning With Multi-Channel Gaussian Mixture VAE
    Shao, Jie
    Li, Xiaorui
    IEEE SIGNAL PROCESSING LETTERS, 2020, 27: 456-460
  • [42] Streamable Speech Representation Disentanglement and Multi-Level Prosody Modeling for Live One-Shot Voice Conversion
    Yang, Haoquan
    Deng, Liqun
    Yeung, Yu Ting
    Zheng, Nianzu
    Xu, Yong
    INTERSPEECH 2022, 2022: 2578-2582
  • [43] Two-stage and Self-supervised Voice Conversion for Zero-Shot Dysarthric Speech Reconstruction
    Liu, Dong
    Lin, Yueqian
    Bu, Hui
    Li, Ming
    2024 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING, IALP 2024, 2024: 423-427
  • [44] WESPER: Zero-shot and Realtime Whisper to Normal Voice Conversion for Whisper-based Speech Interactions
    Rekimoto, Jun
    PROCEEDINGS OF THE 2023 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2023, 2023
  • [45] LM-VC: Zero-Shot Voice Conversion via Speech Generation Based on Language Models
    Wang, Zhichao
    Chen, Yuanzhe
    Xie, Lei
    Tian, Qiao
    Wang, Yuping
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30: 1157-1161
  • [46] Comparison of Multi-Scale Speaker Vectors and S-Vectors for Zero-Shot Speech Synthesis
    Cory, Tristin
    Iqbal, Razib
    2022 IEEE INTERNATIONAL SYMPOSIUM ON MULTIMEDIA (ISM), 2022: 247-248
  • [47] SC-CNN: Effective Speaker Conditioning Method for Zero-Shot Multi-Speaker Text-to-Speech Systems
    Yoon, Hyungchan
    Kim, Changhwan
    Um, Seyun
    Yoon, Hyun-Wook
    Kang, Hong-Goo
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30: 593-597
  • [48] SC-GlowTTS: an Efficient Zero-Shot Multi-Speaker Text-To-Speech Model
    Casanova, Edresson
    Shulby, Christopher
    Golge, Eren
    Muller, Nicolas Michael
    de Oliveira, Frederico Santos
    Candido Junior, Arnaldo
    Soares, Anderson da Silva
    Aluisio, Sandra Maria
    Ponti, Moacir Antonelli
    INTERSPEECH 2021, 2021: 3645-3649
  • [49] Multi-level Fusion of Multi-modal Semantic Embeddings for Zero Shot Learning
    Kong, Zhe
    Wang, Xin
    Gao, Neng
    Zhang, Yifei
    Liu, Yuhan
    Tu, Chenyang
    PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, ICMI 2022, 2022: 310-318
  • [50] Multi-level alignment for few-shot temporal action localization
    Keisham, Kanchan
    Jalali, Amin
    Kim, Jonghong
    Lee, Minho
    INFORMATION SCIENCES, 2023, 650