MM-TTS: Multi-Modal Prompt Based Style Transfer for Expressive Text-to-Speech Synthesis

Cited by: 0
Authors
Guan, Wenhao [1]
Li, Yishuang [2]
Li, Tao [1]
Huang, Hukai [1]
Wang, Feng [1]
Lin, Jiayan [1]
Huang, Lingyan [1]
Li, Lin [2,3]
Hong, Qingyang [1]
Affiliations
[1] Xiamen Univ, Sch Informat, Xiamen, Peoples R China
[2] Xiamen Univ, Inst Artificial Intelligence, Xiamen, Peoples R China
[3] Xiamen Univ, Sch Elect Sci & Engn, Xiamen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
The style transfer task in Text-to-Speech (TTS) refers to transferring style information onto text content to generate speech in a specific style. However, most existing style transfer approaches rely on either fixed emotion labels or reference speech clips and therefore cannot achieve flexible style transfer. Recently, some methods have adopted text descriptions to guide style transfer. In this paper, we propose a more flexible, multi-modal, and style-controllable TTS framework named MM-TTS. Within a unified multi-modal prompt space, it can use any modality as the prompt, including reference speech, emotional facial images, and text descriptions, to control the style of the generated speech in a single system. The challenges of modeling such a multi-modal style-controllable TTS lie mainly in two aspects: 1) aligning the multi-modal information into a unified style space so that an arbitrary modality can serve as the style prompt in a single system, and 2) efficiently transferring the unified style representation onto the given text content, thereby enabling the generation of speech whose style matches the prompt. To address these problems, we propose an aligned multi-modal prompt encoder that embeds different modalities into a unified style space, supporting style transfer from any of these modalities. Additionally, we present a new adaptive style transfer method named Style Adaptive Convolutions (SAConv) to achieve a better style representation. Furthermore, we design a Rectified Flow based Refiner to alleviate the over-smoothing of generated Mel-spectrograms and produce audio of higher fidelity. Since there is no public dataset for multi-modal TTS, we construct a dataset named MEAD-TTS, which is derived from the field of expressive talking heads. Our experiments on the MEAD-TTS dataset and out-of-domain datasets demonstrate that MM-TTS can achieve satisfactory results with multi-modal prompts. The audio samples and the constructed dataset are available at https://multimodal-tts.github.io.
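The abstract mentions a Rectified Flow based Refiner for sharpening over-smoothed Mel-spectrograms. The sketch below only illustrates the general rectified-flow recipe such a refiner could follow (straight-line interpolation between noise and data, velocity regression, Euler sampling); the `RefinerNet` architecture, its conditioning on a coarse Mel output, and all hyperparameters are illustrative assumptions, not the paper's actual design.

```python
# Minimal rectified-flow refiner sketch (PyTorch). The velocity network and its
# conditioning scheme are hypothetical; only the rectified-flow objective itself
# (regress the straight-line velocity x1 - x0) follows the standard formulation.
import torch
import torch.nn as nn


class RefinerNet(nn.Module):
    """Hypothetical velocity estimator v_theta(x_t, t | coarse_mel)."""

    def __init__(self, n_mels: int = 80, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2 * n_mels + 1, hidden, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv1d(hidden, n_mels, kernel_size=3, padding=1),
        )

    def forward(self, x_t, t, coarse_mel):
        # Broadcast the scalar time step over the frame axis and concatenate it
        # with the noisy sample and the coarse Mel condition along channels.
        t_map = t.view(-1, 1, 1).expand(-1, 1, x_t.size(-1))
        return self.net(torch.cat([x_t, coarse_mel, t_map], dim=1))


def rectified_flow_loss(model, x1, coarse_mel):
    """One training step: x0 ~ N(0, I), x_t lies on the straight line between
    noise and data, and the regression target is the velocity x1 - x0."""
    x0 = torch.randn_like(x1)
    t = torch.rand(x1.size(0), device=x1.device)
    x_t = t.view(-1, 1, 1) * x1 + (1 - t).view(-1, 1, 1) * x0
    v_pred = model(x_t, t, coarse_mel)
    return ((v_pred - (x1 - x0)) ** 2).mean()


@torch.no_grad()
def refine(model, coarse_mel, steps: int = 10):
    """Euler integration of the learned ODE from noise toward a refined Mel."""
    x = torch.randn_like(coarse_mel)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((coarse_mel.size(0),), i * dt, device=coarse_mel.device)
        x = x + dt * model(x, t, coarse_mel)
    return x
```

Because rectified flow learns a nearly straight probability path, a refiner of this kind can typically sample in few Euler steps, which is why it is attractive as a lightweight post-processing stage on top of a coarse TTS acoustic model.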
Pages: 18117-18125
Page count: 9