Enhancing Word-Level Semantic Representation via Dependency Structure for Expressive Text-to-Speech Synthesis

Cited by: 2
Authors
Zhou, Yixuan [1 ,4 ]
Song, Changhe [1 ]
Li, Jingbei [1 ]
Wu, Zhiyong [1 ,2 ]
Bian, Yanyao [3 ]
Su, Dan [3 ]
Meng, Helen [2 ]
Affiliations
[1] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen, Peoples R China
[2] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[3] Tencent, Tencent AI Lab, Shenzhen, Peoples R China
[4] Tencent, Shenzhen, Peoples R China
Source
INTERSPEECH 2022
Funding
National Natural Science Foundation of China;
Keywords
expressive speech synthesis; semantic representation enhancing; dependency parsing; graph neural network;
DOI
10.21437/Interspeech.2022-10061
Chinese Library Classification
O42 [Acoustics];
Subject Classification Codes
070206; 082403;
Abstract
Exploiting the rich linguistic information in raw text is crucial for expressive text-to-speech (TTS). With the development of large-scale pre-trained text representations, bidirectional encoder representations from Transformers (BERT) has been shown to embody semantic information and has recently been employed in TTS. However, original or simply fine-tuned BERT embeddings still cannot provide the sufficient semantic knowledge that expressive TTS models should take into account. In this paper, we propose a word-level semantic representation enhancing method based on dependency structure and pre-trained BERT embeddings. The BERT embedding of each word is reprocessed, considering its specific dependencies and related words in the sentence, to generate a more effective semantic representation for TTS. To better utilize the dependency structure, a relational gated graph network (RGGN) is introduced to make semantic information flow and aggregate through the dependency structure. Experimental results show that the proposed method further improves the naturalness and expressiveness of synthesized speech on both Mandarin and English datasets.
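To make the abstract's idea concrete, the following is a minimal sketch of relation-aware gated message passing over a dependency graph. It is not the paper's RGGN implementation: the sentence, relation labels, embedding size, weight names, and gating form are all illustrative assumptions, with random vectors standing in for BERT embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding size (real BERT vectors would be 768-dimensional)

# Toy sentence "the cat sleeps" with dependency arcs (head, dependent, relation).
words = ["the", "cat", "sleeps"]
edges = [(1, 0, "det"), (2, 1, "nsubj")]

H = rng.normal(size=(len(words), d))          # stand-in word embeddings
W_rel = {r: rng.normal(scale=0.1, size=(d, d)) for r in ("det", "nsubj")}
W_z = rng.normal(scale=0.1, size=(2 * d, d))  # gate parameters

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relational_gated_layer(H, edges):
    """One relation-specific gated message-passing step along dependency arcs."""
    M = np.zeros_like(H)
    for head, dep, rel in edges:
        # Relation-specific transform; messages flow dependent -> head.
        M[head] += H[dep] @ W_rel[rel]
    # Per-node gate decides how much aggregated context to mix in.
    z = sigmoid(np.concatenate([H, M], axis=1) @ W_z)
    return z * H + (1.0 - z) * np.tanh(M)

H_out = relational_gated_layer(H, edges)
print(H_out.shape)  # (3, 8)
```

Stacking such layers lets information from syntactically related words (e.g. the subject of a verb) aggregate into each word's enhanced representation before it conditions the TTS acoustic model.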
Pages: 5518-5522
Page count: 5
Related Papers
50 records
  • [41] RWEN-TTS: Relation-Aware Word Encoding Network for Natural Text-to-Speech Synthesis
    Oh, Shinhyeok
    Noh, HyeongRae
    Hong, Yoonseok
    Oh, Insoo
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11, 2023, : 13428 - 13436
  • [42] Which Resemblance is Useful to Predict Phrase Boundary Rise Labels for Japanese Expressive Text-to-speech Synthesis, Numerically-Expressed Stylistic or Distribution-based Semantic?
    Nakajima, Hideharu
    Mizuno, Hideyuki
    Yoshioka, Osamu
    Takahashi, Satoshi
    14TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2013), VOLS 1-5, 2013, : 1046 - 1050
  • [43] Investigation of Using Continuous Representation of Various Linguistic Units in Neural Network Based Text-to-Speech Synthesis
    Wang, Xin
    Takaki, Shinji
    Yamagishi, Junichi
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2016, E99D (10): : 2471 - 2480
  • [44] Syllable-level representations of suprasegmental features for DNN-based text-to-speech synthesis
    Ribeiro, Manuel Sam
    Watts, Oliver
    Yamagishi, Junichi
    17TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2016), VOLS 1-5: UNDERSTANDING SPEECH PROCESSING IN HUMANS AND MACHINES, 2016, : 3186 - 3190
  • [45] Integrating Discrete Word-Level Style Variations into Non-Autoregressive Acoustic Models for Speech Synthesis
    Liu, Zhaoci
    Wu, Ningqian
    Zhang, Yajie
    Ling, Zhenhua
    INTERSPEECH 2022, 2022, : 5508 - 5512
  • [46] Prosody Aware Word-level Encoder Based on BLSTM-RNNs for DNN-based Speech Synthesis
    Ijima, Yusuke
    Hojo, Nobukatsu
    Masumura, Ryo
    Asami, Taichi
    18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION, 2017, : 764 - 768
  • [47] Fine-grained Style Modeling, Transfer and Prediction in Text-to-Speech Synthesis via Phone-Level Content-Style Disentanglement
    Tan, Daxin
    Lee, Tan
    INTERSPEECH 2021, 2021, : 4683 - 4687
  • [48] Emotion-controllable Speech Synthesis Using Emotion Soft Label, Utterance-level Prosodic Factors, and Word-level Prominence
    Luo, Xuan
    Takamichi, Shinnosuke
    Saito, Yuki
    Koriyama, Tomoki
    Saruwatari, Hiroshi
    APSIPA TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING, 2024, 13 (01)
  • [49] ENHANCING SPEAKING STYLES IN CONVERSATIONAL TEXT-TO-SPEECH SYNTHESIS WITH GRAPH-BASED MULTI-MODAL CONTEXT MODELING
    Li, Jingbei
    Meng, Yi
    Li, Chenyi
    Wu, Zhiyong
    Meng, Helen
    Weng, Chao
    Su, Dan
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7917 - 7921
  • [50] Cross-lingual Text-To-Speech Synthesis via Domain Adaptation and Perceptual Similarity Regression in Speaker Space
    Xin, Detai
    Saito, Yuki
    Takamichi, Shinnosuke
    Koriyama, Tomoki
    Saruwatari, Hiroshi
    INTERSPEECH 2020, 2020, : 2947 - 2951