CTNet: Conversational Transformer Network for Emotion Recognition

Cited by: 145
Authors
Lian, Zheng [1 ,2 ]
Liu, Bin [1 ,2 ]
Tao, Jianhua [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100190, Peoples R China
[3] CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Emotion recognition; Context modeling; Feature extraction; Fuses; Speech processing; Data models; Bidirectional control; Context-sensitive modeling; conversational transformer network (CTNet); conversational emotion recognition; multimodal fusion; speaker-sensitive modeling;
DOI
10.1109/TASLP.2021.3049898
Chinese Library Classification
O42 [Acoustics];
Subject classification codes
070206; 082403;
Abstract
Emotion recognition in conversation is a crucial topic owing to its widespread applications in human-computer interaction. Unlike vanilla emotion recognition of individual utterances, conversational emotion recognition requires modeling both context-sensitive and speaker-sensitive dependencies. Despite the promising results of recent works, they generally do not leverage advanced fusion techniques to generate the multimodal representations of an utterance and are therefore limited in modeling intra-modal and cross-modal interactions. To address these problems, we propose a multimodal learning framework for conversational emotion recognition, called the conversational transformer network (CTNet). Specifically, we use a transformer-based structure to model intra-modal and cross-modal interactions among multimodal features. Meanwhile, we take word-level lexical features and segment-level acoustic features as inputs, enabling the model to capture temporal information within each utterance. Additionally, to model context-sensitive and speaker-sensitive dependencies, we employ a multi-head attention based bi-directional GRU component and speaker embeddings. Experimental results on the IEMOCAP and MELD datasets demonstrate the effectiveness of the proposed method, which achieves an absolute improvement of 2.1-6.2% in weighted average F1 over state-of-the-art strategies.
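Based only on the components named in the abstract (transformer-based cross-modal fusion of word-level lexical and segment-level acoustic features, a multi-head attention based bi-directional GRU for context, and speaker embeddings), the following is a minimal PyTorch-style sketch of how such a model could be wired together. It is an illustrative approximation, not the authors' implementation: all module names, layer sizes, pooling choices, and the way speaker identity is injected are assumptions.

```python
# Illustrative sketch only (not the authors' code); sizes and wiring are assumptions.
import torch
import torch.nn as nn


class CrossModalBlock(nn.Module):
    """One transformer block: cross-modal attention (queries attend to the other
    modality) followed by self-attention and a feed-forward layer."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, query_mod, key_mod):
        # query_mod: (n_utt, T_q, dim), e.g. word-level lexical features
        # key_mod:   (n_utt, T_k, dim), e.g. segment-level acoustic features
        x = self.norm1(query_mod + self.cross_attn(query_mod, key_mod, key_mod)[0])
        x = self.norm2(x + self.self_attn(x, x, x)[0])
        return self.norm3(x + self.ffn(x))


class ConversationModel(nn.Module):
    """Utterance-level multimodal fusion, then context- and speaker-sensitive modeling."""

    def __init__(self, dim=256, n_speakers=2, n_classes=6):
        super().__init__()
        self.text_to_audio = CrossModalBlock(dim)   # lexical queries attend to acoustic keys
        self.audio_to_text = CrossModalBlock(dim)   # acoustic queries attend to lexical keys
        self.speaker_emb = nn.Embedding(n_speakers, 2 * dim)
        self.context_gru = nn.GRU(2 * dim, dim, bidirectional=True, batch_first=True)
        self.context_attn = nn.MultiheadAttention(2 * dim, 4, batch_first=True)
        self.classifier = nn.Linear(2 * dim, n_classes)

    def forward(self, lexical, acoustic, speaker_ids):
        # lexical:     (n_utt, T_words, dim)  word-level lexical features of one dialogue
        # acoustic:    (n_utt, T_segs, dim)   segment-level acoustic features
        # speaker_ids: (n_utt,)               integer speaker index per utterance
        t2a = self.text_to_audio(lexical, acoustic).mean(dim=1)   # pool over time
        a2t = self.audio_to_text(acoustic, lexical).mean(dim=1)
        utt = torch.cat([t2a, a2t], dim=-1) + self.speaker_emb(speaker_ids)
        ctx, _ = self.context_gru(utt.unsqueeze(0))               # (1, n_utt, 2*dim)
        ctx, _ = self.context_attn(ctx, ctx, ctx)                 # attention over dialogue context
        return self.classifier(ctx.squeeze(0))                    # per-utterance emotion logits


# Toy usage: a 5-utterance dialogue between 2 speakers with random features.
model = ConversationModel()
logits = model(torch.randn(5, 20, 256), torch.randn(5, 30, 256), torch.tensor([0, 1, 0, 1, 0]))
print(logits.shape)  # torch.Size([5, 6])
```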
Pages: 985-1000
Number of pages: 16
Related papers
50 records in total
  • [1] DECN: Dialogical emotion correction network for conversational emotion recognition
    Lian, Zheng
    Liu, Bin
    Tao, Jianhua
    NEUROCOMPUTING, 2021, 454 : 483 - 495
  • [2] Directed Acyclic Graph Network for Conversational Emotion Recognition
    Shen, Weizhou
    Wu, Siyue
    Yang, Yunyi
    Quan, Xiaojun
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 1551 - 1560
  • [3] CTNet: Contrastive Transformer Network for Polyp Segmentation
    Xiao, Bin
    Hu, Jinwu
    Li, Weisheng
    Pun, Chi-Man
    Bi, Xiuli
    IEEE TRANSACTIONS ON CYBERNETICS, 2024, 54 (09) : 5040 - 5053
  • [4] MES-CTNet: A Novel Capsule Transformer Network Base on a Multi-Domain Feature Map for Electroencephalogram-Based Emotion Recognition
    Du, Yuxiao
    Ding, Han
    Wu, Min
    Chen, Feng
    Cai, Ziman
    BRAIN SCIENCES, 2024, 14 (04)
  • [5] Quantum-inspired Neural Network for Conversational Emotion Recognition
    Li, Qiuchi
    Gkoumas, Dimitris
    Sordoni, Alessandro
    Nie, Jian-Yun
    Melucci, Massimo
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 13270 - 13278
  • [6] Topics Guided Multimodal Fusion Network for Conversational Emotion Recognition
    Yuan, Peicong
    Cai, Guoyong
    Chen, Ming
    Tang, Xiaolv
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT III, ICIC 2024, 2024, 14877 : 250 - 262
  • [7] MALN: Multimodal Adversarial Learning Network for Conversational Emotion Recognition
    Ren, Minjie
    Huang, Xiangdong
    Liu, Jing
    Liu, Ming
    Li, Xuanya
    Liu, An-An
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (11) : 6965 - 6980
  • [8] Gated transformer network based EEG emotion recognition
    Bilgin, Metin
    Mert, Ahmet
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (10) : 6903 - 6910
  • [9] CTNet: an efficient coupled transformer network for robust hyperspectral unmixing
    Meng, Fanlei
    Sun, Haixin
    Li, Jie
    Xu, Tingfa
    INTERNATIONAL JOURNAL OF REMOTE SENSING, 2024, 45 (17) : 5679 - 5712