Context- and Sentiment-Aware Networks for Emotion Recognition in Conversation

Cited by: 45
Authors:
Tu G. [1 ]
Wen J. [1 ]
Liu C. [1 ]
Jiang D. [1 ]
Cambria E. [2 ]
Affiliations:
[1] Shantou University, Department of Computer Science, Shantou
[2] Nanyang Technological University, School of Computer Science and Engineering, Singapore
Source: IEEE Transactions on Artificial Intelligence
Funding: National Natural Science Foundation of China
Keywords: Common-sense knowledge graph; dialogue transformer (DT); emotion recognition; graph attention network
DOI: 10.1109/TAI.2022.3149234
Abstract:
Emotion recognition in conversation (ERC) has promising potential in many fields, such as recommendation systems, human-machine interaction, and medical care. In contrast to other emotion identification tasks, conversation is essentially a process of dynamic interaction in which people often convey emotional messages relying on context and common-sense knowledge. In this article, we propose a context- and sentiment-aware framework, termed Sentic GAT, to solve this challenge. In Sentic GAT, common-sense knowledge is dynamically represented by the context- and sentiment-aware graph attention mechanism based on sentimental consistency, and context information is captured by the dialogue transformer (DT) with hierarchical multihead attention (HMAT), where HMAT is used to obtain the dependency of historical utterances on themselves and other utterances for better context representation. Additionally, we explore a contrastive loss to discriminate context-free and context-sensitive utterances in emotion identification, enhancing context representation in straightforward conversations that directly express ideas. The experimental results show that contextual and sentiment information promote the representation of common-sense knowledge, and that the intra- and inter-dependency of contextual utterances effectively improves the performance of Sentic GAT. Moreover, our Sentic GAT using emotional intensity outperforms state-of-the-art models on the tested datasets. © 2020 IEEE.
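The abstract names three mechanisms: a sentiment-aware graph attention over common-sense knowledge, a dialogue transformer with hierarchical multihead attention, and a contrastive loss separating context-free from context-sensitive utterances. The record does not reproduce the paper's implementation, so the following is a minimal PyTorch sketch of the first and third ideas only; the class name SentimentAwareGAT, the additive sentiment-consistency bias, and the generic InfoNCE form of the loss are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SentimentAwareGAT(nn.Module):
    # Hypothetical single-head graph attention over the common-sense
    # knowledge neighbours of an utterance. Adding a polarity-agreement
    # bias to the attention logits is one assumed reading of "based on
    # sentimental consistency", not the paper's exact formulation.
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)     # shared node projection
        self.score = nn.Linear(2 * dim, 1, bias=False)  # content score a([h_i || h_j])

    def forward(self, utt, neigh, utt_pol, neigh_pol):
        # utt: (dim,) utterance node; neigh: (k, dim) knowledge nodes
        # utt_pol: scalar polarity in [-1, 1]; neigh_pol: (k,) polarities
        h_j = self.proj(neigh)                          # (k, dim)
        h_i = self.proj(utt).expand_as(h_j)             # broadcast utterance
        content = self.score(torch.cat([h_i, h_j], -1)).squeeze(-1)  # (k,)
        consistency = utt_pol * neigh_pol               # +1 agreement, -1 conflict
        alpha = F.softmax(F.leaky_relu(content) + consistency, dim=0)
        return alpha @ h_j                              # knowledge-enriched utterance

def context_contrastive_loss(ctx_free, ctx_aware, temperature=0.1):
    # Generic InfoNCE stand-in for the contrastive loss the abstract
    # mentions: each utterance's context-free encoding should match its
    # own context-aware encoding and repel those of other utterances.
    z1 = F.normalize(ctx_free, dim=-1)   # (n, dim)
    z2 = F.normalize(ctx_aware, dim=-1)  # (n, dim)
    logits = z1 @ z2.t() / temperature   # pairwise cosine similarities
    targets = torch.arange(z1.size(0))   # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 5 knowledge neighbours, 8-dim embeddings, 4 utterances.
gat = SentimentAwareGAT(dim=8)
enriched = gat(torch.randn(8), torch.randn(5, 8),
               torch.tensor(0.7), torch.rand(5) * 2 - 1)
loss = context_contrastive_loss(torch.randn(4, 8), torch.randn(4, 8))

The hierarchical multihead attention of the dialogue transformer is not sketched, since the abstract gives no structural detail beyond intra- and inter-utterance dependency.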
Pages: 699 - 708 (9 pages)
Related Papers (50 in total)
  • [21] Prompt Tuning Models on Sentiment-Aware for Explainable Recommendation
    Long, Xiuhua
    Jin, Ting
    COGNITIVE COMPUTING - ICCC 2023, 2024, 14207 : 116 - 132
  • [22] Sentiment-aware multimodal pre-training for multimodal sentiment analysis
    Ye, Junjie
    Zhou, Jie
    Tian, Junfeng
    Wang, Rui
    Zhou, Jingyi
    Gui, Tao
    Zhang, Qi
    Huang, Xuanjing
    KNOWLEDGE-BASED SYSTEMS, 2022, 258
  • [23] Sentiment-Aware Emoji Insertion Via Sequence Tagging
    Lin, Fuqiang
    Ma, Xingkong
    Min, Erxue
    Liu, Bo
    Song, Yiping
    IEEE MULTIMEDIA, 2021, 28 (02) : 40 - 48
  • [24] RAKCR: Reviews sentiment-aware based knowledge graph convolutional networks for Personalized Recommendation
    Cui, Yachao
    Yu, Hongli
    Guo, Xiaoxu
    Cao, Han
    Wang, Lei
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 248
  • [25] Learning Domain-specific Sentiment Lexicon with Supervised Sentiment-aware LDA
    Yang, Min
    Zhu, Dingju
    Mustafa, Rashed
    Chow, Kam-Pui
21ST EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE (ECAI 2014), 2014, 263 : 927+
  • [26] Unsupervised Ontology- and Sentiment-Aware Review Summarization
    Le, Nhat X. T.
    Young, Neal
    Hristidis, Vagelis
    WEB INFORMATION SYSTEMS ENGINEERING - WISE 2019, 2019, 11881 : 747 - 762
  • [27] FuncSA: Function Words-Guided Sentiment-Aware Attention for Chinese Sentiment Analysis
    Wang, Jiajia
    Zan, Hongying
    Han, Yingjie
    Cao, Juan
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, NLPCC 2022, PT I, 2022, 13551 : 539 - 550
  • [28] Sentiment-aware personalized tweet recommendation through multimodal FFM
    Harakawa, Ryosuke
    Takehara, Daichi
    Ogawa, Takahiro
    Haseyama, Miki
    MULTIMEDIA TOOLS AND APPLICATIONS, 2018, 77 (14) : 18741 - 18759
  • [29] Learning Domain-Sensitive and Sentiment-Aware Word Embeddings
    Shi, Bei
    Fu, Zihao
    Bing, Lidong
    Lam, Wai
PROCEEDINGS OF THE 56TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL), VOL 1, 2018 : 2494 - 2504
  • [30] Context-aware Multimodal Fusion for Emotion Recognition
    Li, Jinchao
    Wang, Shuai
    Chao, Yang
    Liu, Xunying
    Meng, Helen
    INTERSPEECH 2022, 2022, : 2013 - 2017