A Comparative Study of Different Pre-trained Language Models for Sentiment Analysis of Human-Computer Negotiation Dialogue

Cited by: 1
Authors
Dong, Jing [1 ]
Luo, Xudong [1 ,2 ,3 ]
Zhu, Junlin [1 ]
Affiliations
[1] Guangxi Normal Univ, Sch Comp Sci & Engn, Guilin 541004, Guangxi, Peoples R China
[2] Minist Educ, Key Lab Educ Blockchain & Intelligent Technol, Guilin 541004, Guangxi, Peoples R China
[3] Guangxi Key Lab Multisource Informat Min & Secur, Guilin 541004, Guangxi, Peoples R China
Keywords
Sentiment analysis; Pre-trained language model; Fine-grained; Fine-tuning; Human-computer dialogue; ANGER;
DOI
10.1007/978-981-97-5501-1_23
CLC number: TP [Automation and Computer Technology]
Discipline code: 0812
Abstract
This paper offers a comprehensive comparative study of pre-trained language models (PLMs) for sentiment analysis in human-computer negotiation dialogues. It examines numerous state-of-the-art PLMs, including GPT-3.5, BERT and its variants, and other models such as Claude, ELECTRA, NEZHA, ERNIE 3.0, BART, and XLNet, focusing on their effectiveness at detecting sentiment in negotiation dialogues. Using a large, diverse dataset annotated with sentiment labels, the study assesses these models with accuracy, precision, recall, and F1 metrics. The findings highlight distinct performance differences among the models and provide insights for future research on automated negotiation systems and sentiment analysis in this context.
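The evaluation metrics named in the abstract (accuracy, precision, recall, F1) can be sketched for a multi-class sentiment task as follows. This is an illustrative implementation, not code from the paper; the label set and the toy gold/predicted sequences are hypothetical.

```python
def sentiment_metrics(y_true, y_pred, labels):
    """Accuracy, per-label precision/recall/F1, and macro-F1.

    One-vs-rest counts per label: a prediction is a true positive for
    `lab` when both gold and predicted labels equal `lab`.
    """
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    per_label = {}
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        per_label[lab] = (prec, rec, f1)
    macro_f1 = sum(f for _, _, f in per_label.values()) / len(labels)
    return acc, per_label, macro_f1

# Toy negotiation-dialogue sentiment labels (hypothetical data).
gold = ["pos", "neg", "neu", "pos", "neg", "neu"]
pred = ["pos", "neg", "neu", "neg", "neg", "pos"]
acc, per_label, macro_f1 = sentiment_metrics(gold, pred, ["pos", "neg", "neu"])
```

Macro-averaging weights each sentiment class equally, which matters when negotiation dialogues are dominated by neutral turns and negative emotions such as anger are rare.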
Pages: 301-317
Page count: 17
Related papers (50 total)
  • [1] Enhancing Turkish Sentiment Analysis Using Pre-Trained Language Models
    Koksal, Omer
    29TH IEEE CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS (SIU 2021), 2021,
  • [2] A Study of Vietnamese Sentiment Classification with Ensemble Pre-trained Language Models
    Thin, Dang Van
    Hao, Duong Ngoc
    Nguyen, Ngan Luu-Thuy
    VIETNAM JOURNAL OF COMPUTER SCIENCE, 2024, 11 (01) : 137 - 165
  • [3] Pre-trained language models evaluating themselves - A comparative study
    Koch, Philipp
    Assenmacher, Matthias
    Heumann, Christian
    PROCEEDINGS OF THE THIRD WORKSHOP ON INSIGHTS FROM NEGATIVE RESULTS IN NLP (INSIGHTS 2022), 2022, : 180 - 187
  • [4] A Comparative Study of Pre-trained Word Embeddings for Arabic Sentiment Analysis
    Zouidine, Mohamed
    Khalil, Mohammed
    2022 IEEE 46TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE (COMPSAC 2022), 2022, : 1243 - 1248
  • [5] Explainable Pre-Trained Language Models for Sentiment Analysis in Low-Resourced Languages
    Mabokela, Koena Ronny
    Primus, Mpho
    Celik, Turgay
    BIG DATA AND COGNITIVE COMPUTING, 2024, 8 (11)
  • [6] Leveraging Pre-trained Language Model for Speech Sentiment Analysis
    Shon, Suwon
    Brusco, Pablo
    Pan, Jing
    Han, Kyu J.
    Watanabe, Shinji
    INTERSPEECH 2021, 2021, : 3420 - 3424
  • [7] AraXLNet: pre-trained language model for sentiment analysis of Arabic
    Alduailej, Alhanouf
    Alothaim, Abdulrahman
    JOURNAL OF BIG DATA, 2022, 9 (01)
  • [8] Aspect Based Sentiment Analysis by Pre-trained Language Representations
    Liang Tianxin
    Yang Xiaoping
    Zhou Xibo
    Wang Bingqian
    2019 IEEE INTL CONF ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, BIG DATA & CLOUD COMPUTING, SUSTAINABLE COMPUTING & COMMUNICATIONS, SOCIAL COMPUTING & NETWORKING (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM 2019), 2019, : 1262 - 1265
  • [10] Knowledge-Grounded Dialogue Generation with Pre-trained Language Models
    Zhao, Xueliang
    Wu, Wei
    Xu, Can
    Tao, Chongyang
    Zhao, Dongyan
    Yan, Rui
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 3377 - 3390