Detecting Sarcasm in Conversation Context Using Transformer-Based Models

Times Cited: 0
Authors
Avvaru, Adithya [1,2]
Vobilisetty, Sanath [2 ]
Mamidi, Radhika [1 ]
Affiliations
[1] Int Inst Informat Technol, Hyderabad, India
[2] Teradata India Pvt Ltd, Mumbai, Maharashtra, India
Keywords
DOI
Not available
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Sarcasm detection, regarded as one of the sub-problems of sentiment analysis, is a very tricky task because the introduction of sarcastic words can flip the sentiment of the sentence itself. To date, most research has revolved around detecting sarcasm in a single sentence, and there is very limited work on detecting sarcasm that arises from multiple sentences. Current models use Long Short-Term Memory (Hochreiter and Schmidhuber, 1997) (LSTM) variants, with or without attention, to detect sarcasm in conversations. We show that models using the state-of-the-art Bidirectional Encoder Representations from Transformers (Devlin et al., 2018) (BERT) to capture syntactic and semantic information across conversation sentences perform better than the current models. Based on data analysis, we estimate the number of sentences in the conversation that can contribute to the sarcasm, and the results agree with this estimation. We also perform a comparative study of different versions of our BERT-based model against variants of the LSTM model and XLNet (Yang et al., 2019) (both using the estimated number of conversation sentences) and find that the BERT-based models outperform them.
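The record contains no code, but the setup the abstract describes (classifying a response as sarcastic using a fixed number of preceding conversation sentences fed through BERT) can be illustrated briefly. Below is a minimal sketch, assuming the Hugging Face transformers library; the context window of 3 sentences, the dataset fields, and the toy inputs are illustrative assumptions, not the authors' reported configuration.

```python
# Sketch: fine-tunable BERT classifier over a response plus its
# preceding conversation context, in the spirit of the abstract.
# Assumptions (not from the paper): Hugging Face `transformers`,
# a 3-sentence context window, and toy in-memory inputs.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

MODEL_NAME = "bert-base-uncased"
NUM_CONTEXT_SENTENCES = 3  # assumed; the paper estimates this value from data analysis

tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)
model = BertForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def encode_example(context_sentences, response):
    # Keep only the last N context sentences, pack them into BERT's first
    # segment, and put the response in the second segment so self-attention
    # spans the conversation boundary.
    context = " ".join(context_sentences[-NUM_CONTEXT_SENTENCES:])
    return tokenizer(context, response,
                     truncation=True, padding="max_length",
                     max_length=256, return_tensors="pt")

# Toy usage: one forward pass producing sarcastic / non-sarcastic logits.
batch = encode_example(
    ["I waited two hours for support.", "Then they hung up on me."],
    "Wow, what amazing customer service.")
with torch.no_grad():
    logits = model(**batch).logits
print(logits.softmax(dim=-1))  # [P(non-sarcastic), P(sarcastic)]
```

Swapping `BertForSequenceClassification` for an XLNet or LSTM-based classifier over the same encoded context would reproduce the shape of the comparative study the abstract mentions.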
Pages: 98-103
Number of Pages: 6