Detecting Sarcasm in Conversation Context Using Transformer-Based Models

Times Cited: 0
Authors
Avvaru, Adithya [1,2]
Vobilisetty, Sanath [2]
Mamidi, Radhika [1]
Affiliations
[1] International Institute of Information Technology, Hyderabad, India
[2] Teradata India Pvt Ltd, Mumbai, Maharashtra, India
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [theory of artificial intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Sarcasm detection, regarded as one of the subproblems of sentiment analysis, is a tricky task because the introduction of sarcastic words can flip the sentiment of the sentence itself. To date, most research has focused on detecting sarcasm within a single sentence, and there is very limited work on detecting sarcasm that arises from multiple sentences. Current models use Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) variants, with or without attention, to detect sarcasm in conversations. We show that models using the state-of-the-art Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) to capture syntactic and semantic information across conversation sentences perform better than the current models. Based on data analysis, we estimated the number of sentences in a conversation that can contribute to the sarcasm, and the results agree with this estimate. We also perform a comparative study of different versions of our BERT-based model against LSTM variants and XLNet (Yang et al., 2019), both using the estimated number of conversation sentences, and find that the BERT-based models outperform them.
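To make the approach described in the abstract concrete, the following is a minimal sketch of how a BERT sequence classifier could pair a response with its preceding conversation turns. It assumes the Hugging Face transformers library, the bert-base-uncased checkpoint, and a three-turn context window; these are illustrative assumptions, not implementation details taken from the paper.

# Minimal sketch (not the authors' released code): score a response for
# sarcasm given its conversation context with a BERT sequence classifier.
# The checkpoint, label convention, and 3-turn context window are assumptions.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # assume 0 = not sarcastic, 1 = sarcastic
)
model.eval()

def predict_sarcasm(context_turns, response, max_context_turns=3):
    """Pair the last few conversation turns (segment A) with the response
    (segment B) so BERT can attend across both when classifying."""
    context = " ".join(context_turns[-max_context_turns:])
    inputs = tokenizer(
        context,
        response,
        truncation=True,
        max_length=256,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # P(sarcastic)

# Example usage (the classification head is untrained here, so the
# score is only meaningful after fine-tuning on a sarcasm corpus):
score = predict_sarcasm(
    ["I spent all weekend fixing the build.", "It broke again this morning."],
    "Great, exactly how I wanted to start my Monday.",
)
print(f"P(sarcastic) = {score:.3f}")

Feeding the context and the response as BERT's two input segments lets self-attention operate across the full exchange, which is the property the abstract credits for the improvement over sentence-level models.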
Pages: 98-103
Number of pages: 6