Assessing the use of attention weights to interpret BERT-based stance classification

Cited by: 3
Authors
Cordova Saenz, Carlos Abel [1 ]
Becker, Karin [1 ]
Affiliations
[1] Fed Univ Rio Grande do Sul UFRGS, Inst Informat, Porto Alegre, RS, Brazil
Source
2021 IEEE/WIC/ACM INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE AND INTELLIGENT AGENT TECHNOLOGY (WI-IAT 2021) | 2021
Keywords
BERT; interpretability; stance classification; BERT attention weights;
DOI
10.1145/3486622.3493966
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
BERT models are currently state-of-the-art solutions for various tasks, including stance classification. However, these models are a black box for their users. Some proposals have leveraged the weights assigned by the internal attention mechanisms of these models for interpretability purposes, but whether attention weights actually aid the interpretability of the model is still a matter of debate, with positions both in favor and against. This work proposes an attention-based interpretability mechanism to identify the words most influential on stances predicted by BERT-based models. We target stances expressed on Twitter in Portuguese and assess the proposed mechanism through a case study on stances toward COVID-19 vaccination in the Brazilian context. The interpretation mechanism traces token-level attention back to words, assigning each word a newly proposed metric referred to as absolute word attention. Using this metric, we assess several aspects to determine whether we can find words that are both important for the classification and meaningful in the domain. We developed a broad experimental setting involving three datasets of tweets in Brazilian Portuguese and three BERT models that support this language. Our results are encouraging: we were able to identify 52-82% of words with high absolute attention contributing positively to stance classification. The interpretability mechanism proved helpful for understanding the influence of words on the classification, and the identified words revealed intrinsic properties of the domain and representative arguments of the stances.
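The abstract does not define absolute word attention precisely, but the core idea of tracing subword-token attention back to words can be sketched in a few lines. The sketch below is an illustrative approximation, not the authors' implementation: the choice of model (neuralmind/bert-base-portuguese-cased, i.e. BERTimbau, one publicly available Portuguese BERT), the averaging over layers and heads, and the absolute_word_attention helper are all assumptions made for the example.

```python
# Illustrative sketch only: aggregates the attention each subword token
# receives (averaged over all layers and heads) and sums it per source word.
# The paper's actual "absolute word attention" metric may differ.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "neuralmind/bert-base-portuguese-cased"  # assumed Portuguese BERT
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# In practice this would be a model fine-tuned for stance classification;
# here the classifier head is untrained, which is fine for a demo.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, output_attentions=True)
model.eval()

def absolute_word_attention(text: str) -> list[tuple[str, float]]:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # out.attentions: one tensor per layer, each (batch, heads, seq, seq).
    att = torch.stack(out.attentions).mean(dim=(0, 2))[0]  # -> (seq, seq)
    received = att.sum(dim=0)  # total attention each token receives
    per_word: dict[int, float] = {}
    for tok_idx, word_id in enumerate(enc.word_ids()):
        if word_id is None:  # skip special tokens such as [CLS] and [SEP]
            continue
        per_word[word_id] = per_word.get(word_id, 0.0) + received[tok_idx].item()
    # Map word indices back to the surface words in the input text.
    words = []
    for word_id, score in sorted(per_word.items()):
        span = enc.word_to_chars(word_id)
        words.append((text[span.start:span.end], score))
    return words

# Example: a Portuguese, tweet-like sentence on COVID-19 vaccination.
for word, score in absolute_word_attention("A vacinação contra a COVID-19 salva vidas"):
    print(f"{word}\t{score:.3f}")
```

This keeps the token-to-word tracing described in the abstract (subword pieces produced by the WordPiece tokenizer are pooled back into their originating word) while leaving open how the real metric weights layers, heads, or attention direction.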
Pages: 194-201
Number of pages: 8
Related Papers
50 records in total
  • [21] A Study of BERT-Based Classification Performance of Text-Based Health Counseling Data
    Sung, Yeol Woo
    Park, Dae Seung
    Kim, Cheong Ghil
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 135(01): 795-808
  • [22] A Fine-Tuned BERT-Based Transfer Learning Approach for Text Classification
    Qasim, Rukhma
    Bangyal, Waqas Haider
    Alqarni, Mohammed A.
    Almazroi, Abdulwahab Ali
    JOURNAL OF HEALTHCARE ENGINEERING, 2022, 2022
  • [23] FF-BERT: A BERT-based ensemble for automated classification of web-based text on flash flood events
    Wilkho, Rohan Singh
    Chang, Shi
    Gharaibeh, Nasir G.
    ADVANCED ENGINEERING INFORMATICS, 2024, 59
  • [24] BBAEG: Towards BERT-based Biomedical Adversarial Example Generation for Text Classification
    Mondal, Ishani
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021: 5378-5384
  • [26] BERT-based Regression Model for Micro-edit Humor Classification Task
    Chen, Yuancheng
    Hou, Yi
    Ye, Deqiang
    Yu, Yuehang
    2021 INTERNATIONAL CONFERENCE ON NEURAL NETWORKS, INFORMATION AND COMMUNICATION ENGINEERING, 2021, 11933
  • [27] BERT-Based Models with Attention Mechanism and Lambda Layer for Biomedical Named Entity Recognition
    Shi, Yuning
    Kimura, Masaomi
    2024 16TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND COMPUTING, ICMLC 2024, 2024: 536-544
  • [28] BERT-based Chinese text classification for emergency management with a novel loss function
    Wang, Zhongju
    Wang, Long
    Huang, Chao
    Sun, Shutong
    Luo, Xiong
    APPLIED INTELLIGENCE, 2023, 53(09): 10417-10428
  • [29] Make BERT-based Chinese Spelling Check Model Enhanced by Layerwise Attention and Gaussian
    Cao, Yongchang
    He, Liang
    Wu, Zhen
    Dai, Xinyu
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023
  • [30] Span-Level Emotion Cause Analysis by BERT-based Graph Attention Network
    Li, Xiangju
    Gao, Wei
    Feng, Shi
    Wang, Daling
    Joty, Shafiq
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021: 3221-3226