Improving understandability of feature contributions in model-agnostic explainable AI tools

Cited by: 15
Authors
Hadash, Sophia [1 ,2 ]
Willemsen, Martijn C. [1 ,2 ]
Snijders, Chris [2 ]
IJsselsteijn, Wijnand A. [2 ]
Affiliations
[1] Jheronimus Acad Data Sci, 's-Hertogenbosch, Noord Brabant, Netherlands
[2] Eindhoven Univ Technol, Human Technol Interact Dept, NL-5600 MB Eindhoven, Netherlands
Keywords
interpretable machine learning; explanations; argumentation; natural language
DOI
10.1145/3491102.3517650
Abstract
Model-agnostic explainable AI tools explain their predictions by means of 'local' feature contributions. We empirically investigate two potential improvements over current approaches. The first is to always present feature contributions in terms of the outcome that the user perceives as positive ("positive framing"). The second is to add "semantic labeling", which makes the directionality of each feature contribution explicit ("this feature leads to +5% eligibility"), reducing additional cognitive processing steps. In a user study, participants evaluated the understandability of explanations under different framing and labeling conditions for loan applications and music recommendations. We found that positive framing improves understandability even when the prediction is negative. Additionally, adding semantic labels eliminates any framing effects on understandability, with positive labels outperforming negative labels. We implemented our suggestions in the ArgueView package [11].
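The two ideas in the abstract can be illustrated with a minimal sketch (not the authors' ArgueView package; the feature names, contribution values, and the `explain` helper are hypothetical): each signed local feature contribution is rendered as a sentence framed in terms of the outcome the user perceives as positive, with a semantic label that states its directionality.

```python
def explain(contributions, positive_outcome="eligibility"):
    """Render signed local feature contributions (in percentage points)
    as sentences with positive framing and semantic labels."""
    lines = []
    for feature, value in contributions.items():
        # Semantic label: state the direction of the contribution explicitly,
        # always relative to the outcome the user perceives as positive.
        label = "increases" if value >= 0 else "decreases"
        lines.append(f"{feature} {label} your {positive_outcome} by {abs(value):.0f}%")
    return lines

# Hypothetical loan-application example
for line in explain({"stable income": 5, "short credit history": -3}):
    print(line)
# stable income increases your eligibility by 5%
# short credit history decreases your eligibility by 3%
```

The point of the framing is that even a negative contribution is phrased against the positive outcome ("decreases your eligibility") rather than against a negative one ("increases your rejection risk"), so the reader needs no extra mental inversion step.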
Pages: 9
Related papers
47 records in total
  • [1] Computational Evaluation of Model-Agnostic Explainable AI Using Local Feature Importance in Healthcare
    Erdeniz, Seda Polat
    Schrempf, Michael
    Kramer, Diether
    Rainer, Peter P.
    Felfernig, Alexander
    Tran, Trang
    Burgstaller, Tamim
    Lubos, Sebastian
    ARTIFICIAL INTELLIGENCE IN MEDICINE, AIME 2023, 2023, 13897 : 114 - 119
  • [2] A Generic and Model-Agnostic Exemplar Synthetization Framework for Explainable AI
    Barbalau, Antonio
    Cosma, Adrian
    Ionescu, Radu Tudor
    Popescu, Marius
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT II, 2021, 12458 : 190 - 205
  • [3] Assessment of Software Vulnerability Contributing Factors by Model-Agnostic Explainable AI
    Li, Ding
    Liu, Yan
    Huang, Jun
    MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 2024, 6 (02): : 1087 - 1113
  • [4] Explainable Model-Agnostic Similarity and Confidence in Face Verification
    Knoche, Martin
    Teepe, Torben
    Hoermann, Stefan
    Rigoll, Gerhard
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW), 2023, : 711 - 718
  • [5] Sharpening Local Interpretable Model-Agnostic Explanations for Histopathology: Improved Understandability and Reliability
    Graziani, Mara
    de Sousa, Iam Palatnik
    Vellasco, Marley M. B. R.
    da Silva, Eduardo Costa
    Muller, Henning
    Andrearczyk, Vincent
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT III, 2021, 12903 : 540 - 549
  • [6] Model-agnostic interpretation via feature perturbation visualization
    Marcilio Junior, Wilson E.
    Eler, Danilo Medeiros
    Breve, Fabricio
    2023 36TH CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES, SIBGRAPI 2023, 2023, : 19 - 24
  • [7] Model-agnostic explainable artificial intelligence for object detection in image data
    Moradi, Milad
    Yan, Ke
    Colwell, David
    Samwald, Matthias
    Asgari, Rhona
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 137
  • [8] A Reusable Model-agnostic Framework for Faithfully Explainable Recommendation and System Scrutability
    Xu, Zhichao
    Zeng, Hansi
    Tan, Juntao
    Fu, Zuohui
    Zhang, Yongfeng
    Ai, Qingyao
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2024, 42 (01)
  • [9] An Explainable Model-Agnostic Algorithm for CNN-based Biometrics Verification
    Alonso-Fernandez, Fernando
    Hernandez-Diaz, Kevin
    Buades, Jose M.
    Tiwari, Prayag
    Bigun, Josef
    2023 IEEE INTERNATIONAL WORKSHOP ON INFORMATION FORENSICS AND SECURITY, WIFS, 2023,
  • [10] Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data
    Nambiar, Athira
    Harikrishnaa, S.
    Sharanprasath, S.
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2023, 6