Evaluating the Faithfulness of Causality in Saliency-Based Explanations of Deep Learning Models for Temporal Colour Constancy

Cited by: 0
Authors
Rizzo, Matteo [1 ]
Conati, Cristina [2 ]
Jang, Daesik [3 ]
Hu, Hui [4 ]
Affiliations
[1] Ca Foscari Univ Venice, Dept Environm Sci Informat & Stat, I-30172 Venice, VE, Italy
[2] Univ British Columbia, Fac Sci, Dept Comp Sci, 2366 Main Mall 201, Vancouver, BC V6T 1Z4, Canada
[3] Huawei Vancouver, 4321 Still Creek Dr, Burnaby, BC, Canada
[4] Huawei, W Changan St,11th Floor,Beijing Capital Times Sq, Beijing 100031, Peoples R China
Keywords
Explainability; Black-box; Faithfulness; Saliency; Attention
DOI
10.1007/978-3-031-63800-8_7
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The opacity of deep learning models constrains their debugging and improvement. Augmenting deep models with saliency-based strategies, such as attention, has been claimed to help better understand the decision-making process of black-box models. However, some recent works in Natural Language Processing (NLP) challenged the faithfulness of in-model saliency, questioning the causal relationship between the highlights provided by attention weights and the model prediction. More generally, the adherence of attention weights to the model's actual decision-making process, a property called faithfulness, was called into question. We add to this discussion by evaluating, for the first time, the faithfulness of causality for in-model saliency applied to a video processing task, namely temporal color constancy. We do so by adapting to our target task two faithfulness tests from the recent NLP literature, whose methodology we refine as part of our contributions. We show that attention does not offer causal faithfulness, while confidence, a particular type of in-model visual saliency, does.
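The record does not spell out the tests the authors adapted, but faithfulness checks of this kind in the NLP literature typically compare a model's prediction under its learned attention weights against the prediction obtained after perturbing (e.g., randomly permuting) those weights. The following is only a minimal sketch of that general idea on a toy attention-pooling illuminant estimator; the helper names (attention_pool, angular_error) and the synthetic data are hypothetical and do not reproduce the authors' protocol or models.

import numpy as np

rng = np.random.default_rng(0)

def attention_pool(frame_features: np.ndarray, attn: np.ndarray) -> np.ndarray:
    """Toy temporal model: the illuminant estimate is an attention-weighted
    average of per-frame RGB features (stand-in for a trained network)."""
    return attn @ frame_features  # shape: (3,)

def angular_error(a: np.ndarray, b: np.ndarray) -> float:
    """Angular error (degrees) between two RGB illuminant estimates."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Synthetic sequence: 8 frames, each summarised by a 3-dim RGB feature.
frames = rng.uniform(0.2, 1.0, size=(8, 3))
attn = rng.dirichlet(np.ones(8))  # stand-in for learned attention weights

original = attention_pool(frames, attn)

# Permutation-style check: if attention were causally faithful, shuffling the
# weights should noticeably change the prediction; a near-zero shift is
# evidence against the saliency being a faithful explanation.
errors = []
for _ in range(100):
    permuted = rng.permutation(attn)
    errors.append(angular_error(original, attention_pool(frames, permuted)))

print(f"median angular shift under permuted attention: {np.median(errors):.2f} deg")

A near-zero median shift would suggest the attention weights are not causally tied to the prediction, whereas a large shift is necessary (though not sufficient) evidence of faithfulness.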
Pages: 125-142
Number of pages: 18
Related Papers
22 records in total
  • [1] Evaluating the faithfulness of saliency maps in explaining deep learning models using realistic perturbations
    Amorim, Jose P.
    Abreu, Pedro H.
    Santos, Joao
    Cortes, Marc
    Vila, Victor
    INFORMATION PROCESSING & MANAGEMENT, 2023, 60 (02)
  • [2] Saliency-Based Artistic Abstraction With Deep Learning and Regression Trees
    Shakeri, Hanieh
    Nixon, Michael
    DiPaola, Steve
    JOURNAL OF IMAGING SCIENCE AND TECHNOLOGY, 2017, 61 (06)
  • [3] Deep learning and saliency-based parking IoT classification under different weather conditions
    Mago, Neeru
    Mittal, Mamta
    Hemanth, D. Jude
    Sharma, Rakhee
    INTELLIGENT DECISION TECHNOLOGIES-NETHERLANDS, 2024, 18 (02): 1411-1424
  • [4] Evaluating Quality of Visual Explanations of Deep Learning Models for Vision Tasks
    Yang, Yuqing
    Mahmoudpour, Saeed
    Schelkens, Peter
    Deligiannis, Nikos
    2023 15TH INTERNATIONAL CONFERENCE ON QUALITY OF MULTIMEDIA EXPERIENCE, QOMEX, 2023: 159-164
  • [5] Improve the Deep Learning Models in Forestry Based on Explanations and Expertise
    Cheng, Ximeng
    Doosthosseini, Ali
    Kunkel, Julian
    FRONTIERS IN PLANT SCIENCE, 2022, 13
  • [6] Quantitative evaluation of Saliency-Based Explainable artificial intelligence (XAI) methods in Deep Learning-Based mammogram analysis
    Cerekci, Esma
    Alis, Deniz
    Denizoglu, Nurper
    Camurdan, Ozden
    Seker, Mustafa Ege
    Ozer, Caner
    Hansu, Muhammed Yusuf
    Tanyel, Toygar
    Oksuz, Ilkay
    Karaarslan, Ercan
    EUROPEAN JOURNAL OF RADIOLOGY, 2024, 173
  • [7] Evaluating Uncertainty-Based Deep Learning Explanations for Prostate Lesion Detection
    Trombley, Christopher M.
    Gulum, Mehmet Akif
    Ozen, Merve
    Esen, Enes
    Aksamoglu, Melih
    Kantardzic, Mehmed
    MACHINE LEARNING FOR HEALTHCARE CONFERENCE, VOL 182, 2022, 182: 874-891
  • [8] SIFT-Guided Saliency-Based Augmentation for Weed Detection in Grassland Images: Fusing Classic Computer Vision with Deep Learning
    Schmidt, Patrick
    Guldenring, Ronja
    Nalpantidis, Lazaros
    COMPUTER VISION SYSTEMS, ICVS 2023, 2023, 14253 : 137 - 147
  • [9] Specific-Input LIME Explanations for Tabular Data Based on Deep Learning Models
    An, Junkang
    Zhang, Yiwan
    Joe, Inwhee
    APPLIED SCIENCES-BASEL, 2023, 13 (15)
  • [10] Explaining deep learning-based activity schedule models using SHapley Additive exPlanations
    Koushik, Anil
    Manoj, M.
    Nezamuddin, N.
    TRANSPORTATION LETTERS-THE INTERNATIONAL JOURNAL OF TRANSPORTATION RESEARCH, 2024