How Explainable AI Affects Human Performance: A Systematic Review of the Behavioural Consequences of Saliency Maps

Citations: 0
Authors
Mueller, Romy [1 ]
Affiliation
[1] TUD Dresden Univ Technol, Fac Psychol, Chair Engn Psychol & Appl Cognit Res, Dresden, Germany
Keywords
Explainable artificial intelligence; attribution methods; saliency maps; image classification; deep neural networks; user studies; human performance; AUTOMATION; CAD;
DOI
10.1080/10447318.2024.2381929
CLC classification number
TP3 [Computing technology, computer technology];
Discipline classification code
0812
Abstract
Saliency maps can explain how deep neural networks classify images. But are they actually useful for humans? The present systematic review of 68 user studies found that while saliency maps can enhance human performance, null effects or even costs are quite common. To investigate what modulates these effects, the empirical outcomes were organised along several factors related to the human tasks, AI performance, XAI methods, images to be classified, human participants and comparison conditions. In image-focused tasks, benefits were less common than in AI-focused tasks, but the effects depended on the specific cognitive requirements. AI accuracy strongly modulated the outcomes, while XAI-related factors had surprisingly little impact. The evidence was limited for image- and human-related factors and the effects were highly dependent on the comparisons. These findings may support the design of future user studies by focusing on the conditions under which saliency maps can potentially be useful.
Pages: 2020-2051
Page count: 32
Related papers
50 items total
  • [21] On the road to explainable AI in drug-drug interactions prediction: A systematic review
    Vo, Thanh Hoa
    Nguyen, Ngan Thi Kim
    Kha, Quang Hien
    Le, Nguyen Quoc Khanh
    COMPUTATIONAL AND STRUCTURAL BIOTECHNOLOGY JOURNAL, 2022, 20 : 2112 - 2123
  • [22] Human Behavioural Traits and the Polycrisis: A Systematic Review
    King, Nick
    Jones, Aled
    SUSTAINABILITY, 2025, 17 (04)
  • [23] From explainable to interactive AI: A literature review on current trends in human-AI interaction
    Raees, Muhammad
    Meijerink, Inge
    Lykourentzou, Ioanna
    Khan, Vassilis-Javed
    Papangelis, Konstantinos
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, 2024, 189
  • [24] How relative performance information affects employee behavior: a systematic review of empirical research
    Schnieder, Christian
    JOURNAL OF ACCOUNTING LITERATURE, 2022, 44 (01) : 72 - 107
  • [25] Explainable Artificial Intelligence (XAI): How the Visualization of AI Predictions Affects User Cognitive Load and Confidence
    Hudon, Antoine
    Demazure, Theophile
    Karran, Alexander
    Leger, Pierre-Majorique
    Senecal, Sylvain
    INFORMATION SYSTEMS AND NEUROSCIENCE (NEUROIS RETREAT 2021), 2021, 52 : 237 - 246
  • [26] A systematic review and applications of how AI evolved in healthcare
    Divya, K.
    Kannadasan, R.
    OPTICAL AND QUANTUM ELECTRONICS, 2024, 56 (03)
  • [27] Revealing the role of explainable AI: How does updating AI applications generate agility-driven performance?
    Masialeti, Masialeti
    Talaei-Khoei, Amir
    Yang, Alan T.
    INTERNATIONAL JOURNAL OF INFORMATION MANAGEMENT, 2024, 77
  • [28] From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
    Nauta, Meike
    Trienes, Jan
    Pathak, Shreyasi
    Nguyen, Elisa
    Peters, Michelle
    Schmitt, Yasmin
    Schloetterer, Joerg
    Van Keulen, Maurice
    Seifert, Christin
    ACM COMPUTING SURVEYS, 2023, 55 (13S)
  • [29] Contributions of Zebrafish Studies to the Behavioural Consequences of Early Alcohol Exposure: A Systematic Review
    Schaidhauer, Flavia Gheller
    Caetano, Higor Arruda
    da Silva, Guilherme Pietro
    da Silva, Rosane Souza
    CURRENT NEUROPHARMACOLOGY, 2022, 20 (03) : 579 - 593
  • [30] Interaction between Bottom-up Saliency and Top-down Control: How Saliency Maps Are Created in the Human Brain
    Melloni, Lucia
    van Leeuwen, Sara
    Alink, Arjen
    Mueller, Notger G.
    CEREBRAL CORTEX, 2012, 22 (12) : 2943 - 2952