How Explainable AI Affects Human Performance: A Systematic Review of the Behavioural Consequences of Saliency Maps

Cited by: 0
Authors
Mueller, Romy [1 ]
Affiliations
[1] TUD Dresden Univ Technol, Fac Psychol, Chair Engn Psychol & Appl Cognit Res, Dresden, Germany
Keywords
Explainable artificial intelligence; attribution methods; saliency maps; image classification; deep neural networks; user studies; human performance; AUTOMATION; CAD;
DOI
10.1080/10447318.2024.2381929
Chinese Library Classification (CLC) code
TP3 [computing technology, computer technology]
Discipline classification code
0812
Abstract
Saliency maps can explain how deep neural networks classify images. But are they actually useful for humans? The present systematic review of 68 user studies found that while saliency maps can enhance human performance, null effects or even costs are quite common. To investigate what modulates these effects, the empirical outcomes were organised along several factors related to the human tasks, AI performance, XAI methods, images to be classified, human participants and comparison conditions. In image-focused tasks, benefits were less common than in AI-focused tasks, but the effects depended on the specific cognitive requirements. AI accuracy strongly modulated the outcomes, while XAI-related factors had surprisingly little impact. The evidence was limited for image- and human-related factors and the effects were highly dependent on the comparisons. These findings may support the design of future user studies by focusing on the conditions under which saliency maps can potentially be useful.
Pages: 2020-2051 (32 pages)
Related Articles
50 items in total
  • [31] How to optimize the systematic review process using AI tools
    Fabiano, Nicholas
    Gupta, Arnav
    Bhambra, Nishaant
    Luu, Brandon
    Wong, Stanley
    Maaz, Muhammad
    Fiedorowicz, Jess G.
    Smith, Andrew L.
    Solmi, Marco
    JCPP ADVANCES, 2024, 4 (02):
  • [32] HOW CAN EXPLAINABLE ARTIFICIAL INTELLIGENCE ACCELERATE THE SYSTEMATIC LITERATURE REVIEW PROCESS?
    Abogunrin, S.
    Bagavathiappan, S. K.
    Kumaresan, S.
    Lane, M.
    Oliver, G.
    Witzmann, A.
    VALUE IN HEALTH, 2023, 26 (06) : S293 - S293
  • [33] Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It
    Hafeez, Yasir
    Memon, Khuhed
    AL-Quraishi, Maged S.
    Yahya, Norashikin
    Elferik, Sami
    Ali, Syed Saad Azhar
    DIAGNOSTICS, 2025, 15 (02)
  • [34] A review of explainable AI in the satellite data, deep machine learning, and human poverty domain
    Hall, Ola
    Ohlsson, Mattias
    Rognvaldsson, Thorsteinn
    PATTERNS, 2022, 3 (10):
  • [35] Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance
    Hoffman, Robert R.
    Mueller, Shane T.
    Klein, Gary
    Litman, Jordan
    FRONTIERS IN COMPUTER SCIENCE, 2023, 5
  • [36] Explainable Rules and Heuristics in AI Algorithm Recommendation Approaches-A Systematic Literature Review and Mapping Study
    Garcia-Penalvo, Francisco Jose
    Vazquez-Ingelmo, Andrea
    Garcia-Holgado, Alicia
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 136 (02): : 1023 - 1051
  • [37] Retraction Note: A systematic review and applications of how AI evolved in healthcare
    K. Divya
    R. Kannadasan
    Optical and Quantum Electronics, 56 (12)
  • [38] How payment scheme affects patients' adherence to medications? A systematic review
    Aziz, Hamiza
    Hatah, Ernieda
    Bakry, Mohd Makmor
    Islahudin, Farida
    PATIENT PREFERENCE AND ADHERENCE, 2016, 10 : 837 - +
  • [39] How gender affects the pharmacotherapeutic approach to treating psychosis - a systematic review
    Lange, Bettina
    Mueller, Juliane K.
    Leweke, F. Markus
    Bumb, J. Malte
    EXPERT OPINION ON PHARMACOTHERAPY, 2017, 18 (04) : 351 - 362
  • [40] Human-centred AI in industry 5.0: a systematic review
    Passalacqua, Mario
    Pellerin, Robert
    Magnani, Florian
    Doyon-Poulin, Philippe
    Del-Aguila, Laurene
    Boasen, Jared
    Leger, Pierre-Majorique
    INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH, 2024,