How Explainable AI Affects Human Performance: A Systematic Review of the Behavioural Consequences of Saliency Maps

Citations: 0
Author
Mueller, Romy [1 ]
Affiliation
[1] TUD Dresden Univ Technol, Fac Psychol, Chair Engn Psychol & Appl Cognit Res, Dresden, Germany
Keywords
Explainable artificial intelligence; attribution methods; saliency maps; image classification; deep neural networks; user studies; human performance; AUTOMATION; CAD;
DOI
10.1080/10447318.2024.2381929
CLC Classification Number
TP3 [Computing technology; computer technology]
Discipline Classification Code
0812
Abstract
Saliency maps can explain how deep neural networks classify images. But are they actually useful for humans? The present systematic review of 68 user studies found that while saliency maps can enhance human performance, null effects or even costs are quite common. To investigate what modulates these effects, the empirical outcomes were organised along several factors related to the human tasks, AI performance, XAI methods, images to be classified, human participants and comparison conditions. In image-focused tasks, benefits were less common than in AI-focused tasks, but the effects depended on the specific cognitive requirements. AI accuracy strongly modulated the outcomes, while XAI-related factors had surprisingly little impact. The evidence was limited for image- and human-related factors and the effects were highly dependent on the comparisons. These findings may support the design of future user studies by focusing on the conditions under which saliency maps can potentially be useful.
Pages: 2020-2051 (32 pages)