AI Trust: Can Explainable AI Enhance Warranted Trust?

Cited by: 2
Authors
Duarte, Regina de Brito [1 ]
Correia, Filipa [2 ]
Arriaga, Patricia [3 ]
Paiva, Ana [1 ]
Affiliations
[1] Univ Tecn Lisboa, INESC ID, Inst Super Tecn, Lisbon, Portugal
[2] Univ Lisbon, Interact Technol Inst, LARSyS, Inst Super Tecn, Lisbon, Portugal
[3] Inst Univ Lisboa IUL, ISCTE, CIS, Lisbon, Portugal
Funding
EU Horizon 2020
DOI
10.1155/2023/4637678
Chinese Library Classification
B84 [Psychology]
Discipline Codes
04; 0402
Abstract
Explainable artificial intelligence (XAI), which produces explanations that make the predictions of AI models understandable, is commonly used to mitigate possible mistrust of AI. The underlying premise is that the explanations produced by XAI models enhance trust in AI. However, any such increase may depend on many factors. This article examined how trust in an AI recommendation system is affected by the presence of explanations, the performance of the system, and the level of risk. Our experimental study, conducted with 215 participants, showed that the presence of explanations increases AI trust, but only under certain conditions. AI trust was higher when feature-importance explanations were provided than when counterfactual explanations were provided. Moreover, when system performance is not guaranteed, the use of explanations seems to lead to overreliance on the system. Lastly, system performance had a stronger impact on trust than the other factors (explanation and risk).
Pages: 12
Related Papers (50 in total)
  • [1] Explainable AI: introducing trust and comprehensibility to AI engineering
    Burkart, Nadia
    Brajovic, Danilo
    Huber, Marco F.
    AT-AUTOMATISIERUNGSTECHNIK, 2022, 70 (09) : 787 - 792
  • [2] Can Explainable AI Foster Trust in a Customer Dialogue System?
    Stoll, Elena
    Urban, Adam
    Ballin, Philipp
    Kammer, Dietrich
    PROCEEDINGS OF THE WORKING CONFERENCE ON ADVANCED VISUAL INTERFACES AVI 2022, 2022,
  • [3] Can We Trust AI?
    Huntington, Mark K.
    FAMILY MEDICINE, 2025, 57 (01) : 1 - 2
  • [4] Can We Trust AI?
    Herman, Liz
    Chellappa, Rama
    Niiler, Eric
    TECHNICAL COMMUNICATION, 2023, 70 (03)
  • [5] Exploration of Explainable AI for Trust Development on Human-AI Interaction
    Bernardo, Ezekiel L.
    Seva, Rosemary R.
    PROCEEDINGS OF 2023 6TH ARTIFICIAL INTELLIGENCE AND CLOUD COMPUTING CONFERENCE, AICCC 2023, 2023, : 238 - 246
  • [6] Trust Indicators and Explainable AI: A Study on User Perceptions
    Ribes, Delphine
    Henchoz, Nicolas
    Portier, Helene
    Defayes, Lara
    Thanh-Trung Phan
    Gatica-Perez, Daniel
    Sonderegger, Andreas
    HUMAN-COMPUTER INTERACTION, INTERACT 2021, PT II, 2021, 12933 : 662 - 671
  • [7] Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption
    Bedue, Patrick
    Fritzsche, Albrecht
    JOURNAL OF ENTERPRISE INFORMATION MANAGEMENT, 2022, 35 (02) : 530 - 549
  • [8] Explainable AI: Towards Fairness, Accountability, Transparency and Trust in Healthcare
    Shaban-Nejad, Arash
    Michalowski, Martin
    Brownstein, John
    Buckeridge, David
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2021, 25 (07) : 2374 - 2375
  • [9] Explainable AI for Prostate MRI: Don't Trust, Verify
    Chapiro, Julius
    RADIOLOGY, 2023, 307 (04)
  • [10] The Quest for Explainable AI and the Role of Trust (Work in Progress Paper)
    Gerdes, Anne
    PROCEEDINGS OF THE EUROPEAN CONFERENCE ON THE IMPACT OF ARTIFICIAL INTELLIGENCE AND ROBOTICS (ECIAIR 2019), 2019, : 465 - 468