AI Trust: Can Explainable AI Enhance Warranted Trust?

Citations: 2
Authors
Duarte, Regina de Brito [1]
Correia, Filipa [2]
Arriaga, Patricia [3]
Paiva, Ana [1]
Affiliations
[1] Univ Tecn Lisboa, INESC ID, Inst Super Tecn, Lisbon, Portugal
[2] Univ Lisbon, Interact Technol Inst, LARSyS, Inst Super Tecn, Lisbon, Portugal
[3] Inst Univ Lisboa IUL, ISCTE, CIS, Lisbon, Portugal
Funding
EU Horizon 2020
DOI
10.1155/2023/4637678
Chinese Library Classification
B84 [Psychology];
Discipline Classification
04; 0402;
Abstract
Explainable artificial intelligence (XAI), which produces explanations so that the predictions of AI models can be understood, is commonly used to mitigate possible mistrust of AI. The underlying premise is that the explanations produced by XAI models enhance trust in AI. However, any such increase may depend on many factors. This article examined how trust in an AI recommendation system is affected by the presence of explanations, the performance of the system, and the level of risk. Our experimental study, conducted with 215 participants, showed that the presence of explanations increases AI trust, but only under certain conditions. AI trust was higher when feature-importance explanations were provided than when counterfactual explanations were provided. Moreover, when system performance was not guaranteed, the use of explanations appeared to lead to overreliance on the system. Lastly, system performance had a stronger impact on trust than the other factors (explanation and risk).
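To make the two explanation styles compared in the study concrete, below is a minimal Python sketch (not from the paper; the model, data, and feature names are illustrative assumptions): a feature-importance explanation read from a scikit-learn classifier, and a naive counterfactual explanation found by searching for the smallest single-feature change that flips a prediction.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative stand-in for the recommendation model used in the study.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["f0", "f1", "f2", "f3"]  # hypothetical feature names
model = RandomForestClassifier(random_state=0).fit(X, y)

# Feature-importance explanation: a global ranking of the input features.
for name, score in sorted(zip(feature_names, model.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name}: importance = {score:.3f}")

# Counterfactual explanation: the smallest single-feature change that
# flips the prediction for one instance (naive grid search over deltas,
# tried in order of increasing magnitude).
x = X[0].copy()
original = model.predict([x])[0]
best = None  # (|delta|, feature name, delta)
for i, name in enumerate(feature_names):
    for delta in sorted(np.linspace(-3.0, 3.0, 121), key=abs):
        x_cf = x.copy()
        x_cf[i] += delta
        if model.predict([x_cf])[0] != original:
            if best is None or abs(delta) < best[0]:
                best = (abs(delta), name, delta)
            break
if best:
    print(f"Counterfactual: changing {best[1]} by {best[2]:+.2f} "
          f"flips the prediction away from class {original}.")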
Pages: 12
Related Papers (showing items 41-50 of 50)
  • [41] Something Fast and Cheap or A Core Element of Building Trust? - AI Auditing Professionals’ Perspectives on the Role of AI Audits in Trust in AI
    Lassiter, Tina B.
    Fleischmann, Kenneth R.
    PROCEEDINGS OF THE ACM ON HUMAN-COMPUTER INTERACTION, 2024, 8 (CSCW2)
  • [42] Can I trust my AI friend? The role of emotions, feelings of friendship and trust for consumers' information-sharing behavior toward AI
    Pelau, Corina
    Dabija, Dan-Cristian
    Stanescu, Mihaela
    OECONOMIA COPERNICANA, 2024, 15 (02) : 407 - 433
  • [43] Learning to Comprehend and Trust Artificial Intelligence Outcomes: A Conceptual Explainable AI Evaluation Framework
    Love, P. E. D.
    Matthews, J.
    Fang, W.
    Porter, S.
    Luo, H.
    Ding, L.
    IEEE ENGINEERING MANAGEMENT REVIEW, 2024, 52 (01) : 230 - 247
  • [44] Trust does not need to be human: it is possible to trust medical AI
    Ferrario, Andrea
    Loi, Michele
    Vigano, Eleonora
    JOURNAL OF MEDICAL ETHICS, 2021, 47 (06) : 437 - 438
  • [45] The Digital Trust Imperative: AI's Impact on Digital Trust
    Brian Kelley, K.
    ISACA JOURNAL, 2024, 1 : 9 - 11
  • [46] How Explainable Artificial Intelligence Can Increase or Decrease Clinicians' Trust in AI Applications in Health Care: Systematic Review
    Rosenbacke, Rikard
    Melhus, Asa
    Mckee, Martin
    Stuckler, David
    JMIR AI, 2024, 3
  • [47] Trust Junk and Evil Knobs: Calibrating Trust in AI Visualization
    Wall, Emily
    Matzen, Laura
    El-Assady, Mennatallah
    Masters, Peta
    Hosseinpour, Helia
    Endert, Alex
    Borgo, Rita
    Chau, Polo
    Perer, Adam
    Schupp, Harald
    Strobelt, Hendrik
    Padilla, Lace
    2024 IEEE 17TH PACIFIC VISUALIZATION CONFERENCE, PACIFICVIS, 2024, : 22 - 31
  • [48] In AI We Trust: The Interplay of Media Use, Political Ideology, and Trust in Shaping Emerging AI Attitudes
    Yang, Shiyu
    Krause, Nicole M.
    Bao, Luye
    Calice, Mikhaila N.
    Newman, Todd P.
    Scheufele, Dietram A.
    Xenos, Michael A.
    Brossard, Dominique
    JOURNALISM & MASS COMMUNICATION QUARTERLY, 2023
  • [49] How Can I Trust AI? : Extending a UXer-AI Collaboration Process in the Early Stages
    Yoon, Harin
    Oh, Changhoon
    Jun, Soojin
    EXTENDED ABSTRACTS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024, 2024
  • [50] Improving Trust in AI with Mitigating Confirmation Bias: Effects of Explanation Type and Debiasing Strategy for Decision-Making with Explainable AI
    Ha, Taehyun
    Kim, Sangyeon
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 2024, 40 (24) : 8562 - 8573