Examining the effects of power status of an explainable artificial intelligence system on users' perceptions

Cited by: 8
Authors
Ha, Taehyun [1 ]
Sah, Young June [2 ]
Park, Yuri [3 ]
Lee, Sangwon [4 ]
Affiliations
[1] Korea Inst Sci & Technol Informat, Future Technol Anal Ctr, Seoul, South Korea
[2] Sogang Univ, Sch Media Arts & Sci, Seoul, South Korea
[3] Korea Informat Soc Dev Inst, Dept ICT Ind Res, Jincheon Gun, South Korea
[4] Sungkyunkwan Univ, Dept Human Artificial Intelligence Interact, Dept Interact Sci, 25-2 Sungkyunkwan Ro, Seoul 03063, South Korea
Keywords
Explainable artificial intelligence; attribution theory; power status; anthropomorphism; ANTHROPOMORPHISM INCREASES TRUST; SINGLE-ITEM MEASURE; SERVICE FAILURE; ATTRIBUTION; CONSEQUENCES; BEHAVIOR; DETERMINANTS; METAANALYSIS; MOTIVATION; JUDGMENTS;
DOI
10.1080/0144929X.2020.1846789
CLC Classification Number
TP3 [Computing technology, computer technology];
Discipline Classification Number
0812 ;
Abstract
Contrary to the traditional concept of artificial intelligence, explainable artificial intelligence (XAI) aims to provide explanations for prediction results and thereby lead users to perceive the system as reliable. However, despite its importance, only a few studies have investigated how the explanations of an XAI system should be designed. This study investigates how people attribute the perceived ability of XAI systems based on perceived attributional qualities, and how the power status of the XAI and anthropomorphism affect the attribution process. In a laboratory experiment, participants (N = 500) read a scenario of using an XAI system with either lower or higher power status and reported their perceptions of the system. Results indicated that an XAI system with a higher power status caused users to perceive the outputs of the XAI system as more controllable by intention, and that higher perceived stability and uncontrollability resulted in greater confidence in the system's ability. The effect of perceived controllability on perceived ability was moderated by the extent to which participants anthropomorphised the system. Several design implications for XAI systems are suggested based on our findings.
Pages: 946 - 958
Page count: 13
Related Papers
50 records
  • [1] Effects of Explainable Artificial Intelligence in Neurology
    Gombolay, G.
    Silva, A.
    Schrum, M.
    Dutt, M.
    Hallman-Cooper, J.
    Gombolay, M.
    ANNALS OF NEUROLOGY, 2023, 94 : S145 - S145
  • [2] Examining Correlation Between Trust and Transparency with Explainable Artificial Intelligence
    Kartikeya, Arnav
    INTELLIGENT COMPUTING, VOL 2, 2022, 507 : 353 - 358
  • [3] What Are the Users' Needs? Design of a User-Centered Explainable Artificial Intelligence Diagnostic System
    He, Xin
    Hong, Yeyi
    Zheng, Xi
    Zhang, Yong
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 2023, 39 (07) : 1519 - 1542
  • [4] Explainable Artificial Intelligence for Intrusion Detection System
    Patil, Shruti
    Varadarajan, Vijayakumar
    Mazhar, Siddiqui Mohd
    Sahibzada, Abdulwodood
    Ahmed, Nihal
    Sinha, Onkar
    Kumar, Satish
    Shaw, Kailash
    Kotecha, Ketan
    ELECTRONICS, 2022, 11 (19)
  • [5] An explainable Artificial Intelligence software system for predicting diabetes
    Srinivasu, Parvathaneni Naga
    Ahmed, Shakeel
    Hassaballah, Mahmoud
    Almusallam, Naif
    HELIYON, 2024, 10 (16)
  • [6] Effects of explainable artificial intelligence in neurology decision support
    Gombolay, Grace Y.
    Silva, Andrew
    Schrum, Mariah
    Gopalan, Nakul
    Hallman-Cooper, Jamika
    Dutt, Monideep
    Gombolay, Matthew
    ANNALS OF CLINICAL AND TRANSLATIONAL NEUROLOGY, 2024, 11 (05): : 1224 - 1235
  • [7] Estimation of Power Generation and Consumption based on eXplainable Artificial Intelligence
    Shin, SooHyun
    Yang, HyoSik
    2023 25TH INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY, ICACT, 2023, : 201 - 205
  • [8] Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users' Information Processing
    Bauer, Kevin
    von Zahn, Moritz
    Hinz, Oliver
    INFORMATION SYSTEMS RESEARCH, 2023, 34 (04) : 1582 - 1602
  • [9] Artificial Intelligence Employment Interviews: Examining Limitations, Biases, and Perceptions
    Fister, Theresa
    Thiruvathukal, George K.
    COMPUTER, 2024, 57 (10) : 76 - 81
  • [10] Current status and future directions of explainable artificial intelligence in medical imaging
    Saw, Shier Nee
    Yan, Yet Yen
    Ng, Kwan Hoong
    EUROPEAN JOURNAL OF RADIOLOGY, 2025, 183