"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

Cited by: 48
Authors
Kim, Sunnie S. Y. [1 ]
Watkins, Elizabeth Anne [2 ]
Russakovsky, Olga [1 ]
Fong, Ruth [1 ]
Monroy-Hernandez, Andres [1 ]
Affiliations
[1] Princeton Univ, Princeton, NJ 08544 USA
[2] Intel Labs, Santa Clara, CA USA
Funding
U.S. National Science Foundation
Keywords
Explainable AI (XAI); Interpretability; Human-Centered XAI; Human-AI Interaction; Human-AI Collaboration; XAI for Computer Vision; Local Explanations; BLACK-BOX; INTELLIGENCE
DOI
10.1145/3544548.3581001
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs and behaviors around XAI explanations. To address this gap and contribute to understanding how explainability can support human-AI interaction, we conducted a mixed-methods study with 20 end-users of a real-world AI application, the Merlin bird identification app, and inquired about their XAI needs, uses, and perceptions. We found that participants desire practically useful information that can improve their collaboration with the AI, more so than technical system details. Relatedly, participants intended to use XAI explanations for various purposes beyond understanding the AI's outputs: calibrating trust, improving their task skills, changing their behavior to supply better inputs to the AI, and giving constructive feedback to developers. Finally, among existing XAI approaches, participants preferred part-based explanations that resemble human reasoning and explanations. We discuss the implications of our findings and provide recommendations for future XAI design.
Pages: 17
Related Papers
50 records in total
  • [21] Supporting Human-AI Teams: Transparency, explainability, and situation awareness
    Endsley, Mica R.
    COMPUTERS IN HUMAN BEHAVIOR, 2023, 140
  • [23] Assessing Human-AI Interaction Early through Factorial Surveys: A Study on the Guidelines for Human-AI Interaction
    Li, Tianyi
    Vorvoreanu, Mihaela
    DeBellis, Derek
    Amershi, Saleema
    ACM TRANSACTIONS ON COMPUTER-HUMAN INTERACTION, 2023, 30 (05)
  • [24] Understanding anthropomorphic voice-AI chatbot continuance from a human-AI interaction perspective
    Xie, Wei
    Yang, Shuiqing
    Li, Yixiao
    Zhou, Shasha
    BEHAVIOUR & INFORMATION TECHNOLOGY, 2024
  • [25] AI-Driven Personalization to Support Human-AI Collaboration
    Conati, Cristina
    COMPANION OF THE 2024 ACM SIGCHI SYMPOSIUM ON ENGINEERING INTERACTIVE COMPUTING SYSTEMS, EICS 2024, 2024, : 5 - 6
  • [26] Human-aware AI - A foundational framework for human-AI interaction
    Sreedharan, Sarath
    AI MAGAZINE, 2023, 44 (04) : 460 - 466
  • [27] Human-AI Interaction: Human Behavior Routineness Shapes AI Performance
    Sun, Tianao
    Zhao, Kai
    Chen, Meng
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (12) : 8476 - 8487
  • [28] Exploration of Explainable AI for Trust Development on Human-AI Interaction
    Bernardo, Ezekiel L.
    Seva, Rosemary R.
    PROCEEDINGS OF 2023 6TH ARTIFICIAL INTELLIGENCE AND CLOUD COMPUTING CONFERENCE, AICCC 2023, 2023, : 238 - 246
  • [29] Interpretability as a dynamic of human-AI interaction
    Thieme A.
    Cutrell E.
    Morrison C.
    Taylor A.
    Sellen A.
    Interactions, 2020, 27 (05) : 40 - 45
  • [30] Theory of Mind in Human-AI Interaction
    Wang, Qiaosi
    Walsh, Sarah E.
    Si, Mei
    Kephart, Jeffrey O.
    Weisz, Justin D.
    Goel, Ashok K.
    EXTENDED ABSTRACTS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024, 2024