"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

Cited by: 48
Authors
Kim, Sunnie S. Y. [1 ]
Watkins, Elizabeth Anne [2 ]
Russakovsky, Olga [1 ]
Fong, Ruth [1 ]
Monroy-Hernandez, Andres [1 ]
Affiliations
[1] Princeton Univ, Princeton, NJ 08544 USA
[2] Intel Labs, Santa Clara, CA USA
Funding
U.S. National Science Foundation;
Keywords
Explainable AI (XAI); Interpretability; Human-Centered XAI; Human-AI Interaction; Human-AI Collaboration; XAI for Computer Vision; Local Explanations; BLACK-BOX; INTELLIGENCE;
DOI
10.1145/3544548.3581001
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs and behaviors around XAI explanations. To address this gap and contribute to understanding how explainability can support human-AI interaction, we conducted a mixed-methods study with 20 end-users of a real-world AI application, the Merlin bird identification app, and inquired about their XAI needs, uses, and perceptions. We found that participants desire practically useful information that can improve their collaboration with the AI, more so than technical system details. Relatedly, participants intended to use XAI explanations for various purposes beyond understanding the AI's outputs: calibrating trust, improving their task skills, changing their behavior to supply better inputs to the AI, and giving constructive feedback to developers. Finally, among existing XAI approaches, participants preferred part-based explanations that resemble human reasoning and explanations. We discuss the implications of our findings and provide recommendations for future XAI design.
Pages: 17
Related Papers
50 records
  • [1] Toward Human-AI Interfaces to Support Explainability and Causability in Medical AI
    Holzinger, Andreas
    Mueller, Heimo
    COMPUTER, 2021, 54 (10) : 78 - 86
  • [2] Does My AI Help or Hurt? Exploring Human-AI Complementarity
    Inkpen, Kori
    UMAP'20: PROCEEDINGS OF THE 28TH ACM CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION, 2020, : 2 - 2
  • [3] Understanding the influence of AI autonomy on AI explainability levels in human-AI teams using a mixed methods approach
    Hauptman, Allyson I.
    Schelble, Beau G.
    Duan, Wen
    Flathmann, Christopher
    Mcneese, Nathan J.
    COGNITION TECHNOLOGY & WORK, 2024, 26 (03) : 435 - 455
  • [4] Human-AI Interaction and AI Avatars
    Liu, Yuxin
    Siau, Keng L.
    HCI INTERNATIONAL 2023 LATE BREAKING PAPERS, HCII 2023, PT VI, 2023, 14059 : 120 - 130
  • [5] Human-AI interaction
    Sun, Yongqiang
    Shen, Xiao-Liang
    Zhang, Kem Z.K.
    Data and Information Management, 2023, 7 (03)
  • [6] Helping Teachers Help Their Students: A Human-AI Hybrid Approach
    Paiva, Ranilson
    Bittencourt, Ig Ibert
    ARTIFICIAL INTELLIGENCE IN EDUCATION (AIED 2020), PT I, 2020, 12163 : 448 - 459
  • [7] Evaluating Interactive AI: Understanding and Controlling Placebo Effects in Human-AI Interaction
    Villa, Steeven
    Welsch, Robin
    Denisova, Alena
    Kosch, Thomas
    EXTENDED ABSTRACTS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024, 2024,
  • [8] Ironies of Generative AI: Understanding and Mitigating Productivity Loss in Human-AI Interaction
    Simkute, Auste
    Tankelevitch, Lev
    Kewenig, Viktor
    Scott, Ava Elizabeth
    Sellen, Abigail
    Rintel, Sean
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 2025, 41 (05) : 2898 - 2919
  • [9] How AI can help us beat AMR
    Autumn Arnold
    Stewart McLellan
    Jonathan M. Stokes
    npj Antimicrobials and Resistance, 3 (1):
  • [10] How Can AI Help Improve Food Safety?
    Qian, C.
    Murphy, S. I.
    Orsi, R. H.
    Wiedmann, M.
    ANNUAL REVIEW OF FOOD SCIENCE AND TECHNOLOGY, 2023, 14 : 517 - 538