Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work

Cited by: 6
Authors
Theis, Sabine [1 ]
Jentzsch, Sophie [1 ]
Deligiannaki, Fotini [2 ]
Berro, Charles [2 ]
Raulf, Arne Peter [2 ]
Bruder, Carmen [3 ]
Affiliations
[1] Inst Software Technol, D-51147 Cologne, Germany
[2] Inst AI Safety & Secur, Rathausallee 12, D-53757 St Augustin, Germany
[3] Inst Aerosp Med, Sportallee 5a, D-22335 Hamburg, Germany
Source
ARTIFICIAL INTELLIGENCE IN HCI, AI-HCI 2023, PT I | 2023, Vol. 14050
Keywords
Artificial intelligence; Explainability; Acceptance; Safety-critical contexts; Air-traffic control; Structured literature analysis; Information needs; User requirement analysis; EXPLANATION; FRAMEWORK; INTERNET; MODELS; HEALTH; NEED; USER; AI
DOI
10.1007/978-3-031-35891-3_22
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The increasing prevalence of Artificial Intelligence (AI) in safety-critical contexts such as air-traffic control demands systems that are not only practical and efficient but also explainable to humans, at least to the extent required for them to be trusted and accepted. The present structured literature analysis examines n = 236 articles on the requirements for the explainability and acceptance of AI. Results include a comprehensive review of n = 48 articles on the information people need to perceive an AI as explainable, the information they need to accept an AI, and the representation and interaction methods that promote trust in an AI. Results indicate two main user groups: developers, who require information about the internal operations of the model, and end users, who require information about AI results or behavior. Users' information needs vary in specificity, complexity, and urgency, and meeting them requires accounting for context, domain knowledge, and the user's cognitive resources. The acceptance of AI systems depends on information about the system's functions and performance, privacy and ethical considerations, goal-supporting information tailored to individual preferences, and information that establishes trust in the system. Information about the system's limitations and potential failures can increase acceptance and trust. Interaction methods that promote trust are human-like and include natural language, speech, text, and visual representations such as graphs, charts, and animations. Our results have significant implications for future human-centric AI systems and are suitable as input for further application-specific investigations of user needs.
Pages: 355-380
Page count: 26