Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work

Cited by: 6
Authors
Theis, Sabine [1 ]
Jentzsch, Sophie [1 ]
Deligiannaki, Fotini [2 ]
Berro, Charles [2 ]
Raulf, Arne Peter [2 ]
Bruder, Carmen [3 ]
Institutions
[1] Inst Software Technol, D-51147 Cologne, Germany
[2] Inst AI Safety & Secur, Rathausallee 12, D-53757 St Augustin, Germany
[3] Inst Aerosp Med, Sportallee 5a, D-22335 Hamburg, Germany
Source
ARTIFICIAL INTELLIGENCE IN HCI, AI-HCI 2023, PT I | 2023, Vol. 14050
Keywords
Artificial intelligence; Explainability; Acceptance; Safety-critical contexts; Air-traffic control; Structured literature analysis; Information needs; User requirement analysis; EXPLANATION; FRAMEWORK; INTERNET; MODELS; HEALTH; NEED; USER; AI;
D O I
10.1007/978-3-031-35891-3_22
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
The increasing prevalence of Artificial Intelligence (AI) in safety-critical contexts such as air-traffic control calls for systems that are not only practical and efficient but also, to some extent, explainable to humans in order to be trusted and accepted. The present structured literature analysis examines n = 236 articles on the requirements for the explainability and acceptance of AI. The results include a comprehensive review of n = 48 articles on the information people need to perceive an AI as explainable, the information they need to accept an AI, and the representation and interaction methods that promote trust in an AI. The results indicate two main user groups: developers, who require information about the internal operations of the model, and end users, who require information about AI results or behavior. Users' information needs vary in specificity, complexity, and urgency, and systems must account for the context, the user's domain knowledge, and the user's cognitive resources. The acceptance of AI systems depends on information about the system's functions and performance, privacy and ethical considerations, goal-supporting information tailored to individual preferences, and information that establishes trust in the system. Information about the system's limitations and potential failures can increase acceptance and trust. Trusted interaction methods are human-like and include natural language, speech, text, and visual representations such as graphs, charts, and animations. Our results have significant implications for the development of future human-centric AI systems and are thus suitable as input for further application-specific investigations of user needs.
Pages: 355 - 380
Page count: 26
Related Articles (50 total)
  • [21] Explainability for All: Care Ethics for Implementing Artificial Intelligence
    Tracey, Olivia
    Irish, Robert
    2023 IEEE INTERNATIONAL SYMPOSIUM ON TECHNOLOGY AND SOCIETY, ISTAS, 2023,
  • [22] Trustworthiness of Artificial Intelligence Models in Radiology and the Role of Explainability
    Kitamura, Felipe C.
    Marques, Oge
    JOURNAL OF THE AMERICAN COLLEGE OF RADIOLOGY, 2021, 18 (08) : 1160 - 1162
  • [23] Notions of explainability and evaluation approaches for explainable artificial intelligence
    Vilone, Giulia
    Longo, Luca
    INFORMATION FUSION, 2021, 76 : 89 - 106
  • [24] Artificial intelligence in pharmacovigilance: Do we need explainability?
    Hauben, Manfred
    PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, 2022, 31 (12) : 1311 - 1316
  • [25] Analyzing Trustworthiness and Explainability in Artificial Intelligence: A Comprehensive Review
    Dixit, Muskan
    Kansal, Isha
    Khullar, Vikas
    Kumar, Rajeev
    Kumar, Sunil
    RECENT ADVANCES IN ELECTRICAL & ELECTRONIC ENGINEERING, 2024,
  • [26] Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability
    Coeckelbergh, Mark
    SCIENCE AND ENGINEERING ETHICS, 2020, 26 : 2051 - 2068
  • [27] A Collaborative Control Protocol with Artificial Intelligence for Medical Student Work Scheduling
    Dusadeerungsikul, P. O.
    Nof, Shimon Y.
    INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL, 2024, 19 (04)
  • [28] Artificial Intelligence in Collaborative Computing
    Wang, Xinheng
    Gao, Honghao
    Huang, Kaizhu
    MOBILE NETWORKS & APPLICATIONS, 2021, 26 (06): : 2389 - 2391
  • [30] Explainability of artificial intelligence methods, applications and challenges: A comprehensive survey
    Ding, Weiping
    Abdel-Basset, Mohamed
    Hawash, Hossam
    Ali, Ahmed M.
    INFORMATION SCIENCES, 2022, 615 : 238 - 292