Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work

Cited by: 6
Authors
Theis, Sabine [1 ]
Jentzsch, Sophie [1 ]
Deligiannaki, Fotini [2 ]
Berro, Charles [2 ]
Raulf, Arne Peter [2 ]
Bruder, Carmen [3 ]
Affiliations
[1] Inst Software Technol, D-51147 Cologne, Germany
[2] Inst AI Safety & Secur, Rathausallee 12, D-53757 St Augustin, Germany
[3] Inst Aerosp Med, Sportallee 5a, D-22335 Hamburg, Germany
Source
ARTIFICIAL INTELLIGENCE IN HCI, AI-HCI 2023, PT I | 2023, Vol. 14050
Keywords
Artificial intelligence; Explainability; Acceptance; Safety-critical contexts; Air-traffic control; Structured literature analysis; Information needs; User requirement analysis; EXPLANATION; FRAMEWORK; INTERNET; MODELS; HEALTH; NEED; USER; AI
DOI
10.1007/978-3-031-35891-3_22
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The increasing prevalence of Artificial Intelligence (AI) in safety-critical contexts such as air-traffic control requires systems that are practical and efficient, and that are explainable to humans to an extent that allows them to be trusted and accepted. The present structured literature analysis examines n = 236 articles on the requirements for the explainability and acceptance of AI. Results include a comprehensive review of n = 48 articles on the information people need to perceive an AI as explainable, the information they need to accept an AI, and the representation and interaction methods that promote trust in an AI. Results indicate that there are two main groups of users: developers, who require information about the internal operations of the model, and end users, who require information about AI results or behavior. Users' information needs vary in specificity, complexity, and urgency, and meeting them requires accounting for context, domain knowledge, and the user's cognitive resources. The acceptance of AI systems depends on information about the system's functions and performance, privacy and ethical considerations, goal-supporting information tailored to individual preferences, and information that establishes trust in the system. Information about the system's limitations and potential failures can increase acceptance and trust. Trusted interaction methods are human-like and include natural language, speech, text, and visual representations such as graphs, charts, and animations. Our results have significant implications for the development of future human-centric AI systems and are thus suitable as input for further application-specific investigations of user needs.
Pages: 355-380
Page count: 26