Explainable AI: introducing trust and comprehensibility to AI engineering

Cited: 0
Authors
Burkart, Nadia [1 ]
Brajovic, Danilo [2 ]
Huber, Marco F. [2 ]
Affiliations
[1] Fraunhofer Inst Optron Syst Technol & Image Explo, Karlsruhe, Germany
[2] Fraunhofer Inst Mfg Engn & Automat IPA, Dept Cyber Cognit Intelligence CCI, Stuttgart, Germany
Keywords
explainable AI; machine learning; model refinement; data set refinement;
DOI
10.1515/auto-2022-0013
CLC Number
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Machine learning (ML) is rapidly gaining interest due to continuous improvements in performance, and it is used in many different applications to support human users. The representational power of ML models allows them to solve difficult tasks, but it also makes them impossible for humans to understand. This leaves room for errors and limits the full potential of ML, since it cannot be applied in critical environments. In this paper, we propose employing Explainable AI (xAI) for both model and data set refinement, in order to introduce trust and comprehensibility. Model refinement uses xAI to provide insights into the inner workings of an ML model, to identify limitations, and to derive potential improvements. Similarly, xAI is used in data set refinement to detect and resolve problems in the training data.
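To illustrate the kind of model refinement the abstract describes, the following sketch (not taken from the paper) uses permutation importance, a common xAI technique, to inspect which features a trained model actually relies on; the data set and model choice here are purely hypothetical.

```python
# Hypothetical sketch of xAI-based model refinement: inspect which features
# a trained model depends on, using permutation importance on held-out data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 5 features, only 3 of which are informative.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does test accuracy drop when one
# feature's values are shuffled? Near-zero importance suggests the model
# ignores that feature, guiding refinement of the model or the data set.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

A practitioner could use such a report to flag suspicious features (e.g. data leakage) for data set refinement, or to confirm that the model's reliance matches domain expectations.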
Pages: 787 - 792
Number of pages: 6
Related Papers
50 records in total
  • [41] Statutory Professions in AI Governance and Their Consequences for Explainable AI
    NiFhaolain, Labhaoise
    Hines, Andrew
    Nallur, Vivek
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2023, PT I, 2023, 1901 : 85 - 96
  • [42] Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven Decision Support using Evaluative AI
    Miller, Tim
    PROCEEDINGS OF THE 6TH ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2023, 2023, : 333 - 342
  • [43] Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance
    Hoffman, Robert R.
    Mueller, Shane T.
    Klein, Gary
    Litman, Jordan
    FRONTIERS IN COMPUTER SCIENCE, 2023, 5
  • [44] Explainable AI and trust: How news media shapes public support for AI-powered autonomous passenger drones
    Cheung, Justin C.
    Ho, Shirley S.
    PUBLIC UNDERSTANDING OF SCIENCE, 2024,
  • [45] Explainable AI in Learning Analytics: Improving Predictive Models and Advancing Transparency Trust
    Liu, Qinyi
    Khalil, Mohammad
    2024 IEEE GLOBAL ENGINEERING EDUCATION CONFERENCE, EDUCON 2024, 2024,
  • [46] Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma
    Chanda, Tirtha
    Hauser, Katja
    Hobelsberger, Sarah
    Bucher, Tabea-Clara
    Garcia, Carina Nogueira
    Wies, Christoph
    Kittler, Harald
    Tschandl, Philipp
    Navarrete-Dechent, Cristian
    Podlipnik, Sebastian
    Chousakos, Emmanouil
    Crnaric, Iva
    Majstorovic, Jovana
    Alhajwan, Linda
    Foreman, Tanya
    Peternel, Sandra
    Sarap, Sergei
    Oezdemir, Irem
    Barnhill, Raymond L.
    Llamas-Velasco, Mar
    Poch, Gabriela
    Korsing, Soeren
    Sondermann, Wiebke
    Gellrich, Frank Friedrich
    Heppt, Markus V.
    Erdmann, Michael
    Haferkamp, Sebastian
    Drexler, Konstantin
    Goebeler, Matthias
    Schilling, Bastian
    Utikal, Jochen S.
    Ghoreschi, Kamran
    Froehling, Stefan
    Krieghoff-Henning, Eva
    Salava, Alexander
    Thiem, Alexander
    Dimitrios, Alexandris
    Ammar, Amr Mohammad
    Vucemilovic, Ana Sanader
    Yoshimura, Andrea Miyuki
    Ilieva, Andzelka
    Gesierich, Anja
    Reimer-Taschenbrecker, Antonia
    Kolios, Antonios G. A.
    Kalva, Arturs
    Ferhatosmanoglu, Arzu
    Beyens, Aude
    Pfoehler, Claudia
    Erdil, Dilara Ilhan
    Jovanovic, Dobrila
    NATURE COMMUNICATIONS, 2024, 15 (01)
  • [47] The role of user feedback in enhancing understanding and trust in counterfactual explanations for explainable AI
    Suffian, Muhammad
    Kuhl, Ulrike
    Bogliolo, Alessandro
    Alonso-Moral, Jose M.
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, 2025, 199
  • [48] Explainable AI for Chiller Fault-Detection Systems: Gaining Human Trust
    Srinivasan, Seshadhri
    Arjunan, Pandarasamy
    Jin, Baihong
    Sangiovanni-Vincentelli, Alberto
    Sultan, Zuraimi
    Poolla, Kameshwar
    COMPUTER, 2021, 54 (10) : 60 - 68
  • [49] Introducing the future of AI
    Hendler, J
    IEEE INTELLIGENT SYSTEMS, 2006, 21 (03) : 2 - 4
  • [50] Responsible, Explainable, and Emotional AI
    Andriole, Stephen J.
    Abolfazli, Saeid
    Feidakis, Michalis
    IT PROFESSIONAL, 2022, 24 (05) : 16 - 17