Trustworthy Artificial Intelligence: A Review

Cited by: 227
Authors
Kaur, Davinder [1 ]
Uslu, Suleyman [1 ]
Rittichier, Kaley J. [1 ]
Durresi, Arjan [1 ]
Affiliations
[1] Indiana Univ Purdue Univ, Comp & Informat Sci, 723 W Michigan St, Indianapolis, IN 46202 USA
Funding
US National Institute of Food and Agriculture; US National Science Foundation;
Keywords
Artificial intelligence; machine learning; black-box problem; trustworthy AI; explainable AI; fairness; explainability; accountability; privacy; acceptance; BIG DATA; ALGORITHM; ACCEPTANCE; FRAMEWORK; ANONYMITY; FAIRNESS; SYSTEMS; ETHICS; TRUST; AL;
DOI
10.1145/3491209
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
Artificial intelligence (AI) and algorithmic decision making are having a profound impact on our daily lives. These systems are widely used in high-stakes applications such as healthcare, business, government, education, and justice, moving us toward a more algorithmic society. However, despite the many advantages of these systems, they sometimes directly or indirectly cause harm to users and society. Therefore, it has become essential to make these systems safe, reliable, and trustworthy. Several requirements, such as fairness, explainability, accountability, reliability, and acceptance, have been proposed in this direction to make these systems trustworthy. This survey analyzes each of these requirements through the lens of the literature. It provides an overview of approaches that can help mitigate AI risks and increase trust in and acceptance of these systems by users and society. It also discusses existing strategies for validating and verifying these systems and the current standardization efforts for trustworthy AI. Finally, we present a holistic view of recent advancements in trustworthy AI to help interested researchers grasp the crucial facets of the topic efficiently and offer possible future research directions.
Pages: 38
Related Papers
50 records in total
  • [41] An Explainable Artificial Intelligence Approach for a Trustworthy Spam Detection
    Ibrahim, Abubakr
    Mejri, Mohamed
    Jaafar, Fehmi
    2023 IEEE INTERNATIONAL CONFERENCE ON CYBER SECURITY AND RESILIENCE, CSR, 2023, : 160 - 167
  • [42] A Trustworthy View on Explainable Artificial Intelligence Method Evaluation
    Li, Ding
    Liu, Yan
    Huang, Jun
    Wang, Zerui
    COMPUTER, 2023, 56 (04) : 50 - 60
  • [43] Trustworthy Artificial Intelligence Requirements in the Autonomous Driving Domain
    Fernandez-Llorca, David
    Gomez, Emilia
    COMPUTER, 2023, 56 (02) : 29 - 39
  • [44] Introduction to the Trustworthy Artificial Intelligence and Machine Learning Minitrack
    Pouchard, Line
    Salholer, Peter
PROCEEDINGS OF THE ANNUAL HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES, 2024
  • [45] Supporting Trustworthy Artificial Intelligence via Bayesian Argumentation
    Cerutti, Federico
    AIXIA 2021 - ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, 13196 : 377 - 388
  • [46] Exploring the landscape of trustworthy artificial intelligence: Status and challenges
    Mentzas, Gregoris
    Fikardos, Mattheos
    Lepenioti, Katerina
    Apostolou, Dimitris
INTELLIGENT DECISION TECHNOLOGIES-NETHERLANDS, 2024, 18 (02): 837 - 854
  • [48] A critical perspective on guidelines for responsible and trustworthy artificial intelligence
    Buruk, Banu
    Ekmekci, Perihan Elif
    Arda, Berna
    MEDICINE HEALTH CARE AND PHILOSOPHY, 2020, 23 (03) : 387 - 399
  • [49] Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence
    Ali, Sajid
    Abuhmed, Tamer
    El-Sappagh, Shaker
    Muhammad, Khan
    Alonso-Moral, Jose M.
    Confalonieri, Roberto
    Guidotti, Riccardo
    Del Ser, Javier
    Diaz-Rodriguez, Natalia
    Herrera, Francisco
    INFORMATION FUSION, 2023, 99
  • [50] Involving patients in artificial intelligence research to build trustworthy systems
    Banerjee, Soumya
    Griffiths, Sarah
    AI & SOCIETY, 2024, 39 (06) : 3041 - 3042