The Next Frontier: AI We Can Really Trust

Cited by: 95
Authors
Holzinger, Andreas [1 ,2 ]
机构
[1] Med Univ Graz, Human Ctr AI Lab, Graz, Austria
[2] Alberta Machine Intelligence Inst, xAI Lab, Edmonton, AB, Canada
Source
MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2021, PT I | 2021 / Vol. 1524
Funding
Austrian Science Fund (FWF);
Keywords
Artificial intelligence; Trust; Explainable AI; Robustness; Human-in-the-loop;
DOI
10.1007/978-3-030-93736-2_33
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Enormous advances in the domain of statistical machine learning, the availability of large amounts of training data, and increasing computing power have made Artificial Intelligence (AI) very successful. For certain tasks, algorithms can even achieve performance beyond the human level. Unfortunately, the most powerful methods suffer from two drawbacks: it is difficult to explain why a certain result was achieved, and they lack robustness. Our most powerful machine learning models are very sensitive to even small changes: perturbations in the input data can have a dramatic impact on the output and lead to entirely different results. This matters in virtually all critical domains where data quality is low, i.e. where we do not have the expected i.i.d. data. Therefore, the use of AI in domains that impact human life (agriculture, climate, health, ...) has led to an increased demand for trustworthy AI. Explainability is now even mandatory due to regulatory requirements in sensitive domains such as medicine, which require traceability, transparency and interpretability. One possible step towards making AI more robust is to combine statistical learning with knowledge representations. For certain tasks, it can be advantageous to use a human in the loop: a human expert can - sometimes, of course, not always - bring experience, domain knowledge and conceptual understanding to the AI pipeline. Such approaches are not only a solution from a legal point of view; in many application areas the "why" is often more important than a pure classification result. Consequently, both explainability and robustness can promote reliability and trust and ensure that humans remain in control, thus complementing human intelligence with artificial intelligence.
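The abstract's claim that small input perturbations can completely change a model's output can be made concrete with a minimal sketch (not from the paper): a toy logistic-regression classifier with hand-set weights, where an FGSM-style perturbation of only 0.05 per feature flips the predicted class. All weights, inputs, and the perturbation size eps below are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "trained" weights and a borderline input; all values are
# illustrative assumptions, not taken from the paper.
w = np.array([2.0, -3.0, 1.5])
b = 0.1
x = np.array([0.40, 0.35, 0.20])

p_clean = sigmoid(w @ x + b)       # ~0.54 -> predicted class 1

# For a linear score w.x + b the gradient w.r.t. the input is simply w,
# so an FGSM-style step of size eps per feature moves the score as far
# as possible in the chosen direction.
eps = 0.05
x_adv = x - eps * np.sign(w)       # nudge each feature to lower the score

p_adv = sigmoid(w @ x_adv + b)     # ~0.46 -> predicted class 0: the label flips

print(f"clean input:     p = {p_clean:.3f}, class = {int(p_clean > 0.5)}")
print(f"perturbed input: p = {p_adv:.3f}, class = {int(p_adv > 0.5)}")

The same mechanism carries over to deep networks, where the input gradient is obtained by backpropagation rather than read directly from a weight vector; this sensitivity is exactly why robustness is named alongside explainability as a prerequisite for trustworthy AI.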
Pages: 427-440
Page count: 14