Explainable machine learning practices: opening another black box for reliable medical AI

Cited: 0
Authors
Emanuele Ratti
Mark Graves
Affiliations
[1] Johannes Kepler University Linz, Institute of Philosophy and Scientific Method
[2] Parexel AI Labs, Department of Humanities and Arts
[3] Technion Israel Institute of Technology
Source
AI and Ethics | 2022, Vol. 2, Issue 4
Keywords
Black box; Machine learning; Medical AI; Reliable AI; Values; Trustworthiness
DOI
10.1007/s43681-022-00141-z
Abstract
In the past few years, machine learning (ML) tools have been successfully deployed in the medical context. However, several practitioners have raised concerns about the lack of transparency, at the algorithmic level, of many of these tools, and solutions from the field of explainable AI (XAI) have been seen as a way to open the 'black box' and make the tools more trustworthy. Recently, Alex London has argued that, in the medical context, machine learning tools do not need to be interpretable at the algorithmic level to be trustworthy, as long as they meet some strict empirical desiderata. In this paper, we analyse and develop London's position. In particular, we make two claims. First, we claim that London's solution to the problem of trust can potentially address another problem: how to evaluate the reliability of ML tools in medicine for regulatory purposes. Second, we claim that to deal with this problem, we need to develop London's views by shifting the focus from the opacity of algorithmic details to the opacity of the way in which ML tools are trained and built. We claim that to regulate AI tools and evaluate their reliability, agencies need an explanation of how ML tools have been built, which requires documenting and justifying the technical choices that practitioners have made in designing such tools. This is because different algorithmic designs may lead to different outcomes and to the realization of different purposes. However, given that the technical choices underlying algorithmic design are shaped by value-laden considerations, opening the black box of the design process also means making transparent, and motivating, the (technical and ethical) values and preferences behind such choices. Using tools from philosophy of technology and philosophy of science, we elaborate a framework showing what an explanation of the training processes of ML tools in medicine should look like.
Pages: 801 - 814
Number of pages: 13
Related papers
50 records in total
  • [31] Opening the Black Box: Bootstrapping Sensitivity Measures in Neural Networks for Interpretable Machine Learning
    La Rocca, Michele
    Perna, Cira
    STATS, 2022, 5 (02): 440 - 457
  • [32] Opening the black box of machine learning in radiology: can the proximity of annotated cases be a way?
    Baselli, Giuseppe
    Codari, Marina
    Sardanelli, Francesco
    EUROPEAN RADIOLOGY EXPERIMENTAL, 2020, 4 (01)
  • [33] Opening the black box: explainable deep-learning classification of wood microscopic image of endangered tree species
    Zheng, Chang
    Liu, Shoujia
    Wang, Jiajun
    Lu, Yang
    Ma, Lingyu
    Jiao, Lichao
    Guo, Juan
    Yin, Yafang
    He, Tuo
    PLANT METHODS, 2024, 20 (01)
  • [34] Opening the Black Box: A systematic review on explainable artificial intelligence in remote sensing
    Hoehl, Adrian
    Obadic, Ivica
    Fernandez-Torres, Miguel-Angel
    Najjar, Hiba
    Oliveira, Dario Augusto Borges
    Akata, Zeynep
    Dengel, Andreas
    Zhu, Xiao Xiang
    IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE, 2024, 12 (04) : 261 - 304
  • [35] Crop yield prediction via explainable AI and interpretable machine learning: Dangers of black box models for evaluating climate change impacts on crop yield
    Hu, Tongxi
    Zhang, Xuesong
    Bohrer, Gil
    Liu, Yanlan
    Zhou, Yuyu
    Martin, Jay
    Li, Yang
    Zhao, Kaiguang
    AGRICULTURAL AND FOREST METEOROLOGY, 2023, 336
  • [36] Shedding Light on the Black Box: Explainable AI for Predicting Household Appliance Failures
    Falatouri, Taha
    Nasseri, Mehran
    Brandtner, Patrick
    Darbanian, Farzaneh
    HCI INTERNATIONAL 2023 LATE BREAKING PAPERS, HCII 2023, PT VI, 2023, 14059: 69 - 83
  • [37] Learning Classifier Systems: Cognitive inspired Machine Learning for eXplainable AI
    Siddique, Abubakar
    Browne, Will
    PROCEEDINGS OF THE 2022 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION, GECCO 2022, 2022: 1081 - 1110
  • [38] Explainable AI via Linguistic Summarization of Black Box Computer Vision Models
    Alvey, Brendan J.
    Anderson, Derek T.
    Keller, James M.
    2023 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI, 2023: 357 - 358
  • [39] Peeking inside the black-box: Explainable machine learning applied to household transportation energy consumption
    Shams Amiri, Shideh
    Mottahedi, Sam
    Lee, Earl Rusty
    Hoque, Simi
    COMPUTERS, ENVIRONMENT AND URBAN SYSTEMS, 2021, 88