Explainable machine learning practices: opening another black box for reliable medical AI

Cited by: 0
Authors
Emanuele Ratti
Mark Graves
Affiliations
[1] Johannes Kepler University Linz, Institute of Philosophy and Scientific Method
[2] Parexel AI Labs
[3] Technion Israel Institute of Technology, Department of Humanities and Arts
Source
AI and Ethics | 2022, Volume 2, Issue 4
Keywords
Black box; Machine learning; Medical AI; Reliable AI; Values; Trustworthiness
DOI
10.1007/s43681-022-00141-z
Abstract
In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency, at the algorithmic level, of many of these tools, and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to be interpretable at the algorithmic level to make them trustworthy, as long as they meet some strict empirical desiderata. In this paper, we analyse and develop London’s position. In particular, we make two claims. First, we claim that London’s solution to the problem of trust can potentially address another problem, which is how to evaluate the reliability of ML tools in medicine for regulatory purposes. Second, we claim that to deal with this problem, we need to develop London’s views by shifting the focus from the opacity of algorithmic details to the opacity of the way in which ML tools are trained and built. We claim that to regulate AI tools and evaluate their reliability, agencies need an explanation of how ML tools have been built, which requires documenting and justifying the technical choices that practitioners have made in designing such tools. This is because different algorithmic designs may lead to different outcomes, and to the realization of different purposes. However, given that the technical choices underlying algorithmic design are shaped by value-laden considerations, opening the black box of the design process also means making transparent, and motivating, the (technical and ethical) values and preferences behind such choices. Using tools from philosophy of technology and philosophy of science, we elaborate a framework showing what an explanation of the training processes of ML tools in medicine should look like.
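To make concrete what "documenting and justifying the technical choices" could amount to in practice, here is a minimal, hypothetical sketch, not taken from the paper and not the authors' framework, of how a design-choice record might be structured in code. Every name in it (DesignChoice, TrainingRecord, value_rationale, the sepsis example) is an illustrative assumption.

```python
# A minimal, hypothetical sketch (not from the paper) of how design-choice
# documentation of the kind the abstract calls for might be recorded.
# All names (DesignChoice, TrainingRecord, value_rationale, ...) are
# illustrative assumptions, not an established standard.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DesignChoice:
    """One technical decision made while building a medical ML tool."""
    decision: str               # what was done, e.g. a sampling or loss choice
    alternatives: List[str]     # options considered and rejected
    technical_rationale: str    # justification on technical grounds
    value_rationale: str        # the ethical/value considerations behind it


@dataclass
class TrainingRecord:
    """A documented account of how an ML tool was trained and built."""
    intended_purpose: str
    choices: List[DesignChoice] = field(default_factory=list)

    def report(self) -> str:
        """Render the record as plain text a reviewer or regulator could read."""
        lines = [f"Intended purpose: {self.intended_purpose}"]
        for c in self.choices:
            lines.append(f"- Decision: {c.decision}")
            lines.append(f"  Alternatives considered: {', '.join(c.alternatives)}")
            lines.append(f"  Technical rationale: {c.technical_rationale}")
            lines.append(f"  Value rationale: {c.value_rationale}")
        return "\n".join(lines)


# Example usage with a made-up triage scenario.
record = TrainingRecord(
    intended_purpose="triage support for suspected sepsis",
    choices=[
        DesignChoice(
            decision="optimise for recall rather than precision",
            alternatives=["balanced F1 objective", "precision-weighted loss"],
            technical_rationale="severe class imbalance in the training cohort",
            value_rationale="missing a septic patient is judged worse than a false alarm",
        )
    ],
)
print(record.report())
```

The point of such a record, on the paper's view, is that a reviewer sees not only what was decided but which alternatives were rejected and on which technical and value-laden grounds.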
Pages: 801-814
Number of pages: 14