Trust in medical artificial intelligence: a discretionary account

Cited: 0
Authors
Philip J. Nickel
Affiliations
[1] Eindhoven University of Technology,Department of Philosophy and Ethics, School of Innovation Sciences
Keywords
Artificial intelligence; Trust in AI; Discretion; Normative expectations; Future of medicine;
DOI: not available
Abstract
This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI practitioners through the vehicle of an AI application. I conclude with four critical questions based on the discretionary account to determine if trust in particular AI applications is sound, and a brief discussion of the possibility that the main roles of the physician could be replaced by AI.
Related papers (50 total)
  • [1] Trust in medical artificial intelligence: a discretionary account
    Nickel, Philip J.
    ETHICS AND INFORMATION TECHNOLOGY, 2022, 24 (01)
  • [2] Trust in artificial intelligence for medical diagnoses
    Juravle, Georgiana
    Boudouraki, Andriana
    Terziyska, Miglena
    Rezlescu, Constantin
    REAL-WORLD APPLICATIONS IN COGNITIVE NEUROSCIENCE, 2020, 253 : 263 - 282
  • [3] Intentional machines: A defence of trust in medical artificial intelligence
    Starke, Georg
    van den Brule, Rik
    Elger, Bernice Simone
    Haselager, Pim
    BIOETHICS, 2022, 36 (02) : 154 - 161
  • [4] Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence
    Starke, Georg
    Ienca, Marcello
    CAMBRIDGE QUARTERLY OF HEALTHCARE ETHICS, 2022
  • [5] Trust in Artificial Intelligence
    Sethumadhavan, Arathi
    ERGONOMICS IN DESIGN, 2019, 27 (02) : 34 - 34
  • [6] Proposal for Type Classification for Building Trust in Medical Artificial Intelligence Systems
    Ema, Arisa
    Nagakura, Katsue
    Fujita, Takanori
    PROCEEDINGS OF THE 3RD AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY AIES 2020, 2020, : 251 - 257
  • [7] Holding artificial intelligence to account
    [Anonymous]
    LANCET DIGITAL HEALTH, 2022, 4 (05): : E290 - E290
  • [8] Trust in Artificial Intelligence: Exploring the Influence of Model Presentation and Model Interaction on Trust in a Medical Setting
    Wunn, Tina
    Sent, Danielle
    Peute, Linda W. P.
    Leijnen, Stefan
    ARTIFICIAL INTELLIGENCE-ECAI 2023 INTERNATIONAL WORKSHOPS, PT 2, XAI3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI, 2023, 2024, 1948 : 76 - 86
  • [9] Relationship Between Trust in the Artificial Intelligence Creator and Trust in Artificial Intelligence Systems: The Crucial Role of Artificial Intelligence Alignment and Steerability
    Saffarizadeh, Kambiz
    Keil, Mark
    Maruping, Likoebe
    JOURNAL OF MANAGEMENT INFORMATION SYSTEMS, 2024, 41 (03) : 645 - 681
  • [10] Attachment and trust in artificial intelligence
    Gillath, Omri
    Ai, Ting
    Branicky, Michael S.
    Keshmiri, Shawn
    Davison, Robert B.
    Spaulding, Ryan
    COMPUTERS IN HUMAN BEHAVIOR, 2021, 115