Find the Gap: AI, Responsible Agency and Vulnerability

Cited by: 3
Authors
Vallor, Shannon [1,2]
Vierkant, Tillmann [1]
Affiliations
[1] Univ Edinburgh, Sch Philosophy Psychol & Language Sci, Edinburgh, Scotland
[2] Univ Edinburgh, Edinburgh Futures Inst, Edinburgh, Scotland
Funding
UK Research and Innovation (UKRI)
Keywords
Reactive attitudes; Agency cultivation; Moral responsibility; Autonomous systems; Problem of many hands; Vulnerability gap; MORAL RESPONSIBILITY; ATTRIBUTABILITY; ACCOUNTABILITY; ANSWERABILITY
DOI
10.1007/s11023-024-09674-0
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and to exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual humans face very similar responsibility challenges with regard to these two conditions. While the problems of epistemic opacity and attenuated behaviour control are not unique to AI/AS technologies (though they can be exacerbated by them), we show that we can learn important lessons for AI/AS development and governance from how philosophers have recently revised the traditional concept of moral responsibility in response to these challenges to responsible human agency from the cognitive sciences. The resulting instrumentalist views of responsibility, which emphasize the forward-looking and flexible role of agency cultivation, hold considerable promise for integrating AI/AS into a healthy moral ecology. There is nevertheless a gap in AI/AS responsibility that has yet to be extensively studied and addressed, one grounded in a relational asymmetry of vulnerability between human agents and sociotechnical systems like AI/AS. We conclude that attention to this vulnerability gap must inform and enable future attempts to construct trustworthy AI/AS systems and to preserve the conditions for responsible human agency.
Pages: 23