Welfarist Moral Grounding for Transparent AI

Cited by: 0
Author
Narayanan, Devesh [1 ]
Affiliation
[1] Natl Univ Singapore, Singapore, Singapore
Keywords
Transparency; Welfarism; Moral Theory; AI Ethics;
DOI
10.1145/3593013.3593977
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
As popular calls for the transparency of AI systems gain prominence, it is important to think systematically about why transparency matters morally. I'll argue that welfarism provides a theoretical basis for doing so. For welfarists, it is morally desirable to make AI systems transparent insofar as pursuing transparency tends to increase overall welfare, and/or maintaining opacity tends to reduce overall welfare. This might seem like a simple - even simplistic - move. However, as I will show, the process of tracing the expected effects of transparency on welfare can bring much-needed clarity to existing debates about when AI systems should and should not be transparent. Welfarism provides us with a basis to evaluate conflicting desiderata, and helps us avoid a problematic tendency to reify trust, accountability, and other such goals as ends in themselves. And, by shifting the focus away from the mere act of making an AI system transparent, towards the harms and benefits that its transparency might bring about, welfarists call attention to often-neglected social, legal, and institutional factors that determine whether relevant stakeholders are able to access and meaningfully act on the information made transparent to produce desirable consequences. In these ways, welfarism helps us understand AI transparency not merely as a demand to look at the innards of some technical system, but rather as a broader moral ideal about how we should relate to powerful technologies that make decisions about us.
Pages: 64-76
Page count: 13
Related Papers
50 records in total
  • [31] Transparent, explainable, and accountable AI for robotics
    Wachter, Sandra
    Mittelstadt, Brent
    Floridi, Luciano
    SCIENCE ROBOTICS, 2017, 2 (06)
  • [32] Moral control and ownership in AI systems
    Gonzalez Fabre, Raul
    Camacho Ibanez, Javier
    Tejedor Escobar, Pedro
    AI & SOCIETY, 2021, 36 (01): 289-303
  • [33] DELPHI: AI & HUMAN MORAL JUDGEMENT
    [Anonymous]
    ANTHROPOLOGY TODAY, 2025, 41 (01)
  • [34] Moral consideration for AI systems by 2030
    Sebo, Jeff
    Long, Robert
    AI AND ETHICS, 2025, 5 (01): 591-606
  • [35] The Ethics of AI and The Moral Responsibility of Philosophers
    Boddington, Paula
    TPM-THE PHILOSOPHERS MAGAZINE, 2020, (89): 62-68
  • [36] Moral distance, AI, and the ethics of care
    Villegas-Galaviz, Carolina
    Martin, Kirsten
    AI & SOCIETY, 2024, 39 (04): 1695-1706
  • [37] Emergent Models for Moral AI Spirituality
    Graves, Mark
    INTERNATIONAL JOURNAL OF INTERACTIVE MULTIMEDIA AND ARTIFICIAL INTELLIGENCE, 2021, 7 (01): 7-15
  • [38] Moral Relevance Approach for AI Ethics
    Fang, Shuaishuai
    PHILOSOPHIES, 2024, 9 (02)
  • [39] Moral AI and How We Get There
    Kishore, Jyoti
    JOURNAL OF HUMAN VALUES, 2025
  • [40] RIGHTS IN MORAL LIVES - MELDEN,AI
    CHILD, JW
    PHILOSOPHICAL QUARTERLY, 1990, 40 (158): 112-116