Artificial moral agents: moral mentors or sensible tools?

Cited by: 0
Author: Fabio Fossa
Affiliation: [1] Sant'Anna School of Advanced Studies, Institute of Law, Politics and Development
Keywords: Machine ethics; Machine morality; Ethics of technology; Artificial moral agents; Moral agency
DOI: not available
Abstract
The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take into consideration the work of Bostrom and Dietrich, who have radically assumed this viewpoint and thoroughly explored its implications. Thirdly, I present an alternative approach to AMAs—the Discontinuity Approach—which underscores an essential difference between human moral agents and AMAs by tackling the matter from another angle. In this section I concentrate on the work of Johnson and Bryson and I highlight the link between their claims and Heidegger’s and Jonas’s suggestions concerning the relationship between human beings and technological products. In conclusion I argue that, although the Continuity Approach turns out to be a necessary postulate to the machine ethics project, the Discontinuity Approach highlights a relevant distinction between AMAs and human moral agents. On this account, the Discontinuity Approach generates a clearer understanding of what AMAs are, of how we should face the moral issues they pose, and, finally, of the difference that separates machine ethics from moral philosophy.
Pages: 115 - 126 (11 pages)
Related papers (50 total)
  • [21] Artificial Moral Agents: A Survey of the Current Status
    José-Antonio Cervantes
    Sonia López
    Luis-Felipe Rodríguez
    Salvador Cervantes
    Francisco Cervantes
    Félix Ramos
    Science and Engineering Ethics, 2020, 26 : 501 - 532
  • [22] Un-making artificial moral agents
    Johnson D.G.
    Miller K.W.
    Ethics and Information Technology, 2008, 10 (2-3) : 123 - 133
  • [24] Ethics and artificial life: From modeling to moral agents
    Sullins J.P.
    Ethics and Information Technology, 2005, 7 (3) : 139 - 148
  • [25] Virtuous vs. utilitarian artificial moral agents
    Bauer, William A.
    AI & Society, 2020, 35 (01) : 263 - 271
  • [26] Artificial Moral Agents and Their Design Methodology: Retrospect and Prospect
    Gu T.-L.
    Li L.
    Jisuanji Xuebao/Chinese Journal of Computers, 2021, 44 (03) : 632 - 651
  • [27] Artificial Agents in Natural Moral Communities: A Brief Clarification
    Tigard, Daniel W.
    Cambridge Quarterly of Healthcare Ethics, 2021, 30 (03) : 455 - 458
  • [29] Ought we align the values of artificial moral agents?
    Erez Firt
    AI and Ethics, 2024, 4 (2) : 273 - 282
  • [30] Can't Bottom-up Artificial Moral Agents Make Moral Judgements?
    Boyles, Robert James M.
    Filosofija-Sociologija, 2024, 35 (01) : 14 - 22