Artificial moral agents: moral mentors or sensible tools?

Cited by: 0
Author
Fabio Fossa
Affiliation
[1] Sant’Anna School of Advanced Studies, Institute of Law, Politics and Development
Keywords
Machine ethics; Machine morality; Ethics of technology; Artificial moral agents; Moral agency
Abstract
The aim of this paper is to offer an analysis of the notion of the artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach, whose main claim is that AMAs and human moral agents exhibit no significant qualitative difference and should therefore be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to, taking into consideration the work of Bostrom and Dietrich, who have fully embraced this viewpoint and thoroughly explored its implications. Thirdly, I present an alternative approach to AMAs, the Discontinuity Approach, which tackles the matter from another angle and underscores an essential difference between human moral agents and AMAs. In this section I concentrate on the work of Johnson and Bryson and highlight the link between their claims and Heidegger’s and Jonas’s suggestions concerning the relationship between human beings and technological products. In conclusion I argue that, although the Continuity Approach turns out to be a necessary postulate of the machine ethics project, the Discontinuity Approach highlights a relevant distinction between AMAs and human moral agents. On this account, the Discontinuity Approach generates a clearer understanding of what AMAs are, of how we should face the moral issues they pose, and, finally, of the difference that separates machine ethics from moral philosophy.
Pages: 115-126
Related papers (50 in total)
  • [31] Moral Typecasting: Divergent Perceptions of Moral Agents and Moral Patients
    Gray, Kurt
    Wegner, Daniel M.
    JOURNAL OF PERSONALITY AND SOCIAL PSYCHOLOGY, 2009, 96 (03): 505-520
  • [32] Moral Agents
    Steiner, H.
    MIND, 1973, 82 (326): 263-265
  • [33] Socrates and Confucius: Moral Agents or Moral Philosophers
    Mahood, G. H.
    PHILOSOPHY EAST & WEST, 1971, 21 (02): 177-188
  • [34] Computer systems: Moral entities but not moral agents
    Johnson, Deborah G.
    ETHICS AND INFORMATION TECHNOLOGY, 2006, 8 (04): 195-204
  • [35] Moral zombies: why algorithms are not moral agents
    Véliz, Carissa
    AI & SOCIETY, 2021, 36 (02): 487-497
  • [37] The Aristotelian Robot: Towards a Moral Phenomenology of Artificial Social Agents
    Mendieta, Eduardo
    Wagner, Alan R.
    PHILOSOPHY TODAY, 2024, 68 (02): 327-340
  • [38] Correction: Ought we align the values of artificial moral agents?
    Firt, Erez
    AI AND ETHICS, 2024, 4 (02): 283
  • [39] Attributions toward artificial agents in a modified Moral Turing Test
    Aharoni, Eyal
    Fernandes, Sharlene
    Brady, Daniel J.
    Alexander, Caelan
    Criner, Michael
    Queen, Kara
    Rando, Javier
    Nahmias, Eddy
    Crespo, Victor
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [40] Introduction to Moral Induction Model and its Deployment in Artificial Agents
    Hromada, Daniel Devatman
    Gaudiello, Ilaria
    SOCIABLE ROBOTS AND THE FUTURE OF SOCIAL RELATIONS, 2014, 273: 209-216