Moral control and ownership in AI systems

Cited: 0
Authors
Raul Gonzalez Fabre
Javier Camacho Ibáñez
Pedro Tejedor Escobar
Affiliations
[1] Universidad Pontificia Comillas
[2] Instituto de Ingeniería del Conocimiento
Source
AI & SOCIETY | 2021 / Vol. 36
Keywords
Artificial Intelligence; Moral agency; Data bias; Machine learning; Autonomous systems; Decision support;
DOI: not available
Abstract
AI systems are bringing an augmentation of human capabilities to shape the world. They may also bring about a replacement of human conscience in large areas of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with 'solutions' pre-packaged or developed by the 'intelligent' machine itself. Artificial intelligence systems (AIS) are increasingly being used in multiple applications and are receiving growing attention from public and private organisations. The purpose of this article is to offer a mapping of the technological architectures that support AIS, with a specific focus on moral agency. Through a process of literature review and reflection, the following areas are covered: a brief introduction and review of the literature on moral agency; an analysis using the BDI logic model (Bratman 1987); an elemental review of artificial 'reasoning' architectures in AIS; the influence of data input and data quality; the positioning of AI systems in decision-support and decision-making scenarios; and, finally, some conclusions regarding the potential loss of moral control by humans due to AIS. This article contributes to the field of Ethics and Artificial Intelligence by providing a discussion that helps developers and researchers understand how, and under what circumstances, the 'human subject' may totally or partially lose moral control and ownership over AI technologies. The topic is relevant because AIS are often not single machines but complex networks of machines that feed information and decisions into each other and to human operators. Detailed traceability of input-process-output at each node of the network is essential for the system to remain within the field of moral agency.
Moral agency thus lies at the basis of our system of legal responsibility, and social approval is unlikely to be obtained for entrusting important functions to complex systems in which no moral agency can be identified.
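The abstract's two technical claims, that AIS can be analysed with the belief-desire-intention (BDI) model and that input-process-output traceability at each network node is a precondition for moral agency, can be combined in a small illustrative sketch. This is not code from the article; the names `BDINode`, `TraceRecord`, and the `"defer-to-human"` fallback are hypothetical, and the sketch only shows the pattern: each decision node revises beliefs from its data input, commits to an intention, and records an auditable input-process-output trace.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class TraceRecord:
    """One input-process-output record for a single decision at one node."""
    node: str
    inputs: Dict
    beliefs: Dict
    intention: str

class BDINode:
    """Toy decision node following the belief-desire-intention pattern
    (Bratman 1987). Every decision appends a trace record so that a human
    operator can audit how each output was reached."""

    def __init__(self, name: str,
                 desires: List[Tuple[str, Callable[[Dict], bool]]]):
        self.name = name
        self.desires = desires              # ranked (goal, precondition) pairs
        self.beliefs: Dict = {}             # current model of the world
        self.trace: List[TraceRecord] = []  # input-process-output audit log

    def decide(self, inputs: Dict) -> str:
        self.beliefs.update(inputs)         # revise beliefs from data input
        # Commit to the highest-ranked desire whose precondition holds in
        # the current beliefs; otherwise hand control back to the human.
        intention = next(
            (goal for goal, holds in self.desires if holds(self.beliefs)),
            "defer-to-human",
        )
        self.trace.append(
            TraceRecord(self.name, dict(inputs), dict(self.beliefs), intention)
        )
        return intention
```

In a network of such nodes, each node's output would become another node's input, and the concatenated trace records are what would let a human reconstruct, post hoc, where moral control was exercised or lost.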
Pages: 289-303
Page count: 14
Related Papers
50 results
  • [41] Moral Relevance Approach for AI Ethics
    Fang, Shuaishuai
    PHILOSOPHIES, 2024, 9 (02)
  • [42] Moral AI and How We Get There
    Kishore, Jyoti
    JOURNAL OF HUMAN VALUES, 2025,
  • [43] Rights in Moral Lives, by A. I. Melden
    Child, J. W.
    PHILOSOPHICAL QUARTERLY, 1990, 40 (158): : 112 - 116
  • [44] Welfarist Moral Grounding for Transparent AI
    Narayanan, Devesh
    PROCEEDINGS OF THE 6TH ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2023, 2023, : 64 - 76
  • [45] AI-Extended Moral Agency?
    Telakivi, Pii
    Kokkonen, Tomi
    Hakli, Raul
    Makela, Pekka
    SOCIAL EPISTEMOLOGY, 2025,
  • [46] Secure Ownership and Ownership Transfer in RFID Systems
    van Deursen, Ton
    Mauw, Sjouke
    Radomirovic, Sasa
    Vullers, Pim
    COMPUTER SECURITY - ESORICS 2009, PROCEEDINGS, 2009, 5789 : 637 - 654
  • [47] Moral Judgments of Human vs. AI Agents in Moral Dilemmas
    Zhang, Yuyan
    Wu, Jiahua
    Yu, Feng
    Xu, Liying
    BEHAVIORAL SCIENCES, 2023, 13 (02)
  • [48] Not in my AI: Moral engagement and disengagement in health care AI development
    Nichol, Ariadne A.
    Halley, Meghan C.
    Federico, Carole A.
    Cho, Mildred K.
    Sankar, Pamela L.
    BIOCOMPUTING 2023, PSB 2023, 2023, : 496 - 506
  • [49] More to Know Could Not be More to Trust: Open Communication as a Moral Imperative for AI Systems in Healthcare
    Starke, Georg
    AMERICAN JOURNAL OF BIOETHICS, 2025, 25 (03): : 119 - 121
  • [50] For the greater goods? Ownership rights and utilitarian moral judgment
    Millar, J. Charles
    Turri, John
    Friedman, Ori
    COGNITION, 2014, 133 (01) : 79 - 84