Moral control and ownership in AI systems

Cited by: 0
Authors
Raul Gonzalez Fabre
Javier Camacho Ibáñez
Pedro Tejedor Escobar
Affiliations
[1] Universidad Pontificia Comillas
[2] Instituto de Ingeniería del Conocimiento
Source
AI & SOCIETY | 2021 / Volume 36
Keywords
Artificial Intelligence; Moral agency; Data bias; Machine learning; Autonomous systems; Decision support;
DOI
Not available
Abstract
AI systems are bringing an augmentation of human capabilities to shape the world. They may also bring about a replacement of human conscience in large areas of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that control, or even to prevent it altogether, replacing human morality with ‘solutions’ pre-packaged or developed by the ‘intelligent’ machine itself. Artificially intelligent systems (AIS) are increasingly used in multiple applications and are receiving growing attention from public and private organisations. The purpose of this article is to offer a mapping of the technological architectures that support AIS, with a specific focus on moral agency. Through a literature review and reflection process, the following areas are covered: a brief introduction and review of the literature on moral agency; an analysis using the BDI logic model (Bratman 1987); an elemental review of artificial ‘reasoning’ architectures in AIS; the influence of data input and data quality; the positioning of AI systems in decision-support and decision-making scenarios; and, finally, some conclusions regarding the potential loss of moral control by humans due to AIS. The article contributes to the field of ethics and artificial intelligence by offering developers and researchers a discussion of how, and under what circumstances, the ‘human subject’ may totally or partially lose moral control and ownership over AI technologies. The topic is relevant because AIS are often not single machines but complex networks of machines that feed information and decisions into each other and to human operators. Detailed traceability of input-process-output at each node of the network is essential for the network to remain within the field of moral agency. Moral agency is, in turn, at the basis of our system of legal responsibility, and social approval is unlikely to be obtained for entrusting important functions to complex systems in which no moral agency can be identified.
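
For readers unfamiliar with the BDI (belief-desire-intention) model the abstract refers to, the sketch below illustrates how the decision-support versus decision-making positioning discussed above might look in code. The class, the field names, and the human_in_the_loop flag are illustrative assumptions for this record only, not the architecture analysed in the paper.

# Minimal BDI-style agent loop (illustrative sketch; names and the
# human_in_the_loop flag are assumptions, not the paper's architecture).
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: dict = field(default_factory=dict)     # what the system takes to be true (data input)
    desires: list = field(default_factory=list)     # goals it could pursue
    intentions: list = field(default_factory=list)  # goals it has committed to
    human_in_the_loop: bool = True                  # True = decision support; False = decision making

    def perceive(self, observation: dict) -> None:
        # Update beliefs from incoming data; this is where data quality and bias enter.
        self.beliefs.update(observation)

    def deliberate(self) -> None:
        # Commit to the desires that current beliefs mark as feasible.
        self.intentions = [d for d in self.desires if self.beliefs.get(d, False)]

    def act(self, confirm) -> list:
        # Execute intentions; in decision-support mode each one requires human approval,
        # keeping moral control (and input-process-output traceability) with the operator.
        executed = []
        for intention in self.intentions:
            if self.human_in_the_loop and not confirm(intention):
                continue  # the human operator retains the final say
            executed.append(intention)
        return executed

# Positioning the same architecture as decision support: a human confirms each action.
agent = BDIAgent(desires=["flag_transaction"], human_in_the_loop=True)
agent.perceive({"flag_transaction": True})
agent.deliberate()
print(agent.act(confirm=lambda intention: True))  # stand-in for a human approval step
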
Pages: 289-303
Number of pages: 14
Related papers
50 records in total
  • [1] Moral control and ownership in AI systems
    Gonzalez Fabre, Raul
    Camacho Ibanez, Javier
    Tejedor Escobar, Pedro
    AI & SOCIETY, 2021, 36 (01) : 289 - 303
  • [2] Moral Responsibility for AI Systems
    Beckers, Sander
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [3] Moral consideration for AI systems by 2030
    Jeff Sebo
    Robert Long
    AI and Ethics, 2025, 5 (1) : 591 - 606
  • [4] MORAL IMPLICATION OF ACQUISITIVE INSTINCT UNDER SEPARATION OF OWNERSHIP AND CONTROL
    CHO, JH
    REVIEW OF SOCIAL ECONOMY, 1977, 35 (02) : 143 - 148
  • [5] Moral Status of AI Systems: Evaluation of the Genetic Account
    Kerkeling, Leonhard
    PHILOSOPHY AND THEORY OF ARTIFICIAL INTELLIGENCE 2021, 2022, 63 : 161 - 169
  • [6] The Role of Inclusion, Control, and Ownership in Workplace AI-Mediated Communication
    Kadoma, Kowe
    Le Quere, Marianne Aubin
    Fu, Xiyu Jenny
    Munsch, Christin
    Metaxa, Danae
    Naaman, Mor
    PROCEEDINGS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2024), 2024,
  • [7] Benchmarked Ethics: A Roadmap to AI Alignment, Moral Knowledge, and Control
    Kierans, Aidan
    PROCEEDINGS OF THE 2023 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, AIES 2023, 2023, : 964 - 965
  • [8] Ownership is (likely to be) a moral foundation
    Atari, Mohammad
    Haidt, Jonathan
    BEHAVIORAL AND BRAIN SCIENCES, 2023, 46
  • [9] JOINT OWNERSHIP OF MORAL RIGHTS
    KARLEN, PH
    JOURNAL OF THE COPYRIGHT SOCIETY OF THE USA, 1991, 38 (04): : 242 - 275
  • [10] OWNERSHIP AND THE MORAL SIGNIFICANCE OF THE SELF
    Tadros, Victor
    SOCIAL PHILOSOPHY & POLICY, 2019, 36 (02): : 51 - 70