Moral control and ownership in AI systems

Cited: 0
Authors
Raul Gonzalez Fabre
Javier Camacho Ibáñez
Pedro Tejedor Escobar
Affiliations
[1] Universidad Pontificia Comillas
[2] Instituto de Ingeniería del Conocimiento
Source
AI & SOCIETY | 2021 / Vol. 36
Keywords
Artificial Intelligence; Moral agency; Data bias; Machine learning; Autonomous systems; Decision support;
DOI
Not available
Abstract
AI systems are bringing an augmentation of human capabilities to shape the world. They may also bring about a replacement of human conscience in large chunks of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with 'solutions' pre-packaged or developed by the 'intelligent' machine itself. Artificial intelligence systems (AIS) are increasingly being used in multiple applications and are receiving growing attention from public and private organisations. The purpose of this article is to offer a mapping of the technological architectures that support AIS, with a specific focus on moral agency. Through a literature review and reflection process, the following areas are covered: a brief introduction and review of the literature on moral agency; an analysis using the BDI (belief-desire-intention) logic model (Bratman 1987); an elemental review of artificial 'reasoning' architectures in AIS; the influence of data input and data quality; the positioning of AI systems in decision-support and decision-making scenarios; and, finally, some conclusions regarding the potential loss of moral control by humans due to AIS. This article contributes to the field of ethics and artificial intelligence by providing a discussion for developers and researchers on how, and under what circumstances, the 'human subject' may totally or partially lose moral control and ownership over AI technologies. The topic is relevant because AIS are often not single machines but complex networks of machines that feed information and decisions into each other and to human operators. Detailed traceability of input-process-output at each node of the network is essential for the network to remain within the field of moral agency. Moral agency, in turn, is at the basis of our system of legal responsibility, and social approval is unlikely to be obtained for entrusting important functions to complex systems in which no moral agency can be identified.
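As a concrete illustration of the BDI (belief-desire-intention) model the abstract invokes, the sketch below implements a minimal BDI loop in Python. It is a hypothetical example, not code from the article: all names (`BDIAgent`, `human_approves`, the 'brake'/'overtake' goals) are assumptions made for illustration. The optional `human_approves` gate shows the abstract's distinction between decision support (a human retains moral control over the machine's intention) and decision making (the machine acts on its own), and the `trace` list records the input-process-output triple the abstract calls essential for moral agency.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class BDIAgent:
    beliefs: dict                  # the agent's current model of the world
    desires: list                  # goals the agent would like to achieve
    human_approves: Optional[Callable[[str], bool]] = None  # None => fully autonomous
    trace: list = field(default_factory=list)               # input-process-output log

    def deliberate(self) -> Optional[str]:
        """Toy policy: commit to the first desire the beliefs say is feasible."""
        for goal in self.desires:
            if self.beliefs.get(f"can_{goal}", False):
                return goal
        return None

    def act(self, percept: dict) -> str:
        self.beliefs.update(percept)           # input: revise beliefs from the percept
        intention = self.deliberate()          # process: form an intention
        if intention is None:
            output = "no-op"
        elif self.human_approves is not None:  # decision support: a human gate keeps moral control
            output = intention if self.human_approves(intention) else "vetoed"
        else:                                  # decision making: the machine acts on its own
            output = intention
        self.trace.append({"input": percept, "process": intention, "output": output})
        return output

# Usage: a human who only approves braking retains a veto over the machine.
agent = BDIAgent(beliefs={}, desires=["brake", "overtake"],
                 human_approves=lambda action: action == "brake")
print(agent.act({"can_overtake": True}))  # -> "vetoed": the human withheld approval
print(agent.act({"can_brake": True}))     # -> "brake": the approved intention is executed
```

Setting `human_approves` to `None` in this sketch is exactly the design choice the article frames: whether the architecture leaves, diminishes, or removes human moral control.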
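The abstract's closing point, that AIS are often networks of machines feeding decisions into each other and that per-node input-process-output traceability is essential, can likewise be sketched. The pipeline below is a hypothetical illustration under assumed names (`Node`, `run_pipeline`, the sensor/classifier/decision stages are not from the article): each node logs its input and output so that responsibility can be reconstructed along the whole chain.

```python
from typing import Callable

class Node:
    """One machine in the network; it logs every input-process-output triple."""
    def __init__(self, name: str, process: Callable[[object], object]):
        self.name, self.process, self.log = name, process, []

    def run(self, data: object) -> object:
        result = self.process(data)
        self.log.append({"node": self.name, "input": data, "output": result})
        return result

def run_pipeline(nodes: list[Node], data: object) -> tuple[object, list[dict]]:
    """Feed each node's output into the next; collect the end-to-end trace."""
    trace = []
    for node in nodes:
        data = node.run(data)
        trace.append(node.log[-1])
    return data, trace

# Example: sensor -> classifier -> decision, with a reconstructable trace.
pipeline = [
    Node("sensor", lambda raw: {"speed": raw}),
    Node("classifier", lambda obs: "too_fast" if obs["speed"] > 120 else "ok"),
    Node("decision", lambda label: "limit_throttle" if label == "too_fast" else "no-op"),
]
action, trace = run_pipeline(pipeline, 140)  # action == "limit_throttle"
# `trace` holds one record per node, so a human operator can see which input
# and which processing step produced each decision in the chain.
```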
Pages: 289 - 303
Number of pages: 14