Moral control and ownership in AI systems

Cited by: 0
Authors
Raul Gonzalez Fabre
Javier Camacho Ibáñez
Pedro Tejedor Escobar
Affiliations
[1] Universidad Pontificia Comillas
[2] Instituto de Ingeniería del Conocimiento
Source
AI & SOCIETY | 2021 / Volume 36
Keywords
Artificial Intelligence; Moral agency; Data bias; Machine learning; Autonomous systems; Decision support;
DOI
Not available
Abstract
AI systems bring an augmentation of human capabilities to shape the world. They may also bring with them a replacement of human conscience in large areas of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with 'solutions' pre-packaged or developed by the 'intelligent' machine itself. Artificial intelligent systems (AIS) are increasingly used in multiple applications and receive growing attention from public and private organisations. The purpose of this article is to offer a mapping of the technological architectures that support AIS, with a specific focus on moral agency. Through a literature review and reflection process, the following areas are covered: a brief introduction and review of the literature on moral agency; an analysis using the BDI logic model (Bratman 1987); an elementary review of artificial 'reasoning' architectures in AIS; the influence of data input and data quality; the positioning of AI systems in decision-support and decision-making scenarios; and, finally, some conclusions about the potential loss of moral control by humans due to AIS. The article contributes to the field of ethics and artificial intelligence by providing a discussion that helps developers and researchers understand how, and under what circumstances, the 'human subject' may totally or partially lose moral control and ownership over AI technologies. The topic is relevant because AIS are often not single machines but complex networks of machines that feed information and decisions into each other and to human operators. Detailed traceability of input-process-output at each node of the network is essential for the system to remain within the field of moral agency. Moral agency, in turn, lies at the basis of our system of legal responsibility, and social approval is unlikely to be obtained for entrusting important functions to complex systems in which no moral agency can be identified.
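As a purely illustrative sketch, not drawn from the paper itself, the following Python fragment reads the abstract's decision-support versus decision-making distinction through the BDI (belief-desire-intention) lens it cites: a hypothetical agent keeps an input-process-output audit trail, and moral control stays with a human operator only when a human approval gate sits between the machine's intention and its action. All class, function, and variable names are invented for the example.

# Illustrative sketch only (not code from the paper): a minimal BDI-style loop
# contrasting a decision-support configuration (a human must approve each intention,
# so moral control stays in human hands) with a decision-making configuration
# (the machine executes its own intentions). All names here are hypothetical.

from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class BDIAgent:
    """Toy belief-desire-intention agent with an input-process-output audit trail."""
    beliefs: dict = field(default_factory=dict)    # what the system takes to be true
    desires: list = field(default_factory=list)    # goals it is configured to pursue
    audit_log: list = field(default_factory=list)  # traceability at this node

    def perceive(self, observation: dict) -> None:
        self.beliefs.update(observation)
        self.audit_log.append(("input", dict(observation)))

    def propose_intention(self) -> str:
        # Trivial deliberation: pursue the first desire only if perceived risk is low.
        if self.desires and self.beliefs.get("risk", 1.0) < 0.5:
            intention = self.desires[0]
        else:
            intention = "do_nothing"
        self.audit_log.append(("process", intention))
        return intention

    def act(self, intention: str,
            human_approves: Optional[Callable[[str], bool]] = None) -> str:
        # Decision support: a human gatekeeper approves before execution.
        # Decision making: no gatekeeper, the machine executes its own intention.
        if human_approves is not None and not human_approves(intention):
            intention = "do_nothing"
        self.audit_log.append(("output", intention))
        return intention


if __name__ == "__main__":
    agent = BDIAgent(desires=["approve_loan"])
    agent.perceive({"risk": 0.3})
    proposal = agent.propose_intention()

    # Decision-support configuration: the human operator keeps the final say.
    executed = agent.act(proposal, human_approves=lambda i: i != "approve_loan")
    print("Executed:", executed)      # "do_nothing": the human withheld approval
    print("Trace:", agent.audit_log)  # full input-process-output record

The point of the approval gate is that the same architecture shifts from decision support to autonomous decision making simply by omitting the human check, which is where the loss of moral control discussed in the article can occur; the audit trail is what keeps each node of such a network traceable.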
Pages: 289-303
Page count: 14
Related articles
50 records in total
  • [31] Towards a structural ownership condition on moral responsibility
    Matheson, Benjamin
    CANADIAN JOURNAL OF PHILOSOPHY, 2019, 49 (04) : 458 - 480
  • [32] THE MORAL RULES OF TRASH TALKING: MORALITY AND OWNERSHIP
    Kershnar, Stephen
    SPORT ETHICS AND PHILOSOPHY, 2015, 9 (03) : 303 - 323
  • [33] TOOLS FOR COUPLING AI AND CONTROL-SYSTEMS CAD
    HYOTYNIEMI, H
    LECTURE NOTES IN COMPUTER SCIENCE, 1992, 585 : 652 - 667
  • [34] Advancing systems and control research in the era of ML and AI
    Khargonekar, Pramod P.
    Dahleh, Munther A.
    ANNUAL REVIEWS IN CONTROL, 2018, 45 : 1 - 4
  • [35] Big Data & AI: Opportunity for modern control Systems
    Hutterer, S.
    ELEKTROTECHNIK UND INFORMATIONSTECHNIK, 2021, 138 (08): 648 - 651
  • [36] Activation Control of Vision Models for Sustainable AI Systems
    Burton-Barr J.
    Fernando B.
    Rajan D.
    IEEE Transactions on Artificial Intelligence, 2024, 5 (07): 3470 - 3481
  • [37] DELPHI: AI & HUMAN MORAL JUDGEMENT
    [Anonymous]
    ANTHROPOLOGY TODAY, 2025, 41 (01)
  • [38] The Ethics of AI and The Moral Responsibility of Philosophers
    Boddington, Paula
    TPM-THE PHILOSOPHERS MAGAZINE, 2020, (89): 62 - 68
  • [39] Moral distance, AI, and the ethics of care
    Villegas-Galaviz, Carolina
    Martin, Kirsten
    AI & SOCIETY, 2024, 39 (04) : 1695 - 1706
  • [40] Emergent Models for Moral AI Spirituality
    Graves, Mark
    INTERNATIONAL JOURNAL OF INTERACTIVE MULTIMEDIA AND ARTIFICIAL INTELLIGENCE, 2021, 7 (01): 7 - 15