Participation, prediction, and publicity: avoiding the pitfalls of applying Rawlsian ethics to AI

Cited by: 0
Authors
Morten Bay [1 ]
Affiliations
[1] University of Southern California, Annenberg School for Communication and Journalism
Source
AI and Ethics | 2024 / Volume 4 / Issue 4
Keywords
John Rawls; AI regulation; AI policy; Prediction; Democratic participation; Difference principle
DOI
10.1007/s43681-023-00341-1
Abstract
Given the popularity of John Rawls’ theory of justice as fairness as an ethical framework in the artificial intelligence (AI) field, this article examines how the theory fits with three different conceptual applications of AI technology. First, the article discusses a proposition by Ashrafian to let an AI agent perform the deliberation that produces a Rawlsian social contract governing humans. The discussion demonstrates that such an application is not viable, as it contradicts foundational aspects of Rawls’ theories. An exploration of more viable applications of Rawlsian theory in the AI context follows, introducing the distinction between intrinsic and extrinsic theoretical adherence, i.e., the difference between approaches that integrate Rawlsian theory into the system design and those that situate AI systems within Rawls-consistent policy and legislative frameworks. The article uses emerging AI legislation in the EU and the U.S., as well as Gabriel’s argument for adopting Rawls’ publicity criterion in the AI field, as examples of extrinsic adherence to Rawlsian theory. A discussion of the epistemological challenges of predictive AI systems then illustrates some implications of intrinsic adherence to Rawlsian theory. While AI systems can make short-term predictions about human behavior with intrinsic adherence to Rawls’ theory of justice as fairness, long-term, large-scale prediction results do not adhere to the theory but instead constitute the type of utilitarianism Rawls vehemently opposed. The article concludes with an overview of the implications of these arguments for policymakers and regulators.
Pages: 1545-1554
Number of pages: 10