Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making

Cited by: 30
Authors:
Tolmeijer, Suzanne [1]
Christen, Markus [1]
Kandul, Serhiy [1]
Kneer, Markus [1]
Bernstein, Abraham [1]
Affiliations:
[1] Univ Zurich, Zurich, Switzerland
Funding:
Swiss National Science Foundation
Keywords:
Ethical AI; Trust; Responsibility; Human-AI Collaboration; TRUST; RESPONSIBILITY; AUTOMATION; RISK; DIFFUSION; AVERSION; TROLLEY; FUTURE; CARE; INDIVIDUALS
DOI: 10.1145/3491102.3517732
Abstract
While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This shows in participants' reliance on AI: AI recommendations and decisions are accepted more often than the human expert's. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.
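The abstract describes a 2x2 manipulation of expert type (human vs. AI) and expert autonomy (adviser vs. decider), with reliance, i.e. acceptance of the expert's recommendation or decision, as one outcome. As a purely illustrative sketch with made-up data (not the authors' materials or analysis code), reliance rates per condition in such a design could be tabulated as follows:

    import pandas as pd

    # Hypothetical trial-level data for a design like the one described above:
    # expert type (human vs. AI) crossed with expert autonomy (adviser vs. decider).
    # "relied" = 1 if the participant went along with the expert, else 0.
    trials = pd.DataFrame({
        "expert_type":     ["human", "human", "ai", "ai", "ai", "human", "ai", "human"],
        "expert_autonomy": ["adviser", "decider", "adviser", "decider",
                            "adviser", "adviser", "decider", "decider"],
        "relied":          [1, 0, 1, 1, 1, 0, 1, 0],
    })

    # Reliance rate per condition: mean of the binary indicator within each cell.
    reliance = (
        trials.groupby(["expert_type", "expert_autonomy"])["relied"]
              .mean()
              .rename("reliance_rate")
              .reset_index()
    )
    print(reliance)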
Pages: 17
Related papers
50 records in total
  • [41] Ethics gap: Comparing marketers with consumers on important determinants of ethical decision-making
    Singhapakdi, A
    Vitell, SJ
    Rao, CP
    Kurtz, DL
    JOURNAL OF BUSINESS ETHICS, 1999, 21 (04) : 317 - 328
  • [43] Comparing Two Teaching Methods on Nursing Students' Ethical Decision-Making Level
    Basak, Tulay
    Cerit, Birgul
    CLINICAL SIMULATION IN NURSING, 2019, 29 : 15 - 23
  • [44] A new model for calculating human trust behavior during human-AI collaboration in multiple decision-making tasks: A Bayesian approach
    Ding, Song
    Pan, Xing
    Hu, Lunhu
    Liu, Lingze
    COMPUTERS & INDUSTRIAL ENGINEERING, 2025, 200
  • [45] The roles of AI and educational leaders in AI-assisted administrative decision-making: a proposed framework for symbiotic collaboration
    Dai, Ruixun
    Thomas, Matthew Krehl Edward
    Rawolle, Shaun
AUSTRALIAN EDUCATIONAL RESEARCHER, 2025, 52 (02) : 1471 - 1487
  • [46] STRATEGIES AND BIASES IN HUMAN DECISION-MAKING AND THEIR IMPLICATIONS FOR EXPERT SYSTEMS
    JACOB, VS
    GAULTNEY, LD
    SALVENDY, G
    BEHAVIOUR & INFORMATION TECHNOLOGY, 1986, 5 (02) : 119 - 140
  • [47] AI-based clinical decision-making systems in palliative medicine: ethical challenges
    De Panfilis, Ludovica
    Peruselli, Carlo
    Tanzi, Silvia
    Botrugno, Carlo
    BMJ SUPPORTIVE & PALLIATIVE CARE, 2023, 13 (02) : 183 - 189
  • [48] Privacy-preserving Crowd-guided AI Decision-making in Ethical Dilemmas
    Wang, Teng
    Zhao, Jun
    Yu, Han
    Liu, Jinyan
    Yang, Xinyu
    Ren, Xuebin
    Shi, Shuyu
PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM '19), 2019 : 1311 - 1320
  • [49] Ethical Considerations in AI and ML: Addressing Bias, Fairness, and Accountability in Algorithmic Decision-Making
    Turner, Michael
    Wong, Emily
CINEFORUM, 2024, 65 (03) : 144 - 147
  • [50] Can AI systems meet the ethical requirements of professional decision-making in health care?
    Alan Gillies
    Peter Smith
AI and Ethics, 2022, 2 (1) : 41 - 47