In AI we trust? Perceptions about automated decision-making by artificial intelligence

Cited by: 6
Authors
Theo Araujo
Natali Helberger
Sanne Kruikemeier
Claes H. de Vreese
Affiliations
[1] University of Amsterdam, Amsterdam School of Communication Research (ASCoR)
[2] University of Amsterdam, Institute for Information Law (IViR)
Source
AI & SOCIETY | 2020, Vol. 35
Keywords
Automated decision-making; Artificial intelligence; Algorithmic fairness; Algorithmic appreciation; User perceptions;
DOI
Not available
Abstract
Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing from social science theories and from the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, and judicial contexts. Data from a scenario-based survey experiment with a national sample (N = 958) show that people are by and large concerned about risks and have mixed opinions about the fairness and usefulness of automated decision-making at a societal level, with general attitudes influenced by individual characteristics. Interestingly, decisions taken automatically by AI were often evaluated as on par with, or even better than, those of human experts for specific decisions. Theoretical and societal implications of these findings are discussed.
Pages: 611-623
Number of pages: 12
Related papers (50 in total)
  • [31] Application of Artificial Intelligence in Administrative Decision-making
    Csaba, Fasi
    PROCEEDINGS OF CENTRAL AND EASTERN EUROPEAN EDEM AND EGOV DAYS 2022, CEEE GOV DAYS 2022: Hate Speech and Fake News - Fate or Issue to Tackle?, 2022: 147 - 152
  • [33] In artificial intelligence (AI) we trust: A qualitative investigation of AI technology acceptance
    Hasija, Abhinav
    Esper, Terry L.
    JOURNAL OF BUSINESS LOGISTICS, 2022, 43 (03) : 388 - 412
  • [34] Surveying Hematologists' Perceptions and Readiness to Embrace Artificial Intelligence in Diagnosis and Treatment Decision-Making
    Alanzi, Turki
    Alanazi, Fehaid
    Mashhour, Bushra
    Altalhi, Rahaf
    Alghamdi, Atheer
    Al Shubbar, Mohammed
    Alamro, Saud
    Alshammari, Muradi
    Almusmili, Lamyaa
    Alanazi, Lena
    Alzahrani, Saleh
    Alalouni, Raneem
    Alanzi, Nouf
    Alsharifa, Ali
    CUREUS JOURNAL OF MEDICAL SCIENCE, 2023, 15 (11)
  • [35] Humans vs. AI: The Role of Trust, Political Attitudes, and Individual Characteristics on Perceptions about Automated Decision Making Across Europe
    Araujo, Theo
    Brosius, Anna
    Goldberg, Andreas C.
    Moeller, Judith
    de Vreese, Claes H.
    INTERNATIONAL JOURNAL OF COMMUNICATION, 2023, 17 : 6222 - 6249
  • [36] Dark sides of artificial intelligence: The dangers of automated decision-making in search engine advertising
    Schultz, Carsten D. D.
    Koch, Christian
    Olbrich, Rainer
    JOURNAL OF THE ASSOCIATION FOR INFORMATION SCIENCE AND TECHNOLOGY, 2024, 75 (05) : 550 - 566
  • [37] Artificial fairness? Trust in algorithmic police decision-making
    Hobson, Zoe
    Yesberg, Julia A.
    Bradford, Ben
    Jackson, Jonathan
    JOURNAL OF EXPERIMENTAL CRIMINOLOGY, 2023, 19 (01) : 165 - 189
  • [39] A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making
    Schemmer, Max
    Hemmer, Patrick
    Nitsche, Maximilian
    Kuehl, Niklas
    Voessing, Michael
    PROCEEDINGS OF THE 2022 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, AIES 2022, 2022, : 617 - 626
  • [40] Authentic intelligence: Automated decision-making through GTSM
    Wodraska, J
    Hampson, J
    JOURNAL AMERICAN WATER WORKS ASSOCIATION, 2005, 97 (11) : 75+