Artificial fairness? Trust in algorithmic police decision-making

Cited: 14
Authors
Hobson, Zoe [1 ]
Yesberg, Julia A. [1 ]
Bradford, Ben [1 ]
Jackson, Jonathan [2 ,3 ]
Affiliations
[1] UCL, Inst Global City Policing, Dept Secur & Crime Sci, 35 Tavistock Sq, London WC1H 9EZ, England
[2] London Sch Econ & Polit Sci, Dept Methodol, London, England
[3] Sydney Law Sch, Sydney, NSW, Australia
Keywords
Algorithms; Fairness; Police decision-making; Technology; Trust; BODY-WORN CAMERAS; PROCEDURAL JUSTICE; PUBLIC SUPPORT; LEGITIMACY; COOPERATION
DOI
10.1007/s11292-021-09484-9
CLC Classification Numbers
DF [Law]; D9 [Law]
Discipline Code
0301
Abstract
Objectives: To test whether (1) people view a policing decision made by an algorithm as more or less trustworthy than the same decision made by an officer; (2) people presented with a specific instance of algorithmic policing express greater or lesser support for the general use of algorithmic policing; and (3) people use trust as a heuristic through which to make sense of an unfamiliar technology such as algorithmic policing.
Methods: An online experiment tested whether different decision-making methods, outcomes and scenario types affect judgements about the appropriateness and fairness of the decision and the general acceptability of police use of this technology.
Results: People see a decision as less fair and less appropriate when an algorithm decides than when an officer decides. Yet perceptions of fairness and appropriateness were strong predictors of support for police use of algorithms, and exposure to a successful use of an algorithm was linked, via trust in the decision made, to greater support for police use of algorithms.
Conclusions: Basing decisions solely on algorithms might damage trust, and the more police rely on purely algorithmic decision-making, the less people may trust the decisions made. However, mere exposure to the successful use of algorithms seems to enhance the general acceptability of this technology.
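To make the study design concrete for readers unfamiliar with factorial vignette experiments, the sketch below simulates and analyses data of the general shape the Methods describe (decision-maker x outcome x scenario type, with a rated fairness response). It is purely illustrative: the condition labels, sample size, rating scale, and effect sizes are invented here and are not taken from the paper.

    # Hypothetical sketch of a between-subjects factorial vignette
    # experiment: decision-maker (officer vs. algorithm) x outcome
    # (success vs. failure) x scenario type, with perceived fairness
    # as the response. All names and numbers below are assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n = 1200  # assumed number of online participants

    df = pd.DataFrame({
        "decision_maker": rng.choice(["officer", "algorithm"], size=n),
        "outcome": rng.choice(["success", "failure"], size=n),
        "scenario": rng.choice(["stop_search", "patrol_allocation"], size=n),
    })

    # Simulate 1-7 fairness ratings with an assumed penalty for the
    # algorithm condition, i.e. the direction of the headline result.
    df["fairness"] = (
        4.5
        - 0.6 * (df["decision_maker"] == "algorithm")
        + 0.4 * (df["outcome"] == "success")
        + rng.normal(0.0, 1.0, size=n)
    ).clip(1, 7)

    # ANOVA-style OLS on the experimental factors, including the
    # decision-maker x outcome interaction.
    model = smf.ols(
        "fairness ~ C(decision_maker) * C(outcome) + C(scenario)",
        data=df,
    ).fit()
    print(model.summary())

In output of this kind, a negative coefficient on the algorithm level of decision_maker would correspond to the fairness penalty reported in the Results.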
Pages: 165-189
Number of pages: 25
Related Papers
50 records in total
  • [21] Assuring Fairness of Algorithmic Decision Making
    Hauer, Marc P.
    Adler, Rasmus
    Zweig, Katharina
    2021 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS (ICSTW 2021), 2021, : 110 - 113
  • [22] Algorithmic Decision-Making, Agency Costs, and Institution-Based Trust
    Dowding, K.
    Taylor, B. R.
    PHILOSOPHY & TECHNOLOGY, 2024, 37 (02)
  • [23] Responsible algorithmic decision-making
    Breidbach, Christoph F.
    ORGANIZATIONAL DYNAMICS, 2024, 53 (02)
  • [24] Algorithmic Decision-Making Framework
    Kissell, Robert
    Malamut, Roberto
    JOURNAL OF TRADING, 2006, 1 (01): 12 - 21
  • [25] Disentangling Fairness Perceptions in Algorithmic Decision-Making: the Effects of Explanations, Human Oversight, and Contestability
    Yurrita, Mireia
    Draws, Tim
    Balayn, Agathe
    Murray-Rust, Dave
    Tintarev, Nava
    Bozzon, Alessandro
    PROCEEDINGS OF THE 2023 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2023), 2023
  • [26] TRUST, AUTOMATION BIAS AND AVERSION: ALGORITHMIC DECISION-MAKING IN THE CONTEXT OF CREDIT SCORING
    Gsenger, Rita
    Strle, Toma
    INTERDISCIPLINARY DESCRIPTION OF COMPLEX SYSTEMS, 2021, 19 (04) : 540 - 558
  • [27] Ethical Considerations in AI and ML: Addressing Bias, Fairness, and Accountability in Algorithmic Decision-Making
    Turner, Michael
    Wong, Emily
    CINEFORUM, 2024, 65 (03): 144 - 147
  • [28] Trust in Decision-Making Authorities Dictates the Form of the Interactive Relationship Between Outcome Fairness and Procedural Fairness
    Bianchi, Emily C.
    Brockner, Joel
    van den Bos, Kees
    Seifert, Matthias
    Moon, Henry
    van Dijke, Marius
    De Cremer, David
    PERSONALITY AND SOCIAL PSYCHOLOGY BULLETIN, 2015, 41 (01) : 19 - 34
  • [29] Participation in algorithmic administrative decision-making
    Ramotti, Camilla
    BIOLAW JOURNAL-RIVISTA DI BIODIRITTO, 2024, (03): 455 - 476
  • [30] Algorithmic legitimacy in clinical decision-making
    Holm, Sune
    ETHICS AND INFORMATION TECHNOLOGY, 2023, 25 (03)