Keywords:
algorithms; artificial intelligence; public policy; public opinion; experiments
Keywords Plus:
BLACK-BOX; AUTOMATION; MEDIATION; EXPERT; TRUST
DOI:
10.1017/bap.2023.35
Chinese Library Classification (CLC) number:
D81 [International Relations]
Discipline classification code:
030207
Abstract:
Public decision-makers incorporate algorithmic decision aids, often developed by private businesses, into the policy process, in part as a way to justify difficult decisions. Ethicists have worried that over-trust in algorithmic advice, together with fear of punishment for departing from an algorithm's recommendation, will lead to over-reliance and harm democratic accountability. We test these concerns in two pre-registered survey experiments in the judicial context, conducted on three representative U.S. samples. The results show no support for the hypothesized blame dynamics, regardless of whether the judge agrees or disagrees with the algorithm. Moreover, algorithms do not have a significant impact relative to other sources of advice. Respondents who are generally more trusting of elites assign greater blame to the decision-maker when they disagree with the algorithm, and they assign more blame when they think the decision-maker is abdicating their responsibility by agreeing with an algorithm.