The Paradox of Algorithms and Blame on Public Decision-makers

Cited by: 1
Authors
Ozer, Adam L. [1 ]
Waggoner, Philip D. [2 ]
Kennedy, Ryan [3 ]
Affiliations
[1] Verian, London, England
[2] Columbia Univ, New York, NY USA
[3] Univ Houston, Houston, TX 77204 USA
Keywords
algorithms; artificial intelligence; public policy; public opinion; experiments; BLACK-BOX; AUTOMATION; MEDIATION; EXPERT; TRUST;
DOI
10.1017/bap.2023.35
Chinese Library Classification
D81 [International Relations];
Discipline Classification Code
030207;
Abstract
Public decision-makers incorporate algorithmic decision aids, often developed by private businesses, into the policy process, in part, as a method for justifying difficult decisions. Ethicists have worried that over-trust in algorithmic advice, combined with fear of punishment for departing from an algorithm's recommendation, will result in over-reliance and harm democratic accountability. We test these concerns in two pre-registered survey experiments in the judicial context, conducted on three representative U.S. samples. The results show no support for the hypothesized blame dynamics, regardless of whether the judge agrees or disagrees with the algorithm. Algorithms, moreover, do not have a significant impact relative to other sources of advice. Respondents who are generally more trusting of elites assign greater blame to decision-makers who disagree with the algorithm, and they assign more blame when they think the decision-maker is abdicating responsibility by agreeing with an algorithm.
Pages: 200-217
Number of pages: 18