Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?

Cited by: 5
Authors
Zerilli J. [1 ]
Knott A. [2 ]
Maclaurin J. [1 ]
Gavaghan C. [3 ]
Affiliations
[1] Department of Philosophy, University of Otago, Dunedin
[2] Department of Computer Science, University of Otago, Dunedin
[3] Faculty of Law, University of Otago, Dunedin
Keywords
Algorithmic decision-making; Explainable AI; Intentional stance; Transparency;
DOI
10.1007/s13347-018-0330-6
Abstract
We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argue that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool. © 2018, Springer Nature B.V.
Pages: 661–683
Page count: 22
Related Papers (50 items)
  • [31] THE "BLACK BOX" OF JUDICIAL DECISION-MAKING: BETWEEN HUMAN AND ALGORITHMIC JUDGEMENT
    Arduini, Sonia
    BIOLAW JOURNAL-RIVISTA DI BIODIRITTO, 2021, (02): : 453 - 470
  • [32] Province of Origin, Decision-Making Bias, and Responses to Bureaucratic Versus Algorithmic Decision-Making
    Wang, Ge
    Zhang, Zhejun
    Xie, Shenghua
    Guo, Yue
    PUBLIC ADMINISTRATION REVIEW, 2025,
  • [33] Algorithmic Driven Decision-Making Systems in Education
    Ferrero, Federico
    Gewerc, Adriana
    2019 XIV LATIN AMERICAN CONFERENCE ON LEARNING TECHNOLOGIES (LACLO 2019), 2020, : 166 - 173
  • [34] The value of responsibility gaps in algorithmic decision-making
    Munch, Lauritz
    Mainz, Jakob
    Bjerring, Jens Christian
    ETHICS AND INFORMATION TECHNOLOGY, 2023, 25 (01)
  • [35] Fairness, Equality, and Power in Algorithmic Decision-Making
    Kasy, Maximilian
    Abebe, Rediet
    PROCEEDINGS OF THE 2021 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2021, 2021, : 576 - 586
  • [36] A RIGHT TO AN EXPLANATION OF ALGORITHMIC DECISION-MAKING IN CHINA
    Lin, Huanmin
    Wu, Hong
    HONG KONG LAW JOURNAL, 2022, 52 : 1163 - +
  • [38] ALGORITHMIC STRUCTURING OF DIALOG DECISION-MAKING SYSTEMS
    ARAKSYAN, VV
    ENGINEERING CYBERNETICS, 1984, 22 (04): : 120 - 124
  • [39] Pushing the Limits of Fairness in Algorithmic Decision-Making
    Shah, Nisarg
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 7051 - 7056
  • [40] On the Impact of Explanations on Understanding of Algorithmic Decision-Making
    Schmude, Timothee
    Koesten, Laura
    Moeller, Torsten
    Tschiatschek, Sebastian
    PROCEEDINGS OF THE 6TH ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2023, 2023, : 959 - 970