MEASURING ALGORITHMIC FAIRNESS

Cited by: 0
Authors
Hellman, Deborah [1]
Affiliation
[1] Univ Virginia, Sch Law, Charlottesville, VA 22903 USA
Keywords
BIAS;
DOI
None available
CLC Classification
D9 [Law]; DF [Law];
Subject Classification Code
0301 ;
Abstract
Algorithmic decision making is both increasingly common and increasingly controversial. Critics worry that algorithmic tools are not transparent, accountable, or fair. Assessing the fairness of these tools has been especially fraught, as it requires that we agree about what fairness is and what it requires. Unfortunately, we do not. The technological literature is now littered with a multitude of measures, each purporting to assess fairness along some dimension. Two types of measures stand out. According to one, algorithmic fairness requires that the score an algorithm produces be equally accurate for members of legally protected groups (blacks and whites, for example). According to the other, algorithmic fairness requires that the algorithm produce the same percentage of false positives or false negatives for each of the groups at issue. Unfortunately, there is often no way to achieve parity in both dimensions. This fact has led to a pressing question: which type of measure should we prioritize, and why? This Article makes three contributions to the debate about how best to measure algorithmic fairness: one conceptual, one normative, and one legal. Equal predictive accuracy ensures that a score means the same thing for each group at issue. As such, it relates to what one ought to believe about a scored individual. Because questions of fairness usually relate to action, not belief, this measure is ill suited as a measure of fairness. This is the Article's conceptual contribution. Second, this Article argues that parity in the ratio of false positives to false negatives is a normatively significant measure. While a lack of parity in this dimension is not constitutive of unfairness, it provides important reasons to suspect that unfairness exists. This is the Article's normative contribution. Interestingly, improving the accuracy of algorithms overall will lessen this unfairness.
Unfortunately, a common assumption that anti-discrimination law prohibits the use of racial and other protected classifications in all contexts is inhibiting those who design algorithms from making them as fair and accurate as possible. This Article's third contribution is to show that the law poses less of a barrier than many assume.
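The two families of measures the abstract contrasts can both be computed from a group's confusion matrix: the predictive accuracy of a positive score (its positive predictive value) on one hand, and the false positive and false negative rates on the other. A minimal sketch with hypothetical data (not from the Article) shows how two groups can receive equally accurate scores while their error rates diverge, which is the tension the abstract describes:

```python
# Illustrative sketch with hypothetical data (not from Hellman's Article):
# the two competing fairness measures, computed per group from binary
# predictions (the algorithm's score) and binary true outcomes.

def fairness_metrics(y_true, y_pred):
    """Return (ppv, fpr, fnr) for one group.

    ppv      -- predictive accuracy of a positive score; parity in ppv
                across groups is the first type of measure.
    fpr, fnr -- false positive / false negative rates; parity in these
                is the second type of measure.
    """
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    fnr = fn / (fn + tp) if (fn + tp) else float("nan")
    return ppv, fpr, fnr

# Two hypothetical groups with different base rates of the true outcome.
# The scores are equally accurate (same ppv for both groups), yet the
# false positive and false negative rates differ between the groups.
group_a = fairness_metrics([1, 1, 1, 0, 0, 0],
                           [1, 1, 0, 1, 0, 0])   # ppv 2/3, fpr 1/3, fnr 1/3
group_b = fairness_metrics([1, 1] + [0] * 10,
                           [1, 1, 1] + [0] * 9)  # ppv 2/3, fpr 0.1, fnr 0.0
```

Because the groups have different base rates, no rescoring can equalize both dimensions at once; this is the impossibility result behind the question the Article addresses.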
Pages: 811-866
Page count: 56
Related papers
50 items in total
  • [31] ALGORITHMIC FAIRNESS AS AN INCONSISTENT CONCEPT
    Hummel, Patrik
    AMERICAN PHILOSOPHICAL QUARTERLY, 2025, 62 (01) : 53 - 68
  • [32] An Epistemic Lens on Algorithmic Fairness
    Edenberg, Elizabeth
    Wood, Alexandra
    PROCEEDINGS OF 2023 ACM CONFERENCE ON EQUITY AND ACCESS IN ALGORITHMS, MECHANISMS, AND OPTIMIZATION, EAAMO 2023, 2023,
  • [33] Broomean(ish) Algorithmic Fairness?
    Castro, Clinton
    JOURNAL OF APPLIED PHILOSOPHY, 2024,
  • [35] Active Fairness in Algorithmic Decision Making
    Noriega-Campero, Alejandro
    Bakker, Michiel A.
    Garcia-Bulle, Bernardo
    Pentland, Alex 'Sandy'
    AIES '19: PROCEEDINGS OF THE 2019 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, 2019, : 77 - 83
  • [36] A Qualitative Exploration of Perceptions of Algorithmic Fairness
    Woodruff, Allison
    Fox, Sarah E.
    Rousso-Schindler, Steven
    Warshaw, Jeff
    PROCEEDINGS OF THE 2018 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2018), 2018,
  • [37] Algorithmic Fairness: Choices, Assumptions, and Definitions
    Mitchell, Shira
    Potash, Eric
    Barocas, Solon
    D'Amour, Alexander
    Lum, Kristian
    ANNUAL REVIEW OF STATISTICS AND ITS APPLICATION, VOL 8, 2021, 2021, 8 : 141 - 163
  • [38] Fairness and algorithmic decision-making
    Giovanola, Benedetta
    Tiribelli, Simona
    TEORIA-RIVISTA DI FILOSOFIA, 2022, 42 (02): : 117 - 129
  • [39] Equalized odds is a requirement of algorithmic fairness
    Grant, David Gray
    SYNTHESE, 2023, 201 (03)
  • [40] Algorithmic Fairness in AI: An Interdisciplinary View
    Pfeiffer, Jella
    Gutschow, Julia
    Haas, Christian
    Moeslein, Florian
    Maspfuhl, Oliver
    Borgers, Frederik
    Alpsancar, Suzana
    BUSINESS & INFORMATION SYSTEMS ENGINEERING, 2023, 65 (02) : 209 - 222