Costs and Benefits of Fair Representation Learning

Cited by: 21
Authors
McNamara, Daniel [1 ,2 ]
Ong, Cheng Soon [1 ,2 ]
Williamson, Robert C. [1 ,2 ]
Affiliations
[1] Australian Natl Univ, Canberra, ACT, Australia
[2] CSIRO Data61, Canberra, ACT, Australia
Keywords
fairness; representation learning; machine learning;
DOI
10.1145/3306618.3317964
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning algorithms are increasingly used to make or support important decisions about people's lives. This has led to interest in the problem of fair classification, which involves learning to make decisions that are non-discriminatory with respect to a sensitive variable such as race or gender. Several methods have been proposed to solve this problem, including fair representation learning, which cleans the input data used by the algorithm to remove information about the sensitive variable. We show that using fair representation learning as an intermediate step in fair classification incurs a cost compared to directly solving the problem, which we refer to as the cost of mistrust. We show that fair representation learning in fact addresses a different problem, which is of interest when the data user is not trusted to access the sensitive variable. We quantify the benefits of fair representation learning by showing that any subsequent use of the cleaned data will not be too unfair. The benefits we identify result from restricting the decisions of adversarial data users, while the costs are due to applying those same restrictions to other data users.
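To make the "cleaning" idea concrete, here is a minimal sketch of one simple form of fair representation learning. This is not the paper's construction: it only removes the *linear* component of each feature that is predictable from a binary sensitive variable `s` via least-squares residualization, so that no downstream user of the cleaned features can linearly recover `s`. All variable names and the synthetic data are illustrative assumptions.

```python
import numpy as np

# Illustrative synthetic data: one feature correlated with the
# sensitive variable s, one independent of it.
rng = np.random.default_rng(0)
n = 1000
s = rng.integers(0, 2, size=n).astype(float)  # binary sensitive variable
X = np.column_stack([
    s + rng.normal(size=n),   # feature carrying information about s
    rng.normal(size=n),       # feature independent of s
])

# "Clean" the features: subtract each feature's least-squares
# projection onto [1, s], leaving residuals orthogonal to s.
A = np.column_stack([np.ones(n), s])
coef, *_ = np.linalg.lstsq(A, X, rcond=None)
Z_clean = X - A @ coef

corr_before = np.corrcoef(s, X[:, 0])[0, 1]
corr_after = np.corrcoef(s, Z_clean[:, 0])[0, 1]
print(f"corr(s, x1) before: {corr_before:.3f}, after: {corr_after:.3f}")
```

The residuals are exactly uncorrelated with `s` by construction, which restricts every downstream user alike: an adversarial user can no longer discriminate linearly on `s` (the benefit), but a benign user also loses any legitimately useful signal carried by that component (the cost of mistrust). Nonlinear dependence on `s` would require a stronger cleaning procedure than this linear sketch.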
Pages: 263-270 (8 pages)
Related papers
50 in total
  • [1] The representation of costs and benefits
    Rychlik, R
    MEDIZINISCHE WELT, 1996, 47 (11): 57 - 58
  • [2] Efficient fair PCA for fair representation learning
    Kleindessner, Matthaeus
    Donini, Michele
    Russell, Chris
    Zafar, Muhammad Bilal
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 206, 2023, 206
  • [3] Fair Representation Learning with Unreliable Labels
    Zhang, Yixuan
    Zhou, Feng
    Li, Zhidong
    Wang, Yang
    Chen, Fang
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 206, 2023, 206
  • [4] Flexibly Fair Representation Learning by Disentanglement
    Creager, Elliot
    Madras, David
    Jacobsen, Joern-Henrik
    Weis, Marissa A.
    Swersky, Kevin
    Pitassi, Toniann
    Zemel, Richard
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [5] A fair share? Sharing the benefits and costs of collaborative forest management
    Mahanty, S.
    Guernier, J.
    Yasmi, Y.
    INTERNATIONAL FORESTRY REVIEW, 2009, 11 (02) : 268 - 280
  • [6] Fair weather avoidance: unpacking the costs and benefits of "Avoiding the Ask"
    Trachtman, Hannah
    Steinkruger, Andrew
    Wood, Mackenzie
    Wooster, Adam
    Andreoni, James
    Murphy, James J.
    Rao, Justin M.
    JOURNAL OF THE ECONOMIC SCIENCE ASSOCIATION, 2015, 1 (01): 8 - 14
  • [7] SoFaiR: Single Shot Fair Representation Learning
    Gitiaux, Xavier
    Rangwala, Huzefa
    PROCEEDINGS OF THE THIRTY-FIRST INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2022, 2022, : 687 - 695
  • [8] Fair Benchmark for Unsupervised Node Representation Learning
    Guo, Zhihao
    Chen, Shengyuan
    Huang, Xiao
    Qian, Zhiqiang
    Yu, Chunsing
    Xu, Yan
    Ding, Fang
    ALGORITHMS, 2022, 15 (10)
  • [9] Fair Representation Learning: An Alternative to Mutual Information
    Liu, Ji
    Li, Zenan
    Yao, Yuan
    Xu, Feng
    Ma, Xiaoxing
    Xu, Miao
    Tong, Hanghang
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 1088 - 1097