Learning Fair Representations for Kernel Models

Cited: 0
Authors
Tan, Zilong [1]
Yeom, Samuel [1]
Fredrikson, Matt [1]
Talwalkar, Ameet [1,2]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[2] Determined AI, San Francisco, CA USA
Funding
US National Science Foundation (NSF);
Keywords
SLICED INVERSE REGRESSION;
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Fair representations are a powerful tool for satisfying fairness goals such as statistical parity and equality of opportunity in learned models. Existing techniques for learning these representations are typically model-agnostic, as they pre-process the original data such that the output satisfies some fairness criterion, and can be used with arbitrary learning methods. In contrast, we demonstrate the promise of learning a model-aware fair representation, focusing on kernel-based models. We leverage the classical sufficient dimension reduction (SDR) framework to construct representations as subspaces of the reproducing kernel Hilbert space (RKHS), whose member functions are guaranteed to satisfy a given fairness criterion. Our method supports several fairness criteria, continuous and discrete data, and multiple protected attributes. We also characterize the fairness-accuracy trade-off with a parameter that relates to the principal angles between subspaces of the RKHS. Finally, we apply our approach to obtain the first fair Gaussian process (FGP) prior for fair Bayesian learning, and show that it is competitive with, and in some cases outperforms, state-of-the-art methods on real data.
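The abstract's core idea of restricting a kernel model to functions satisfying a fairness criterion can be illustrated with a much simpler construction than the paper's SDR machinery. The sketch below is an assumption-laden toy, not the authors' algorithm: it builds an RBF kernel feature matrix and projects out the directions spanned by the protected attribute, so every resulting feature has zero sample correlation with that attribute (a decorrelation proxy for statistical parity). All names (`rbf_kernel`, `gamma`, the synthetic data) are illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
n = 200
A = rng.integers(0, 2, size=n)              # binary protected attribute
X = rng.normal(size=(n, 3)) + A[:, None]    # features correlated with A

K = rbf_kernel(X, X)                        # kernel feature matrix

# Project each kernel feature (column of K) onto the orthogonal
# complement of span{1, A}; the projected representation is then
# uncorrelated with the protected attribute by construction.
B = np.column_stack([np.ones(n), A.astype(float)])
P = np.eye(n) - B @ np.linalg.pinv(B)       # projector onto span{1, A}^perp
K_fair = P @ K

# Sample correlation of every fair feature with A is numerically zero.
corr = (A - A.mean()) @ K_fair
print(np.max(np.abs(corr)))                 # tiny, ~0 up to float error
```

Any model trained on the columns of `K_fair` instead of `K` is restricted to a subspace of the RKHS whose members are decorrelated from `A`, which mirrors (in a crude linear sense) the subspace construction described in the abstract.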
Pages: 11
Related Papers
50 items in total
  • [31] Learning Fair Representations through Uniformly Distributed Sensitive Attributes
    Kenfack, Patrik Joslin
    Rivera, Adin Ramirez
    Khan, Adil Mehmood
    Mazzara, Manuel
    2023 IEEE CONFERENCE ON SECURE AND TRUSTWORTHY MACHINE LEARNING, SATML, 2023, : 58 - 67
  • [32] Learning Fair Representations for Recommendation via Information Bottleneck Principle
    Xie, Junsong
    Yang, Yonghui
    Wang, Zihan
    Wu, Le
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 2469 - 2477
  • [33] Learning Fair Representations for Recommendation: A Graph-based Perspective
    Wu, Le
    Chen, Lei
    Shao, Pengyang
    Hong, Richang
    Wang, Xiting
    Wang, Meng
    PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021, : 2198 - 2208
  • [34] Learning Fair Representations via Rate-Distortion Maximization
    Chowdhury, Somnath Basu Roy
    Chaturvedi, Snigdha
    TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2022, 10 : 1159 - 1174
  • [35] Contrastive Learning Models for Sentence Representations
    Xu, Lingling
    Xie, Haoran
    Li, Zongxi
    Wang, Fu Lee
    Wang, Weiming
    Li, Qing
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2023, 14 (04)
  • [36] Data representations and generalization error in kernel based learning machines
    Ancona, Nicola
    Maglietta, Rosalia
    Stella, Ettore
    PATTERN RECOGNITION, 2006, 39 (09) : 1588 - 1603
  • [37] Low-Rank Kernel Space Representations in Prototype Learning
    Bunte, Kerstin
    Kaden, Marika
    Schleif, Frank-Michael
    ADVANCES IN SELF-ORGANIZING MAPS AND LEARNING VECTOR QUANTIZATION, WSOM 2016, 2016, 428 : 341 - 353
  • [38] Primal and dual model representations in kernel-based learning
    Suykens, Johan A. K.
    Alzate, Carlos
    Pelckmans, Kristiaan
    STATISTICS SURVEYS, 2010, 4 : 148 - 183
  • [39] Fair Generative Models via Transfer Learning
    Teo, Christopher T. H.
    Abdollahzadeh, Milad
    Cheung, Ngai-Man
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 2, 2023, : 2429 - 2437
  • [40] Fair Representations by Compression
    Gitiaux, Xavier
    Rangwala, Huzefa
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 11506 - 11515