OmniFair: A Declarative System for Model-Agnostic Group Fairness in Machine Learning

Cited by: 20
Authors
Zhang, Hantian [1 ]
Chu, Xu [1 ]
Asudeh, Abolfazl [2 ]
Navathe, Shamkant B. [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Univ Illinois, Chicago, IL USA
Keywords
Algorithmic Bias; Group Fairness; Declarative Systems
DOI
10.1145/3448016.3452787
Chinese Library Classification: TP [Automation Technology; Computer Technology]
Discipline Code: 0812
Abstract
Machine learning (ML) is increasingly being used to make decisions in our society. ML models, however, can be unfair to certain demographic groups (e.g., African Americans or females) according to various fairness metrics. Existing techniques for producing fair ML models are either limited in the types of fairness constraints they can handle (e.g., preprocessing) or require nontrivial modifications to downstream ML training algorithms (e.g., in-processing). We propose OmniFair, a declarative system for supporting group fairness in ML. OmniFair features a declarative interface for users to specify desired group fairness constraints and supports all commonly used group fairness notions, including statistical parity, equalized odds, and predictive parity. OmniFair is also model-agnostic in that it does not require modifications to the chosen ML algorithm, and it supports enforcing multiple user-declared fairness constraints simultaneously, which most previous techniques cannot. The algorithms in OmniFair maximize model accuracy while meeting the specified fairness constraints, and their efficiency is optimized based on a theoretically provable monotonicity property of the accuracy-fairness trade-off that is unique to our system. We conduct experiments on datasets commonly used in the fairness literature that exhibit bias against minority groups. We show that OmniFair is more versatile than existing algorithmic fairness approaches in terms of both supported fairness constraints and downstream ML models. OmniFair reduces accuracy loss by up to 94.8% compared with the second-best method, achieves running time similar to preprocessing methods, and is up to 270x faster than in-processing methods.
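The group fairness notions named in the abstract are constraints over per-group prediction statistics; statistical parity, for instance, requires equal positive-prediction rates across demographic groups. A minimal sketch of checking such a constraint is shown below — the function and constraint threshold are illustrative assumptions, not OmniFair's actual declarative API.

```python
# Hypothetical illustration of a statistical-parity check of the kind a
# user might declare in a system like OmniFair; not OmniFair's real API.

def statistical_parity_difference(predictions, groups, protected_group):
    """Difference in positive-prediction rates between the protected
    group and everyone else; 0.0 means exact statistical parity."""
    prot = [p for p, g in zip(predictions, groups) if g == protected_group]
    rest = [p for p, g in zip(predictions, groups) if g != protected_group]
    rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return rate(prot) - rate(rest)

# Toy binary predictions for two demographic groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

diff = statistical_parity_difference(preds, groups, "A")
# A declared constraint such as |SPD| <= 0.1 would then be enforced,
# e.g. by reweighting training examples until the check passes.
constraint_satisfied = abs(diff) <= 0.1
```

Equalized odds and predictive parity are checked analogously, but conditioning the rates on the true label (equalized odds) or on the predicted label (predictive parity) rather than computing them unconditionally.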
Pages: 2076-2088 (13 pages)
Related Papers (showing 10 of 50)
  • [1] Paired-Consistency: An Example-Based Model-Agnostic Approach to Fairness Regularization in Machine Learning
    Horesh, Yair
    Haas, Noa
    Mishraky, Elhanan
    Resheff, Yehezkel S.
    Lador, Shir Meir
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT I, 2020, 1167 : 590 - 604
  • [2] Model-Agnostic Federated Learning
    Mittone, Gianluca
    Riviera, Walter
    Colonnelli, Iacopo
    Birke, Robert
    Aldinucci, Marco
    EURO-PAR 2023: PARALLEL PROCESSING, 2023, 14100 : 383 - 396
  • [3] Model-Agnostic Private Learning
    Bassily, Raef
    Thakkar, Om
    Thakurta, Abhradeep
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [4] Model-Agnostic Dual-Side Online Fairness Learning for Dynamic Recommendation
    Tang, Haoran
    Wu, Shiqing
    Cui, Zhihong
    Li, Yicong
    Xu, Guandong
    Li, Qing
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2025, 37 (05) : 2727 - 2742
  • [5] Is Bayesian Model-Agnostic Meta Learning Better than Model-Agnostic Meta Learning, Provably?
    Chen, Lisha
    Chen, Tianyi
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [6] Manifold: A Model-Agnostic Framework for Interpretation and Diagnosis of Machine Learning Models
    Zhang, Jiawei
    Wang, Yang
    Molino, Piero
    Li, Lezhi
    Ebert, David S.
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2019, 25 (01) : 364 - 373
  • [7] Boosting mono-jet searches with model-agnostic machine learning
    Finke, Thorben
    Kraemer, Michael
    Lipp, Maximilian
    Muck, Alexander
    JOURNAL OF HIGH ENERGY PHYSICS, 2022, 2022 (08)
  • [9] General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models
    Molnar, Christoph
    Koenig, Gunnar
    Herbinger, Julia
    Freiesleben, Timo
    Dandl, Susanne
    Scholbeck, Christian A.
    Casalicchio, Giuseppe
    Grosse-Wentrup, Moritz
    Bischl, Bernd
    XXAI - BEYOND EXPLAINABLE AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers, 2022, 13200 : 39 - 68
  • [10] Fairness by "Where": A Statistically-Robust and Model-Agnostic Bi-level Learning Framework
    Xie, Yiqun
    He, Erhu
    Jia, Xiaowei
    Chen, Weiye
    Skakun, Sergii
    Bao, Han
    Jiang, Zhe
    Ghosh, Rahul
    Ravirathinam, Praveen
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELVETH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 12208 - 12216