Mitigating Algorithmic Bias with Limited Annotations

Cited by: 0
Authors
Wang, Guanchu [1 ]
Du, Mengnan [2 ]
Liu, Ninghao [3 ]
Zou, Na [4 ]
Hu, Xia [1 ]
Affiliations
[1] Rice Univ, Houston, TX 77005 USA
[2] New Jersey Inst Technol, Newark, NJ 07102 USA
[3] Univ Georgia, Athens, GA 30602 USA
[4] Texas A&M Univ, College Stn, TX USA
Keywords
Bias mitigation; Limited annotation
DOI
10.1007/978-3-031-43415-0_15
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing work on fairness modeling commonly assumes that sensitive attributes are fully available for all instances, which may not hold in many real-world applications due to the high cost of acquiring sensitive information. When sensitive attributes are not disclosed or available, a small portion of the training data must be manually annotated to mitigate bias. However, the annotated subset inherits the skewed distribution across sensitive groups from the original dataset, which leads to suboptimal bias mitigation. To tackle this challenge, we propose Active Penalization Of Discrimination (APOD), an interactive framework that guides the limited annotations toward maximally eliminating the effect of algorithmic bias. APOD integrates discrimination penalization with active instance selection to efficiently utilize the limited annotation budget, and it is theoretically proven to bound the algorithmic bias. In evaluations on five benchmark datasets, APOD outperforms state-of-the-art baseline methods under a limited annotation budget and shows performance comparable to fully annotated bias mitigation, which demonstrates that APOD can benefit real-world applications where sensitive information is limited. The source code of the proposed method is available at: https://github.com/guanchuwang/APOD-fairness.
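The abstract describes an interactive loop that alternates between actively selecting instances whose sensitive attributes get annotated and penalizing discrimination measured on the annotated subset. The sketch below is a rough illustration of that general idea, not the authors' APOD algorithm: it uses a boundary-proximity selection heuristic and a demographic-parity penalty on a toy logistic model, and all data, names, and hyperparameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pool: the hidden sensitive attribute `a_all` shifts the feature mean,
# so an unconstrained classifier picks up a group-dependent bias.
n = 400
a_all = rng.integers(0, 2, n)                   # sensitive attribute (hidden until annotated)
x = rng.normal(0, 1, (n, 2)) + a_all[:, None]   # group-dependent features
y = (x[:, 0] + 0.5 * rng.normal(0, 1, n) > 0.5).astype(float)

w = np.zeros(2)          # logistic-regression weights
annotated = []           # indices whose sensitive attribute we have "bought"
budget, lam, lr = 40, 2.0, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    p = sigmoid(x @ w)
    # Active selection: periodically annotate the instance closest to the
    # decision boundary (a common active-learning heuristic, used here as a
    # stand-in for APOD's selection criterion).
    if step % 5 == 0 and len(annotated) < budget:
        pool = [i for i in range(n) if i not in annotated]
        annotated.append(min(pool, key=lambda i: abs(p[i] - 0.5)))
    # Standard logistic-loss gradient on the full (label-annotated) data.
    grad = x.T @ (p - y) / n
    # Discrimination penalty computed only on the annotated subset:
    # here, the demographic-parity gap between the two groups.
    idx = np.array(annotated)
    a_sub, p_sub = a_all[idx], p[idx]
    if (a_sub == 0).any() and (a_sub == 1).any():
        gap = p_sub[a_sub == 1].mean() - p_sub[a_sub == 0].mean()
        dp = p_sub * (1 - p_sub)                 # d sigmoid / d z
        g1 = (x[idx][a_sub == 1] * dp[a_sub == 1, None]).mean(axis=0)
        g0 = (x[idx][a_sub == 0] * dp[a_sub == 0, None]).mean(axis=0)
        grad += lam * np.sign(gap) * (g1 - g0)   # gradient of |gap|
    w -= lr * grad

p = sigmoid(x @ w)
gap = abs(p[a_all == 1].mean() - p[a_all == 0].mean())
print(f"annotated {len(annotated)} of {n}; demographic-parity gap = {gap:.3f}")
```

The key point the sketch shares with the abstract is that the fairness penalty is evaluated only on the small annotated subset, so which instances get annotated directly determines how well the penalty reflects (and bounds) the bias on the full data.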
Pages: 241-258
Page count: 18
Related Papers
50 records in total
  • [31] Auditing Algorithmic Bias on Twitter
    Bartley, Nathan
    Abeliuk, Andres
    Ferrara, Emilio
    Lerman, Kristina
    PROCEEDINGS OF THE 13TH ACM WEB SCIENCE CONFERENCE, WEBSCI 2021, 2020, : 65 - 73
  • [32] Public opinion and Algorithmic bias
    Sirbu, Alina
    Giannotti, Fosca
    Pedreschi, Dino
    Kertesz, Janos
    ERCIM NEWS, 2019, (116): : 15 - 16
  • [33] Algorithmic Bias in Autonomous Systems
    Danks, David
    London, Alex John
    PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 4691 - 4697
  • [34] Data and Algorithmic Bias in the Web
    Baeza-Yates, Ricardo
    PROCEEDINGS OF THE 2016 ACM WEB SCIENCE CONFERENCE (WEBSCI'16), 2016, : 1 - 1
  • [35] USACM on Algorithmic Bias, Accountability
[Anonymous]
    COMMUNICATIONS OF THE ACM, 2017, 60 (04) : 14 - 14
  • [36] An Empirical Study on Algorithmic Bias
    Sen, Sajib
    Dasgupta, Dipankar
    Gupta, Kishor Datta
    2020 IEEE 44TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE (COMPSAC 2020), 2020, : 1189 - 1194
  • [37] Transcript assembly and annotations: Bias and adjustment
    Zhang, Qimin
    Shao, Mingfu
    PLOS COMPUTATIONAL BIOLOGY, 2023, 19 (12)
  • [38] Are algorithmic bias claims supported?
    Messing, Solomon
    SCIENCE, 2023, 381 (6665)
  • [39] A Genealogical Approach to Algorithmic Bias
    Ziosi, Marta
    Watson, David
    Floridi, Luciano
    MINDS AND MACHINES, 2024, 34 (02)
  • [40] Racial Bias in Algorithmic IP
    Burk, Dan L.
    MINNESOTA LAW REVIEW, 2022, 106 (01) : 270 - 300