Mitigating Algorithmic Bias with Limited Annotations

Cited by: 0
Authors
Wang, Guanchu [1 ]
Du, Mengnan [2 ]
Liu, Ninghao [3 ]
Zou, Na [4 ]
Hu, Xia [1 ]
Affiliations
[1] Rice Univ, Houston, TX 77005 USA
[2] New Jersey Inst Technol, Newark, NJ 07102 USA
[3] Univ Georgia, Athens, GA 30602 USA
[4] Texas A&M Univ, College Station, TX USA
Keywords
Bias mitigation; Limited annotation;
DOI
10.1007/978-3-031-43415-0_15
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing work on fairness modeling commonly assumes that sensitive attributes are fully available for all instances, which may not hold in many real-world applications due to the high cost of acquiring sensitive information. When sensitive attributes are not disclosed or available, a small portion of the training data needs to be manually annotated to mitigate bias. However, the annotated subset inherits the skewed distribution across sensitive groups from the original dataset, which leads to suboptimal bias mitigation. To tackle this challenge, we propose Active Penalization Of Discrimination (APOD), an interactive framework that guides the limited annotations toward maximally eliminating the effect of algorithmic bias. APOD integrates discrimination penalization with active instance selection to efficiently utilize the limited annotation budget, and it is theoretically proven to bound the algorithmic bias. In evaluations on five benchmark datasets, APOD outperforms state-of-the-art baselines under a limited annotation budget and performs comparably to fully annotated bias mitigation, demonstrating that APOD can benefit real-world applications where sensitive information is limited. The source code of the proposed method is available at: https://github.com/guanchuwang/APOD-fairness.
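The abstract describes an interactive loop: actively select instances for sensitive-attribute annotation, then penalize the discrimination estimated on the annotated subset. The following toy sketch illustrates that general idea only; it is not the paper's algorithm. The data, the uncertainty-based selection rule, the demographic-parity penalty, and all names (`parity_gap`, `penalty_grad`, `lam`, `budget`) are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical): features X, labels y, and a sensitive attribute
# that stays hidden until an instance is actively annotated.
n = 200
X = rng.normal(size=(n, 2))
a_hidden = (rng.random(n) < 0.3).astype(int)   # skewed group sizes
y = ((X[:, 0] + 0.5 * a_hidden) > 0).astype(int)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def parity_gap(scores, a, idx):
    """Demographic-parity gap of model scores over the annotated subset."""
    g0 = [i for i in idx if a[i] == 0]
    g1 = [i for i in idx if a[i] == 1]
    if not g0 or not g1:
        return 0.0
    return float(np.mean(scores[g1]) - np.mean(scores[g0]))

def penalty_grad(w, X, a, idx):
    """Gradient of |parity gap| w.r.t. w, computed on annotated instances only."""
    s = sigmoid(X @ w)
    g0 = [i for i in idx if a[i] == 0]
    g1 = [i for i in idx if a[i] == 1]
    if not g0 or not g1:
        return np.zeros_like(w)
    d0 = np.mean((s[g0] * (1 - s[g0]))[:, None] * X[g0], axis=0)
    d1 = np.mean((s[g1] * (1 - s[g1]))[:, None] * X[g1], axis=0)
    return np.sign(parity_gap(s, a, idx)) * (d1 - d0)

w = np.zeros(2)
annotated = []        # indices whose sensitive attribute has been revealed
budget = 20           # limited annotation budget
lam, lr = 0.5, 0.5    # penalty weight and learning rate

for _ in range(budget):
    s = sigmoid(X @ w)
    # Active selection (stand-in rule): annotate the unlabeled instance the
    # model is least certain about, as a proxy for bias-revealing instances.
    pool = [i for i in range(n) if i not in annotated]
    annotated.append(min(pool, key=lambda i: abs(s[i] - 0.5)))
    # A few penalized gradient steps: logistic loss plus the fairness penalty.
    for _ in range(10):
        s = sigmoid(X @ w)
        grad = X.T @ (s - y) / n + lam * penalty_grad(w, X, a_hidden, annotated)
        w -= lr * grad

print(len(annotated), abs(parity_gap(sigmoid(X @ w), a_hidden, annotated)))
```

The selection rule here is plain uncertainty sampling; APOD's actual criterion and penalty are defined in the paper itself.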
Pages: 241-258
Page count: 18
Related Papers
50 items total
  • [41] Mitigating greenhouse: Limited time, limited options
    Moriarty, Patrick
    Honnery, Damon
    ENERGY POLICY, 2008, 36 (04) : 1251 - 1256
  • [42] International Workshop on Algorithmic Bias in Search and Recommendation (BIAS)
    Bellogin, Alejandro
    Boratto, Ludovico
    Kleanthous, Styliani
    Lex, Elisabeth
    Malloci, Francesca Maridina
    Marras, Mirko
    PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024, : 3033 - 3035
  • [43] Inside the Black Box: Detecting and Mitigating Algorithmic Bias Across Racialized Groups in College Student-Success Prediction
    Gandara, Denisa
    Anahideh, Hadis
    Ison, Matthew P.
    Picchiarini, Lorenzo
    AERA OPEN, 2024, 10
  • [44] Mitigating Myside Bias in Argumentation
    Christensen-Branum, Lezlie
    Strong, Ashley
    Jones, Cindy D'On
    JOURNAL OF ADOLESCENT & ADULT LITERACY, 2019, 62 (04) : 435 - 445
  • [45] Mitigating Implicit Bias as a Leader
    JOM, 2019, 71 : 2152 - 2155
  • [46] Mitigating the Bias in Empathy Detection
    Hinduja, Saurabh
    2019 8TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION WORKSHOPS AND DEMOS (ACIIW), 2019, : 60 - 64
  • [47] Mitigating Implicit Bias as a Leader
    Clark, Blythe G.
    Underwood, Olivia D.
    JOM, 2019, 71 (07) : 2152 - 2155
  • [48] COMPUTATIONAL THINKING WITHOUT ALGORITHMIC BIAS
    Smith, Julie M.
    12TH INTERNATIONAL CONFERENCE OF EDUCATION, RESEARCH AND INNOVATION (ICERI2019), 2019, : 7577 - 7581
  • [49] Data's Impact on Algorithmic Bias
    Shin, Donghee
    Shin, Emily
    COMPUTER, 2023, 56 (06) : 90 - 94
  • [50] Assessing and addressing algorithmic bias in practice
    Cramer H.
    Garcia-Gathright J.
    Springer A.
    Reddy S.
    2018, Association for Computing Machinery (25) : 58 - 63