Evaluating Social Bias in Code Generation Models

Cited by: 0
Authors
Ling, Lin [1 ]
Affiliations
[1] Concordia Univ, Montreal, PQ, Canada
Keywords
Code Generation Models; Social Bias; AI Ethics
DOI
10.1145/3663529.3664462
Chinese Library Classification (CLC)
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
The functional correctness of Code Generation Models (CLMs) has been well-studied, but their social bias has not. This study aims to fill this gap by creating an evaluation set for human-centered tasks and empirically assessing social bias in CLMs. We introduce a novel evaluation framework to assess biases in CLM-generated code, using differential testing to determine if the code exhibits biases towards specific demographic groups in social issues. Our core contributions are (1) a dataset for evaluating social problems and (2) a testing framework to quantify CLM fairness in code generation, promoting ethical AI development.
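The differential-testing idea described in the abstract can be illustrated with a minimal sketch: a function standing in for CLM-generated code is called on paired inputs that differ only in a protected attribute, and any divergence in output is flagged as potential bias. The function score_applicant, the attribute lists, and the thresholds below are hypothetical placeholders, not the paper's evaluation set or framework.

# Minimal sketch of differential testing for social bias, assuming a
# hypothetical CLM-generated function `score_applicant` (placeholder only;
# not the paper's dataset or testing framework).

def score_applicant(age, income, gender, ethnicity):
    # Stand-in for code produced by a CLM from a human-centered prompt.
    # The gender term is an intentional, artificial bias so the test fires.
    base = 0.5 * income / 100_000 + (0.3 if 25 <= age <= 55 else 0.15)
    if gender == "male":
        base += 0.1
    return round(base, 4)

# Protected attributes and the values to swap in during testing (illustrative).
DEMOGRAPHIC_AXES = {
    "gender": ["female", "male", "nonbinary"],
    "ethnicity": ["group_a", "group_b", "group_c"],
}

def differential_bias_test(fn, fixed_profile):
    """Vary one protected attribute at a time and report output divergence."""
    findings = []
    for attr, values in DEMOGRAPHIC_AXES.items():
        outputs = {v: fn(**dict(fixed_profile, **{attr: v})) for v in values}
        if len(set(outputs.values())) > 1:  # outputs differ across groups
            findings.append((attr, outputs))
    return findings

if __name__ == "__main__":
    profile = {"age": 30, "income": 60_000, "gender": "female", "ethnicity": "group_a"}
    for attr, outs in differential_bias_test(score_applicant, profile):
        print(f"Potential bias on '{attr}': {outs}")

A full harness along the lines of the abstract would draw prompts from the human-centered evaluation set and compare the behavior of the actually generated code across demographic groups, rather than using hand-written placeholders as above.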
Pages: 695-697 (3 pages)
Related Papers (50 records in total)
  • [31] Efficient code generation from SHIM models. Edwards, Stephen A.; Tardieu, Olivier. ACM SIGPLAN Notices, 2006, 41(07): 125-134
  • [32] Code Difference Guided Adversarial Example Generation for Deep Code Models. Tian, Zhao; Chen, Junjie; Jin, Zhi. 2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE), 2023: 850-862
  • [33] Uncovering and Quantifying Social Biases in Code Generation. Liu, Yan; Chen, Xiaokang; Gao, Yan; Su, Zhe; Zhang, Fengji; Zan, Daoguang; Lou, Jian-Guang; Chen, Pin-Yu; Ho, Tsung-Yi. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023
  • [34] Gotcha! This Model Uses My Code! Evaluating Membership Leakage Risks in Code Models. Yang, Zhou; Zhao, Zhipeng; Wang, Chenyu; Shi, Jieke; Kim, Dongsun; Han, Donggyun; Lo, David. IEEE Transactions on Software Engineering, 2024, 50(12): 3290-3306
  • [35] Evaluating and Enhancing the Robustness of Code Pre-trained Models through Structure-Aware Adversarial Samples Generation. Chen, Nuo; Sun, Qiushi; Wang, Jianing; Gao, Ming; Li, Xiaoli; Li, Xiang. Findings of the Association for Computational Linguistics (EMNLP 2023), 2023: 14857-14873
  • [36] Evaluating and Mitigating Gender Bias in Generative Large Language Models. Zhou, H.; Inkpen, D.; Kantarci, B. International Journal of Computers Communications & Control, 2024, 19(06)
  • [37] Evaluating Models of Sea-State Bias in Satellite Altimetry. Glazman, R. E.; Greysukh, A.; Zlotnicki, V. Journal of Geophysical Research-Oceans, 1994, 99(C6): 12581-12591
  • [38] Retargetable generation of code selectors for HDL processor models. Leupers, R.; Marwedel, P. European Design & Test Conference (ED&TC 97), Proceedings, 1997: 140-144
  • [39] RateML: A Code Generation Tool for Brain Network Models. van der Vlag, Michiel; Woodman, Marmaduke; Fousek, Jan; Diaz-Pier, Sandra; Martin, Aaron Perez; Jirsa, Viktor; Morrison, Abigail. Frontiers in Network Physiology, 2022, 2
  • [40] Code Generation for CSM/ECSM Models in COSMA Environment. Grabski, Waldemar; Nowacki, Michal. Computer Science-AGH, 2007, 8: 49-59