Implicit bias in large language models: Experimental proof and implications for education

被引:2
|
作者
Warr, Melissa [1 ]
Oster, Nicole Jakubczyk [2 ]
Isaac, Roger [1 ]
机构
[1] New Mexico State Univ, POB 30001, Las Cruces, NM 88003 USA
[2] Arizona State Univ, Tempe, AZ USA
关键词
Generative AI; large language models; critical technology studies; systemic bias; systemic inequity; ACHIEVEMENT GAP; SCHOOL; IDENTITY;
D O I
10.1080/15391523.2024.2395295
中图分类号
G40 [教育学];
学科分类号
040101 ; 120403 ;
摘要
We provide experimental evidence of implicit racial bias in a large language model (specifically ChatGPT 3.5) in the context of an educational task and discuss implications for the use of these tools in educational contexts. Specifically, we presented ChatGPT with identical student writing passages alongside various descriptions of student demographics, including race, socioeconomic status, and school type. Results indicate that when directly prompted to consider race, the model produced higher overall scores than responses to a control prompt, but scores given to student descriptors of Black and White were not significantly different. However, this result belied a subtler form of prejudice that was statistically significant when racial indicators were implied rather than explicitly stated. Additionally, our investigation uncovered subtle sequence effects that suggest the model is more likely to illustrate bias when variables change within a single chat. The evidence indicates that despite the implementation of guardrails by developers, biases are profoundly embedded in ChatGPT, reflective of both the training data and societal biases at large. While overt biases can be addressed to some extent, the more ingrained implicit biases present a greater challenge for the application of these technologies in education. It is critical to develop an understanding of the bias embedded in these models and how this bias presents itself in educational contexts before using LLMs to develop personalized learning tools.
引用
收藏
页数:26
相关论文
共 50 条
  • [41] Assessing the Risk of Bias in Randomized Clinical Trials With Large Language Models
    Lai, Honghao
    Ge, Long
    Sun, Mingyao
    Pan, Bei
    Huang, Jiajie
    Hou, Liangying
    Yang, Qiuyu
    Liu, Jiayi
    Liu, Jianing
    Ye, Ziying
    Xia, Danni
    Zhao, Weilong
    Wang, Xiaoman
    Liu, Ming
    Talukdar, Jhalok Ronjan
    Tian, Jinhui
    Yang, Kehu
    Estill, Janne
    JAMA NETWORK OPEN, 2024, 7 (05) : E2412687
  • [42] Understanding the Effect of Model Compression on Social Bias in Large Language Models
    Goncalves, Gustavo
    Strubell, Emma
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 2663 - 2675
  • [43] Leveraging the Inductive Bias of Large Language Models for Abstract Textual Reasoning
    Rytting, Christopher Michael
    Wingate, David
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [44] Neurobiology of Implicit and Explicit Bias: Implications for Clinicians
    Reihl, Kristina M.
    Hurley, Robin A.
    Taber, Katherine H.
    JOURNAL OF NEUROPSYCHIATRY AND CLINICAL NEUROSCIENCES, 2015, 27 (04) : 248 - 253
  • [45] The Neuroanatomy of Implicit and Explicit Bias; Implications for Clinicians
    Reihl, Kristina M.
    Hurley, Robin A.
    Taber, Katherine H.
    JOURNAL OF NEUROPSYCHIATRY AND CLINICAL NEUROSCIENCES, 2015, 27 (02) : E186 - E186
  • [46] Clinical Notes as Narratives: Implications for Large Language Models in Healthcare
    Brender, Teva D.
    Celi, Leo A.
    Cobert, Julien M.
    JOURNAL OF GENERAL INTERNAL MEDICINE, 2024, : 687 - 689
  • [47] Implications of Large Language Models for Quality and Efficiency of Neurologic Care
    Moura, Lidia
    Jones, David T.
    Sheikh, Irfan S.
    Murphy, Shawn
    Kalfin, Michael
    Kummer, Benjamin R.
    Weathers, Allison L.
    Grinspan, Zachary M.
    Silsbee, Heather M.
    Jones Jr, Lyell K.
    Patel, Anup D.
    NEUROLOGY, 2024, 102 (11) : e209497
  • [48] Potential applications and implications of large language models in primary care
    Andrew, Albert
    FAMILY MEDICINE AND COMMUNITY HEALTH, 2024, 12 (SUPPL_1)
  • [49] On the Interaction with Large Language Models for Web Accessibility: Implications and Challenges
    Delnevo, Giovanni
    Andruccioli, Manuel
    Mirri, Silvia
    2024 IEEE 21ST CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2024,
  • [50] Ethics, Governance, and User Mental Models for Large Language Models in Computing Education
    Zhou, Kyrie Zhixuan
    Kilhoffer, Zachary
    Sanfilippo, Madelyn Rose
    Underwood, Ted
    Gumusel, Ece
    Wei, Mengyi
    Choudhry, Abhinav
    Xiong, Jinjun
    XRDS: Crossroads, 2024, 31 (01): : 46 - 51