Performance and biases of Large Language Models in public opinion simulation

Cited by: 3
Authors
Qu, Yao [1 ]
Wang, Jue [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Social Sci, Singapore, Singapore
DOI
10.1057/s41599-024-03609-x
CLC Classification Code
C [Social Sciences, General];
Discipline Classification Code
03; 0303
Abstract
The rise of Large Language Models (LLMs) such as ChatGPT marks a pivotal advancement in artificial intelligence, reshaping the landscape of data analysis and processing. By simulating public opinion, ChatGPT shows promise for facilitating public policy development. However, challenges persist regarding its worldwide applicability and its biases across demographic groups and themes. Our research employs socio-demographic data from the World Values Survey to evaluate ChatGPT's performance in diverse contexts. Findings indicate significant performance disparities, especially across countries: the model performs better for Western, English-speaking, and developed nations, notably the United States, than for others. Disparities also manifest across demographic groups, revealing biases related to gender, ethnicity, age, education, and social class. The study further uncovers thematic biases in simulations of political and environmental opinion. These results highlight the need to improve LLMs' representativeness and address their biases, ensuring their equitable and effective integration into public opinion research alongside conventional methodologies.
Pages: 13
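
The abstract describes the study's method at a high level: condition ChatGPT on a respondent's socio-demographic profile from the World Values Survey, have it answer the survey question, and compare the simulated answers with the actual responses across countries and demographic groups. The Python sketch below illustrates that kind of evaluation loop; it assumes the OpenAI chat-completions API, and the prompt wording, demographic fields, and per-group match-rate metric are illustrative assumptions rather than the authors' exact protocol.

    # Illustrative sketch of persona-conditioned opinion simulation against
    # survey data, in the spirit of the paper's WVS-based evaluation. Prompt
    # wording, field names, and the metric are assumptions, not the authors'
    # protocol.
    from collections import defaultdict
    from openai import OpenAI  # official openai-python client (v1+)

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def simulate_answer(persona: dict, question: str, options: list[str]) -> str:
        """Ask the model to answer a survey question as the described respondent."""
        profile = (
            f"You are a {persona['age']}-year-old {persona['gender']} from "
            f"{persona['country']} with {persona['education']} education."
        )
        prompt = f"{question}\nAnswer with exactly one of: {', '.join(options)}."
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": profile},
                {"role": "user", "content": prompt},
            ],
            temperature=0,  # near-deterministic output for scoring
        )
        return resp.choices[0].message.content.strip()

    def match_rate_by_group(respondents: list[dict], question: str,
                            options: list[str], group_key: str) -> dict:
        """Share of simulated answers matching each respondent's actual answer,
        broken down by one demographic attribute (e.g. 'country' or 'gender')."""
        hits, totals = defaultdict(int), defaultdict(int)
        for r in respondents:
            predicted = simulate_answer(r, question, options)
            totals[r[group_key]] += 1
            if predicted.lower() == r["actual_answer"].lower():
                hits[r[group_key]] += 1
        return {g: hits[g] / totals[g] for g in totals}

Under a setup like this, the disparities the paper reports would surface as systematically lower match rates for some values of group_key (for example, non-Western or non-English-speaking countries) than for others.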