Large language models and humans converge in judging public figures' personalities

Times Cited: 0
Authors
Cao, Xubo [1 ]
Kosinski, Michal [1 ]
Affiliations
[1] Stanford Univ, Grad Sch Business, Stanford, CA 94305 USA
Source
PNAS NEXUS | 2024, Vol. 3, No. 10
Keywords
personality perception; zero-shot predictions; large language models; AI;
DOI
10.1093/pnasnexus/pgae418
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Codes
07; 0710; 09;
Abstract
ChatGPT-4 and 600 human raters evaluated the personalities of 226 public figures using the Ten-Item Personality Inventory. The correlations between ChatGPT-4's ratings and aggregate human ratings ranged from r = 0.76 to 0.87, outperforming models specifically trained to make such predictions. Notably, the model was not provided with any training data or feedback on its performance. We discuss potential explanations and practical implications of ChatGPT-4's ability to accurately mimic human responses.
Pages: 4
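For readers curious how the trait-level agreement summarized in the abstract might be computed, a minimal sketch is shown below. It assumes hypothetical files (human_ratings.csv, gpt4_ratings.csv) with TIPI-derived Big Five scores per public figure; this is an illustration of per-trait Pearson correlations between aggregate human ratings and model ratings, not the authors' actual data or analysis code.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical column layout: one row per rating, one column per Big Five
# trait derived from the Ten-Item Personality Inventory (TIPI).
TRAITS = ["Openness", "Conscientiousness", "Extraversion",
          "Agreeableness", "Neuroticism"]

# Aggregate human ratings: mean score per public figure per trait,
# averaged across all raters who rated that figure.
human = (pd.read_csv("human_ratings.csv")   # columns: figure, rater, trait scores
           .groupby("figure")[TRAITS].mean())

# ChatGPT-4 ratings: one row per public figure.
gpt = pd.read_csv("gpt4_ratings.csv").set_index("figure")[TRAITS]

# Align the two sources on the same set of figures, then compute
# a Pearson correlation for each trait.
common = human.index.intersection(gpt.index)
for trait in TRAITS:
    r, p = pearsonr(human.loc[common, trait], gpt.loc[common, trait])
    print(f"{trait:<18} r = {r:.2f} (p = {p:.3g})")
```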