Perils and opportunities in using large language models in psychological research

Cited by: 6
Authors
Abdurahman, Suhaib [1,2]
Atari, Mohammad [3,6]
Karimi-Malekabadi, Farzan [1,2]
Xue, Mona J. [3]
Trager, Jackson [2]
Park, Peter S. [1,4]
Golazizian, Preni [2,5]
Omrani, Ali [2,5]
Dehghani, Morteza [1,2,5]
Affiliations
[1] Univ Southern Calif, Dept Psychol, Los Angeles, CA 90089 USA
[2] Univ Southern Calif, Brain & Creat Inst, Los Angeles, CA 90089 USA
[3] Harvard Univ, Dept Human Evolutionary Biol, Cambridge, MA 02138 USA
[4] MIT, Dept Phys, Cambridge, MA 02139 USA
[5] Univ Southern Calif, Dept Comp Sci, Los Angeles, CA 90089 USA
[6] Univ Massachusetts Amherst, Dept Psychol & Brain Sci, Amherst, MA 01003 USA
Source
PNAS NEXUS | 2024, Vol. 3, Issue 7
Keywords
psychology; large language models; natural language processing; psychological diversity; psychological text analysis; SOCIAL-SCIENCE; INFORMATION; NEED; AI;
DOI
10.1093/pnasnexus/pgae245
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Code
07; 0710; 09
Abstract
The emergence of large language models (LLMs) has sparked considerable interest in their potential application in psychological research, mainly as a model of the human psyche or as a general text-analysis tool. However, the trend of using LLMs without sufficient attention to their limitations and risks, which we rhetorically refer to as "GPTology", can be detrimental given the easy access to models such as ChatGPT. Beyond existing general guidelines, we investigate the current limitations, ethical implications, and potential of LLMs specifically for psychological research, and show their concrete impact in various empirical studies. Our results highlight the importance of recognizing global psychological diversity, of cautioning against treating LLMs (especially in zero-shot settings) as universal solutions for text analysis, and of developing transparent, open methods that address LLMs' opaque nature and enable reliable, reproducible, and robust inference from AI-generated data. Acknowledging LLMs' utility for automating tasks such as text annotation and for expanding our understanding of human psychology, we argue for diversifying human samples and expanding psychology's methodological toolbox to promote an inclusive, generalizable science that counters homogenization and over-reliance on LLMs.
Pages: 14