Large Language Models Demonstrate the Potential of Statistical Learning in Language

Cited by: 36
Authors
Contreras Kallens, Pablo [1 ]
Kristensen-McLachlan, Ross Deans [2 ,3 ,4 ]
Christiansen, Morten H. [1 ,3 ,4 ,5 ,6 ]
Affiliations
[1] Cornell Univ, Dept Psychol, Ithaca, NY USA
[2] Aarhus Univ, Ctr Humanities Comp, Aarhus, Denmark
[3] Aarhus Univ, Interacting Minds Ctr, Aarhus, Denmark
[4] Aarhus Univ, Sch Commun & Culture, Aarhus, Denmark
[5] Haskins Labs Inc, New Haven, CT USA
[6] Cornell Univ, Dept Psychol, 228 Uris Hall, Ithaca, NY 14853 USA
Keywords
Large language models; Artificial intelligence; Language acquisition; Statistical learning; Grammar; Innateness; Linguistic experience; PRINCIPLES;
DOI
10.1111/cogs.13256
Chinese Library Classification (CLC)
B84 [Psychology];
Discipline Classification Code
04; 0402;
Abstract
To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language, especially those involving computational modeling, have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally provide the computational tools to determine empirically how much of the human language ability can be acquired from linguistic experience. LLMs are sophisticated deep learning architectures trained on vast amounts of natural language data, enabling them to perform an impressive range of linguistic tasks. We argue that, despite their clear semantic and pragmatic limitations, LLMs have already demonstrated that human-like grammatical language can be acquired without the need for a built-in grammar. Thus, while there is still much to learn about how humans acquire and use language, LLMs provide full-fledged computational models for cognitive scientists to empirically evaluate just how far statistical learning might take us in explaining the full complexity of human language.
Pages: 6