Large Language Models Demonstrate the Potential of Statistical Learning in Language

Cited by: 36
Authors
Contreras Kallens, Pablo [1 ]
Kristensen-McLachlan, Ross Deans [2 ,3 ,4 ]
Christiansen, Morten H. [1 ,3 ,4 ,5 ,6 ]
Affiliations
[1] Cornell Univ, Dept Psychol, Ithaca, NY USA
[2] Aarhus Univ, Ctr Humanities Comp, Aarhus, Denmark
[3] Aarhus Univ, Interacting Minds Ctr, Aarhus, Denmark
[4] Aarhus Univ, Sch Commun & Culture, Aarhus, Denmark
[5] Haskins Labs Inc, New Haven, CT USA
[6] Cornell Univ, Dept Psychol, 228 Uris Hall, Ithaca, NY 14853 USA
Keywords
Large language models; Artificial intelligence; Language acquisition; Statistical learning; Grammar; Innateness; Linguistic experience
DOI
10.1111/cogs.13256
Chinese Library Classification
B84 [Psychology]
Subject Classification
04; 0402
Abstract
To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language, especially those involving computational modeling, have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally provide the computational tools to determine empirically how much of the human language ability can be acquired from linguistic experience. LLMs are sophisticated deep learning architectures trained on vast amounts of natural language data, enabling them to perform an impressive range of linguistic tasks. We argue that, despite their clear semantic and pragmatic limitations, LLMs have already demonstrated that human-like grammatical language can be acquired without the need for a built-in grammar. Thus, while there is still much to learn about how humans acquire and use language, LLMs provide full-fledged computational models for cognitive scientists to empirically evaluate just how far statistical learning might take us in explaining the full complexity of human language.
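The empirical evaluation the abstract envisions can be made concrete with a minimal-pair probe: present a model with a grammatical sentence and a minimally different ungrammatical one, and check which the model assigns higher probability. The sketch below is purely illustrative and not taken from the paper; it assumes the Hugging Face transformers library, the public gpt2 checkpoint, and a hypothetical subject-verb agreement test pair.

```python
# Minimal-pair probe: does a pretrained LM prefer grammatical sentences?
# Illustrative sketch only; model choice and test items are assumptions,
# not the method of Contreras Kallens et al.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_likelihood(sentence: str) -> float:
    """Total log-probability the model assigns to a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy
        # over predicted tokens; negate and rescale to a total log-prob.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

# A subject-verb agreement minimal pair (hypothetical test items).
good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."

print(f"grammatical:   {log_likelihood(good):.2f}")
print(f"ungrammatical: {log_likelihood(bad):.2f}")
print("prefers grammatical:", log_likelihood(good) > log_likelihood(bad))
```

Scaled over thousands of curated pairs, this comparison is the logic behind grammatical acceptability benchmarks such as BLiMP.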
Pages: 6
Related Papers
50 records total
  • [1] Statistical Knowledge Assessment for Large Language Models
    Dong, Qingxiu
    Xu, Jingjing
    Kong, Lingpeng
    Sui, Zhifang
    Li, Lei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [2] Generalization potential of large language models
    Budnikov, Mikhail
    Bykova, Anna
    Yamshchikov, Ivan P.
    NEURAL COMPUTING AND APPLICATIONS, 2025, 37 (04) : 1973 - 1997
  • [3] An Investigation of Applying Large Language Models to Spoken Language Learning
    Gao, Yingming
    Nuchged, Baorian
    Li, Ya
    Peng, Linkai
    APPLIED SCIENCES-BASEL, 2024, 14 (01)
  • [4] Shortcut Learning of Large Language Models in Natural Language Understanding
    Du, Mengnan
    He, Fengxiang
    Zou, Na
    Tao, Dacheng
    Hu, Xia
    COMMUNICATIONS OF THE ACM, 2024, 67 (01) : 110 - 120
  • [5] Understanding natural language: Potential application of large language models to ophthalmology
    Yang, Zefeng
    Wang, Deming
    Zhou, Fengqi
    Song, Diping
    Zhang, Yinhang
    Jiang, Jiaxuan
    Kong, Kangjie
    Liu, Xiaoyi
    Qiao, Yu
    Chang, Robert T.
    Han, Ying
    Li, Fei
    Tham, Clement C.
    Zhang, Xiulan
    ASIA-PACIFIC JOURNAL OF OPHTHALMOLOGY, 2024, 13 (04)
  • [6] Large Language Models in Ophthalmology: Potential and Pitfalls
    Yaghy, Antonio
    Yaghy, Maria
    Shields, Jerry A.
    Shields, Carol L.
    SEMINARS IN OPHTHALMOLOGY, 2024, 39 (04) : 289 - 293
  • [7] Large language models and their big bullshit potential
    Fisher, Sarah A.
    ETHICS AND INFORMATION TECHNOLOGY, 2024, 26 (04)
  • [8] The Potential of Large Language Models in Education: Applications and Challenges in the Learning of Language, Social Science, Health Care, and Science
    Zhou, Yalun
    Si, Mei
    Doll, Jacky
    HCI INTERNATIONAL 2024-LATE BREAKING POSTERS, HCII 2024, PT III, 2025, 2321 : 165 - 172
  • [9] Federated and edge learning for large language models
    Piccialli, Francesco
    Chiaro, Diletta
    Qi, Pian
    Bellandi, Valerio
    Damiani, Ernesto
    INFORMATION FUSION, 2025, 117
  • [10] Tool learning with large language models: a survey
    Qu, Changle
    Dai, Sunhao
    Wei, Xiaochi
    Cai, Hengyi
    Wang, Shuaiqiang
    Yin, Dawei
    Xu, Jun
    Wen, Ji-rong
    FRONTIERS OF COMPUTER SCIENCE, 2025, 19 (08)