Large Language Models and the Reverse Turing Test

Cited by: 47
Authors
Sejnowski, Terrence J. [1 ,2 ]
Affiliations
[1] Salk Inst Biol Studies, La Jolla, CA 92037 USA
[2] Univ Calif San Diego, Div Biol Sci, La Jolla, CA 92093 USA
Keywords
LEARNING ALGORITHM; NEUROSCIENCE
DOI
10.1162/neco_a_01563
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Large language models (LLMs) have been transformative. They are pretrained foundational models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and, more recently, LaMDA, both of them LLMs, can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions and debate on whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs reaching wildly different conclusions. A new possibility was uncovered that could explain this divergence. What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a reverse Turing test. If so, then by studying interviews, we may be learning more about the intelligence and beliefs of the interviewer than the intelligence of the LLMs. As LLMs become more capable, they may transform the way we interact with machines and how they interact with each other. Increasingly, LLMs are being coupled with sensorimotor devices. LLMs can talk the talk, but can they walk the walk? A road map for achieving artificial general autonomy is outlined with seven major improvements inspired by brain systems and how LLMs could in turn be used to uncover new insights into brain function.
Pages: 309-342 (34 pages)