Do large language models "understand" their knowledge?

Cited: 0
Authors
Venkatasubramanian, Venkat [1 ]
Affiliations
[1] Columbia Univ, Dept Chem Engn, Complex Resilient Intelligent Syst Lab, New York, NY 10027 USA
Keywords
Knowledge representation; LLM; Industrial revolution 4.0; LKM; Transformers; PROCESS FAULT-DETECTION; QUANTITATIVE MODEL; PART I; FRAMEWORK; DESIGN; SYSTEM;
DOI
10.1002/aic.18661
Chinese Library Classification (CLC): TQ [Chemical Industry]
Discipline code: 0817
Abstract
Large language models (LLMs) are often criticized for lacking true "understanding" and the ability to "reason" with their knowledge, being seen merely as autocomplete engines. I suggest that this assessment might be missing a nuanced insight. LLMs do develop a kind of empirical "understanding" that is "geometry"-like, which is adequate for many applications. However, this "geometric" understanding, built from incomplete and noisy data, makes them unreliable, difficult to generalize, and lacking in inference capabilities and explanations. To overcome these limitations, LLMs should be integrated with an "algebraic" representation of knowledge that includes symbolic AI elements used in expert systems. This integration aims to create large knowledge models (LKMs) grounded in first principles that can reason and explain, mimicking human expert capabilities. Furthermore, we need a conceptual breakthrough, such as the transformation from Newtonian mechanics to statistical mechanics, to create a new science of LLMs.
Pages: 10
Related papers
50 records in total
  • [1] Do Large Language Models Understand Us?
    Aguera y Arcas, Blaise
    DAEDALUS, 2022, 151 (02) : 183 - 197
  • [2] Do multimodal large language models understand welding?
    Khvatskii, Grigorii
    Lee, Yong Suk
    Angst, Corey
    Gibbs, Maria
    Landers, Robert
    Chawla, Nitesh V.
    INFORMATION FUSION, 2025, 120
  • [3] Do LLMs Understand Social Knowledge? Evaluating the Sociability of Large Language Models with the SOCKET Benchmark
    Choi, Minje
    Pei, Jiaxin
    Kumar, Sagar
    Shu, Chang
    Jurgens, David
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 11370 - 11403
  • [4] Do Large Language Models Understand Chemistry? A Conversation with ChatGPT
    Nascimento, Cayque Monteiro Castro
    Pimentel, Andre Silva
    JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2023, 63 (06) : 1649 - 1655
  • [5] How Well Do Large Language Models Understand Tables in Materials Science?
    Circi, Defne
    Khalighinejad, Ghazal
    Chen, Anlan
    Dhingra, Bhuwan
    Brinson, L. Catherine
    INTEGRATING MATERIALS AND MANUFACTURING INNOVATION, 2024, 13 (03) : 669 - 687
  • [6] Large language models can better understand knowledge graphs than we thought
    Dai, Xinbang
    Hua, Yuncheng
    Wu, Tongtong
    Sheng, Yang
    Ji, Qiu
    Qi, Guilin
    KNOWLEDGE-BASED SYSTEMS, 2025, 312
  • [7] Can large language models understand molecules?
    Sadeghi, Shaghayegh
    Bui, Alan
    Forooghi, Ali
    Lu, Jianguo
    Ngom, Alioune
    BMC BIOINFORMATICS, 2024, 25 (01)
  • [8] Using Large Language Models to Understand Telecom Standards
    Karapantelakis, Athanasios
    Thakur, Mukesh
    Nikou, Alexandros
    Moradi, Farnaz
    Olrog, Christian
    Gaim, Fitsum
    Holm, Henrik
    Nimara, Doumitrou Daniil
    Huang, Vincent
    2024 IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING FOR COMMUNICATION AND NETWORKING, ICMLCN 2024, 2024, : 440 - 446
  • [9] PointLLM: Empowering Large Language Models to Understand Point Clouds
    Xu, Runsen
    Wang, Xiaolong
    Wang, Tai
    Chen, Yilun
    Pang, Jiangmiao
    Lin, Dahua
    COMPUTER VISION - ECCV 2024, PT XXV, 2025, 15083 : 131 - 147
  • [10] Quantifying Domain Knowledge in Large Language Models
    Sayenju, Sudhashree
    Aygun, Ramazan
    Franks, Bill
    Johnston, Sereres
    Lee, George
    Choi, Hansook
    Modgil, Girish
    2023 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI, 2023, : 193 - 194