The Journey of Language Models in Understanding Natural Language

Times Cited: 0
Authors
Liu, Yuanrui [1 ,2 ]
Zhou, Jingping [3 ]
Sang, Guobiao [2 ]
Huang, Ruilong [1 ]
Zhao, Xinzhe [1 ]
Fang, Jintao [2 ]
Wang, Tiexin [1 ]
Li, Bohan [1 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Artificial Intelligence & Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Beijing Shenzhou Aerosp Software Technol Co Ltd, Beijing 100094, Peoples R China
[3] Beijing Inst Telemetry, Beijing 100083, Peoples R China
Keywords
Artificial intelligence; Natural language understanding; Vector space model; Topic model; Neural network; Deep learning
DOI
10.1007/978-981-97-7707-5_29
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Since the Turing Test was proposed in the 1950s, humanity has been exploring artificial intelligence with the aim of bridging the interaction gap between machines and human language. This exploration seeks to enable machines to comprehend how humans acquire, produce, and understand language, as well as the relationship between linguistic expression and the world. This paper examines the basic principles of natural language representation, the formalization of natural language, and the modeling methods of language models. It analyzes, summarizes, and compares the mainstream technologies and methods, including vector-space-based, topic-model-based, graph-based, and neural-network-based approaches. Finally, trends and directions for improving the language-understanding ability of language models are predicted and discussed.
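As an illustration of the first of the approach families named above, the following is a minimal, self-contained sketch (not taken from the paper) of a vector-space representation: documents are mapped to TF-IDF-weighted term vectors and compared with cosine similarity. The toy corpus and the whitespace tokenizer are assumptions introduced purely for this example.

```python
# Illustrative sketch of a vector-space model: TF-IDF term vectors + cosine similarity.
# The corpus and tokenizer below are hypothetical, chosen only to make the example runnable.
import math
from collections import Counter

corpus = [
    "machines understand human language",
    "language models represent natural language",
    "topic models discover latent structure in text",
]

def tokenize(text):
    # Naive whitespace tokenizer (assumption for the sketch).
    return text.lower().split()

docs = [tokenize(d) for d in corpus]
vocab = sorted({term for doc in docs for term in doc})

def tf_idf_vector(doc):
    # Term frequency weighted by inverse document frequency over the toy corpus.
    counts = Counter(doc)
    vec = []
    for term in vocab:
        tf = counts[term] / len(doc)
        df = sum(1 for d in docs if term in d)
        idf = math.log(len(docs) / df)
        vec.append(tf * idf)
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vectors = [tf_idf_vector(d) for d in docs]
print(round(cosine(vectors[0], vectors[1]), 3))  # similarity of the first two documents
```

The same pipeline generalizes directly: topic models replace the TF-IDF weighting with latent topic mixtures, while neural approaches replace the sparse term vectors with learned dense embeddings.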
Pages: 331-363
Number of pages: 33