Large Language Models Are Neurosymbolic Reasoners

Cited by: 0
Authors
Fang, Meng [1 ,2 ]
Deng, Shilong [1 ]
Zhang, Yudi [2 ]
Shi, Zijing [3 ]
Chen, Ling [3 ]
Pechenizkiy, Mykola [2 ]
Wang, Jun [4 ]
Affiliations
[1] Univ Liverpool, Liverpool, Merseyside, England
[2] Eindhoven Univ Technol, Eindhoven, Netherlands
[3] Univ Technol Sydney, Sydney, NSW, Australia
[4] UCL, London, England
Keywords
DOI: not available
Chinese Library Classification: TP18 [Theory of Artificial Intelligence]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
A wide range of real-world applications are characterized by their symbolic nature, necessitating a strong capability for symbolic reasoning. This paper investigates the potential application of Large Language Models (LLMs) as symbolic reasoners. We focus on text-based games, significant benchmarks for agents with natural language capabilities, particularly in symbolic tasks like math, map reading, sorting, and applying common sense in text-based worlds. To facilitate these agents, we propose an LLM agent designed to tackle symbolic challenges and achieve in-game objectives. We begin by initializing the LLM agent and informing it of its role. The agent then receives observations and a set of valid actions from the text-based games, along with a specific symbolic module. With these inputs, the LLM agent chooses an action and interacts with the game environments. Our experimental results demonstrate that our method significantly enhances the capability of LLMs as automated agents for symbolic reasoning, and our LLM agent is effective in text-based games involving symbolic tasks, achieving an average performance of 88% across all tasks.
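As a rough illustration of the agent loop described in the abstract, the Python sketch below shows one way such an interaction could be wired up: the prompt combines the game observation, the output of a symbolic module, and the list of valid actions, and the LLM's reply is checked against that list. The environment interface, the `llm` callable, and all names (`SYSTEM_ROLE`, `build_prompt`, `choose_action`) are assumptions for illustration only, not code from the paper.

```python
# Minimal sketch of the described agent loop. The `llm` callable and the
# prompt layout are hypothetical stand-ins, not the paper's implementation.
from typing import Callable, List

SYSTEM_ROLE = (
    "You are an agent playing a text-based game that involves symbolic tasks. "
    "At each step you receive an observation, the output of a symbolic module "
    "(e.g., a calculator or sorting helper), and a list of valid actions. "
    "Reply with exactly one of the valid actions."
)

def build_prompt(observation: str, symbolic_output: str, valid_actions: List[str]) -> str:
    """Pack the current game state and the symbolic module's result into one prompt."""
    actions = "\n".join(f"- {a}" for a in valid_actions)
    return (
        f"Observation:\n{observation}\n\n"
        f"Symbolic module output:\n{symbolic_output}\n\n"
        f"Valid actions:\n{actions}\n\n"
        "Next action:"
    )

def choose_action(
    llm: Callable[[str, str], str],
    observation: str,
    symbolic_output: str,
    valid_actions: List[str],
) -> str:
    """Ask the LLM for an action; fall back to the first valid action if the reply is invalid."""
    reply = llm(SYSTEM_ROLE, build_prompt(observation, symbolic_output, valid_actions)).strip()
    return reply if reply in valid_actions else valid_actions[0]

if __name__ == "__main__":
    # Stub LLM so the sketch runs without any API; a real agent would query a chat model here.
    stub_llm = lambda system, prompt: "take coin"
    action = choose_action(
        stub_llm,
        observation="You are in a room with a coin on the table.",
        symbolic_output="(no arithmetic needed)",
        valid_actions=["take coin", "go north", "look"],
    )
    print(action)  # -> take coin
```

In the setting described above, the chosen action would then be sent back to the game environment and the loop repeated with the new observation.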
Pages: 17985-17993
Page count: 9