SayCanPay: Heuristic Planning with Large Language Models Using Learnable Domain Knowledge

Cited by: 0
Authors:
Hazra, Rishi [1]
Dos Martires, Pedro Zuidberg [1]
De Raedt, Luc [1,2]
Affiliations:
[1] Orebro Univ, Ctr Appl Autonomous Sensor Syst AASS, Orebro, Sweden
[2] Katholieke Univ Leuven, Leuven, Belgium
Keywords: (none listed)
DOI: not available
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract:
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, despite recent progress, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length) remains a challenge. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge that evaluates the actions' feasibility (Can) and long-term reward/payoff (Pay), and uses heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) the integration of grounding and cost-effectiveness into the generated plans, and (3) the use of heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
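
To make the Say/Can/Pay decomposition concrete, below is a minimal Python sketch of the kind of heuristic action search the abstract describes. It is illustrative only and not the authors' implementation: propose_actions (the Say step, returning candidate actions with their LLM probabilities), can_score, pay_score, and is_goal are hypothetical callables, and the product-based score combination and beam width are assumptions made solely for this example.

# Illustrative sketch of SayCanPay-style planning: an LLM proposes actions (Say),
# learned estimators score feasibility (Can) and long-term payoff (Pay), and a
# beam-style heuristic search keeps the highest-scoring partial plans.
# All scorer callables here are hypothetical stand-ins, not the paper's code.
import heapq
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass(order=True)
class Node:
    neg_score: float  # negated cumulative score (min-heap convention: smaller is better)
    plan: List[str] = field(compare=False, default_factory=list)

def saycanpay_search(
    propose_actions: Callable[[List[str]], List[Tuple[str, float]]],  # Say: candidate actions + LLM probabilities
    can_score: Callable[[List[str], str], float],                     # Can: feasibility of an action given the plan so far
    pay_score: Callable[[List[str], str], float],                     # Pay: estimated long-term payoff of the action
    is_goal: Callable[[List[str]], bool],
    beam_width: int = 3,
    max_steps: int = 10,
) -> List[str]:
    """Beam-style heuristic search over action sequences scored by Say * Can * Pay."""
    beam = [Node(0.0, [])]
    for _ in range(max_steps):
        candidates = []
        for node in beam:
            if is_goal(node.plan):
                return node.plan
            for action, say_prob in propose_actions(node.plan):
                combined = say_prob * can_score(node.plan, action) * pay_score(node.plan, action)
                candidates.append(Node(node.neg_score - combined, node.plan + [action]))
        if not candidates:
            break
        beam = heapq.nsmallest(beam_width, candidates)  # keep the beam_width best partial plans
    return beam[0].plan  # best-scoring (possibly partial) plan found

The beam search here stands in for the heuristic search the abstract mentions; greedy selection is the special case beam_width = 1, and the relative weighting of the Say, Can, and Pay scores is a design choice of this sketch rather than something dictated by the paper.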
Pages: 20123-20133 (11 pages)