From Static to Dynamic: Knowledge Metabolism for Large Language Models

Cited by: 0
Authors:
Du, Mingzhe [1 ,2 ]
Luu, Anh Tuan [1 ]
Ji, Bin [2 ]
Ng, See-Kiong [2 ]
Affiliations:
[1] Nanyang Technological University, Singapore
[2] National University of Singapore, Singapore
Keywords: (none listed)
DOI: (none available)
CLC number: TP18 [Theory of Artificial Intelligence]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
The immense parameter space of Large Language Models (LLMs) endows them with superior knowledge retention, allowing them to excel across a wide range of natural language processing tasks. However, it also makes it difficult to keep LLMs tuned to the most recent knowledge, which can lead them to produce inaccurate and fabricated content. To alleviate this issue, we propose DynaMind, a knowledge metabolism framework for LLMs that proactively sustains the credibility of knowledge through an auxiliary memory component and delivers pertinent knowledge directly during inference, thereby suppressing hallucinations caused by obsolete internal knowledge. Benchmark experiments demonstrate DynaMind's effectiveness in overcoming this challenge. The code and demo of DynaMind are available at: https://github.com/Elfsong/DynaMind.
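The abstract describes a control flow of external memory maintenance plus retrieval-grounded inference. The sketch below illustrates that idea in Python under explicit assumptions: the names (Fact, KnowledgeMemory, answer), the TTL-based staleness rule, and the keyword-overlap retrieval are hypothetical stand-ins for illustration, not DynaMind's actual components or API.

```python
# Minimal sketch of a knowledge-metabolism loop: stale facts are evicted
# ("metabolized") and the freshest relevant facts are injected into the
# prompt at inference time. All names here are illustrative assumptions.
import time
from dataclasses import dataclass, field


@dataclass
class Fact:
    text: str
    timestamp: float        # ingestion time (seconds since epoch)
    ttl: float = 86_400.0   # lifetime before the fact counts as stale

    def is_stale(self) -> bool:
        return time.time() - self.timestamp > self.ttl


@dataclass
class KnowledgeMemory:
    facts: list = field(default_factory=list)

    def ingest(self, text: str, ttl: float = 86_400.0) -> None:
        """Add fresh external knowledge to the auxiliary memory."""
        self.facts.append(Fact(text, time.time(), ttl))

    def metabolize(self) -> None:
        """Drop stale entries so obsolete knowledge cannot be retrieved."""
        self.facts = [f for f in self.facts if not f.is_stale()]

    def retrieve(self, query: str, k: int = 3) -> list:
        """Rank facts by naive keyword overlap with the query."""
        words = set(query.lower().split())
        ranked = sorted(
            self.facts,
            key=lambda f: len(words & set(f.text.lower().split())),
            reverse=True,
        )
        return [f.text for f in ranked[:k]]


def answer(llm, memory: KnowledgeMemory, question: str) -> str:
    """Refresh the memory, then ground the prompt in retrieved facts."""
    memory.metabolize()
    context = "\n".join(memory.retrieve(question))
    prompt = (
        "Answer using only the facts below.\n"
        f"Facts:\n{context}\n\nQ: {question}\nA:"
    )
    return llm(prompt)  # `llm` is any text-in/text-out callable
```

In a production system the keyword overlap would be replaced by embedding similarity and metabolism would update rather than merely evict facts, but the control flow (refresh memory, retrieve, ground the prompt) mirrors the process the abstract describes.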
Pages: 23784-23786 (3 pages)
Related Papers (50 in total)
  • [31] Enhanced Story Comprehension for Large Language Models through Dynamic Document-Based Knowledge Graphs
    Andrus, Berkeley R.
    Nasiri, Yeganeh
    Cui, Shilong
    Cullen, Benjamin
    Fulda, Nancy
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 10436 - 10444
  • [32] Dynamic Voting for Efficient Reasoning in Large Language Models
    Xue, Mingfeng
    Liu, Dayiheng
    Lei, Wenqiang
    Ren, Xingzhang
    Yang, Baosong
    Xie, Jun
    Zhang, Yidan
    Peng, Dezhong
    Lv, Jiancheng
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 3085 - 3104
  • [33] Towards Open-World Recommendation with Knowledge Augmentation from Large Language Models
    Xi, Yunjia
    Liu, Weiwen
    Lin, Jianghao
    Cai, Xiaoling
    Zhu, Hong
    Zhu, Jieming
    Chen, Bo
    Tang, Ruiming
    Zhang, Weinan
    Yu, Yong
    PROCEEDINGS OF THE EIGHTEENTH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2024, 2024, : 12 - 22
  • [34] Knowledge of Knowledge: Exploring Known-Unknowns Uncertainty with Large Language Models
    Amayuelas, Alfonso
    Wong, Kyle
    Pan, Liangming
    Chen, Wenhu
    Wang, William
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 6416 - 6432
  • [35] Hybrid dynamic/static method for large-scale simulation of metabolism
    Yugi, Katsuyuki
    Nakayama, Yoichi
    Kinoshita, Ayako
    Tomita, Masaru
    THEORETICAL BIOLOGY AND MEDICAL MODELLING, 2005, 2
  • [36] Large Language Models as Commonsense Knowledge for Large-Scale Task Planning
    Zhao, Zirui
    Lee, Wee Sun
    Hsu, David
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [37] From Large Language Models to Large Multimodal Models: A Literature Review
    Huang, Dawei
    Yan, Chuan
    Li, Qing
    Peng, Xiaojiang
    APPLIED SCIENCES-BASEL, 2024, 14 (12)
  • [38] Knowledge retrieval and diagnostics in cloud services with large language models
    Baghdasaryan, Ashot
    Bunarjyan, Tigran
    Poghosyan, Arnak
    Harutyunyan, Ashot
    El-Zein, Jad
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 255
  • [39] Evaluating Large Language Models in Cybersecurity Knowledge with Cisco Certificates
    Keppler, Gustav
    Kunz, Jeremy
    Hagenmeyer, Veit
    Elbez, Ghada
    SECURE IT SYSTEMS, NORDSEC 2024, 2025, 15396 : 219 - 238
  • [40] Comparative Assessment of Otolaryngology Knowledge Among Large Language Models
    Merlino, Dante J.
    Brufau, Santiago R.
    Saieed, George
    Van Abel, Kathryn M.
    Price, Daniel L.
    Archibald, David J.
    Ator, Gregory A.
    Carlson, Matthew L.
    LARYNGOSCOPE, 2025, 135 (02): 629 - 634