From Static to Dynamic: Knowledge Metabolism for Large Language Models

Cited by: 0
Authors
Du, Mingzhe [1 ,2 ]
Luu, Anh Tuan [1 ]
Ji, Bin [2 ]
Ng, See-Kiong [2 ]
Affiliations
[1] Nanyang Technol Univ, Singapore, Singapore
[2] Natl Univ Singapore, Singapore, Singapore
Source
THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 21 | 2024
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The immense parameter space of Large Language Models (LLMs) endows them with strong knowledge retention, allowing them to excel in a wide range of natural language processing tasks. However, it also makes it difficult to continually tune LLMs to incorporate the most recent knowledge, which can lead them to produce inaccurate or fabricated content. To alleviate this issue, we propose DynaMind, a knowledge metabolism framework for LLMs that proactively sustains the credibility of knowledge through an auxiliary memory component and delivers pertinent knowledge directly to the LLM during inference, thereby suppressing hallucinations caused by obsolete internal knowledge. Benchmark experiments demonstrate DynaMind's effectiveness in overcoming this challenge. The code and demo of DynaMind are available at: https://github.com/Elfsong/DynaMind.
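The abstract describes a memory-augmented inference loop: fresh knowledge is ingested into an auxiliary memory, and the most relevant entries are injected into the prompt at inference time rather than relying on the model's frozen parameters. Below is a minimal, self-contained Python sketch of that general idea; all identifiers (KnowledgeMemory, llm_generate, answer) are illustrative assumptions, not DynaMind's actual API, and the word-overlap retrieval stands in for whatever retriever the real system uses.

```python
# Minimal sketch of a knowledge-metabolism-style inference loop.
# Assumption: names and the toy retriever are illustrative only.
from dataclasses import dataclass, field


@dataclass
class KnowledgeMemory:
    """Auxiliary memory holding timestamped knowledge snippets."""
    entries: list[tuple[int, str]] = field(default_factory=list)  # (timestamp, text)

    def ingest(self, timestamp: int, text: str) -> None:
        # Newer facts are appended; retrieval prefers recent entries,
        # loosely mirroring the "metabolizing" of obsolete knowledge.
        self.entries.append((timestamp, text))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Toy relevance score: word overlap, with recency as a tie-breaker.
        # A real system would likely use dense embeddings instead.
        q = set(query.lower().split())
        scored = [
            (len(q & set(text.lower().split())), ts, text)
            for ts, text in self.entries
        ]
        scored.sort(key=lambda s: (s[0], s[1]), reverse=True)
        return [text for overlap, _, text in scored[:k] if overlap > 0]


def llm_generate(prompt: str) -> str:
    """Placeholder for a real LLM call (an API or a local model)."""
    return f"[model output conditioned on]\n{prompt}"


def answer(memory: KnowledgeMemory, question: str) -> str:
    # Deliver pertinent, up-to-date knowledge directly into the context,
    # so the answer does not depend on stale internal knowledge.
    facts = memory.retrieve(question)
    context = "\n".join(f"- {f}" for f in facts) or "- (no stored knowledge)"
    prompt = f"Known up-to-date facts:\n{context}\n\nQuestion: {question}\nAnswer:"
    return llm_generate(prompt)


if __name__ == "__main__":
    mem = KnowledgeMemory()
    mem.ingest(2021, "The latest GPT model is GPT-3.")
    mem.ingest(2024, "The latest GPT model is GPT-4.")
    print(answer(mem, "What is the latest GPT model?"))
```

In this sketch, updating the system's knowledge means writing to the memory rather than re-tuning model weights, which is the core trade the abstract argues for.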
Pages: 23784-23786
Page count: 3
Related Papers
50 records in total
[31] Benchmarking Biomedical Relation Knowledge in Large Language Models. Zhang, Fenghui; Yang, Kuo; Zhao, Chenqian; Li, Haixu; Dong, Xin; Tian, Haoyu; Zhou, Xuezhong. BIOINFORMATICS RESEARCH AND APPLICATIONS, PT II, ISBRA 2024, 2024, 14955: 482-495.
[32] Large Language Models and Data Quality for Knowledge Graphs. Marchesin, Stefano; Silvello, Gianmaria; Alonso, Omar. INFORMATION PROCESSING AND MANAGEMENT, 2025, 62(06).
[33] zkLLM: Zero Knowledge Proofs for Large Language Models. Sun, Haochen; Li, Jason; Zhang, Hongyang. PROCEEDINGS OF THE 2024 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2024, 2024: 4405-4419.
[34] Knowledge Graphs and Their Reciprocal Relationship with Large Language Models. Dehal, Ramandeep Singh; Sharma, Mehak; Rajabi, Enayat. MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 2025, 7(02).
[35] Knowledge of cultural moral norms in large language models. Ramezani, Aida; Xu, Yang. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023: 428-446.
[36] Systematic Assessment of Factual Knowledge in Large Language Models. Luo, Linhao; Vu, Thuy-Trang; Phung, Dinh; Haffari, Gholamreza. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023: 13272-13286.
[37] ALCUNA: Large Language Models Meet New Knowledge. Yin, Xunjian; Huang, Baizhou; Wan, Xiaojun. 2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023: 1397-1414.
[38] Large Language Models for Knowledge Graph Embedding: A Survey. Liu, Bingchen; Fang, Yuanyuan; Xu, Naixing; Hou, Shihao; Li, Xin; Li, Qian. MATHEMATICS, 2025, 13(14).
[39] A review on synergizing knowledge graphs and large language models. Yang, Zhenyao; Yuan, Sha; Shao, Zhou; Li, Wenfa; Liu, Runzhou. COMPUTING, 2025, 107(06).
[40] Poisoning medical knowledge using large language models. Yang, Junwei; Xu, Hanwen; Mirzoyan, Srbuhi; Chen, Tong; Liu, Zixuan; Liu, Zequn; Ju, Wei; Liu, Luchen; Xiao, Zhiping; Zhang, Ming; Wang, Sheng. NATURE MACHINE INTELLIGENCE, 2024, 6(10): 1156-1168.