From Static to Dynamic: Knowledge Metabolism for Large Language Models

Cited by: 0
Authors
Du, Mingzhe [1 ,2 ]
Luu, Anh Tuan [1 ]
Ji, Bin [2 ]
Ng, See-Kiong [2 ]
Affiliations
[1] Nanyang Technol Univ, Singapore, Singapore
[2] Natl Univ Singapore, Singapore, Singapore
Source
THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 21, 2024
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The immense parameter space of Large Language Models (LLMs) endows them with superior knowledge retention, allowing them to excel across a variety of natural language processing tasks. However, it also makes it difficult to keep LLMs continually tuned with the most recent knowledge, which can lead them to produce inaccurate and fabricated content. To alleviate this issue, we propose DynaMind, a knowledge metabolism framework for LLMs that proactively sustains the credibility of knowledge through an auxiliary memory component and directly delivers pertinent knowledge during LLM inference, thereby suppressing hallucinations caused by obsolete internal knowledge. Benchmark experiments demonstrate DynaMind's effectiveness in overcoming this challenge. The code and demo of DynaMind are available at: https://github.com/Elfsong/DynaMind.
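The abstract describes an auxiliary memory component that supplies up-to-date knowledge at inference time. The sketch below is a minimal illustration of that idea only, not DynaMind's actual implementation: all names (MemoryStore, embed, llm_generate) are hypothetical, and the toy bag-of-words retrieval stands in for whatever embedding and retrieval machinery the framework actually uses.

import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: a lowercase bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class MemoryStore:
    # Auxiliary memory: ingest fresh facts, retrieve the most relevant ones.
    def __init__(self) -> None:
        self.entries: list[tuple[str, Counter]] = []

    def ingest(self, fact: str) -> None:
        self.entries.append((fact, embed(fact)))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        query_vec = embed(query)
        ranked = sorted(self.entries,
                        key=lambda e: cosine(query_vec, e[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

def llm_generate(prompt: str) -> str:
    # Hypothetical stand-in for a call to the frozen LLM.
    return "[model output conditioned on]\n" + prompt

def answer(query: str, memory: MemoryStore) -> str:
    # Prepend retrieved facts so generation is grounded in current knowledge
    # rather than in the model's possibly obsolete parametric memory.
    context = "\n".join(memory.retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return llm_generate(prompt)

if __name__ == "__main__":
    memory = MemoryStore()
    memory.ingest("AAAI 2024 was held in Vancouver, Canada.")
    memory.ingest("Water boils at 100 degrees Celsius at sea level.")
    print(answer("Where was AAAI 2024 held?", memory))

The point of the sketch is the division of labor: the LLM's parameters stay fixed, while the memory component absorbs new facts and injects only the relevant ones into each prompt, which is how a retrieval-style memory can suppress hallucinations arising from stale internal knowledge.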
Pages: 23784-23786 (3 pages)