Exploring In-Context Learning for Knowledge Grounded Dialog Generation

Cited by: 0
|
Authors
Chen, Qinyu [1 ]
Wu, Wenhao
Li, Sujian
Affiliations
[1] Peking Univ, Sch Comp Sci, Beijing, Peoples R China
Source
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023) | 2023
Funding
National Key R&D Program of China;
Keywords
DOI
None available
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large neural-based dialog generation models have been applied in many real-life scenarios, yet they are prone to hallucination and tend to produce factually inaccurate outputs, which raises great concerns. To alleviate this problem, we propose IKA, a plug-and-play retrieval-based framework that leverages in-context learning and retrieval techniques to enhance LLMs on knowledge-grounded dialog generation. We design thorough experiments on a large-scale knowledge graph with 1M+ facts (Moon et al., 2019) to investigate the effectiveness and generalization of our framework. Experiments show that our method surpasses the previous training-based SOTA by a large margin: 46.67% in BLEU4, 26.01% in ROUGE-L, 122.90% in BARTScore, and 30.50% in Entity Coverage F1. Further analysis shows promising abilities of LLMs on knowledge-intensive tasks, which were previously considered weak and remain understudied.
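The abstract describes a retrieve-then-prompt pipeline: retrieve relevant facts from a knowledge graph, then condition an LLM on them via in-context demonstrations. As a rough illustration only (the function names, the overlap-based retriever, and the prompt template below are assumptions for the sketch, not details taken from the paper), such a pipeline might look like:

```python
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve_facts(query, facts, k=2):
    """Rank (subject, relation, object) triples by word overlap with the query."""
    q = tokens(query)
    scored = sorted(facts, key=lambda f: -len(q & tokens(" ".join(f))))
    return scored[:k]

def build_prompt(dialog, retrieved, demos):
    """Assemble an in-context prompt: demonstrations first, then the query with its facts."""
    parts = [f"Dialog: {d}\nFacts: {f}\nResponse: {r}" for d, f, r in demos]
    fact_str = "; ".join(" ".join(t) for t in retrieved)
    parts.append(f"Dialog: {dialog}\nFacts: {fact_str}\nResponse:")
    return "\n\n".join(parts)

# Toy knowledge graph and one demonstration example
facts = [
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("France", "located_in", "Europe"),
]
demos = [("Where is Rome?", "Rome capital_of Italy",
          "Rome is the capital of Italy.")]

top = retrieve_facts("What is the capital of France?", facts)
prompt = build_prompt("What is the capital of France?", top, demos)
print(prompt)
```

The resulting prompt would then be sent to an LLM, which grounds its response in the retrieved triples rather than in its parametric memory alone; the paper's actual retriever and prompt design are more elaborate than this word-overlap sketch.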
Pages: 10071 - 10081
Page count: 11
Related papers
50 in total
  • [1] In-Context In-Context Learning with Transformer Neural Processes
    Ashman, Matthew
    Diaconu, Cristiana
    Weller, Adrian
    Turner, Richard E.
    SYMPOSIUM ON ADVANCES IN APPROXIMATE BAYESIAN INFERENCE, 2024, 253 : 1 - 29
  • [2] Exploring Effective Factors for Improving Visual In-Context Learning
    Sun, Yanpeng
    Chen, Qiang
    Wang, Jian
    Wang, Jingdong
    Li, Zechao
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34 : 2147 - 2160
  • [3] Can We Edit Factual Knowledge by In-Context Learning?
    Zheng, Ce
    Li, Lei
    Dong, Qingxiu
    Fan, Yuxuan
    Wu, Zhiyong
    Xu, Jingjing
    Chang, Baobao
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 4862 - 4876
  • [4] Temporal Knowledge Graph Forecasting Without Knowledge Using In-Context Learning
    Lee, Dong-Ho
    Ahrabian, Kian
    Jin, Woojeong
    Morstatter, Fred
    Pujara, Jay
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 544 - 557
  • [5] KINet: Incorporating Relevant Facts Into Knowledge-Grounded Dialog Generation
    Bai, Jiaqi
    Yang, Ze
    Yang, Jian
    Guo, Hongcheng
    Li, Zhoujun
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31 : 1213 - 1222
  • [6] The Learnability of In-Context Learning
    Wies, Noam
    Levine, Yoav
    Shashua, Amnon
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [7] A glance at in-context learning
    Wu, Yongliang
    Yang, Xu
    FRONTIERS OF COMPUTER SCIENCE, 2024, 18 (05)
  • [8] Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection
    Bai, Yu
    Chen, Fan
    Wang, Huan
    Xiong, Caiming
    Mei, Song
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [9] What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning
    Pan, Jane
    Gao, Tianyu
    Chen, Howard
    Chen, Danqi
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 8298 - 8319
  • [10] Context Diffusion: In-Context Aware Image Generation
    Najdenkoska, Ivona
    Sinha, Animesh
    Dubey, Abhimanyu
    Mahajan, Dhruv
    Ramanathan, Vignesh
    Radenovic, Filip
    COMPUTER VISION - ECCV 2024, PT LXXVII, 2024, 15135 : 375 - 391