Context-faithful Prompting for Large Language Models

Cited by: 0
|
Authors
Zhou, Wenxuan [1 ]
Zhang, Sheng [2 ]
Poon, Hoifung [2 ]
Chen, Muhao [1 ]
Affiliations
[1] Univ Southern Calif, Los Angeles, CA 90007 USA
[2] Microsoft Res, Redmond, WA USA
Source
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023) | 2023
Keywords
DOI: Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large language models (LLMs) encode parametric knowledge about world facts and have shown remarkable performance in knowledge-driven NLP tasks. However, their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks (e.g., knowledge acquisition tasks). In this paper, we seek to assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention. We demonstrate that LLMs' faithfulness can be significantly improved using carefully designed prompting strategies. In particular, we identify opinion-based prompts and counterfactual demonstrations as the most effective methods. Opinion-based prompts reframe the context as a narrator's statement and inquire about the narrator's opinions, while counterfactual demonstrations use instances containing false facts to improve faithfulness in knowledge conflict situations. Neither technique requires additional training. We conduct experiments on three datasets of two standard NLP tasks, machine reading comprehension and relation extraction, and the results demonstrate significant improvement in faithfulness to contexts.
Pages: 14544-14556
Page count: 13
Related Papers
50 records in total
  • [41] LLMR: Real-time Prompting of Interactive Worlds using Large Language Models
    De la Torre, Fernanda
    Fang, Cathy Mengying
    Huang, Han
    Banburski-Fahey, Andrzej
    Fernandez, Judith Amores
    Lanier, Jaron
PROCEEDINGS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2024), 2024,
  • [42] Prompting large language models for user simulation in task-oriented dialogue systems
    Algherairy, Atheer
    Ahmed, Moataz
    COMPUTER SPEECH AND LANGUAGE, 2025, 89
  • [43] Prompting or Fine-tuning? A Comparative Study of Large Language Models for Taxonomy Construction
    Chen, Boqi
    Yi, Fandi
    Varro, Daniel
    2023 ACM/IEEE INTERNATIONAL CONFERENCE ON MODEL DRIVEN ENGINEERING LANGUAGES AND SYSTEMS COMPANION, MODELS-C, 2023, : 588 - 596
  • [44] Distractor Generation for Multiple-Choice Questions with Predictive Prompting and Large Language Models
    Bitew, Semere Kiros
    Deleu, Johannes
    Develder, Chris
    Demeester, Thomas
    MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2023, PT II, 2025, 2134 : 48 - 63
  • [45] Impact of Contradicting Subtle Emotion Cues on Large Language Models with Various Prompting Techniques
    Huda, Noor Ul
    Sahito, Sanam Fayaz
    Gilal, Abdul Rehman
    Abro, Ahsanullah
    Alshanqiti, Abdullah
    Alsughayyir, Aeshah
    Palli, Abdul Sattar
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2024, 15 (04) : 407 - 414
  • [46] PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents
    Sun, Simeng
    Liu, Yang
    Wang, Shuohang
    Iter, Dan
    Zhu, Chenguang
    Iyyer, Mohit
    PROCEEDINGS OF THE 18TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 469 - 486
  • [47] FLTRNN: Faithful Long-Horizon Task Planning for Robotics with Large Language Models
Song, Wei (songweizju@163.com), Institute of Electrical and Electronics Engineers Inc.
  • [48] Meta-in-context learning in large language models
    Coda-Forno, Julian
    Binz, Marcel
    Akata, Zeynep
    Botvinick, Matthew
    Wang, Jane X.
    Schulz, Eric
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [49] A Concise Review of Long Context in Large Language Models
    Huang, Haitao
    Liang, Zijing
    Fang, Zirui
    Wang, Zhiyuan
    Chen, Mingxiu
    Hong, Yifan
    Liu, Ke
    Shang, Penghui
    PROCEEDINGS OF INTERNATIONAL CONFERENCE ON ALGORITHMS, SOFTWARE ENGINEERING, AND NETWORK SECURITY, ASENS 2024, 2024, : 563 - 566
  • [50] FLTRNN: Faithful Long-Horizon Task Planning for Robotics with Large Language Models
    Zhang, Jiatao
    Tang, Lanling
    Song, Yufan
    Menge, Qiwei
    Qian, Haofu
    Shao, Jun
    Song, Wei
    Zhu, Shiqiang
    Gu, Jason
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2024, 2024, : 6680 - 6686