Context-faithful Prompting for Large Language Models

Cited by: 0
Authors
Zhou, Wenxuan [1 ]
Zhang, Sheng [2 ]
Poon, Hoifung [2 ]
Chen, Muhao [1 ]
Affiliations
[1] Univ Southern Calif, Los Angeles, CA 90007 USA
[2] Microsoft Res, Redmond, WA USA
Source
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023
DOI
Not available
CLC classification
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Large language models (LLMs) encode parametric knowledge about world facts and have shown remarkable performance in knowledge-driven NLP tasks. However, their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks (e.g., knowledge acquisition tasks). In this paper, we seek to assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention. We demonstrate that LLMs' faithfulness can be significantly improved using carefully designed prompting strategies. In particular, we identify opinion-based prompts and counterfactual demonstrations as the most effective methods. Opinion-based prompts reframe the context as a narrator's statement and inquire about the narrator's opinions, while counterfactual demonstrations use instances containing false facts to improve faithfulness in knowledge conflict situations. Neither technique requires additional training. We conduct experiments on three datasets of two standard NLP tasks, machine reading comprehension and relation extraction, and the results demonstrate significant improvement in faithfulness to contexts.
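The two prompting strategies named in the abstract can be sketched as plain prompt templates. This is a minimal illustration; the narrator name "Bob" and the exact question phrasing are assumptions for demonstration, not necessarily the paper's verbatim templates:

```python
def opinion_based_prompt(context: str, question: str) -> str:
    """Reframe the context as a narrator's statement and ask for the
    narrator's opinion, so the model answers from the stated context
    rather than its parametric knowledge."""
    return (f'Bob said, "{context}"\n'
            f"Q: {question} in Bob's opinion?\n"
            f"A:")

def counterfactual_demonstration(context: str, question: str, answer: str) -> str:
    """Format one in-context example whose context deliberately states a
    false fact, with the context-faithful answer filled in."""
    return opinion_based_prompt(context, question) + f" {answer}"

# One counterfactual demonstration followed by the actual query.
demo = counterfactual_demonstration(
    "The capital of France is London.",   # deliberately false fact
    "What is the capital of France",
    "London",                             # faithful to the given context
)
query = opinion_based_prompt(
    "Rome hosted the 1960 Summer Olympics.",
    "Which city hosted the 1960 Summer Olympics",
)
full_prompt = demo + "\n\n" + query
print(full_prompt)
```

The concatenated `full_prompt` would then be sent to an LLM as-is; no additional training is involved, matching the abstract's claim that both techniques are purely prompt-level.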
Pages: 14544-14556
Page count: 13
Related papers
50 records in total
  • [31] Who Wrote it and Why? Prompting Large-Language Models for Authorship Verification
    Hung, Chia-Yu
    Hu, Zhiqiang
    Hu, Yujia
    Lee, Roy Ka-Wei
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 14078 - 14084
  • [32] Legal Syllogism Prompting: Teaching Large Language Models for Legal Judgment Prediction
    Jiang, Cong
    Yang, Xiaolei
    PROCEEDINGS OF THE 19TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND LAW, ICAIL 2023, 2023, : 417 - 421
  • [33] INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair
    Wang, Hanbin
    Liu, Zhenghao
    Wang, Shuo
    Cui, Ganqu
    Ding, Ning
    Liu, Zhiyuan
    Yu, Ge
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 2081 - 2107
  • [34] Veracity-Oriented Context-Aware Large Language Models-Based Prompting Optimization for Fake News Detection
    Jin, Weiqiang
    Gao, Yang
    Tao, Tao
    Wang, Xiujun
    Wang, Ningwei
    Wu, Baohai
    Zhao, Biao
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2025, 2025 (01)
  • [35] MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models
    Wen, Yilin
    Wang, Zifeng
    Sun, Jimeng
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 10370 - 10388
  • [36] Dehallucinating Large Language Models Using Formal Methods Guided Iterative Prompting
    Jha, Susmit
    Jha, Sumit Kumar
    Lincoln, Patrick
    Bastian, Nathaniel D.
    Velasquez, Alvaro
    Neema, Sandeep
    2023 IEEE INTERNATIONAL CONFERENCE ON ASSURED AUTONOMY, ICAA, 2023, : 149 - 152
  • [37] Prompting Language Models for Linguistic Structure
    Blevins, Terra
    Gonen, Hila
    Zettlemoyer, Luke
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 6649 - 6663
  • [38] Prompting large language model with context and pre-answer for knowledge-based VQA
    Hu, Zhongjian
    Yang, Peng
    Jiang, Yuanshuang
    Bai, Zijian
    PATTERN RECOGNITION, 2024, 151
  • [39] R3 Prompting: Review, Rephrase and Resolve for Chain-of-Thought Reasoning in Large Language Models under Noisy Context
    Tian, Qingyuan
    Zhu, Hanlun
    Wang, Lei
    Li, Yang
    Lan, Yunshi
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 1670 - 1685
  • [40] Chain-of-event prompting for multi-document summarization by large language models
    Bao, Songlin
    Li, Tiantian
    Cao, Bin
    INTERNATIONAL JOURNAL OF WEB INFORMATION SYSTEMS, 2024, 20 (03) : 229 - 247