Context-faithful Prompting for Large Language Models

Cited by: 0
Authors
Zhou, Wenxuan [1 ]
Zhang, Sheng [2 ]
Poon, Hoifung [2 ]
Chen, Muhao [1 ]
Affiliations
[1] Univ Southern Calif, Los Angeles, CA 90007 USA
[2] Microsoft Res, Redmond, WA USA
Source
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023) | 2023
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large language models (LLMs) encode parametric knowledge about world facts and have shown remarkable performance in knowledge-driven NLP tasks. However, their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks (e.g., knowledge acquisition tasks). In this paper, we seek to assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention. We demonstrate that LLMs' faithfulness can be significantly improved using carefully designed prompting strategies. In particular, we identify opinion-based prompts and counterfactual demonstrations as the most effective methods. Opinion-based prompts reframe the context as a narrator's statement and inquire about the narrator's opinions, while counterfactual demonstrations use instances containing false facts to improve faithfulness in knowledge conflict situations. Neither technique requires additional training. We conduct experiments on three datasets of two standard NLP tasks, machine reading comprehension and relation extraction, and the results demonstrate significant improvement in faithfulness to contexts.
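The opinion-based reframing described in the abstract can be illustrated with a minimal prompt-template sketch. This is a hypothetical rendering for illustration only: the function names, the narrator name "Bob", and the exact wording are assumptions, not text taken from the paper.

```python
def opinion_based_prompt(context: str, question: str) -> str:
    """Reframe the context as a narrator's statement and ask for the
    narrator's opinion, so the model answers from the given context
    rather than from its parametric knowledge."""
    return (
        f'Bob said, "{context}"\n'
        f"Q: {question} in Bob's opinion?\n"
        "A:"
    )


def baseline_prompt(context: str, question: str) -> str:
    """Standard reading-comprehension prompt over the same context."""
    return f"{context}\nQ: {question}\nA:"


# A knowledge-conflict example: the context asserts a counterfactual fact
# that contradicts the model's parametric knowledge.
context = "The capital of France is Marseille."
question = "What is the capital of France"
print(opinion_based_prompt(context, question))
```

A faithful model should answer "Marseille" under the opinion-based framing, since the question now explicitly concerns the narrator's stated view rather than world knowledge.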
Pages: 14544 / 14556
Page count: 13
Related Papers
50 records
  • [21] Do Language Models Enjoy Their Own Stories? Prompting Large Language Models for Automatic Story Evaluation
    Chhun, Cyril
    Suchanek, Fabian M.
    Clavel, Chloe
    TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2024, 12 : 1122 - 1142
  • [22] Does Metacognitive Prompting Improve Causal Inference in Large Language Models?
    Ohtani, Ryusei
    Sakurai, Yuko
    Oyama, Satoshi
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 458 - 459
  • [23] Emotional prompting amplifies disinformation generation in AI large language models
    Vinay, Rasita
    Spitale, Giovanni
    Biller-Andorno, Nikola
    Germani, Federico
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2025, 8
  • [24] On Hardware Security Bug Code Fixes by Prompting Large Language Models
    Ahmad, Baleegh
    Thakur, Shailja
    Tan, Benjamin
    Karri, Ramesh
    Pearce, Hammond
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 4043 - 4057
  • [25] A Communication Theory Perspective on Prompting Engineering Methods for Large Language Models
    Song, Yuan-Feng
    He, Yuan-Qin
    Zhao, Xue-Fang
    Gu, Han-Lin
    Jiang, Di
    Yang, Hai-Jun
    Fan, Li-Xin
JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2024, 39 (04) : 984 - 1004
  • [26] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
    Wei, Jason
    Wang, Xuezhi
    Schuurmans, Dale
    Bosma, Maarten
    Ichter, Brian
    Xia, Fei
    Chi, Ed H.
    Le, Quoc V.
    Zhou, Denny
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [27] MEEP: Is this Engaging? Prompting Large Language Models for Dialogue Evaluation in Multilingual Settings
    Ferron, Amila
    Shore, Amber
    Mitra, Ekata
    Agrawal, Ameeta
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 2078 - 2100
  • [28] Fairness-guided Few-shot Prompting for Large Language Models
    Ma, Huan
    Zhang, Changqing
    Bian, Yatao
    Liu, Lemao
    Zhang, Zhirui
    Zhao, Peilin
    Zhang, Shu
    Fu, Huazhu
    Hu, Qinghua
    Wu, Bingzhe
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [29] Instructing and Prompting Large Language Models for Explainable Cross-domain Recommendations
    Petruzzelli, Alessandro
    Musto, Cataldo
    Laraspata, Lucrezia
    Rinaldi, Ivan
    de Gemmis, Marco
    Lops, Pasquale
    Semeraro, Giovanni
    PROCEEDINGS OF THE EIGHTEENTH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2024, 2024, : 298 - 308
  • [30] CoLE: A collaborative legal expert prompting framework for large language models in law
    Li, Bo
    Fan, Shuang
    Zhu, Shaolin
    Wen, Lijie
    KNOWLEDGE-BASED SYSTEMS, 2025, 311