Context-faithful Prompting for Large Language Models

Cited by: 0
Authors
Zhou, Wenxuan [1 ]
Zhang, Sheng [2 ]
Poon, Hoifung [2 ]
Chen, Muhao [1 ]
Institutions
[1] Univ Southern Calif, Los Angeles, CA 90007 USA
[2] Microsoft Res, Redmond, WA USA
Source
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023) | 2023
Keywords
DOI
(none available)
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Large language models (LLMs) encode parametric knowledge about world facts and have shown remarkable performance in knowledge-driven NLP tasks. However, their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks (e.g., knowledge acquisition tasks). In this paper, we seek to assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention. We demonstrate that LLMs' faithfulness can be significantly improved using carefully designed prompting strategies. In particular, we identify opinion-based prompts and counterfactual demonstrations as the most effective methods. Opinion-based prompts reframe the context as a narrator's statement and inquire about the narrator's opinions, while counterfactual demonstrations use instances containing false facts to improve faithfulness in knowledge conflict situations. Neither technique requires additional training. We conduct experiments on three datasets of two standard NLP tasks, machine reading comprehension and relation extraction, and the results demonstrate significant improvement in faithfulness to contexts.
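To make the abstract's "opinion-based prompt" idea concrete, the sketch below builds such a prompt by reframing the context as a narrator's statement and asking for the narrator's opinion. The template wording and the helper name are illustrative assumptions, not the authors' exact code; consult the paper for the templates actually evaluated.

```python
def opinion_based_prompt(context: str, question: str) -> str:
    """Reframe the given context as a narrator's statement and ask the
    question as a matter of the narrator's opinion, so the model answers
    from the stated context rather than its parametric knowledge.
    (Hypothetical template; the paper specifies the exact wording.)"""
    return (
        f'Bob said, "{context}"\n'
        f"Q: {question}, in Bob's opinion?\n"
        "A:"
    )

# Example with a counterfactual context, as used in the paper's
# knowledge-conflict setting: the prompt asks what Bob's statement
# implies, not what is true in the real world.
prompt = opinion_based_prompt(
    "The capital city of France is Marseille.",
    "What is the capital city of France",
)
print(prompt)
```

A counterfactual demonstration would then prepend one or more such (context, question, answer) triples whose answers follow false contexts, signaling to the model that the context, not world knowledge, determines the answer.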
Pages: 14544-14556
Page count: 13
Related Papers
(50 items in total)
  • [1] Considerations for Prompting Large Language Models
    Schulte, Brian
    JAMA ONCOLOGY, 2024, 10 (04) : 475 - 483
  • [2] Prompting Is Programming: A Query Language for Large Language Models
    Beurer-Kellner, Luca
    Fischer, Marc
    Vechev, Martin
    PROCEEDINGS OF THE ACM ON PROGRAMMING LANGUAGES-PACMPL, 2023, 7 (PLDI):
  • [3] Graph Neural Prompting with Large Language Models
    Tian, Yijun
    Song, Huan
    Wang, Zichen
    Wang, Haozhu
    Hu, Ziqing
    Wang, Fang
    Chawla, Nitesh V.
    Xu, Panpan
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 17, 2024, : 19080 - 19088
  • [4] Prompting Large Language Models With the Socratic Method
    Chang, Edward Y.
    2023 IEEE 13TH ANNUAL COMPUTING AND COMMUNICATION WORKSHOP AND CONFERENCE, CCWC, 2023, : 351 - 360
  • [5] PROMPTING LARGE LANGUAGE MODELS WITH SPEECH RECOGNITION ABILITIES
    Fathullah, Yassir
    Wu, Chunyang
    Lakomkin, Egor
    Jia, Junteng
    Shangguan, Yuan
    Li, Ke
    Guo, Jinxi
    Xiong, Wenhan
    Mahadeokar, Jay
    Kalinli, Ozlem
    Fuegen, Christian
    Seltzer, Mike
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2024), 2024, : 13351 - 13355
  • [6] Considerations for Prompting Large Language Models-Reply
    Chen, Shan
    Savova, Guergana K.
    Bitterman, Danielle S.
    JAMA ONCOLOGY, 2024, 10 (04) : 538 - 539
  • [7] Prompting is not a substitute for probability measurements in large language models
    Hu, Jennifer
    Levy, Roger
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 5040 - 5060
  • [8] Prompting Large Language Models to Power Educational Chatbots
    Farah, Juan Carlos
    Ingram, Sandy
    Spaenlehauer, Basile
    Lasne, Fanny Kim-Lan
    Gillet, Denis
    ADVANCES IN WEB-BASED LEARNING, ICWL 2023, 2023, 14409 : 169 - 188
  • [9] DDPrompt: Differential Diversity Prompting in Large Language Models
    Mu, Lin
    Zhang, Wenhan
    Zhang, Yiwen
    Jin, Peiquan
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 2: SHORT PAPERS, 2024, : 168 - 174
  • [10] Editing Graph Visualizations by Prompting Large Language Models
    Argyriou, Evmorfia
    Boehm, Jens
    Eberle, Anne
    Gonser, Julius
    Lumpp, Anna-Lena
    Niedermann, Benjamin
    Schwarzkopf, Fabian
    GRAPH DRAWING AND NETWORK VISUALIZATION, GD 2023, PT II, 2023, 14466 : 253 - 254